Computer users tend to take for granted the existence of magnetic and optical memory devices that offer breathtaking data-storage capacity with extremely high reliability at very low cost. The technological wonders of these rugged devices routinely escape users' notice, because the internal functioning of disk drives is hidden from view. Nevertheless, magnetic and optical drives are arguably among the world's most sophisticated electromechanical devices. Equally astonishing is the rapid advance of this technology, far outpacing the highly visible technology sectors of semiconductor electronics and computer software. Magneto-optical (MO) data storage represents a combination of optical data-storage techniques and magnetic storage media. This article aims to show that this hybridization is an ongoing process, with the most recent developments being among the most exciting in the 40-year history of the technology.

Two of the most striking physical characteristics of magnetic and optical drive technology are the exponential growth of the data-storage density in commercial products and the mechanical performance of recording heads following a track of recorded information on the medium. The storage density metric is called "areal" density, or bit count per unit area on the medium. It has recently been doubling every 8 to 12 months in magnetic disk storage and about every 24 months in MO storage, according to DISK/TREND, Inc., a market research firm in Mountain View, Calif. This advance translates into an exponential drop in the cost of a stored bit of information, a value to the customer unheard of in any other industry. The mechanical operation of disk drives is stunning when one realizes that a "flying" magnetic head in the drive is analogous (through scaling) to a 747 aircraft flying over rolling terrain a few inches off the ground. The interface between the aerodynamically supported "slider" and the disk surface containing the user's precious data must be maintained reliably (without crashes) over the entire average five-year life of the drive. Optical drives are perhaps equally amazing in tracking data "remotely," with about 50,000 times greater separation between the head and the disk. A mechanical servo-control system provides for a 1µm-diameter focused light beam to follow a comparable-size feature on the disk surface to within 0.1µm accuracy while the surface undergoes vibratory motions in two dimensions with amplitudes in the range of 100µm.

In the 25 years since its earliest commercial development, optical data storage has become an industry that generates about 15% of the total annual data-storage market revenues of roughly $60 billion worldwide, and it probably accounts for a similar fraction of the total installed worldwide digital data-storage capacity (~10^18 bytes, according to DISK/TREND and the National Storage Industry Consortium of San Diego, Calif.). The most common optical storage devices today are the ubiquitous compact disc (CD) and the emerging digital versatile disc (DVD), used in audio, video, and computer applications for distributing content and information. As optical disc storage has developed into a robust technology in the removable-media sector over the past 15 years, rewritable optical discs have also progressed. The leading rewritable optical media types today are MO and phase-change, which share somewhat complementary attributes.
MO technology has led the way in opening markets for rewritable, removable optical storage, with several generations of International Standards Organization (ISO) drives (commonly found in optical jukeboxes) and the Sony MiniDisc (commonly found in consumer applications primarily in the Far East and Europe). This article offers an overview of MO data storage technology and looks ahead to emerging directions that will make aspects of MO technology more important in magnetic disk drives in the future.

The usual advantage of optical disc storage is media removability combined with large-capacity random-access data storage. MO is the rewritable optical-recording method best suited for high-performance, extended-lifetime, high-density applications. MO storage material is a thin magnetic film (similar to magnetic disks). The recording process is thermally assisted magnetic recording (atomic magnet reorientation), known to data storage engineers as a rapid and dependable process with practically infinite cyclability. Phase-change media, while providing a popular and viable low-cost alternative write-once or rewritable option, are inferior to MO in both raw performance (data recording rate and storage density) and cyclability. In phase-change recording, the recording/erasure processes involve crystalline-amorphous atomic structural changes (atoms move around); therefore these processes are slower and more prone to wear-out phenomena. The similarity of MO and magnetic recording is the basis for an expectation of an exciting synergy in the future. The merger and hybridization of these heretofore independently successful technologies is a possibility, as reflected in a string of research and development efforts publicized over the past five years [1-4, 6, 7].

Optical recording emerged as a viable technology in the 1960s and 1970s when it was made technically feasible by the development of low-cost, compact light sources of adequate power, namely solid-state diode lasers. In classical optical recording, the system designer arranges for an intense, focused light beam to interact with a storage medium (see Figure 1a). Focusing light implies that the location of the light focal point in the medium is relatively remote (a few millimeters) from the optical components that generate and guide the light beam. These so-called "far-field" optics represent a key distinction from magnetic recording, in which the head is placed very close to the recording medium (today less than 0.1µm). The optical readback process is performed with the same light beam, though with optical power reduced from the writing level by a factor of 3 to 6. In reading, the light is used to sense the induced physical change from the writing process, usually through some combination of optical reflection, transmission, and change in the state of the light beam's polarization, or orientation of the internal electric field in the electromagnetic wave.

Some variants of this standard form of optical recording are known to optical storage engineers. One approach is "near-field recording," whereby an optical advantage can be gained by utilizing the light in very close proximity to its zone of emergence from the light generation and guidance device. Another approach is to use "volumetric" optical storage. In one example, the storage medium has multiple recording layers.
A focused, far-field light beam is easily refocused at variable depths in the medium, making feasible multilayer storage media, provided the individual layers are sufficiently transparent. Volumetric storage can multiply the storage capacity of a disk or tape significantly, since the third dimension of the storage medium is used, instead of just spreading the data over a plane.

To place MO recording in a useful context, the characteristics of today's common optical storage methodologies need to be defined. Figure 1(a) outlines the process of a focused light beam writing a track of information on a moving optical storage medium. This writing is carried out by modulating the optical power sent to the focusing lens, resulting in local physical change in the recording material. The substrate material carrying the recording medium may be a disk, tape, or card. The recording configuration in Figure 1(a) is a serial one, because the recorded marks are created sequentially on a track as the medium moves under the focused beam. In a serial digital recording application, this scheme might be called "bit-by-bit" recording. This approach is contrasted with a possibly more parallel method in which multiple beams record multiple data streams concurrently or in which blocks or frames of data might be recorded optically at one instant at various 3D positions in the medium by an extended beam, as in holographic recording (see Orlov's "Volume Holographic Data Storage" in this section). This article is limited to serial recording systems.

Types of optical recording can be differentiated further according to the functionality provided to the user and by the reversibility of the writing process. Audio and computer CDs are the most common types of read-only optical media (often called ROM for "read-only memory"). In this case, the media are pre-written at the production factory, with specified information (such as recorded music or a set of computer program files) replicated thousands of times. The original information content was recorded onto a master disc by a process very much like that in Figure 1(a), then accurately copied by a mass replication process onto low-cost media the end user can read but cannot write on. The replication process for ROM discs is highly parallel; a single disc is made from the master in a few seconds (~10^9 to 10^10 bytes replicated simultaneously). The physical embodiment is a sequence of small, light-scattering pits along the disc track.

ROM optical media are contrasted with two forms of writable media: write-once, read-many (WORM) and rewritable, erasable, or write/read (W/R). WORM media are exemplified by CD-recordable media (CD-R), allowing the user to write information on the disc one time, then read it back an unlimited number of times. WORM media are preferred when a vendor wants to create a certified body of information that cannot be altered without extraordinary means. Consequently, professional-quality WORM optical media have become a legal standard as an information repository when an audit trail must be established and preserved, since ROM and WORM optical media involve essentially irreversible writing processes. To allow multiple recordings, rewritable optical media must remain readable at every stage; the recording process must therefore be highly reversible physically. However, few physical processes are perfectly reversible, so a rewritable medium also needs a measure of cyclability, that is, an expected count of the number of reliable rewrites the medium can sustain.
Because different physical processes are available to implement optical rewritability, engineers find there are different regimes of cyclability for different embodiments of W/R media. MO media excels by meeting an ISO specification to sustain at least 10^6 writing cycles and at least 10^7 reading cycles without unacceptable recording degradation. By contrast, CD-RW exemplifies a consumer-rewritable optical-disc product based on a phase-change medium that may have a factor of 10^2 to 10^4 lower cyclability than typical MO media. Note that W/R media (magnetic or optical) are used in ROM or WORM mode when adequate software or hardware protections are in place to prevent inadvertent overwriting or erasure of data.

MO recording is a form of magnetic recording in which light is used as a source of medium heating in the writing and erasure processes (thermomagnetic recording) and as a probe of the magnetic state of the medium in reading (see Figure 2). In each of the figure's three panels, a thin film of magnetic material deposited on a smooth substrate (such as a disc or tape carrier) is shown in cross-section. The magnetic polarization of the MO material is oriented perpendicularly to the film plane, a necessity for MO readout to work. This magnetic orientation requires careful material selection and processing, since it is usually energetically favorable for the magnetic polarization to lie in the plane of the film (longitudinal orientation), as in magnetic disks.

Heating MO media is necessary for writing and erasing, as in Figure 2(a) and (c). In general, magnetic recording is achieved when an applied magnetic field overcomes the medium's resistance to switching, called its "coercivity." All magnetic materials steadily lose their magnetic properties, however, including coercivity, as their temperature is elevated. Recording information at high density on a surface implies that the region of controlled switching must be very small. In MO recording, this condition is achieved by combining a relatively uniform magnetic field from a coil device with strong localized heating from a focused light beam (see Figure 3). (The applied magnetic field is about 600 times stronger than the Earth's compass-influencing field.) When the medium cools to room temperature, the freshly reversed magnetic polarization is said to be "frozen in."

Two distinct means of thermomagnetic recording of a magnetization pattern along a track on the moving medium are found in MO drives, one using laser intensity (power) modulation (LIM), the other using magnetic field modulation (MFM) (see Figure 4). In LIM writing, the magnetic field is held constant; in MFM recording, the laser power can be kept on continuously or pulsed at exactly the data clock rate.

A binary data bit sequence can be encoded in magnetic domains on the medium in two ways: In pulse position modulation (PPM), the drive records a binary 1 or 0 corresponding to the existence or absence, respectively, of a small circular magnetic domain. Alternatively, the drive may use pulse width modulation (PWM), whereby a binary 1 or 0 corresponds to the existence or absence, respectively, of a magnetization transition from + to - or from - to +. This method is used in magnetic recording (see Figure 4 for a comparison of these methods). PWM encoding has advantages for achieving greater linear-bit density, since even the smallest circular domain encodes binary bits at both its leading and trailing edges. Consequently, information can be packed more densely with an equivalent number of written features.
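To make the PPM/PWM distinction concrete, the short Python sketch below maps the same bit sequence onto written features both ways. It is an illustration only, not drive firmware or code from this article; the function names and the +1/-1 representation of perpendicular magnetization are our own assumptions.

```python
# Illustrative sketch of PPM vs. PWM encoding of a bit sequence along a track.

def encode_ppm(bits):
    """Pulse position modulation: a 1 is a small recorded domain (mark)
    centered in its clock window; a 0 is the absence of a mark."""
    return [1 if b else 0 for b in bits]           # 1 = domain present, 0 = absent

def encode_pwm(bits):
    """Pulse width modulation (NRZI-like): a 1 is a magnetization transition
    (+ to - or - to +) at a clock boundary; a 0 is no transition, so a single
    domain can span several clock periods."""
    track, polarity = [], +1
    for b in bits:
        if b:
            polarity = -polarity                    # write a transition for each 1
        track.append(polarity)                      # magnetization during this clock period
    return track

bits = [1, 0, 1, 1, 0, 0, 1]
print(encode_ppm(bits))   # [1, 0, 1, 1, 0, 0, 1]      one mark per 1
print(encode_pwm(bits))   # [-1, -1, 1, -1, -1, -1, 1]  the 1s live at the domain edges
```

In the PWM output, each 1 appears as a polarity change, so a single domain whose leading and trailing edges both fall on clock boundaries carries two bits' worth of information, which is the density advantage described above.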
Because the commercial application of MO recording is mostly in disk drive devices, this article compares MO only with magnetic disk drives. For many years, personal computers have included both floppy disk drives and hard disk drives (HDDs). A floppy disk drive is a relatively low-cost, low-performance device that supports removable magnetic media of relatively low capacity (typically 1.44MB per cartridge). Removable media are convenient for nonelectronic transport of user data files between computers. An HDD, by contrast, is a device within which the magnetic disk(s) is normally sealed; it provides much higher performance (faster data access and throughput) and higher capacity than a floppy drive. Magnetic recording involves close-range interaction between the head and medium. A clean environment is critical for maintaining this mechanical interface. Media removability is an obvious convenience to users but introduces a significant reliability risk for the storage device itself. There have been some widely used products with removable hard disks, and as expected, they offer significantly improved performance and capacity compared to floppy drives but are less dependable than conventional HDDs.

Compare this situation to MO drives. All optical drives excel in media removability with superb reliability, a direct consequence of the far-field head being well removed from the disk. When light is focused through the substrate onto the disk's second surface, as in Figure 3, the outer surface need not be pristine. Some amount of surface dust and dirt is tolerated by the optical system, because the light beam is roughly a million times more diffuse when passing through the exposed entry surface of the disk than it is at the focal point. This second-surface focusing is what makes it tolerable to leave a CD without a protective cartridge (though gross contamination causes light blockage, absorption, or scattering that is eventually detrimental to performance).

Besides media removability and device reliability, three other characteristics are of interest to users of disk storage devices: capacity, speed, and cost. When comparing the storage capacity of magnetic HDDs and MO drives, areal density has to be considered. Figure 5 compares average areal density over time for HDDs and MO drives. Until recently, MO recording had an advantage, because optical drives had a much higher track density (number of tracks per unit radial distance on the disk). Since 1990, however, magnetic recording has shown a significant increase in the annual growth rate of areal density, initially doubling from the historical average rate of about 30% to 60% with the widespread introduction of magnetoresistive head technology. More recently, that rate has increased to 100% to 200% annually, depending on the product. The MO recording industry has been managed more conservatively, reflecting an annual areal density growth rate of about 40% from 1992 to 2000; the same rate is now projected out to 2008.

When comparing MO drives and HDDs, it is important to remember that a huge storage capacity advantage for removable media systems results from the fact that capacity per drive is limited in part by the number of disk cartridges associated with a particular drive, a number without theoretical limit. Drive cost comparisons are somewhat complicated, since an HDD is bundled with dedicated media capacity, while an MO drive is not. MO drives are generally considered to be the most expensive of the optical disk drives, because an optical head for MO readout involves specialized optics for sensing the polarization state of reflected light. Moreover, MO drive manufacturing has probably not yet capitalized on economies of scale, generally due to limited market penetration. (Cost is both the cause and the result of relatively modest product volumes.)
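As a small arithmetic aside (ours, not the article's), the annual areal-density growth rates quoted above can be translated into doubling times with the compound-growth relation (1 + r)^t = 2. The sketch below assumes the rates stay constant, which real products of course do not.

```python
import math

def doubling_time_years(annual_growth):
    """Years to double areal density at a constant compound annual growth
    rate r, from (1 + r)^t = 2."""
    return math.log(2) / math.log(1 + annual_growth)

# Rates quoted in the text, expressed as fractions per year.
for label, rate in [("historical HDD, 30%", 0.30),
                    ("HDD with MR heads, 60%", 0.60),
                    ("recent HDD, 100%", 1.00),
                    ("MO, 40%", 0.40)]:
    print(f"{label:>24}: doubles every {doubling_time_years(rate):.1f} years")
# -> roughly 2.6, 1.5, 1.0, and 2.1 years, respectively
```

At a steady 40% per year, MO areal density doubles roughly every two years, consistent with the "about every 24 months" figure cited at the start of this article, while a 100% annual rate doubles capacity every year.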
Readout in MO recording is a matter of detecting the pattern of magnetization in the storage medium utilizing rotation of the plane of polarization upon reflection of polarized light from a magnetic mirror, called the magneto-optic Kerr effect, as in Figures 2(b) and 3. The drive's reading power is increased enough to raise the signal-to-noise ratio (SNR) as high as possible without heating the track to a level that degrades the written magnetic information. The system designer tries to maximize the MO signal while keeping the reflectivity below 30%. The remaining fraction of light power is absorbed during writing and must be high enough to heat the disk efficiently. A viable data channel in a recording device requires adequate SNR; therefore, the designer must consider noise minimization. The principal noise components in an MO recording system are associated with the laser, the read-channel electronics, the light detectors, and the disk itself (the last is called "media noise"). Differential detection, as in Figure 3, cancels some but not all of these noise components. SNR is perhaps the single most important parameter governing data-channel performance, and thus has the greatest influence over the storage system performance metrics of interest to the user: capacity and data-throughput rate. In general, great care must go into a disk's optical, thermal, and magnetic designs in order to achieve sufficient and balanced system performance.

Since the mid-1990s, several new research thrusts have begun to promise enhanced MO technology applicability and value from this form of rewritable optical storage. There is an important distinction between the schemes to extend the diffraction limit in readout discussed earlier and the more revolutionary approaches to MO recording in terms of system architecture. The following are some of the most notable developments.

High-density MO drive. In late 1995, TeraStor of San Jose, Calif., began developing a novel MO drive using focusing optics with a solid immersion lens, a moderately high-refractive-index lens placed in close proximity (much less than one wavelength) to the disk recording surface (not a near-field technique). This design effectively increases the numerical aperture on the light-incident side of the medium, reducing the spot size, as in Figures 1(c) and (d), and increasing the recording areal-density potential. However, TeraStor disbanded in 2000 without shipping a product.
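For a rough sense of why raising the effective numerical aperture matters for the solid-immersion-lens approach just described, here is a back-of-the-envelope Python sketch. It uses the common approximation that the focused spot diameter is about λ/(2·NA) and assumes, purely for illustration, a 650-nm red laser diode, an objective NA of 0.6, and a hemispherical solid immersion lens of index 2 that multiplies the effective NA by that index; none of these numbers are TeraStor specifications.

```python
# Illustrative only: how effective numerical aperture (NA) sets the focused
# spot size, using the common approximation spot ~ wavelength / (2 * NA).

def spot_diameter_nm(wavelength_nm, numerical_aperture):
    """Approximate diffraction-limited focused spot diameter in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

wavelength = 650.0        # nm, a typical red laser-diode wavelength (assumed)
na_objective = 0.6        # a representative far-field objective (assumed)
n_sil = 2.0               # assumed refractive index of a solid immersion lens

print(spot_diameter_nm(wavelength, na_objective))           # ~540 nm, conventional optics
print(spot_diameter_nm(wavelength, n_sil * na_objective))   # ~270 nm with the SIL
```

Halving the spot diameter in this way would roughly quadruple the achievable areal density, since recorded features shrink in both dimensions of the disk surface.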
Flying MO head technology. In 1996, Quinta, a data storage startup company in San Jose, Calif., began developing flying MO head technology. When Seagate acquired Quinta in 1997, this technology was named Optically Assisted Winchester. In it, micro-optics and a microcoil are attached to a flying slider (the carrier for an HDD head), along with an optical-fiber light-delivery system, to achieve a low-mass, low-cost means of realizing the first-surface configuration for MO recording. This design introduced MO recording technology into an HDD architecture while preserving HDD performance. Seagate has exhibited prototype drives but has not yet marketed a product based on the original concept.

Hybrid schemes. Hybrid schemes combining magnetic and optical recording elements show promise in the laboratory of outperforming traditional magnetic-only component combinations. Hitachi, Philips, and Sharp have publicized their work in this area [3, 6, 7], while other companies have expressed interest in such approaches. A common element in these methods is the incorporation of laser light into the recording and readout processes. Momentary heating of the recording medium is an effective way to mitigate a looming problem of writability in ordinary magnetic recording by using the thermomagnetic recording concept from MO technology. (Media coercivity in HDDs is being increased steadily to preserve the magnetic stability of ever-shrinking magnetic bit cells; writing heads have a definite physical limitation in their output magnetic switching fields.)

Near-field optical schemes. Research in the early 1990s demonstrated MO recording in domains smaller than 0.1µm by using a near-field optical source. Such an approach overcomes the diffraction limit of light, allowing electromagnetic field dimensions to be determined by the physical extent of the source, such as an aperture in a waveguide or the width of a laser cavity. While this technique allows much smaller optical "spots," such spots exist only in the proximity of the source, with the allowable separation typically on the same order as the source dimension. Moreover, such near-field schemes normally have the drawback that the transmission efficiency for optical power is extremely low. Therefore, much more efficient near-field optical sources need to be developed to produce energy flux sufficient for thermomagnetic recording.

Some of these recent thrusts could extend optical recording beyond its historical regimes. MO storage has already proven itself in applications requiring very rugged, highly reliable, removable, rewritable optical media. The devices commercialized so far have won acceptance in the professional and consumer markets that demand high storage capacity and moderate random data-access performance with media removability. These applications are fundamentally different from those addressed by HDDs. Implementation of new approaches in conventional MO recording shows promise in boosting areal density 10-fold. For example, a number of techniques, including magnetic super-resolution (MSR), the magnetic amplifying MO system (MAMMOS), and domain wall displacement detection (DWDD), improve MO readout resolution without needing a smaller focused light spot, which would be physically limited by the available light wavelength and objective lens. Meanwhile, the convergence of optical storage approaches with magnetic storage is an exciting new development. MO recording offers the key technologies of thermomagnetic recording and patterned media, which may be instrumental in alleviating a slowdown in HDD advances [2, 6]. The HDD industry is beginning to appreciate the potential of these approaches to help sustain its historical progress in cost and performance. Such hybridization would be a remarkable development for the HDD industry, which for the past 40 years has relentlessly pursued incremental progress based on scaling, punctuated by the timely introduction of stepwise improvements in component technology. MO technology is well-positioned to continue the flow of engineering solutions to users with an insatiable appetite for storing information.

1. Alex, M., et al. Optically assisted magnetic recording (paper WeF-03).
In Proceedings of the Joint Magneto-Optical Recording International Symposium/Asia-Pacific Data Storage Conference (Nagoya, Japan, Oct. 30-Nov. 2, 2000).
3. Katayama, H., et al. New magnetic recording method using laser-assisted read/write technologies. In Proceedings of the Magneto-Optical Recording International Symposium on Optical Memory and Optical Data Storage 1999 (Monterey, Calif., Jan. 10-13, 1999).
7. Saga, H., et al. A new recording method combining thermomagnetic writing and flux detection (paper Pd-08). In Proceedings of the International Symposium on Optical Memory (Tsukuba, Japan, Oct. 20-22, 1998), 188-189.

Figure 4. Two means, LIM and MFM, of thermomagnetic recording of a magnetization pattern along a track on the moving medium. In PPM coding, binary 1s are represented by the center position of the small nearby circular marks. In PWM coding, binary 1s are represented by edge marks, which can be N clock lengths long, where the range of N depends on the particular code being used.
In this chapter we examine technological changes that improved Americans' ability to move people and goods, as well as the economic and political forces that helped shape the growth of transportation networks. The transportation revolution in the United States began when Americans, taking advantage of features of the natural environment to move people and things from place to place, began searching for ways to make transport cheaper, faster, and more efficient. Over time a series of technological changes allowed transportation to advance to the point where machines have effectively conquered distance. People can almost effortlessly travel to anywhere in the world and can inexpensively ship raw materials and products across a global market.

But this technology is not ubiquitous, and it is not necessarily democratic. As a famous science fiction writer once said, the future is already here; it's just not very evenly distributed. Modern transportation infrastructure is controlled to a great extent by large corporations, but everyone depends on the benefits of transport. And transportation technology itself requires specific conditions, such as abundant, cheap, portable energy in the form of fossil fuels and public infrastructure created by our own and foreign governments, that even those large corporations depend upon but don't control.

When we think of transportation, it is natural to think first about going places. Getting on a plane in one hemisphere and getting off on the other side of the world is a life-changing opportunity which was unavailable to most people as little as a generation ago, and unthinkable two generations ago. But more crucial to our daily lives than the freedom offered by world travel is the cargo from the other side of the world that reaches us quickly in the holds of jets and more slowly but in almost unimaginable volume in containers on ships. The global transportation of foods, raw materials, and finished goods goes virtually unnoticed in our daily lives, but makes our contemporary consumer lifestyle possible.

Although even the early stages of the transportation revolution allowed people like seventy-year-old Achsah Ranney, from Chapter Five's Supplement, to travel regularly between her children's homes in Massachusetts, New York, and Michigan, the more significant change was the ability of her sons and of other Americans to move freight from place to place. The ability to effectively ship food and other goods to where they were needed allowed people to stay put, and even to concentrate themselves in cities in a way they had never been able to do before. The growth of eastern cities depended just as much on the transportation revolution as did the building of new cities in the west.

As we have already seen, early Americans made amazing journeys with very primitive methods of transportation. The people who crossed Beringia and settled North and South America were able to cover startlingly long distances on foot. European explorers crossed dangerous oceans to visit the Americas in tiny ships. Human and animal power has been used extensively throughout American history, and is still used today to reach remote areas off the grid. But it is clear that improvements in transportation technology have been among the most powerful drivers of change in our history. And the transportation revolution has certainly changed our relationship with the American environment.
Technological improvements to ocean-going ships in the fifteenth century made European colonialism possible in the first place. Ships became bigger, faster, and safer. More people and goods could leave the safety of coastal waters and cross the oceans, and the places these improved ships connected became centers of trade, population, and wealth. This pattern of growth repeated itself as new technologies were developed to help Americans expand across the continent.

As we have seen, American colonists depended on trade with England and with the sugar planters of the West Indies to make their outposts in New England and Virginia successful. But from the beginning of the American Revolution to the conclusion of the War of 1812, relations between the new nation and Britain were tense and trade suffered. If it had not found a way to ship people and goods to and from its own frontier, the United States would have remained a coastal nation focused on ports like Boston, New York, Philadelphia, and Charleston.

The barely-remembered Whiskey Rebellion of the early 1790s, when George Washington led United States troops against American farmers in western Pennsylvania, was really about transportation. Farmers west of the Appalachian mountains could not easily haul wagon-loads of grain to eastern markets, so they turned their harvests into a more portable product by distilling grain into whiskey. The farmers believed the government's excise tax on distilled spirits had been instituted to drive them out of the whiskey business for the benefit of large Eastern distillers. Since they had few other sources of income, the tax was a serious issue for westerners. Luckily, the incoming Jefferson administration repealed the tax in 1801 and increasing Ohio River shipping provided new outlets for western produce.

Roads and Rivers

On the eve of the Revolution, the only road that did not hug the east coast followed the Hudson River Valley into western New York on its way to Montreal (this was one reason colonial Americans seemed continually obsessed with the idea of conquering Montreal and bringing it into the United States). Less than thirty years later, riders working for the Post Office Department carried mail to nearly all the new settlements of the interior. The postal system's designer, Benjamin Franklin, understood that in order for the new Republic to function, information had to flow freely. Franklin set a low rate for mailing newspapers, insuring that news would circulate widely in the newly-settled areas. But it was one thing carrying saddlebags filled with letters and newspapers to the frontier, and something else moving people and freight.

Rivers were the first important routes to the interior of North America. The Ohio River, which begins at Pittsburgh and flows southwest to join the Mississippi, helped people get to their new farms in the Ohio Valley and then helped them carry their farm produce to markets. The Ohio River Valley became one of the first areas of rapid settlement after the Revolution, along with the Mohawk River Valley in western New York. The importance of river shipping is illustrated by the fact that over fifty thousand miles of tributary rivers and streams in the Mississippi watershed were used to float goods to the port of New Orleans. The dependence of western farmers on the Spanish port also explains why New Orleans was considered a strategic city by the United States in the War of 1812.
Thomas Jefferson’s 1803 purchase of the Louisiana Territory had actually begun as an attempt to buy the city of New Orleans, and Andrew Jackson’s defense of the port during the War of 1812 was vital to insuring the success of western expansion.

Early westward expansion depended on rivers, and towns and cities built during this era were usually on a waterway. Pittsburgh, Columbus, Cincinnati, Louisville, St. Louis, Kansas City, Omaha, and St. Paul all owe their locations to the river systems they provide access to. Buffalo, Cleveland, Detroit, Chicago, and Milwaukee utilize the Great Lakes in the same way. These lakeside cities exploded after the Erie Canal opened a route from the Great Lakes to the Atlantic, and allowed New York to overtake New Orleans as the nation’s most important commercial port. The 363-mile Erie Canal was so successful that another four thousand miles of canals were dug in America before the Civil War.

In 1800, it took nearly two weeks to reach Buffalo from New York City, a month to get to Detroit, and six grueling weeks of travel to arrive at the swampy lake-shore settlement that would become Chicago. Thirty years later, Buffalo was just five days away, Detroit about ten days, and Chicago less than three weeks. Horses pulled canal boats from towpaths on shore, eliminating the strain of travel for the boats’ passengers. Floating along on calm water was infinitely more comfortable than spending weeks on a wagon, in a cramped stage coach, or on horseback. The number of people willing to make long trips increased accordingly. And the amount of freight shipped to New York, after the canal cut shipping costs by over ninety percent, increased astronomically.

Goods flowed along the Canal in both directions, offering life-changing opportunities. As mentioned previously, within ten years of the Erie Canal’s completion, the last fulling mill processing homespun cloth in Western New York shut its doors. Women no longer had to spend their time spinning wool and weaving their own textiles to make their family’s clothing. They could buy bolts of wool and cotton fabrics from the same merchant at the local general store who ground their family’s grain into flour and shipped it on the Canal to eastern cities. With fewer demands on their time, many women were able to not only improve their own quality of life, but contribute to family income by taking in piece-work, raising cash crops, or keeping cows and churning butter for sale to their local merchants.

The Age of Steam

Steam technology changed the nature of transportation. Until steam engines were put on riverboats, shipping had depended on either wind and river currents or on human and animal power. Goods could easily be floated south from farms on the nation’s rivers, but it was much more difficult and expensive to ship products against the rivers’ currents to the frontier. Flatboats and rafts accumulated at downstream ports, and were often broken down and burned as firewood. Steam engines made it possible to sail upstream as easily and nearly as quickly as down, causing an explosion of travel and shipping that radically changed frontier life.

Steam engines were a product of early European industrialism. The first steam patent was granted to a Spanish inventor named Jerónimo Beaumont in 1606, whose engine drove a pump used to drain mines. Englishman James Watt’s 1781 engine was the first to produce rotary power that could be adapted to drive mills, wheels, and propellers.
Robert Fulton, an American inventor who had previously patented a canal-dredging machine, visited Paris and caught steamboat fever. Fulton sailed an experimental model on the Seine, and then returned home and launched the first commercial American steamboat on the Hudson River in 1807. The Clermont was able to sail upriver 150 miles from New York City to Albany in 32 hours. In 1811, Fulton built the New Orleans in Pittsburgh and began steamboat service on the Mississippi. Although Robert Fulton died just a few years later of tuberculosis, his partners Nicholas Roosevelt and Robert Livingston carried on his business, and the age of riverboats was underway.

Like Fulton’s prototype and the Clermont, the New Orleans was a large, heavy side-wheeler with a deep draft. It was not the most efficient design for shallow water, and it did not take long for ship-builders to settle on the familiar shallow-draft rear-paddle riverboats that carried freight on the Mississippi and its tributaries well into the 20th century. The shallower a riverboat’s draft, the farther upriver it could travel. Steam-powered riverboats soon pushed the transportation frontier to Fort Pierre in the Dakota territory and even to Fort Benton, Montana. Riverboats made it possible to ship goods in and out of nearly the whole area Thomas Jefferson had acquired in the Louisiana Purchase just a generation earlier. And steam-powered ocean shipping made the markets of Britain and Europe readily accessible to farmers and merchants in the middle of North America.

The other transportation technology enabled by steam power, of course, was the railroad. But railroads were even more revolutionary than steamboats. In spite of their power and speed, steam-powered riverboats depended on rivers or occasionally on canals to run, but a railroad could be built almost anywhere. Suddenly, the expansion of American commerce was no longer limited by the routes nature had provided into the frontier.

America’s first small railroads had actually been built on the East Coast before a steam engine was available to power them. Trains of cars were pulled by horses and looked a lot like stage-coaches on rails. But after Englishman George Stephenson’s locomotives began pulling passengers and freight in northwestern England in the mid-1820s, Americans quickly switched to steam. The first locomotive used to pull cars in the United States was the Tom Thumb, built in 1830 for the Baltimore and Ohio Railroad. Although Tom Thumb lost its maiden race against a horse-drawn train, Baltimore and Ohio owners were convinced by the demonstration of steam technology and committed to developing steam locomotives. The railroad, which had been established in 1827 to compete with the Erie Canal, already advertised itself as a faster way to move people and freight from the interior to the coast. Adding steam engines accelerated rail’s advantage over canal and river shipping.

Over 9,000 miles of track had been laid by 1850, most of it connecting the northeast with western farmlands. The Mississippi River was still the preferred route to market from Louisville and St. Louis south. But Cincinnati and Columbus became connected by rail to the Great Lake ports at Sandusky and Cleveland, giving the northern Ohio Valley faster access to New York markets. Detroit and Lake Michigan were also connected by rail, making the long steamboat trip around the northern reach of Michigan’s lower peninsula unnecessary.
By 1857, rail travelers could reach Chicago in less than two days and could be almost anywhere in the northern Mississippi Valley in three. On the eve of the Civil War in 1860, Chicago was already becoming the railroad hub of the Midwest. The Illinois Central Company had been chartered in 1851 to build a rail line from the lead mines at Galena to Cairo, where the Ohio and Mississippi Rivers joined. Galena is also located on the Mississippi on the northern border of Illinois, but rapids north of St. Louis made transporting ore on the river impossible, illustrating the advantage of rails over rivers. A railroad line to Cairo, with a branch line to Chicago, would also attract settlers and investors to Illinois. Young Illinois attorney Abraham Lincoln helped the Illinois Central lobby legislators and receive the first federal land grant ever given to a railroad company. The company was given 2.6 million acres of land, and Illinois Senator Stephen Douglas helped design the checkerboard distribution of parcels that would become common for railroad land grants. The map below shows the extent of the land the government gave to the Illinois Central Company, which a few years later showed its gratitude by helping to finance Lincoln’s Presidential campaign against Douglas.

The North’s advantage over the Confederate South in railroad miles and the Union Army’s ability to move troops and supplies efficiently had a definite impact on the outcome of the Civil War. In the years following the war, the shattered South added very little railroad track and repaired only a small percentage of the tracks the Union Army had destroyed during the war. While railroads languished in the South, rail miles in the North exploded. In 1869, the West Coast was connected through Chicago to the Northeast, when the Union and Central Pacific lines met at Promontory Summit, Utah, on May 10th. The building of a transcontinental railroad was made possible by the Pacific Railroad Act, which President Lincoln had signed into law in 1862.

Public or Private?

The Pacific Railroad Act was the first law allowing the federal government to give land directly to corporations. Previously the government had granted land to the states for the benefit of corporations. The Act granted ten square miles of land to the railroad companies for every mile of track they built. Land next to railroads always increased in value. The unprecedented gift of ten square miles of rapidly-appreciating land for every mile of track was a tremendous incentive to railroad companies to lay just as much track as they possibly could. Decisions to build lines were frequently based on the land granted, rather than on whether or not railroad companies expected the new lines to carry enough traffic or generate enough freight revenue to pay for themselves.

In the eighteen years between the original Illinois Central grant of 1851 and the completion of the transcontinental line in 1869, privately-owned railroads received about 175 million acres of public land at no cost. This amounts to about seven percent of the land area of the contiguous 48 states, or an area slightly larger than Texas. For comparison, the Homestead Act distributed 246 million acres to American farmers over a 72-year period between 1862 and 1934, but required homesteaders to live on and to farm the land continuously for five years or pay for their parcel. The justification for the residency requirement was that the government was concerned homesteaders would become speculators and flip their farms.
Railroad land grants were made with no similar stipulations because railroad corporations were expected to sell the lands they were given at a substantial profit.

It has often been argued that a national infrastructure project as large as a transcontinental railway could never have been built without government assistance. The West Coast and western territories needed to be brought into the Union, some historians have argued, and the only way to achieve this was with government-supported railroads. Ironically, the same people who make this argument usually also claim that it would have been disastrous for the government to have owned the railroads it had made possible with its legislation, loans, and land grants. An undertaking of this scope and scale, they say, requires that corporations be given monopolies and grants of natural resources and public credit. These arguments make it seem inevitable that giant corporations taking huge gifts from the public sector were the only way for America to move forward and build a rail network. However, history shows that this was not the only way a national rail system could have been built. There are numerous examples of rail systems built and managed by the public sector in foreign countries, especially during the nineteenth century when nearly every rail system outside the United States was state-owned and operated. However, for the sake of simplicity we will restrict the comparison to the United States.

The Northern Pacific Railway, a private corporation chartered by Congress in 1864, built 6,800 miles of track to connect Lake Superior with Puget Sound. In return, the corporation was given 40 million acres of land in 50-mile checkerboards on either side of its tracks. Not only did the Northern Pacific rely on the government for land and financing, the railroad used the services of the U.S. Army to protect its surveyors and to move uncooperative Indians out of its way. When the Northern Pacific’s proposed route cut through the center of the Great Sioux Reservation, established by the 1868 Fort Laramie Treaty, the corporation pressured the government to break the treaty. George Custer announced that gold had been discovered in the Black Hills after an 1874 mission protecting Northern Pacific surveyors, and Washington let the treaty be disregarded by both the railroad and the prospectors. The Indians responded with the Great Sioux War of 1876, which culminated in the Battle of Little Big Horn, where Custer and his Seventh Cavalry were wiped out by Sitting Bull and Crazy Horse leading a force of Lakota, Cheyenne, and Arapaho warriors. But although the Indians won the battle, they lost the war. Less than a year later, Sioux leaders ceded the Black Hills to the United States in exchange for subsistence rations for their families on the reservation.

In contrast, Canadian-American railroad entrepreneur James Jerome Hill built his Great Northern Railroad line from St. Paul to Seattle during the last decades of the nineteenth century without causing a war and without receiving a single acre of free public land. The Great Northern bought land from the government to build its right of way and to resell to settlers. Hill claimed proudly that his railway was completed “without any government aid, even the right of way, through hundreds of miles of public lands, being paid for in cash.” The Great Northern system connected the Northwest with the rest of the nation through St. Paul, using a web of over 8,300 miles of track.
And because Hill only built lines where traffic justified them rather than adding track just to collect free land, the Great Northern was one of the few transcontinental railroad companies to avoid bankruptcy in the Panic of 1893.

Regardless of the ways they were financed and built, the proliferation of railroads caused explosive growth. Chicago was a frontier village of 4,500 people in 1840. When Lincoln helped the Illinois Central receive the first land grant in 1851, the city’s population was about 30,000. Twenty years later Chicago was the center of a rapidly-growing railroad network, and the city held ten times as many people. In 1880 Chicago’s population was over 500,000, and ten years later Chicago had over a million residents. We will take a closer look at the changes railroads brought to Chicago in Chapter Seven.

America’s transportation revolution did not end with steamboats and railroads and was not limited to public transportation technologies. The development of the automobile ushered in a new era of personal mobility for Americans. Internal combustion engines were inexpensive to mass produce and much easier to operate than steam engines. With the development of automobiles and trucks around the turn of the twentieth century, it no longer required a huge capital investment and a team of engineers to purchase and operate motorized transportation. Even the workers on Henry Ford’s assembly lines could aspire to owning their own Model Ts, especially after Ford doubled their wages to $5 a day in January 1914.

Engineers had experimented with building smaller machines using steam engines, and there were several examples in Europe and America of successful steam-powered farm tractors, trucks, and even a few horseless carriages. But internal combustion engines delivered much greater power relative to their mass, allowing smaller machines to do more work. The first internal combustion farm tractor was built by John Froehlich at his small Waterloo Gasoline Traction Engine Company in 1892. Others began applying internal combustion to farm equipment, and between 1907 and 1912 the number of tractors in American fields rose from 600 to 13,000. Eighty companies manufactured more than 20,000 tractors in 1913. After an auspicious beginning, Froehlich’s little Iowa company grew slowly and began building farm tractors in volume only after World War I. The Waterloo company built a good product, and was acquired by the John Deere Plow Company in 1918. Deere remains the world leader in self-propelled farm equipment.

The first internal combustion truck was built by Gottlieb Daimler in 1896, using an engine that had been developed by Karl Benz a year earlier. World War I spurred innovation and provided a ready market for internal combustion trucks that were much less expensive than their steam-powered rivals. By the end of the war gasoline-powered trucks had overtaken the steam truck market. Most large trucks now burn diesel fuel rather than gasoline, using a compression-ignition engine design patented by Rudolf Diesel in 1892. Internal combustion trucks and tractors, like cars, allowed people to go farther, carry more, and do more work than had been possible using human and animal power. And they were much more affordable than comparable steam-based vehicles and easier to build at a scale that encouraged individual use and ownership. Trucking eventually challenged rail transport, especially after the development of semi-trailers and the Interstate Highway System.
Although the first diesel truck engines only produced five to seven horsepower, they advanced quickly. Indiana mechanic Clessie Cummins built his first, six-horsepower diesel engine in 1919. The business bearing his name is now a global corporation doing $20 billion in annual business, mostly in diesel engines. Cummins’s current heavy truck engine is rated at 600 horsepower.

While it is easy to focus on the inventions and technological innovations of the internal combustion era, we should not lose sight of the infrastructure improvements that made these innovations valuable. Without paved roads to run on, there would have been far fewer cars and trucks and their impact on society and the environment would have been much different. The biggest road-building project in American history was the construction of the Interstate Highway System, financed by the Federal-Aid Highway Acts of 1944 and 1956. Unlike the transcontinental railroad project of the 1860s, the Interstate Highway System was paid for by the federal government and the roads are owned by the states. The system includes nearly 47,000 miles of highway, and the project was designed to be self-liquidating, so that the cost of the system did not contribute to the national debt. In addition to the Interstate System, American states, counties, cities and towns maintain systems of roads totaling nearly four million miles, about two-thirds of which are paved.

Gasoline vs. Ethanol

The economic trade-off of internal combustion for the farmers and teamsters who first adopted it was that speed and power came at a price. Where horses and oxen were readily available in farm communities and were cheap to maintain, tractors and trucks were a substantial investment. And unlike horses and oxen, tractors and trucks needed to be fueled with petroleum that made them dependent on a faraway industry. However, this dependence was not inevitable. Henry Ford and Charles Kettering, the chief engineer at General Motors, had both believed that as engine compression ratios increased, their companies’ engines would transition from gasoline to ethyl alcohol. We are all aware that the shift to ethanol did not happen, but why it did not is less well-known and may surprise you.

Most history books faithfully repeat the inaccurate story that Edwin Drake’s famous 1859 oil strike in Titusville, Pennsylvania, came just as the world was running out of expensive whale oil. Actually, there was a thriving market for alcohol fuel in the mid-nineteenth century United States. Ethanol was price-competitive with kerosene, and unlike kerosene it was produced by many small distillers, creating widespread competition that would continue to drive down prices. Unfortunately for ethanol producers and fuel consumers, the alcohol fuel industry was wiped out when the Lincoln administration imposed a $2.08 per gallon tax on distilled alcohol between 1862 and 1864. A gallon of Standard Oil kerosene still cost only 58 cents, so kerosene took over the American fuel market. Of course, after kerosene became the only available fuel, Standard Oil was free to raise prices as it saw fit.

But ethanol still had its advocates. The very first American internal combustion engine, built in 1826 by Samuel Morey, had used grain alcohol because it was inexpensive and readily available. Nearly a century later, Henry Ford’s Model T was designed to be convertible between kerosene, gasoline, and ethanol.
General Motors chief engineer Kettering was convinced it was only a matter of time until ethanol became the fuel of choice. So why aren’t we all driving cars running renewable fuels? Part of the answer, as you have probably already guessed, is that Standard Oil made the auto industry an offer they couldn’t refuse. The oil company used its vast distribution network to make gasoline available everywhere it was needed, and insured that the price was so low that competitors could not profit if they entered the market. Standard Oil pioneered the practice of pricing below their cost of production to run competitors out of business. The profits of the company’s many other divisions subsidized their short-term losses on gasoline. Predatory pricing was one of the principal charges made against the company in the 1911 antitrust case that resulted in the breakup of the Standard Oil Trust.

But Standard Oil’s predatory pricing does not tell the whole story of why we do not run cars on ethanol. The rest of the story, if anything, is even more sinister. It has long been known that using gasoline at high compression results in engine knocking. It was also well-known that ethanol did not knock. Charles Kettering at General Motors had argued for years that the “most direct route which we now know for converting energy from its source, the sun, into a material suitable for use as a fuel is through vegetation to alcohol.” The technology was simple and Americans had been distilling alcohol fuels for generations. Unfortunately, Kettering worked for a corporation whose major shareholder was the Du Pont family, who also happened to own the largest corporation in the chemical industry. It would be impossible for DuPont to profit or for General Motors to gain a competitive advantage using alcohol fuels, since the distilling technology was universally available and the product was un-patentable.

However, there was an extremely profitable alternative. Tetraethyl lead (TEL) was a compound that could be added to gasoline to eliminate knocking. General Motors received a patent on its use as an anti-knock agent, and Standard Oil was granted a patent on its manufacture which was later extended to include DuPont. The three companies founded Ethyl Corporation to market TEL and other fuel additives. Unfortunately, lead is a powerful neurotoxin, linked to learning disabilities and dementia. The federal government had misgivings about allowing lead additives, and in 1925 the Surgeon General temporarily suspended TEL’s use and government scientists secretly approached Ford engineers seeking an alternative. In the 1930s, 19 federal bills and 31 state bills were introduced to promote alcohol use or blending. But the American Petroleum Industries Committee lobbied hard against them. Under intense industry pressure, the Federal Trade Commission even issued a restraining order forbidding commercial competitors from criticizing Ethyl gasoline as unsafe.

By the mid-1930s, 90 percent of all gasoline contained TEL. Airborne lead pollution increased to over 625 times previous background levels, and the average IQ levels of American children dropped 7 points during the leaded-gas era. By the 1980s, over 50 million American children registered toxic levels of lead absorption and 5,000 Americans died annually of lead-induced heart disease. When public concern continued to increase, the Ethyl Corporation was sold in 1962 in the largest leveraged buyout of its time.
In the 1970s the newly-established Environmental Protection Agency finally took the stand other federal agencies had been afraid to take. The EPA declared emphatically that airborne lead posed a serious threat to public health, and the government forced automakers and the fuel industry to gradually eliminate the use of lead. TEL is now illegal in automotive gasoline, although it is still used in aviation and racing fuels. Unleaded gasoline is now used in all new internal combustion cars. But while pure ethanol has powered most automobiles in Brazil since the 1970s, most Americans continue to use a blend containing just 10% ethanol to 90% gasoline. Two additional forms of transportation became increasingly important as the twentieth century ended and the twenty-first century began. Commercial airplanes are only a little over a hundred years old and the first air cargo and airmail shipments were flown in 1910 and 1911. Air cargo was considered too expensive for all but the most valuable shipments until express carriers such as UPS and Federal Express revolutionized the shipping business in the 1990s. The global economy now measures air freight volumes in ton-miles. In 2014, the world shipped more than 58 billion ton-miles of goods. Air freight also allows perishable items like fresh fruits and vegetables to be transported across oceans and continents from producers to consumers. This is a big business. Over 75 million tons of fresh produce are air-shipped annually, worth more than $50 billion. For nonperishable items, container shipping has created a single global market. Standardized containers were invented by a trucker named Malcolm McLean, who realized it would save a lot of time and energy if his trucks didn’t need to be loaded and unloaded at the port, but could just be hoisted on and off a cargo ship. McLean refitted an oil tanker and made his first trip in 1956, carrying fifty-eight containers from Newark to Houston. Current annual shipping now exceeds 200 million semi-trailer sized containers. Containers can be shipped by sea, rail, truck, and even air, allowing just-in-time operators like Wal-Mart to manage a supply chain that relies much less on warehoused inventory, and more on product in transit. But just as shifting from horse power to a gasoline truck or tractor a hundred years ago involved economic trade-offs, shopping at Wal-Mart today introduces a new level of dependence. We not only rely on transportation systems and the fuels they run on, but also on supply-chain software, international trade agreements and currency fluctuations, and even on the political situations of faraway nations. As long as the costs of inputs like fuel and infrastructure like ports, highways, and open borders remains low, the global market is a great deal for the consumer and a source of immense profits to businesses and their shareholders. But a company like Wal-Mart is just as dependent on factors it cannot control as its customers are. If any of these factors change, who will bear the cost? - Bill Kovarik, “Henry Ford, Charles Kettering and the Fuel of the Future,” Automotive History Review, Spring, 1998. Available online at www.environmentalhistory.org - Marc Levinson, The Box: How the Shipping Container Made the World Smaller and the World Economy Larger, 2006. - Vaclav Smil, Creating the Twentieth Century: Technical Innovations or 1867-1914 and their Lasting Impact, 2005 - George Rogers Taylor, The Transportation Revolution, 1815-1860, 1977.
Copyright 2011 Rick Boozer In my earlier article about the experience of earning my Masters degree, I mentioned that, “I learned so many fascinating new things that I never suspected, nor had I ever seen them mentioned in any popular astronomy publication. For instance, in certain unusual cases, there is a way to use radio signals from quasars to give image resolutions of less than a microarcsecond. That’s much finer than the Hubble Space Telescope’s best resolution and even better than from the widest baseline radio interferometer with dishes on opposite sides of the world!” After some reflection, I thought it might be a good idea for me to share some of the cutting edge information about quasars with fellow astronomy enthusiasts. Indeed, as shall be shown, astronomical discoveries may have unexpected practical applications. Thus I wrote this article. However, before I cover the new stuff, some background information may be in order. In astronomy it is not unusual that a strange new discovery is considered too far away to have practical uses, but later reveals a useful application beyond anything that came to mind when it was first detected. For instance, helium was discovered from its absorption lines in the Sun’s spectrum long before it was physically identified on Earth. This observation sparked a worldwide search until it was found in certain oil wells in the U.S. Of course, this discovery brought the myriad industrial and scientific applications for which helium is now used. No wonder the name of this gas is derived from the Greek word Helios meaning Sun. In this article I present information about a very strange and incredibly remote type of object that at first glance may appear to have little significance to our understanding of local phenomena or our everyday lives. However, these objects offer surprising new investigatory directions into the understanding of interstellar conditions within our own small section of the Milky Way galaxy, while also yielding at least one “down to Earth” practical application and a possible utility that future interstellar travelers may want to use. Quasi-stellar Objects or QSOs are some of the brightest known objects in the universe seen in gamma rays, X-rays, ultraviolet and visible light. It is not unusual for a QSO to shine at a level millions of times greater than the entire radiant output of our galaxy! QSOs appear extremely faint because of their cosmically extreme lookback distances of anywhere between nearly a billion to around 13 billion light-years. Though they are not stars, they are usually seen as a star-like point of light and so are called quasi-stellar. Of course, even members of the general public have heard the name for a special type of QSO that emits very strongly in radio frequencies: quasar for quasi-stellar radio object. Somewhat confusingly, it is not uncommon nowadays for scientists to call a QSO a “quasar” even when referring to a QSO that does not appear as a strong radio emitter. The Nature of the Beast Residing at the very center of a host galaxy, the source of the QSO’s immense radiant power is an Active Galactic Nucleus AKA an AGN. An AGN consists of an enormous black hole massing the equivalent of hundreds of millions or even billions of Suns along with a surrounding accretion disk of swirling matter that is pulled inward by the black hole’s gravitational field. It is the accretion disk that is the source of the intense radiation. 
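To get a feel for the physical scale of such an engine, here is a small back-of-the-envelope calculation of my own (it does not come from any of the papers cited later): the event-horizon radius of a black hole grows linearly with its mass, r_s = 2GM/c^2, so a billion-solar-mass black hole already spans something like a planetary orbit.

```python
# Illustrative sketch only; the masses chosen below are hypothetical examples.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def schwarzschild_radius_au(mass_solar: float) -> float:
    """Event-horizon radius r_s = 2GM/c^2, expressed in astronomical units."""
    return 2 * G * mass_solar * M_SUN / C**2 / AU

for m in (1e8, 1e9, 1e10):
    print(f"{m:.0e} solar masses -> r_s ~ {schwarzschild_radius_au(m):.0f} AU")
# 1e9 solar masses gives a horizon of roughly 20 AU, about the size of
# Uranus's orbit -- a planetary-system scale, not a galactic one.
```

A horizon of roughly 20 AU is comparable to the orbit of Uranus, which helps make sense of the claim below that the entire emitting engine fits within something like a Solar System diameter.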
Heat induced by the compression of continually in-falling gases piling up within the accretion disk brings the disk to incandescent brilliance. The intense magnetic field produced by the rotating black hole attracts some of the accretion disk's matter away from the disk. Matter siphoned off in this manner then gets shot out in two opposing continuous jets of plasma near the black hole's north and south magnetic poles. In some instances, the black hole's magnetic field is so strong that the plasma is ejected at speeds approaching that of light! This plasma may emit magnetically induced radio emission that is highly polarized, known as synchrotron radio waves. A classical quasar's strong radio signature is the result of this synchrotron radiation.

The tiny star-like appearance of a QSO is not only due to its enormous distance, but also to its relatively compact physical size. Shortly after their initial discovery in the 1960s, it was observed that the optical brightness of an entire QSO would sometimes vary drastically over time periods as small as several days. Because nothing can travel faster than light, these short pulses of increased brightness imply that the AGN can be no bigger than the distance light could travel from one side of the QSO to the opposite side during the fluctuation period. Thus it was deduced that the emitting part of a QSO could be no more than mere light-days to light-months across: a distance much smaller than the typical separation of stars in the most tightly packed part of a normal galaxy. Indeed, the radiation equivalent of billions of Suns can be confined to a region within the AGN that is roughly on the scale of our own Solar System's diameter! As the reader will see, sophisticated observational techniques developed in later decades confirmed the relatively small size of a QSO's AGN and even allowed a highly accurate direct measurement of its diameter.

The short-term optical variation is normally the result of processes occurring within the QSO itself. A description of what is known of these processes would fill a long article all by itself; therefore, coverage of this topic is not appropriate for this short written piece and would also deflect our attention from other interesting features of QSOs. From this point on, our focus will be on observed variations in a QSO's radio emission and some surprising discoveries associated with these observations.

The Mystery of Rapid Radio Signal Variations

One perplexing problem presented itself when radio intensity variations measured in hours were seen. It was calculated that, for these fluctuations to originate in the QSO itself, the QSO's temperature would have to be at least 10¹⁹ Kelvin! This temperature seemed absurdly high. Quantum mechanics dictates that any body with a temperature greater than about 10¹² K should emit copious amounts of a special type of gamma ray known as inverse Compton radiation. (Savolainen and Kovalev 2008) It then seemed unlikely that the QSO was actually causing the radio fluctuations when no inverse Compton radiation was detected. (Tsang and Kirk 2007) Clearly some mechanism external to the QSO must be in play.

Observed radiation fluctuations, whether they are in visible light, radio, or any other part of the electromagnetic spectrum, are called scintillation. All of us have seen scintillation of visible light with our own eyes as the twinkling of stars in the night sky, and an explanation of this twinkling may give some insight into its radio counterpart.
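Here is a minimal numerical sketch of that causality argument; the sample variability timescales are my own illustrative choices, not measured values.

```python
# Causality bound: a source that varies coherently over a time dt can be no
# larger than roughly c * dt across. Example timescales below are hypothetical.
C_KM_S = 299_792.458       # speed of light, km/s
SECONDS_PER_DAY = 86_400.0
AU_KM = 1.496e8            # astronomical unit, km

def max_diameter_km(variability_days: float) -> float:
    """Upper bound on the emitting region's diameter, in kilometers."""
    return C_KM_S * variability_days * SECONDS_PER_DAY

for dt in (2, 10, 60):
    d_km = max_diameter_km(dt)
    print(f"{dt:2d}-day flicker -> diameter <= {dt} light-days "
          f"~ {d_km:.1e} km ~ {d_km / AU_KM:,.0f} AU")
# A two-day flicker caps the emitter at a few hundred AU; even a two-month
# flicker allows well under a light-year.
```

Even the most leisurely of these flickers caps the emitter at a small fraction of a light-year, which really is tiny compared with the light-years separating stars in even the densest parts of a galaxy.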
In the case of visible star scintillation, turbulence in the upper atmosphere causes changes in the optical properties of a high level layer of air to make the starlight seen by an observer appear to either vary rapidly in brightness or cause equally fast apparent shifts in the star’s position. Astute sky gazers may notice that even when the stars twinkle noticeably, any planets that are visible at the same time will shine with an unvaryingly steady light. Why is there a marked difference in the perception of these two types of objects when light from each is passing through turbulent cells of air? The farther away from an observer that an object of a given physical size is, the smaller it appears to be. Expressed differently, the object’s apparent size measured as an angle will be smaller with increasing distance. Turbulent air cells generally measure but a fraction of an arcsecond across. Though a star’s physical size is much greater than a planet’s physical size and much greater still than any air cell’s physical size, the immense distance of the star is so great that its apparent angular size is much smaller than the apparent angular size of the invisible cells of air turbulence that are causing the twinkling; therefore, any changes in the foreground turbulence will greatly affect the appearance of the star. Though the apparent angular diameter of a planet may be so small that an earthbound observer will perceive it as a point, it is still proportionately much closer to the observer than would be any star. In other words, the ratio of a planet’s physical diameter to its distance is enormously larger than the ratio of a star’s physical diameter to its distance. Since a planet’s apparent angular diameter is also usually much larger than the apparent angular diameter of any cell of air turbulence, the light from the planet will appear not to vary. The following illustration depicts this principle and is, of course, not drawn to scale. The observer’s location is marked with X and both turbulent air cells are of identical absolute physical size and of equal absolute distance from the observer. Though the star is physically much larger than the planet, the planet is much closer. Thus the apparent angular diameter, α, of the star is smaller than the apparent angular diameter, β, of the planet. The cell in front of the star has a wider apparent angular diameter than the star; therefore, the star twinkles. The same size cell in front of the planet has a smaller apparent angular diameter than the apparent angular diameter of the planet, so that the planet does not twinkle. Getting back to radio variations of QSOs, the question being asked was, “Were the observed rapid fluctuations being induced into the signal as the QSO’s radio waves traveled through some turbulent cell of material on their way toward Earth?” The first clue that this situation might be the case came in 1998 when two radio telescopes located extremely far apart (one in Australia and one in New Mexico) made simultaneous observations of a QSO designated PKS 0405-385. When a particular intensity fluctuation pattern appeared at the Australian radio telescope, the same variation would show up approximately two minutes later at the New Mexico instrument. This situation was very strange because if the variation was intrinsic to the QSO, the same variation should have shown up on the New Mexico instrument only milliseconds later, that is, after the time it takes light to travel the distance between the two instruments. (Jauncey et al. 
2002; Bignall et al. 2007; Savolainen and Kovalev 2008) Soon, simultaneous observations of other QSOs revealed delay times that were often much longer.

The extremely tenuous gas and dust spread throughout the space between the stars within our galaxy is called the Interstellar Medium, or ISM. Most of it contains only a few hydrogen atoms per cubic centimeter and is thus a better vacuum than the best that science has ever achieved, though randomly interspersed throughout the ISM are occasional denser clouds of dust and gas. As all amateur astronomers know, some of these nebulae can be seen in visible light. In other words, the nebulae may shine by reflecting the light of nearby stars, or their constituent atoms may absorb ultraviolet radiation from local stars and re-emit the absorbed energy as visible light. Conversely, a cloud may be dense enough that the dust within it absorbs the light of the stars behind it, producing what appears to be a black void in the sky that is commonly referred to as a coal sack. Another alternative may also occur, where such a cloud is so extremely thin that it is nearly or completely invisible at optical wavelengths.

After the observations were made by the Australians and Americans, astrophysicists strongly suspected that QSO radio scintillation could be the result of turbulent cells encountered as the radio waves from the QSO pass through the last type of cloud described in the immediately preceding paragraph. That would explain the excessively long difference in arrival times seen at the widely separated radio receivers. In other words, a particular turbulent cell might induce a characteristic fluctuation pattern at the Australian radio telescope, but that same cell might have to travel for a few minutes before it was in a position to cause the same fluctuation to appear at the American radio telescope. Other corroborating evidence was needed to clinch this conclusion, but that confirmation was not long in coming.

Something very strange began to be seen in very short-term radio intensity variations in QSOs that went up and then down over time spans ranging from minutes to several days. A gradual, orderly change in these short-term scintillation patterns showed up over the course of a year, and the same cycle of change repeated in following years. (Jauncey et al. 2002; Linsky et al. 2007; Savolainen and Kovalev 2008) As any scientifically literate person knows, a year is the time it takes the Earth to complete one orbit around the Sun. It was soon realized that this was the clincher as far as proving that the scintillation was being induced by material in the ISM relatively close to us. The speed of the Earth's orbit around the Sun is around 30 km s⁻¹; however, the speed of the material in the local ISM is also close to 30 km s⁻¹. (Jauncey et al. 2002) When the motion of the Earth is approximately parallel to the velocity of the ISM, they have a low relative speed and the variation of the scintillation pattern is slow. But six months later, when their motions are in opposite directions, they have a high relative speed and the variations are observed to be much faster. Thus, we are given conclusive proof of two facts: 1) that the variations are turbulence-induced scintillation, and 2) that the Earth indeed orbits around the Sun a la Copernicus! (Jauncey et al. 2002; Savolainen and Kovalev 2008) After this conclusive evidence was obtained, short-term variations of QSO radio intensity were christened Interstellar Scintillation, or ISS for short.
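Both clues can be put into rough numbers. The sketch below is my own illustration; the ~12,000 km separation between the Australian and New Mexico dishes is an assumed, approximate figure, while the 30 km s⁻¹ speeds are the ones quoted above.

```python
# Rough numerical sketch of the two scintillation clues; baseline is assumed.
C_KM_S = 299_792.458          # speed of light, km/s
BASELINE_KM = 12_000.0        # assumed rough Australia-New Mexico separation
V_EARTH = 30.0                # Earth's orbital speed, km/s
V_ISM = 30.0                  # local ISM flow speed, km/s

# Clue 1: an intrinsic fluctuation would sweep over both dishes almost at once.
print(f"light-travel delay across baseline: {BASELINE_KM / C_KM_S * 1e3:.0f} ms")
# ~40 ms -- "milliseconds", nothing like the ~2 minutes actually observed.

# Clue 2: annual modulation. The scintillation timescale varies inversely with
# the Earth-screen relative speed, which swings over the course of a year.
v_parallel = abs(V_EARTH - V_ISM)      # Earth moving with the ISM flow
v_antiparallel = V_EARTH + V_ISM       # six months later, moving against it
print(f"relative speed: ~{v_parallel:.0f} km/s vs ~{v_antiparallel:.0f} km/s")
# Near-zero relative speed -> slow pattern drift -> slow variations;
# ~60 km/s -> fast variations, repeating every year.
```

That large seasonal swing in relative speed is exactly the kind of yearly signature that no mechanism internal to the QSO could produce.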
Any turbulent interstellar cloud inducing radio variation is called a screen. But the evidence got even better. When computer models were constructed using the fluctuation times as input data, the theoretical predicted distance and position for each screen was almost a perfect match for a known “local” thin interstellar cloud that was at least barely detectable in either visible light or ultraviolet light! (Linsky et al. 2007) All of the evidence put together was about as close to a smoking gun as one ever gets in science. But here is the exciting part. Those same radio variations can be used to reveal fine details of the structure of a QSO in far greater resolution than any ground-based or orbiting telescope is capable of accomplishing! For decades the finest resolutions astronomers attained were achieved using a technique called Very Long Baseline Interferometry or VLBI. VLBI involves multiple radio telescopes observing the same object at the same time but separated by thousands of kilometers to give them the same resolution as a single stupendous radio telescope with a dish as wide as the distance between the two most widely separated radio telescopes. However, the scintillation technique even out-performs VLBI. The previously introduced analogy of visible atmospheric scintillation when stars twinkle may be extended to illustrate how such incredibly fine resolution is obtained. Remember that if an object located behind a turbulent cell of air (from the point of view of the observer) has a smaller apparent angular diameter than the cell, the object will appear to scintillate. But if the object has a bigger apparent angular diameter than the cell, no twinkling is seen. What if an observer was able to somehow detect a turbulent air cell and measure its apparent angular diameter? During the course of a night, a number of different turbulent air cells of varying diameters might come between the observer and an observed object. The observer would then be able to notice the maximum apparent angular diameter of a cell that caused twinkling and a minimum apparent angular diameter for a cell that did not cause twinkling. He/she would then know that the apparent angular diameter of the observed object would have to be an angle with a size between the diameters of the former and the latter. As mentioned before, turbulent cells within an interstellar screen cause the radio scintillations that are equivalent to atmospheric twinkling. The method described in the immediately preceding paragraph has been used to measure the extremely tiny angular diameter of various QSOs. In fact, the resolution obtained is so fine, that astronomers have even resolved structures within the AGNs of some of the closer QSOs! Angular resolutions on the order of 1 micro-arcsecond can be achieved. In comparison, this resolution is around 1000 times finer than that of the Hubble Space Telescope at its shortest usable wavelength! (Jauncey et al. 2002) And since the distance to a QSO can be determined from the amount of cosmological redshift observed in its emitted light (grist for another entire article), the actual physical size of the QSO can be calculated from its apparent angular diameter. In this case, even assuming a QSO is halfway across the observable universe at a lookback distance of about 6.8 billion light-years, a structure of a mere three light-months in physical diameter can be measured. (Jauncey et al. 
2002) But just as radio scintillation can be used to gather information about a QSO, it can also be employed to investigate the properties of interstellar space in our neighborhood. This convenient situation is the result of the fact that the screening clouds have to be relatively close to our solar system. How do we know this? There is a maximum distance away from us that a turbulent cell of a particular physical size can be and still induce scintillation in a QSO. This distance is where the apparent angular diameter of the cell equals the apparent angular diameter of the QSO. Any farther away would lead to a situation in which the angular diameter of the QSO would be greater than the angular diameter of the turbulent cell and thus no scintillation would occur. (Bignall et al. 2007) So the fraction of material capable of producing fast variability is restricted to the ISM in the Sun’s vicinity. Furthermore, the scarcity of detected screens relative to the overall number of QSOs observed indicates that such clouds of scattering material are few and far between in our immediate section of the galaxy. (Bignall et al. 2007) ISS observations indicate that there are on average 1.7 screens along any line of sight, with a typical line of sight usually having between only 1 and 3 screens (Linsky et. al 2007) One may wonder what produces the turbulence in the interstellar cloud material. It is thought that areas of the highest scintillation-causing turbulence occur at places where the outer edges of two or more of these clouds come in contact with their different speeds of motion and travel direction. The slightly different velocities of the two clouds produce turbulence where they interact. (Linsky et. al 2007) Because they are on the outside of the cloud, these border edges lack shielding from ionizing radiation put out by one giant blue-white star that is relatively near our Solar System and several local white dwarf stars. The result is a much larger than normal number of fast freely moving electrons that increase turbulence to an even higher level, making these interacting areas hot beds for the production of scintillation. (Linsky et. al 2007) Finding Our Way Around the Earth with QSOs Everyone nowadays is familiar with the Global Positioning System that employs a fleet of special navigational satellites in Earth orbit. GPS has pretty much totally supplanted celestial navigation for ship and airplane travel. Even more immediate to people’s everyday lives is the fact that the technology has trickled down to the individual level in the form of automobile navigation systems and emergency location in life or death situations. But to figure locations anywhere on the face of the Earth within an accuracy of mere meters requires exacting determination of satellite positions at ultra-precise times. Constantly occurring variations in the tilt of Earth’s axis have to be continually taken into account for the system to function with pinpoint accuracy. The tilt variations are detected by referencing the locations of QSOs because their distances are so immense that their motion is not detectable as a change in the object’s position and thus they “stay put” in their apparent relative places all over the sky. VLBI measurements have been used to obtain precise positions of a number of QSOs and have been compiled into a catalog to serve as base navigational references. The catalog is called the International Celestial Reference Frame abbreviated ICRF. (Ma et al. 
1998) Beyond Terrestrial Navigation Finally, it would stand to reason that QSOs might eventually be used as a natural “galactic” GPS in the event that humanity ever achieves the capability to travel multiple light-year distances. Extremely miniscule changes in observed relative positions of QSOs in relation to each other would be attributable to a traveler’s change in position within the galaxy and thus could be used for navigation purposes. Assuming measurement capabilities continue to progress as they have heretofore, it is not unreasonable to expect that equipment to measure such incredibly minute deviations may be achievable by any future civilization technically advanced enough for interstellar travel. Who knows what other uses we’ll find for these exotic objects as time goes on? For that matter, what as-yet-to-be-conceived applications may follow once we know more about the nature of what we now call dark matter and dark energy? After all, those two vaguely descriptive names were chosen because we couldn’t choose better ones since we don’t really know what those properties physically represent! In short, judging by the past history of scientific discovery, it would seem unwise for anyone to say that any particular realm of scientific knowledge will always only be of purely academic interest. Bignall, H.E, D. L. Jauncey, J. E. J. Lovell, A. K. Tzioumi, J-P. Macquart, and L. Kedziora-Chudczer “Observations of Intrahour Variable Quasars: Scattering in our Galactic Neighbourhood” Astronomical and Astrophysical Transactions, 26 (2007) 567 - 573 Jauncey, David, Hayley Bignall, Jim Lovell, Tasso Tzioumis, Lucyna Kedziora-Chudczer, J-P Macquart, Steven Tingay, Dave Rayner and Roger Clay, “Interstellar Scintillation and PKS 1257-326” ATNF News (October 2002) Linsky, Jeffrey L., Barney J. Rickett, and Seth Redfield “The Origin of Radio Scintillation In the Local Interstellar Medium”, The Astrophysical Journal, 675 (2008) 413-419 Ma, C, E. F. Arias, T. M. Eubanks, A. L. Fey, A.-M. Gontier, C. S. Jacobs, O. J. Sovers, B. A. Archinal and P. Charlot, “The International Celestial Reference Frame as Realized by Very Long Baseline Interferometry”, The Astronomical Journal, 116 (1998) 516-546 Savolainen,T. and Y. Y. Kovalev. “Serendipitous VLBI Detection of Rapid, Large-amplitude, Intraday Variability in QSO 1156+295” Astronomy and Astrophysics 489 (2008) L33-L36 Tsang, O. and J. G. Kirk “The Inverse Compton Catastrophe and High Brightness Temperature Radio Sources” Astronomy and Astrophysics 463 (2007) 145-152 The lookback distance is how far the light traveled from an object to reach the Earth. In the case of the farthest detectable QSOs, this distance is about half of the true present-day distance between the Earth and the QSO – called the comoving distance. The reason why is that the universe was continually expanding while the light was en route, causing the Earth and the QSO to become further and further apart during the transit time as the space between them was stretched wider by the expansion. A wave of light is lengthened (i.e., cosmologically redshifted) because the space it is traveling through is stretched by the continual expansion of the Universe; which in turn, stretches the wave of light. Some inaccurately term it as a cosmological Doppler shift. But the relative motion of a light emitting object, not the Universe’s expansion, causes a true Doppler shift!
Author: Alan H. Epstein

Once aircraft no longer required human operators, they could be made much smaller.

Since the first controlled powered flight by the Wright brothers 100 years ago, aircraft size and range have been important measures of progress in aviation. Large payloads have always required large aircraft (although the definition of large has evolved over time). In the early days of aviation, the fuel capacity required for long-range flight also dictated large size; thus, size and range were coupled. The first aircraft sold by the Wright brothers in 1909 to the U.S. government, the Wright B Flyer, had a 40-foot wing span, a takeoff weight of 1,400 pounds, a range of 90 miles, and required a pilot and an observer. In terms of payload and range, the B Flyer was a state-of-the-art airplane. Ten years later, the first aircraft to cross the Atlantic Ocean, the Navy-Curtiss NC-4, had a wingspan of 126 feet, a crew of six, and a takeoff weight of 27,000 pounds. This airplane was designed by NAE member Jerome Hunsaker. Because the NC-4 had a range of only 1,400 miles flying at 75 miles per hour, it had to be refueled along the way. By the time of Lindbergh's solo nonstop transatlantic flight in 1927, aeronautical technology had progressed to the point that an aircraft with a wingspan of 46 feet and weighing only 5,100 pounds could fly nonstop 3,600 miles from New York to Paris. In one sense, the Lindbergh airplane had an optimal design - the flight control, navigation, and payload were embodied in one person. Twelve years passed before the next transatlantic milestone, the start of scheduled commercial service in 1939 by the 84,000-pound Boeing Model 314 Clipper with a 152-foot span and a nominal range of 5,200 miles at 184 miles per hour. By this time, large size was dictated as much by commercial considerations (basically seat-mile cost) as by technical considerations. Commercial considerations continued to dictate the size of aircraft throughout the jet era.

Ninety-five years after the Wright brothers' flight, in 1998, technology had advanced sufficiently that the Aerosonde, a model airplane with a 10-foot span and weighing about 30 pounds, could cross the Atlantic. Lindbergh's heroic, lone pilot was replaced by inexpensive avionics, a microprocessor, and a satellite navigation receiver. Like the NC-4, the Aerosonde cruised at about 75 miles per hour. In 2003, the even smaller TAM-5, a model airplane weighing just 11 pounds, made a similar transatlantic flight. In contrast to this smallest transoceanic flyer, we now have commercial aircraft weighing more than a million pounds with transpacific range. Thus, by the end of the first century of flight, the connection between aircraft range and size that had been so important in the early years of aviation had been broken.

In military aircraft, range and size have been important historically, but military planners now focus on achieving effects, such as destroying a building. Figure 2 presents a historical perspective of the number of sorties (one flight by one aircraft) flown to ensure a hit on a 60-by-100-foot building. In World War II, B-17s flew more than 3,000 sorties. A B-17 required an aircrew of 10 and at least that many in the ground crew, so more than 60,000 people were involved, a true army of the air.
By the time of the Vietnam War, only 44 sorties were required to hit a 60-by-100 foot building. In the latest Iraq war, only one was required. So the 750,000 tons that took flight in 1944 were reduced by a factor of 30,000 to 25 tons. Because the number of aircraft on a mission cannot be reduced below one, further reductions in aerial mass for military missions must be in aircraft size. The bombers I just discussed all had gross takeoff weights of about 50,000 pounds. How large must an aircraft be to get the job done? Consider, for example, the task of destroying a well defended target, such as a bridge. In 1972, this was typically done with F-4 Phantoms, an expensive proposition given the inaccuracy of unguided bombs and aircraft loss rates. One contemporary analysis put the price at $12 to $15 million, not including the cost of necessary "extras," such as combat air patrols, tankers, and defense suppression, which together formed a typical "strike package" of 12 to 24 aircraft. In the 1991 Gulf War, two cruise missiles with a range comparable to that of an F-4 (without refueling) could accomplish the same job for about $2 million. In 1972, more than a million pounds of aircraft left the runway at the beginning of a mission; 25 years later only 4,000 pounds were needed, a 25-fold reduction in gross weight and a 10-fold decrease in cost, not to mention the value of the lives that were saved. Uninhabited Air Vehicles These dramatic decreases in vehicle size and mission cost were principally enabled by the microelectronics revolution, which reduced avionics mass while providing real-time computation, navigation, electro-optics, and autonomy that greatly improved the accuracy of weapons and so reduced the mass of the weapons required. Perhaps most important, microelectronics have enabled the elimination of people from aircraft, thereby removing an important limitation to shrinking aircraft size. Uninhabited (and largely autonomous) air vehicles (UAVs) are the latest innovation in military aircraft. Although the concept of UAVs, and examples of limited utility, have been around for more than 50 years, advanced avionics have enabled highly capable UAVs only in the last decade. The value of the UAVs used in recent conflicts has been acknowledged by the operational community. The largest of the current vehicles is Global Hawk, a high-flying reconnaissance aircraft weighing about 25,000 pounds with the wingspan of a small airliner. Global Hawk has a demonstrated transpacific range and endurances of longer than 24 hours. Its smaller and perhaps better known cousin, Predator, is the size of a light plane. In the war in Afghanistan, Predator became the first UAV to fire weapons in combat. The concept of uninhabited fighter and bomber aircraft is being advanced in flight testing of the X-45 and X-46 uninhabited combat air vehicles (UCAVs). In the Iraq conflict, a host of smaller UAVs have been fielded, some weighing as little as 5 to 10 pounds; these include Pointer, Dragon Eye, and Desert Hawk. These small UAVs are flown by teams of a few soldiers for local reconnaissance or base security surveillance. As a historical footnote, these model airplane-sized UAVs sell for about $25,000, the same price the Wright brothers charged the U.S. government for its first airplane. Jane’s All the World’s Aircraft is an 800-page annual compendium of airplanes currently produced and used around the world. 
The popularity of UAVs can be judged by the size of Janes’ Unmanned Aerial Vehicles and Targets, which is just as thick. The primary attraction of smaller UAVs is their low cost compared to the cost of manned aircraft. For this reason, UAVs are being embraced by operators, even though they may have less capability than the more expensive manned systems they replace. The low price of UAVs may eventually alter the business landscape of military aviation. Because of the low development cost of small UAVs, new, small companies can and are entering the market, resulting in a proliferation of offerings. In contrast, current large aerospace concerns are organized to produce $100-million airplanes in $100-billion programs. If large manned programs are displaced by low-cost UAVs, large companies will find themselves challenged to establish viable business models for systems that cost only a few tens of thousands of dollars. Military combat pilots also feel threatened as it seems increasingly likely that no new manned combat aircraft will be in production by midcentury. As military aviation transitions to and gains experience with UAV operations, it is also increasingly plausible that unpiloted flight will move into the commercial air transport arena. Routine transport flights are no more challenging than combat missions in terms of decision making and autonomy, so that it is reasonable to extrapolate that in the not too distant future, routine commercial flights could be automated. But what about nonroutine events and emergencies? It is true that pilots with extraordinary skills have saved severely damaged aircraft. But less skilled pilots are responsible for the majority of aviation accidents. In the history of commercial jet aviation, about 70 percent of accidents have involved crew error. As aircraft have become more mechanically reliable, one category of error, controlled flight into terrain (the pilot literally flying an airworthy airplane into the ground because of a lack of situational awareness) has become the leading single cause of fatal accidents. Crew error implies that appropriate, established procedures were not followed. Thus, automation may not have to be as flexible as the very best pilots in all possible situations to maintain current standards of air safety, or even to improve upon them. Automated controls that do nothing more (or less) than reliably "follow the book" may result in a safer air fleet on average than we have now. Technical issues aside, civil aviation may be harder to change than military aviation. Labor is a major issue in civil aviation, and it is difficult to imagine that pilot unions will look kindly on automated transports. Perhaps only new organizations that do not have such stakeholders will be able to deploy the new technology. Once automated transport airplanes are ready to go, will anyone dare to board them? Probably not without a significant experience base in air cargo or military operations and strong additional inducements, such as low fares. Indeed, the last few decades of air travel suggest that the traveling public values low fares above all else. At the start of the second century of flight, it is appropriate to ask how large an airplane need be. Aircraft exist mainly for transportation - of people, freight, sensors, ordinance, etc. We can classify missions as either mass specific (carrying things by the pound, such as freight or people) or function specific (accomplishing a task, such as reconnaissance, air superiority, or ground attack). 
The payload for a mass-specific mission is fixed by assumption (although in many cases the payload could be divided among several vehicles if that proved advantageous). In contrast, the payload mass for a function-specific mission is not fixed. It is determined by payload physics, contemporary technology, and the level of investment. How small can a payload be? Take the example of destroying a bridge. Given the accuracy of current weapons (10 meters or less), perhaps 500 to 1,000 kilograms of explosives are required. But the minimum explosive mass required to destroy a bridge might be the mass carried by a human sapper who places a few charges at key locations, no more than 10 to 100 kilograms. With advances in explosives and warhead design, the amount might be reduced even further. A sufficiently capable group of future small air vehicles could deliver their payload right to the critical spots, flying under the bridge if necessary, and accomplish the mission with one-tenth to one-hundredth the total payload required by current guided weapons. This implies that the vehicles might be an order of magnitude or more smaller than today’s guided weapons for many targets. Another example is a visual reconnaissance mission. Thirty years ago a payload consisting of a television camera and microwave downlink weighed about 100 kilograms and was carried on a remotely piloted vehicle with a takeoff weight of 1,000 kilograms. That same payload functionality can be realized today in a gram or two. Thus, the vehicle required to carry a payload that can accomplish the same mission could weigh no more than 50 to 100 grams. Based on this idea, the Defense Advanced Research Projects Agency (DARPA) initiated a microair vehicle (MAV) program in the late 1990s. DARPA somewhat arbitrarily defined an MAV as an air vehicle measuring 6 inches or less in every dimension. The concept is traceable to a Rand Corporation workshop in 1992 (Hundley and Gritton, 1992) that was developed in depth at the Massachusetts Institute of Technology Lincoln Laboratory (Davis et al., 1996). One of the most successful aircraft developed in this program was the AeroVironment Black Widow shown in Figure 3 (Grasmeyer and Keenon, 2001). With a 6-inch wing span and a 56-gram mass, it can fly for nearly 30 minutes on high-performance batteries and broadcast color video images from a distance of 2 kilometers. The Black Widow was a capable flyer, but it was subject to a hazard peculiar to very small, quiet air vehicles - bird attacks. For large aircraft, such problems have largely been limited to 1950s horror movies. The DARPA program was a great learning experience for both the technical and operational communities. The technical community discovered that the 6-inch size was right at the edge of the state of the art at the time. Some aspects of the 6-inch airplane came out as expected. Aerodynamics at this scale could be readily calculated so that the poor performance relative to large airplanes (lift-to-drag ratios of 3 to 6 rather than 15 to 20) was no surprise. Instruments for navigation and control (such as 6-gram GPS receivers and 1-gram gyroscopes) were commercially available. Designing subsystems that did not exist at this scale was more difficult. Propulsion and power systems were two of the most vexing problems. The smallest model airplane engine is ten times too large for a 6-inch airplane that needs about 2 watts of flight power to cruise at 15 meters per second. 
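Those numbers hang together. As a rough check of my own (not a calculation from the article), the power needed for steady level flight is approximately the weight times the flight speed divided by the lift-to-drag ratio, ignoring motor and propeller losses:

```python
# Sanity check using the Black Widow-like figures quoted above:
# 56 g mass, ~15 m/s cruise, lift-to-drag ratios of roughly 3 to 6.
G = 9.81  # gravitational acceleration, m/s^2

def cruise_power_watts(mass_kg: float, speed_m_s: float, lift_to_drag: float) -> float:
    """Aerodynamic power for steady level flight, P = W * V / (L/D);
    propulsive (motor/propeller) losses are deliberately ignored."""
    weight_n = mass_kg * G
    return weight_n * speed_m_s / lift_to_drag

for l_over_d in (3.0, 4.5, 6.0):
    p = cruise_power_watts(0.056, 15.0, l_over_d)
    print(f"L/D = {l_over_d:3.1f} -> ~{p:.1f} W")
# Roughly 1.4 to 2.7 W, consistent with the ~2 W flight power cited in the text.
```

So a Black Widow-class airframe plausibly needs on the order of 2 W of aerodynamic power, and noticeably more electrical power once propulsive efficiency is included; that is part of why energy storage dominates the design.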
Battery propulsion had the advantages of low noise and availability, but the best commercial batteries store far less energy per unit mass than the jet fuel used by a modern gas turbine engine (more than an order of magnitude less, even after allowing for the higher efficiency of electric propulsion). The energy storage problem was exacerbated by the relatively low volume of a small airplane. Poor aerodynamics combined with limited energy storage space resulted in short flight times, tens of minutes at best. Low transmit powers and small antennas limited radio communications to a range of 2 to 4 kilometers. Stability and control systems proved to be challenging as well. These airplanes were first flown by remote control rather than autonomously, but the dynamics of aircraft at this size are too fast for most people to control. Fast dynamics can be tamed with control loops, but this requires a very small, very fast servo motor, much smaller than was then available on the market.

Payload was another challenge. In the original Lincoln Laboratory study, the technological potential for payloads was in the range of a few grams; DARPA chose not to invest in payloads in its program. Thus, these early vehicles were limited to daylight imaging cameras. System integration was difficult because there was no room for components, such as electrical connectors, in an aircraft with a mass budget of only a few grams of avionics.

The Army and Marines were supportive of the MAV concept, although they wanted aircraft with longer endurances and ranges, day/night imaging, and the ability to fly and perch in urban environments - capabilities that first-generation MAVs could not provide. Also, although these vehicles may be disposable in wartime, they are still too expensive to lose in peacetime training. The principal conclusion in 2000 was that a 6-inch aircraft was too small for a vehicle with the desired performance characteristics. Thus, the near-term focus was shifted to the 8- to 16-inch size range, which has improved the aerodynamics and is better suited to existing payload and subsystem technologies.

What about in the longer term? One way to address the question is to look at the engineering disciplines underlying much of aviation - aerodynamics, propulsion, and structures. Figure 4 shows the trend in the size of air vehicles with payloads that are a constant fraction of the initial mass. The calculations for this figure are for subsonic aircraft optimally designed for maximum range at each size (Drela et al., 2003). The figure also shows the performance of the Global Hawk and TAM-5, which have many design requirements in addition to long range, especially low cost. The structure becomes more efficient as aircraft size decreases, but the aerodynamics and propulsion become less efficient. The cubed-square law results in the volume decreasing relative to the area as size is reduced. Thus, smaller aircraft (less than a few thousand kilograms) have somewhat inferior aerodynamics and propulsion and relatively less fuel volume, and therefore less range. Nevertheless, aircraft that weigh less than a kilogram can have very useful, even transoceanic range. This should come as no surprise because birds as small as a 6-gram hummingbird migrate more than 2,000 kilometers without refueling. As airplanes are scaled down, the aerodynamic and engine performance, the volume/wetted area, and the communications range all decrease while the necessary control bandwidth increases. However, some aircraft parameters do not vary with size.
Flight speed, for example, is a function of installed power, not size; flight altitude is a function of flight speed. Recent transatlantic model airplanes had cruise speeds and altitudes comparable to those of the 1919 NC-4 because the performance of current model airplane engines is only marginally better than the performance of the large engines of 80 years ago. Takeoff and landing speed and, therefore, runway length scale with cruise speed and installed power, rather than aircraft size. This implies that 6-inch transonic aircraft would require runways comparable in length to those of large aircraft (1 to 2 miles). Of course, the runways don't have to be very wide. A more practical solution may be based on favorable structural scaling with decreasing size, which implies that variable geometry is much less costly for very small aircraft than for very large ones. Birds make good use of variable geometry when landing and taking off.

A major reason for the relatively low performance of current MAVs is the poor performance of subsystems, especially the propulsion subsystem. Historically, small airplanes have had small budgets to solve large engineering challenges, and progress is paced by the level of investment. The recent development of semiconductor-based micromachined devices, known as microelectromechanical systems (MEMS), has opened the way to new approaches to small engines. A micromachine gas turbine engine, for example, is sized to power an MAV in the 50- to 100-gram class (Figure 5) (Epstein, 2003). The performance of early versions will be no better than the performance of early turbojets in the 1940s, but greatly superior to the performance of battery-powered vehicles. These engines should increase flight speed and extend the range of MAVs. Other approaches, such as high-performance, miniature, internal combustion engines and fuel cells, are also being pursued. MEMS, improved microelectronics, and associated technologies can greatly improve air vehicles, down to the scale of a few inches. Can aircraft be smaller still? Most flying insects are an order of magnitude smaller than MAVs, but there is comparatively little quantitative analysis or engineering experience at these length scales. Developments in biotechnology and nanotechnology may help, but, at the moment, neither the utility nor the challenges of subcentimeter-span aircraft are well understood. One thing is clear, though: smaller is more difficult.

Forecasting the future of aeronautics can be risky. History is littered with inaccurate predictions of the future of flight. Lord Kelvin once opined that "aircraft flight is impossible." Millikan, von Kármán, Kettering, and others stated in a 1941 National Academy report that "the gas turbine can hardly be considered a feasible application to airplanes." Unbeknownst to them, the first jet plane had flown in Germany the previous year. Nevertheless, it seems fitting that I close with some remarks about what lies ahead. First, autonomous air vehicles of all sizes will predominate, especially in military aviation, the inevitable result of the continued development of microelectronics and software. Second, the performance of UAVs will improve rapidly both because of increased investment and because small systems can be developed much more quickly than large military aircraft, which now take decades to reach the field. Indeed, progress can be faster for small aircraft for much the same reason that geneticists study fruit flies rather than elephants.
Third, improving technology will enable the development of very capable air vehicles as small as a few inches in size. Large aircraft will always have their place, but the future of aeronautics will be small. Brzezinski, M. 2003. The Unmanned Army. New York Times, April 20, 2003. Davis, W.R., B.B. Kosicki, D.M. Boroson, and D.F. Kostishack. 1996. Micro air vehicles for optical surveillance. Lincoln Laboratory Journal 9(2): 197-213. Drela, M., J.M. Protz, and A.H. Epstein. 2003. The role of size in the future of aeronautics. Paper no. AIAA-2003-2902. Reston, Va.: American Institute of Aeronautics and Astronautics. Epstein, A.H. 2003. Millimeter-Scale, MEMS Gas Turbine Engines. ASME Paper GT-2003-38866. In Proceedings of ASME Turbo Expo 2003. New York: ASME. Grasmeyer, J.M., and M.T. Keenon. 2001. Development of the black widow micro air vehicle. Paper no. AIAA-2001-0217. Reston, Va.: American Institute of Aeronautics and Astronautics. Hundley, R.O., and E.C. Gritton. 1992. Future technology-driven revolutions in military operations: results of a workshop. Santa Monica, Calif.: RAND National Defense Research Institute. See PDF version for figures.
- Self reflection refers to your in-depth awareness of the cognitive, emotional, and behavioral aspects that govern your life. - This is a process of mirroring and assessing yourself consciously. - Self reflective people are thoughtful, introspective, and purposeful. - They know their wishes and desires, dreams and goals, strengths and weaknesses. - When you know the deeper aspects of yourself, self reflection takes place. Are you aware of your inner psychic processes? Have you ever tried to reflect upon your deepest feelings? Do you know the values that guide your life and living? Have you been in a process of self-awareness? If yes, then you are already conscious of the process known as Self reflection. It is also commonly known as reflective awareness. In this process, you delve deep within yourself and try to understand your thoughts, feelings, desires, attitudes, motivation, and much more. People who are self-reflective are aware of themselves. They actually know themselves much better than anyone else. Maybe all the deeper parts of oneself get revealed through this contemplative process. Let’s understand this reflection process in detail… Self reflection – definition and meaning Self reflection is the process to understand, evaluate, and give thought to one’s inner mental workings. It’s about reviewing and mulling over the cognitive and emotional aspects that make you who you are. When you are contemplating and giving serious thought to your inner psychic processes, it means you’re into a process of self reflection. You’re analyzing your character, motives, wishes, and desires. Maybe you are trying to get an idea about all those psychic processes that regulate your actions and behavior. Sometimes, self reflection also means an introspective process. You’re trying to communicate with your inner world and reflect upon the weaknesses that are restricting you to achieve your life goals. This type of personal reflection helps you to take time out of your everyday life. You are meditating on your thoughts, feelings, and behavior. It is a deep dive analysis of you that unleashes your deepest secrets and wildest choices. When you know yourself in the best possible ways, you’ll be able to carve a life of your choice. People who self reflect are aware of their attitudes and motives. They know what matters to them in life. What does self reflection look like? Self reflection is the ability that helps in self-awareness. It leads to personal growth and is en route to happiness and success. The process gives you a clear idea of what your values and goals are. It makes you aware of the ‘why’ and ‘how’ behind your thoughts, feelings, and actions. It’s a highly useful process because it helps to untangle conflicts, heal wounds, and remove the deepest scars of your life. Self reflection takes out buried memories that might not be pleasant. You’re already on the path of self-validating and accepting your feelings as they are, without evaluations. This reflective awareness acts as a control mechanism. It allows you to look beyond consciousness. You are made to dig deeper into the unconscious and reflect upon those thoughts that actually mirror your true self. When you know yourself well, you become an aware being. You know what drives you to take action and improve your life and living. Signs of self reflection You must be eager to know the typical signs of self reflection. So let’s get started: - Acceptance of your helpful and unhelpful thoughts. 
- Self reflection shows a non-judgmental acceptance of your feelings. - It leads to admitting mistakes. - When you reflect and mirror your thoughts, you become a self-aware person. - You will never avoid taking hard decisions in life - There will be fewer worries about past and future happenings. - Clarity of feelings comes from self reflection. - Free expression of feelings occurs. - The person has less nitpicky tendencies. - You will not become defensive when others try to evaluate you. - You will prefer to live in the moments of life. - Asking questions to ‘self’ will occur too often. - Many unanswered questions will come to your mind. - There is so much that you are unaware of. Maybe seeking solutions to your problems also. - No pretensions but only acceptance. - You will have varied perspectives and it will make you more tolerant and accepting of others’ mistakes. - You will focus on reaching your full potential. - Honesty is the key that keeps you aware of what you love, what you hate, and what you can’t let go of in life. - You will choose your words wisely so that it appears firm, yet respectful for others to follow. Why is self reflection important? Self reflection is a conscious practice of self-awareness. You are trying to understand and perceive things, and trying to evaluate the psychic processes deeply. Thus, this process is not just experiencing, but it refers to a non-judgmental mastery of your thinking and feeling processes. Do you remember the last moment when you closely watched your inner processes? Are you aware that you need to reflect on these processes to improve your life? The reality is we live in a fast-paced world. We all are busy realizing big goals in life. In this hustle-bustle, we tend to forget that we really need a break. We ignore our deepest desires and wishes and have forgotten to slow down and push the pause button in our life. Self reflection teaches us just that. It reminds us to take some time out and look within ourselves. Let’s see how self reflection can give you a definite purpose in life. The points of importance of self reflection The power of self reflection and its subtle influences on your daily life are as follows: - Self reflection helps to improve our perceptions. It reduces cognitive bias and helps to decide with clarity. - It improves self-connection. - Self reflection is important to know your deepest pains and sufferings so that you can move towards emotional healing. - This process helps to identify faulty thoughts and beliefs so that they cannot consume and overpower you. - Self reflection improves your understanding of the world around you. - It helps you to fine-tune your needs, wishes, and aspirations according to the demands of social living. - Self reflection is a mindfulness practice that makes you aware of all things happening in your life. - It helps you to respond more and react less in stressful situations because now you’re quite good at emotional self-regulation. - It reveals the mysteries that lie in you. - Self reflection can put to rest all your anxious thoughts. - It stops the mind chatter and makes you feel relaxed and poised. - You are aware of your life’s challenges and can take up necessary actions to overcome them. - Self-love and self-awareness come from reflecting on your thoughts and feelings. - It embarks you on a journey of realizing your fullest potential. - You will be able to make life choices with awareness and clarity. - Self reflection is an art in itself. 
It enables you to know the strengths and weaknesses of your personality.
- Your inner dialogue asks you several in-depth questions and provides clues to why you think, feel, and behave the way you do.

The key elements of self reflection

Do you know that real self reflection is not as common as we may think? Most people self reflect just for the sake of doing it. In the real sense of the word, self reflection is a mirror of our inner world. It's a retrospection of our mistakes and drawbacks; you are noticing, assessing, and evaluating every small detail of your inner world. If you are into the process of self reflection, you must know that it involves three key elements: openness, observation, and objectivity.

Openness

If you want to self-reflect, you will have to be an open-minded person. An open-minded attitude refers to your ability to see things as they are. It means flexibility in outlook and attitudes. Being an open-minded person, you will appreciate your deepest feelings as they are. You will never try to judge your feelings as right or wrong.

Openness is the key to self reflection. It allows self-acceptance. You may be able to understand your weaknesses in a new way and try to overcome them. You will have fewer stereotypes about yourself and the people around you. Openness helps you to cultivate your qualities. You become an informed being who is ready to take on the world. Openness can be developed in the following ways:

- Attend to your thoughts and feelings.
- Resolve your conflicts in a healthy way.
- Accept other people as they are.
- Be flexible and never judge people by their outward behavior. Look for the hidden intentions behind their actions.
- Accept your mistakes openly and feel responsible for your actions.

Observation

This element of self-reflection symbolizes watchfulness and monitoring of your thoughts, beliefs, attitudes, and behavior. If you have good powers of observation, you'll quickly understand your maladaptive and disturbing thoughts. It's likely that you'll also get to know your negative emotions and maladaptive behavior patterns. Observation improves attention, and you will have a better idea of what's going on inside you. Observation can lead to self reflection by making you aware of the following things:

- Awareness of your unhelpful thoughts
- Unhealed wounds and trauma
- Broken relationships, if any
- Anxieties and fear responses
- Irrational thoughts
- Faulty beliefs
- Clarity about your points of view and varied opinions

Observation can be improved by attending to the moments of life. If you cultivate mindfulness, you'll be able to calm down your wandering mind. Sometimes self reflection is possible only if you can put a stop to the continuous flow of unhelpful thoughts and negative feelings. Mindfulness meditation connects you to an inner world of absolute bliss. Thus, this is a wonderful way to self reflect. It can actually turn you into a fully functioning being who is self-aware, conscious, and alert.

Objectivity

Self reflection relies on objective thinking and rational judgments about yourself. It refers to your ability to separate your feelings and behavior from biased attitudes and evaluations. Being objective means you're free from stereotypes. Your evaluations are based on logic and reasoning. Most often we have irrational and intrusive thoughts that cloud our perception of reality. We cannot see and judge things in an objective way. Thus, we fall prey to wrong beliefs about ourselves. These act as obstacles to self reflection.
You need to be a fair-minded person to live a righteous life. Self reflection is not possible if your ideas are biased and prejudiced. You need to detach yourself fully from biased ideas to become an aware and informed being. You can cultivate objectivity in the following ways:

- Do not assign meaning to your thoughts, because meanings are highly subjective in nature. No two individuals will assign the same meaning to a particular situation.
- Accept all your feelings in a non-judgmental way.
- Stop cultivating your biased ideas.
- Try to remain open-minded and flexible as much as possible.
- Use a thought diary: write down your thoughts and their immediate triggers, and note the feelings that come with these thoughts.
- Try to reframe your negative thoughts with positive self-talk.
- Remind yourself that your thoughts (good and bad) are a part of you, so accept them as authentic and true. Avoid judging these thoughts; in most cases they are irrational, unhelpful, and biased anyway. This will lead to developing good objectivity skills.

Self reflection exercises

Self reflection is a time when you look back on yourself. Just as a mirror shows the real 'you,' self reflection mirrors your inner processes. It makes you see, feel, realize, and understand the secrets of your unconscious mind. Sometimes it is necessary to check in with yourself and review how 'you' are doing as a whole. Self reflection exercises simply help you do this contemplation in the right way.

Self reflection tests your patience and tenacity. You'll learn secrets about yourself that were unknown or that you never wished to know. Self reflection is all about self-examination. It's a learning tool that increases awareness and guides your life's path toward success.

For example, suppose you were given the responsibility of handling a big project but you didn't perform up to the mark. If you sit quietly, you will self-reflect on all the things that happened over the last few days, or even weeks. In this process, you will ask yourself several questions just to get an idea of what went wrong. Why did you fail miserably? Did you set the performance bar too high? These questions are examples of self reflection. Maybe you are trying to understand and evaluate what could have been done to improve your skills, and thereby improve the outcome of the project.

You can start self reflection by looking into past happenings, rewinding each episode and uncovering the hidden issues one by one. To get started, we have some exercises for you. When you don't understand how to initiate the process of self reflection, you can follow these guidelines as an easy practice tool.

Exercise 1 – To understand why the breakup happened after so many years

If you were given the opportunity to identify the causes of the breakup by self-reflecting on your life, how would you do it? Just set aside 10 minutes and ask yourself some reflective questions:

- What have I learned from my relationship?
- Was I really happy, or was I pretending to be happy?
- How much accountability do I bear for this relationship?
- Did I react much more than was necessary?
- What barriers were there between the two of us?
- Should I give this relationship one more chance?
- What could have been done to avoid this breakup?
- Was it my fault? Did I give 100% to the relationship?
Exercise 2 – You failed in an office project and heard harsh words from the boss

Careful self-reflection helps you learn big lessons in life. You become aware of your shortcomings and learn how to turn them into opportunities for growth and innovation.

- What lessons did I learn from this failure?
- Do I need to show more patience in understanding my competitors?
- What barriers came up, and how should I deal with them?
- Did I acquire any new knowledge?
- What will I do next time to become successful?
- How can I improve my work skills so that my mistakes are not repeated?
- Should I speak about my problems in the office?
- Should I seek support from co-workers if needed?

Exercise 3 – To reflect on your life's purpose

- Do I know my life goals?
- What purpose makes me aware of myself?
- What is the one thing in my life that matters the most?
- Will I be able to align my thoughts, feelings, and actions with my life's purpose?

15 benefits of self reflection

After learning so much about the process of self reflection, you must be eager to know how it benefits your physical, mental, and emotional well-being. Self reflection is for the brave-hearted. It's all about asking yourself several questions and answering each one honestly. While doing so, you'll get hold of the deeper aspects of yourself. There are several short-term and long-term benefits of self reflection. Some of them are as follows:

1. You will get to know your values

Knowing oneself is not as easy as it may sound. But if you try doing it consciously, nothing is impossible. Self reflection helps to identify the core values that guide your life. The deeper you delve inside yourself, the easier it becomes to know your likes, dislikes, weaknesses, and so on. In this way, you will become an aware being.

2. Realizing your potential

Your potential consists of the skills and abilities that make you who you are. When you know your innate talents, you can nurture them at any time. You will take action to turn this potential into actionable goals. Then you will be able to focus your energy and invest it in achieving your goals.

3. Big picture thinking

If you think big, you will achieve big. Self reflection helps to highlight your strengths and turns your weaknesses into opportunities for further learning. A good practice of self reflection helps you get a sense of yourself and where you stand in the crowd. If your focus is good and you are aware of your life goals, you will never get sidetracked. Your focus will stay intact and the bigger picture of realizing your life goals will take concrete shape. Sometimes self reflection also allows you to set aside your emotions and think in objective ways. You know where you fall short and what needs to be done to correct yourself.

4. Self reflection helps to face fears and insecurities

The benefit of self reflection is that it allows you to become conscious of the things that are holding you back in life. It teaches you the art of conflict resolution and of letting go of the negative thoughts and feelings that act as barriers to personal growth. Self reflection helps in healing the wounds that can make you anxious. You become confident enough to face your fears and conquer them steadily.

5. Better decision-making ability

Self reflection helps in good decision-making. As you become conscious of your inner call, you become an informed person. Thus, you will not waste time on unnecessary things but will notice what matters most in life.
You will be less distracted and more empowered to make the right choices in life.

6. You will have wonderful relationships

Self reflection helps to develop emotional intelligence. You will be able to connect with others with humility and gratitude. You will have secure and loving relationships because you will accept people as they are, not the way you want them to be.

7. Less anxiety and stress levels

Self reflection is the key to mental well-being. The more aware you are of your inner world, the less you will dwell on what has not happened or what could happen in the future. Your worries will take a back seat. Your objective mind can reason out the causes of your anxiousness and then show you ways to eliminate anxiety fully.

8. Self reflection improves sleep

When you're less anxious, you will sleep well. Always be grateful for all the good things that life has offered you. Self reflection brings greater peace and happiness as well.

9. Self-acceptance and compassion

Self reflection leads to better awareness. You become a well-informed individual with lots of good talents to work on. You may become more compassionate towards others as well.

10. SWOT analysis

Self reflection helps to identify your threats and opportunities. When you are aware of your blind spots, you know how to overcome them. Moreover, you will be motivated to take decisive action to improve your personal and professional life.

11. Self reflection helps to develop plans and strategies for personal growth

Self reflection is a personal analysis process. It helps you to plan and chalk out ways to develop yourself. You will be able to think clearly and make an action plan to reach your goals. Your mind will focus on the problems that need solutions and will make plans accordingly. Self reflection helps to resolve issues on a priority basis. Your mind will not wander aimlessly; rather, it will work systematically to find solutions to the issues you are facing in reality.

12. Better self-esteem

Self reflection leads to a deeper understanding of ourselves. Thus, when we know our innate nature well, we will be able to show respect for who we are and what we can achieve in life. This improves our self-esteem and social regard.

13. Improves work performance

Self reflection enables you to know your weaknesses. Thus, you will work on those pitfalls to improve your work efficiency and performance. This will help you get better in your career and personal life, and it will lead to better self-confidence. Moreover, your resilience will improve and you will be able to face all odds with a lot of inner strength and courage.

14. General happiness and inner peace

Self reflection leads to improved happiness and general well-being. You will always act with purpose. You will feel agile and energetic in your life's journey.

15. Stress handling gets better

You will be able to tackle stressful situations in life in a better way. Your coping skills and problem-solving abilities will also improve. Thus, you will handle trying times with more confidence and courage.

How do you self-reflect? (12 ways to start a journey of self reflection)

Self reflection means giving serious thought to your psychological and emotional state of being. The process requires your willingness to delve deeper into the core processes that make you the way you are. There are several ways to practice self reflection. Here, we give you 12 ways to practice this art consciously and transform your life the way you want.
1. Asking questions to yourself

The art of self reflection starts with answering deep questions about yourself. Once you decide to self reflect, you will have to ask many questions to know what's going on in your life. Some thought-provoking reflection questions could be:

- What is my goal in life?
- Am I doing enough to reach my goal?
- What is holding me back these days?
- Am I comfortable talking about my deepest pains and sufferings?
- Do I need to practice more self-awareness every day to transform my world into a happier and more peaceful abode?
- What else can I do to hone my skills?
- How am I feeling right now?

2. Sit in silence

Sit in silence and observe all the thoughts that come to you in a given moment. Analyze your feelings and accept them without inhibitions and fears. Let a thousand trains of thought cross your mind while you remain calm, without doing anything. You'll feel empowered and in tune with yourself.

3. Keep a journal

Self reflection involves witnessing your thoughts and feelings come and go in a given moment. A great way to track them is by writing in a journal. This type of reflective essay or reflective note is a handy tool for keeping track of your thoughts and feelings. You can write about the details of your internal state of mind. It is a great way to look for old patterns and habits that you need to let go of, for good mental health and happiness. Journaling is a writing exercise. Whenever you feel confused about your puzzling thoughts, just sit down and write about everything and anything that comes to your mind. It's a great method to connect with yourself.

4. Take a nature walk

You can take a nature walk in a nearby park to declutter your mind. It relaxes you and helps you develop clarity and self-awareness. When you are away from home for some time, it helps you to rejuvenate. It improves your mood and clears up the mind. Thus, whenever you feel stuck in a rut of problems, try to get some fresh air. It gives you real 'me time' that can energize you. You will be in a better position to connect with yourself.

5. Talk out loud and express yourself

Sometimes, self reflection requires talking to oneself. If you want to connect with yourself at a deeper level, try to say out loud everything that crosses your mind. You will also be able to relate to your negative feelings about something and overcome them.

6. Practice breathing exercises

Deep breathing relaxes your wandering mind. It brings you into a conscious state of mind where you will be able to reflect upon your thoughts and feelings with clarity and awareness. You should practice breathing exercises to calm down and feel in control of your overwhelming emotions. Relaxation techniques stop the mind chatter and bring you to peace with yourself.

7. Analyze past events

A big part of self reflection involves understanding and analyzing old patterns of thought. These disturbing thoughts no longer serve any good purpose in your life. In order to let go of these thoughts and old habits, you'll have to be aware of them. This awareness comes from analyzing past events. Ask yourself whether your reactions to those events were well thought out or whether they were spur-of-the-moment reactions. The moment you get the answer, you become an informed person. Your self reflection is successfully done.

8. Practice gratitude

Think about what life has offered you. Feel grateful for all the good things and achievements that have come to you for so long.
Every day, reflect on at least three things that you are grateful for and feel the joy that fills your heart.

9. Keep track of how you feel

You should track your feelings regularly. This is a way to observe the recurring feelings that leave you unnecessarily overwhelmed. Notice the trends in your moods and whether you frequently get mood swings. Try to analyze the situations that make you feel bad. This helps a lot in self reflection. Identify the troubling emotions and release them along the way. Seeing and knowing the causes of your instant reactions is the best way to reflect on your feelings. It helps to create better response patterns.

10. Do a regular self-check

Simply sit down and think about your career, relationships, goals, and aspirations. A self-check involves analyzing each and every aspect of your life. It makes you realize the areas of your life that need improvement. Also, think about what new things you can do in your life. Are you happy with the efforts that you have put in so far to improve your life?

11. Set life goals that satisfy your purpose in life

You should develop specific life goals to reduce confusion and improve focus and clarity. The clearer you are about what you wish to do in life, the better positioned you will be to reflect on the things that may hold you back.

12. Seek support from family and friends

Sometimes self reflection can be a puzzling affair. You may lose sight of your thoughts and feelings. At times, you may feel overwhelmed and may feel that nothing is going according to plan. In such a situation, you should seek support from your family members and friends. They will guide you in the right direction and help you wade through emotional turmoil and unhelpful thoughts. Feel confident about your abilities, but at the same time remember that there is always help nearby. You can also consult a life coach or therapist to understand the deeper aspects of yourself.

Self reflection activities

Self reflection activities are tools and techniques that aim to develop your understanding of and clarity about yourself. They focus on introspection and personal empowerment. You can do several activities in your daily life to reflect on your wildest thoughts and deepest feelings. You will get to know why you behave the way you do, and why certain things come to you easily while others don't. Two self-reflection activities that suit people of all age groups are:

- Mindfulness meditation
- Grounding techniques

Mindfulness meditation

This activity makes you aware of the present moment. You neither ruminate about past happenings nor think too much about the unknown future. You're just living the moments of your life with absolute bliss and happiness.

- Clear away your negative thoughts.
- Be aware of your five senses and what information they are trying to convey.
- Remain open to all the experiences that come to you.
- Experience the inner solitude and reflect upon all the thoughts that come to you in that moment.
- Think about moments of anger, frustration, or anxiety and why they happened.
- Look for peaceful solutions to the problems you might be facing daily.
- Ask open-ended questions to understand the 'present' situation.

Mindfulness meditation allows you to let go of the thoughts, feelings, and faulty beliefs that no longer serve a good purpose in your life. It is one of the best activities for self-reflection and builds up a greater awareness of your inner world.

Grounding techniques

When we talk about grounding, we actually mean your connection with the energy of Mother Earth.
It means experiencing the moment and living in the 'here and now.' This technique is one of the best-known ways to self-reflect. It makes you attend to your senses and bring all your conscious thoughts to the present moment. One of the most popular techniques is the 5-4-3-2-1 technique. It is used to reflect on unhelpful thoughts and untapped emotions that were suppressed and were not serving any good purpose in your life.

Self reflection books

When you take some moments out of your busy schedule to reflect upon your thoughts and feelings, you actually feel empowered. Don't you? Exploring the underlying causes of your emotions and behavior is highly insightful. To develop this insight in the right way, you can read some self-improvement books that are both encouraging and motivating. Some of the best-sellers are given below:

1. The Self-Aware Universe: How Consciousness Creates The Material World – Amit Goswami
2. 7 Mindsets To Master Self-Awareness – Elizabeth Diamond
3. Alchemy 365: A Self Awareness Workbook – Brenda Lightfeather Marroy
4. Insight – The Power of Self-Awareness in a Self-Deluded World – Tasha Eurich

Introspection vs. self reflection

Introspection and self reflection are two processes for knowing yourself better. But do these processes mean the same thing? No, they are closely connected with each other but not entirely the same. The differences are highlighted below:

| Self reflection | Introspection |
| --- | --- |
| The ability to mirror and see through one's cognitive and emotional processes clearly. | The analysis of one's mental state: calmly reviewing thoughts and feelings and coming to accept them as real and a part of you. |
| A meditative, deeply absorbed engagement with the deepest aspects of oneself; a more engrossed process than introspection. John Dewey proposed self reflection as a method of ideal learning. | First identified in mainstream psychology by Wilhelm Wundt as a method to understand oneself. |
| The art of self reflection relies more on self-awareness. | Introspection relies on an overall self-evaluation. |

The connection between self reflection and self-awareness

Self reflection involves mirroring your thoughts and seeing your inner processes closely. This process determines the quality of awareness that you will have about your life and well-being. Thus, these two processes are interwoven and cannot be separated from each other. You can say that one determines the functioning of the other. When you set aside some time to reflect every day and closely observe and think about your thoughts and emotions, you are self-reflecting on your internal state of being. The more you practice this process consciously, the more self-aware a person you become.

Summing Up from 'ThePleasantMind'

Self reflection gives you an opportunity to grow as a person. You can improve your life and living. When you reflect on your true thoughts and feelings, you become a fully functioning and poised person. You know your blind spots and strengths much better than others do. This process leads to awakening, and you come to be in tune with the positive side of human growth and development. It is a slow process that needs patience. With time, you will be equipped to self-reflect in the best possible way. You should always start this practice slowly and move on to achieve greater heights. Apart from this, if you are looking for person-centric psychotherapy that helps to attain closure of past and present sufferings, professional support is available.
A psychologist with a master's degree in Psychology, a former school psychologist, and a teacher by profession, Chandrani loves to live life simply and happily. She is an avid reader and a keen observer. Writing has always been a passion for her since her school days; it helps her de-stress and keeps her mentally agile. Pursuing a career in writing was a chance occurrence when she started to pen down her thoughts and experiences for a few childcare and parenting websites. Her favorite niches include mental health, parenting, childcare, and self-improvement. She is here to share her thoughts and experiences and enrich the lives of a few, if not many.
More and more of us are trying to cut down our sugar intake as much as possible, recognising the spectrum of harmful effects sugar has on the body. So, as sweeteners provide the sweetness we crave minus the sugar, it makes sense to swap our sugar for sweeteners, right? Turns out it's not as simple as that! Read on to discover everything you need to know about sweeteners, including why sweeteners may be the lesser of two evils rather than a healthful solution to ditching sugar.

What Are Sweeteners?

Sweeteners are added to food and drinks to create a sweet taste without sugar and its associated calorie and carbohydrate content. Most sweeteners are low or zero calorie, so they are often found in so-called 'diet foods'. The food industry saw a surge in the use of sweeteners when the sugar tax was introduced, making use of them to sweeten their products and save on their taxes! Aside from food products, you'll also find sweeteners in drinks, chewing gum and toothpaste.

Where Do Sweeteners Come From?

Where sweeteners come from is truly complex: with a quick Google search, you will see the same sweetener labelled "natural" or "artificial" in different articles. Erythritol, for example, can be found naturally in plants, yet commercially it is chemically and artificially produced from corn. Therefore, depending on the source, it may be classed as natural or artificial. Due to the cost of producing sweeteners, most commercially produced sweeteners are likely to be synthetically made, so if you do choose to consume sweeteners, do check this thoroughly.

- Natural - Natural sweeteners are normally derived from naturally sweet compounds such as sugar alcohols (interestingly, also produced in the body as a by-product of metabolism). Natural sweeteners are often extracted from plants, leaves, fruit and veg - some are even derived from sugar itself. There can still be processing involved, but the chemical structure of the sweetener has not been changed. Refined white sugar, for example, would be classed as a highly refined natural sugar - interesting!
- Artificial - Artificial sweeteners are synthetic sugar substitutes. Confusingly, they may be derived from naturally occurring compounds, but the manufacturing process then modifies their chemical composition, resulting in a synthetic sweetener. This is often done with enzymes, bacteria or fungi. You may even find these sweeteners labelled as "naturally occurring", which, as you can see, makes it all rather complex!

How Sweeteners Are Made

There are many different kinds of sweeteners. Some are extractions of a single compound (such as a sugar alcohol), others are manmade, and some combine several different ingredients. Either way, the compound providing the sweet flavour is isolated and extracted using techniques such as fermentation, enzymes or heat to gain a potent source which can be granulated, powdered or compacted into the small tablet that we all recognise as a sweetener.

How Do Sweeteners Work?

Food molecules interact with taste buds on the surface of the tongue to signal flavour and taste to the brain via taste receptors. Sweeteners mimic the molecular structure of sugar to bind with your specific taste receptor for sweetness, and this is perceived by the brain as consuming a sweet food - amazing, right? The difference between sugar and sweeteners is that the body will not break down sweeteners for calories, as they do not contain a source of energy such as the carbs you'd find in regular sugar.

What foods will I find sweeteners in?
Synthetically produced sweeteners are fast becoming a replacement for sugar in lots of surprising places, from chewing gum to health supplements to salad dressings. Much like the abundance of seed oils in our diets, we don't yet know the longer-term impact of these "foods" on our health and wellbeing.

Which Sweeteners Are Bad for Us?

Arguably, and certainly from a Real Food mentality and perspective, all sweeteners are bad. For argument's sake, though, some are worse than others. Not all sweeteners are approved for use, and some are approved in certain countries but not others. Of course, the ones which aren't approved for use in one country but are in another are probably the worst. It's likely that the difference in guidance is simply a lag between research findings and regulation implementation!

In the much-debated world of sweeteners, the general consensus is that Aspartame and Saccharin are the worst of a bad bunch. Saccharin has been linked with headaches, skin breakouts, breathing difficulties and even cancer. Studies have shown a link between aspartame consumption and nausea, vomiting, mood disorders, diarrhoea, memory loss and vision problems.

Which Sweeteners Are Safe?

There are so many different types of sweeteners, and over the years we have seen some approved as safe, some blacklisted and some new kids on the block for which there's too little evidence to make a decision yet! As it stands right now, sweeteners approved for use in the UK include:

- Acesulfame K
- Aspartame
- Saccharin
- Sucralose

These have all undergone a rigorous safety assessment and are deemed safe for consumption by the European Food Safety Authority (EFSA). But safe doesn't always equate to optimal health. Just think about sugar, alcohol, inflammatory oils, takeaways and ultra-processed (junk) 'food' – they are all recognised as 'safe' to consume but none are beneficial to health.

Why Sweeteners Are Bad for You

Type the word sweeteners into your search engine and prepare to open a can of worms. You'll need an Evolve Coffee or two if you're planning on reading all of the weird and wonderful stories about sweeteners out there! But if sugar is so bad for us, surely sweeteners can't be as bad? Worryingly, there's evidence linking sweeteners to a whole spectrum of negative health outcomes. Here are some snippets from the tip of the proverbial iceberg of research which can help to explain why sweeteners may or may not be bad for you…

Sweeteners and Cancer

The debate surrounding sweeteners and cancer stems back to the 1970s, when some pretty frightening research found a correlation between the use of cyclamate and saccharin (two sweeteners) and the development of bladder cancer in laboratory rats. The result was that saccharin was labelled as a food 'potentially hazardous to health' for some time, though this has since been revoked. Most studies which have found a connection between cancer and sweeteners have investigated artificial sweeteners rather than natural ones. Saccharin, Aspartame, Sucralose and Acesulfame K have all been linked with an increased risk of cancer in laboratory rats – very worrying considering these are all approved as 'safe' for use in the UK. However, there is no clear evidence to suggest that the USA Food & Drug Administration (FDA) approved sweeteners are associated with increased cancer risk in humans. Based on current mainstream evidence, moderate use of sweeteners is unlikely to increase the risk of cancer in humans.
Sweeteners and IBS

Irritable Bowel Syndrome (IBS) is a long-term chronic digestive condition which is common in the UK and many western populations. It causes digestive symptoms such as bloating, wind, constipation, diarrhoea and stomach cramps, which are largely triggered and alleviated by diet. Sweeteners, particularly artificial sweeteners, are hard to digest due to their unnatural chemical makeup, which is alien to the gut. For thousands of years our ancestors did not ingest artificial sweeteners, so when they are suddenly introduced it is understandable that the body takes some time to adjust and digest them!

Artificial sweeteners have been shown to disrupt gut bacteria and worsen the symptoms of IBS. Fructose-based sweeteners and those which end in '-ol' (such as sorbitol) have been found to be particular culprits for exacerbating symptoms of IBS. IBS or no IBS, sweeteners cause digestive discomfort for many, including bloating, diarrhoea and wind. So, if you do have IBS they are definitely something you'll be looking to avoid!

Sweeteners and Metabolic Syndrome

Metabolic Syndrome is a modern condition characterised by the co-occurrence of hypertension (high blood pressure), Type 2 Diabetes and obesity. A diagnosis of Metabolic Syndrome increases the risk of things like heart attack, stroke and diabetic complications. Studies have found that using artificial sweeteners may actually increase the risk of Metabolic Syndrome. Evidence has shown that the intended effects of low-calorie sweeteners in replacing sugar aren't always reflected in health outcomes. It is thought that the latent effects of sweeteners (decreasing satiety and increasing appetite via our friends leptin and ghrelin, impacting blood glucose, and driving increased consumption of nutrient-poor calories and excess fat gain over time) outweigh the initial benefit of fewer calories. Drat!

Sweeteners and Obesity

There are two sides to the coin when it comes to sweeteners and obesity. On the one hand, if someone enjoys sweet foods and these contain sugar which contributes to excess fat gain, then it makes logical sense that replacing the sugar in these foods with sweeteners will reduce their nutrient-poor calorie consumption and thus have a beneficial effect. On the other hand, and as alluded to above, there are some pretty staggering studies which have found that using sweeteners frequently could actually cause excess fat gain.

One theory to explain this seemingly paradoxical phenomenon is that sweeteners provide an artificial sweetness hundreds or even thousands of times sweeter than sugar itself. By consuming them often, your sugar receptors and biology are thought to become hijacked and acclimatised to the heightened sweet flavour, so naturally sweet foods no longer satisfy cravings for sweet food and satiety/appetite regulation becomes dysfunctional. This can create cravings for even sweeter foods, which has been shown to stimulate appetite (or at least suppress satiety less) and create a cyclical impact which ultimately means more cravings, more nutrient-poor calories and more excess fat gain!

Sweeteners, Obesity and Sugar-Free Drinks

Large-scale population studies have repeatedly shown that consumption of artificial sweeteners in diet drinks (such as Diet Coke) is linked with increased overall excess fat gain, abdominal fat and risk of obesity over time.
It is thought that those who have a nutrient-poor diet consisting of ultra-processed food-like substances (aka junk food) are more likely to opt for artificially sweetened beverages to reduce their 'overall calorie intake', which could explain this correlation.

Can Sweeteners Cause Acne and Poor Skin?

Gut health and skin health work in synergy. As artificial sweeteners are known to disrupt gut microbiota, consuming them frequently can cause skin breakouts. Put simply, if your gut is unhappy then your skin will be too – and it shows! The effect of artificial sweeteners on gut microbiota, blood sugar levels and metabolism triggers an inflammatory response which is often regarded as the root cause of skin breakouts. Sweeteners are hard to digest, and this can create an autoimmune response which underlies many chronic skin conditions such as acne, eczema and rosacea.

Chronic and even moderate sweetener consumption can also contribute to the development of Leaky Gut Syndrome and gut dysbiosis. This can lead to problems with effective digestion and absorption of nutrients and foods, GI illnesses, altered gut microbiota, poor immune function and even psychological impacts. It is thought that gut dysbiosis or leaky gut leads to increased permeability in the small intestine, allowing toxins such as lipopolysaccharides (LPS) into the blood and potentially triggering a systemic inflammatory/immune response. If you're having a hard time with gut health, check out our top tips to improve gut health along with the benefits of collagen for optimal gut health.

Can Sweeteners Cause Anxiety?

Many people experience feelings of increased anxiety related to consuming sweeteners. Artificial sweeteners have been shown to be neurotoxic. Like other neurotoxins (MSG, artificial food dyes and high fructose corn syrup), sweeteners have long been shown to possibly disrupt the normal function of the nervous system. Scarily, one sweetener in particular, aspartame, has been shown to have such a profound effect on neurological function that high consumption can increase the risk of seizures. There's a definite connection between sweeteners and mental health, and we'd love to see tonnes more research into the connection.

Are Sweeteners Bad for Your Teeth?

It has been thought for some time that by replacing sugar in foods with sugar replacements, such as sweeteners, you can avoid the harmful effects of sugar on dental health. That's because sugar, saliva and bacteria in the mouth combine to create plaque (the unhealthy type, as not all plaque is considered to be negative) and go on to cause tooth decay. However, a peer-reviewed study exposed the dangers of sweeteners for dental health. A particular group of sweeteners called sugar alcohol polyols (such as sorbitol, xylitol and erythritol) was shown to create an acidic environment in the mouth which can lead to tooth erosion. It was also found that additives in sugar-free products lowered the pH of saliva and increased the risk of weakened tooth enamel. The potential damage sweeteners can do to your dental health seems to have more to do with the other ingredients often found in sugar-free products, such as acidic additives and preservatives. All in all, replacing the sugar in your diet with sweeteners could do more harm than good considering the wider effects of sweeteners on your health – so you're best cutting them both out, full stop!
Sweeteners When 'Dieting'

The food industry has used sweeteners for decades to create so-called 'diet friendly' foods. When you take the fat and sugar out of foods to make them diet friendly, you have one major problem – flavour! This is how 'diet foods' end up getting heavily processed, having to undergo a whole spectrum of amendments and additions to try and make them palatable with their shiny new low-fat or low-sugar label. Sugar is a 'dieting' no-no: it's incredibly energy dense, devoid of nutrients, and causes insulin spikes and crashes which ultimately lead to cravings and lulls in energy! So, fashioned in their low-calorie, high-flavour package, do sweeteners have the answer to diet food that tastes great?

Do Sweeteners Cause Cravings?

As mentioned earlier, sweeteners are hundreds to thousands of times sweeter than sugar itself. When consumed often, this fools your biology and sweetness receptors and can end up making naturally sweet food taste bland. Whilst you may be cutting down on calories in the short term, this can cause havoc with cravings and lead you to seek out sweeter and sweeter foods in the long term. Sweeteners can make low-fat and low-sugar foods more palatable, but within the food industry marketing can be misleading, and manufacturers will often replace the flavour in low-fat foods with sugar and vice versa. What's more, foods with an unnaturally low calorie content won't leave you full for long. Throw in the fact that they're more often than not ultra-processed, and we think that's as good a reason as any to stick to real food and real ingredients when seeking optimal health and weight status!

Personally, at Hunter & Gather we try to avoid refined sugars and synthetically produced sweeteners completely, and we watch our intake of any "natural" sugars due to their impact on insulin levels. There are some easy and simple swaps you can make, such as utilising raw honey in the summer months, when our ancestors would have foraged for it. Ensure you choose raw, unpasteurised and ideally locally made honey. During the winter months, we opt for vanilla or apple cider vinegar. We add vanilla to recipes such as our Chia Pudding, and apple cider vinegar makes a delicious salad dressing without the added sweeteners. If you want more ways to help swap out refined sugar and synthetic sweeteners, these simple swaps are a great place to start.

Will Sweeteners Break My Fast?

Sweeteners are a bit of a grey area when it comes to fasting. The general consensus is that a fast is a fast, and you wouldn't consume sweeteners on their own, so what's the point in asking? For argument's sake, and to get all science-y, sweeteners may contain calories and may stimulate insulin release. Even zero-calorie sweeteners can trigger an insulin response in the body, so sweeteners are best avoided when fasting.

Are Sweeteners Keto Friendly?

A Keto diet is a low-carb way of eating, and as sugar is a carb, limiting your intake of sugar is one of the first things you'll learn when you start out with Keto. Lots of people will struggle as their body adjusts to a low-carb intake, and replacing sugar with something like a sweetener can seem like a logical approach. Many sweeteners contain zero carbs, so it is argued that they are a safe food for Keto. However, those who are serious about Keto will know that it is a lifestyle and not just a diet. Some core principles of Keto are to improve insulin sensitivity, reduce cravings and sugar dependency, and enhance weight loss. These are all things which sweeteners interfere with.
What's more, if you've been paying attention, you'll recognise that the wider impacts of sweeteners on overall health mean they are best avoided! So, whilst it could be argued both ways, we are team no sweeteners! You can learn more about the Ketogenic Diet in our Beginner's Guide to the Keto Diet and learn how to follow a successful ketogenic diet that works for you.

Are Sweeteners Paleo Friendly?

The Paleo diet is a template of eating based on our pre-agriculture ancestors (roughly 11,000+ years ago). It's based on eating only foods which our ancient ancestors would've been able to get hold of – and sweeteners aren't one of them! Envisage a hairy caveman cooking a whole animal over a log fire – throwing a stevia tablet into his bone broth was definitely not a thing! Refined sugar is a definite no-no in the Paleo diet, and cutting out refined sugar is one of the many reasons why people choose to go Paleo. But replacing sugar with artificial sweeteners is not a route on the Paleo path. Sweeteners were not only inaccessible to our ancestors but are also incredibly processed and often entirely synthetic – the nail in the coffin when it comes to being Paleo friendly! Instead of sweeteners, naturally sweet foods such as honey and maple syrup are used at certain times of year within the Paleo lifestyle. Hey – learn more about the Paleo Diet in our Beginner's Guide to the Paleo Diet.

Sweeteners and Diabetes

Many 'Diabetic friendly' foods utilise sweeteners instead of sugar, as their low or no carb content makes them a safe bet for Diabetics – or does it?

How Sweeteners Affect Insulin

Insulin is released when we eat foods which contain carbs: they are metabolised into sugars which enter the bloodstream, and insulin is then released to allow them to leave the bloodstream again. In Diabetics, this process doesn't work as intended, and increased insulin resistance or poor insulin production leaves sugar in the blood – which, in short, is harmful to health. Interestingly, small amounts of insulin may be released before sugar enters the bloodstream. This is known as the cephalic phase insulin release, and it is triggered by the sight, smell and taste of food as well as other mechanisms at play when we eat, like chewing and swallowing. Studies have found that the intense sweet taste of artificial sweeteners can cause an early cephalic phase insulin response. It is also thought that regular consumption of sweeteners could alter our gut microbiota in a way which reduces insulin sensitivity, causing higher blood sugar levels and higher insulin production – both of which are bad news for Diabetics and those that are insulin resistant.

Will Sweeteners Raise Blood Sugar?

Consuming sweeteners won't immediately raise blood sugar levels the way eating sugar does. As with most of the drawbacks of sweeteners, the problem lies in long-term use. One study has found that artificial sweeteners induce glucose intolerance by altering the gut microbiota, and further observational studies support this gut-glucose connection. However, further studies are needed to corroborate these findings.

Sweeteners and Gut Health

Sweeteners are thought to alter the balance of intestinal microbiota in the human gut. Natural and synthetic sweeteners have been found to cause dysbiosis (an imbalance of gut bacteria) and, because of this, to alter metabolic pathways such as glucose metabolism.
The gut is a super intelligent and extremely important organ which communicates with the rest of the body – so anything which messes with your gut bacteria should be avoided at all costs! Especially substances that are very new to humans… On a more relatable level, sweeteners have been shown to cause constipation, bloating and wind. Certain types of sweeteners called polyols are not readily absorbed in the intestine and so can cause even worse problems such as chronic diarrhoea. So, it's no surprise that frequent sweetener consumption can affect gut health when the consequences of consuming them are so plain to see!

How Can I Avoid Sweeteners?

Sweeteners are becoming more commonplace in processed food and drinks, often hiding where you'd least expect them. As always, the best place to hide nasty ingredients is ultra-processed, packaged food-like substances. So, if you're looking to avoid sweeteners and other synthetic ingredients, the best start you can make is to opt for natural, whole foods which are as minimally processed as possible, with transparent ingredients. Of course, checking the ingredients list on packaging is the best way to be sure there are no sweeteners in your food.

The Not So Sweet Summary: Our Stance on Sweeteners

According to the current scientific literature, the risk of sweeteners to health varies between individuals and depends on which type of sweetener is consumed. Frequent consumption of sweeteners has been linked with an increased risk of cancer, obesity, Type 2 Diabetes, digestive disorders and even mental health conditions. Overall, natural or artificial, safe or unsafe, sweeteners are a food additive which should be used with caution and, in our humble opinion, avoided as much as possible. Simply put, we have not evolved as a species to consume them and they play no role in pursuing optimal health.

The Food and Drug Administration (FDA) and Food Standards Agency (FSA) recognise sweeteners as food additives and regulate their use with strict criteria. They make sure that sweeteners are 'safe to use' within certain amounts over a certain length of time. This is all well and good, but do we seriously want to be told our food is 'safe'? Or would we prefer to be told it is natural, healthful and nutritious? Don't know about you, but we think that anything 'Generally Recognised as Safe (GRAS)' or with an 'Acceptable Daily Intake (ADI)' just isn't appetising at all, however sweet!

So, what's best: sugar or artificial sweeteners?

We think that's like choosing between a rainy Monday morning and a parking fine! If you do choose to include either in your diet on a regular basis, we'd recommend using them in moderation as much as possible. If you can live without both (which you can!) then even better - happy and healthy eating!

All information provided on our website and within our articles is simply information, opinion, anecdotal thoughts and experiences to provide you with the tools to thrive. It is not intended to treat or diagnose symptoms and is definitely not to be misconstrued as medical advice. We always advise you to seek the advice of a trained professional when implementing any changes to your lifestyle and dietary habits. We do, however, recommend seeking the services of a trained professional who questions the conventional wisdom, to enable you to become the best version of yourself.
References:
- Artificial Sweeteners and Cancer
- Diet Soda Intake Is Associated with Long-Term Increases in Waist Circumference in a Biethnic Cohort of Older Adults: The San Antonio Longitudinal Study of Aging
Promoting Family Involvement

Schools can take a number of steps to promote partnerships with families. These can ease teachers' responsibilities or give them better ways to relate to parents.

Recognize the disconnection

While many parents have strong feelings of support for the schools their children actually attend, with 70 percent of all public school parents giving their children's school a grade of A or B, there still is a strong feeling of disconnection with public education in general (Elam, Lowell, & Gallup 1994). Many families feel that their interests are not fully taken into account by educators. At times, parents feel that educators talk down to them or speak in educational jargon they do not understand, while the majority of teachers feel that parents need to be more engaged in the education of their child (Peter D. Hart 1994).

Train teachers to work with parents

Schools and school systems seldom offer staff any formal training in collaborating with parents or in understanding the varieties of modern family life. However, both the National Education Association and the American Federation of Teachers are working to make such information and skills widely available.

Reduce distrust and cultural barriers

Often the first time a parent comes to school is when a child is in trouble. Schools can reduce distrust and cultural barriers between families and teachers by arranging contacts in neutral settings. These might include using resource centers, offering informal learning sessions, conducting home visits by family liaison personnel, and holding meetings off school grounds. Since the first contact a parent has with his or her child's school is often negative, some districts are making sure the first contact with parents is a positive one.

Address language barriers

Reaching families whose first language is not English requires schools to make special accommodations. Translating materials into their first language can be useful for these parents, but written communications alone are not enough. Ideally, a resource person, perhaps another parent, would be available who could communicate with parents in their first language either face-to-face or by telephone. Interactive telephone voice-mail systems that have bilingual recordings for families also are useful.

Evaluate parents' needs

Schools can also bridge the distance between families and schools by surveying parents to find out their concerns and opinions about school. Surveys can be especially helpful to assess further changes needed after a school has implemented a program promoting parental involvement.

Accommodate families' work schedules

Many schools hold evening and weekend meetings and conferences before school to accommodate families' work schedules. By remaining open in the afternoons and evenings and on weekends, schools can promote various recreational and learning activities for parents, including adult education and parenthood training, and can create a safe haven against neighborhood crime.

Use technology to link parents to the classroom

As much as Americans are eager to get on the Information Highway, getting an old-fashioned telephone into every classroom might be one of the most effective ways to improve communication between families and teachers (U.S. Department of Education 1994b). Schools are also using a number of new technologies to communicate with families and students after school hours. One widespread arrangement is a districtwide homework hotline to help guide students with assignments.
In addition, voice mail systems have been installed in several hundred schools across the country. Parents and students can call for taped messages from teachers describing classroom activities and daily homework assignments. Audiotapes and videotapes also are being used as alternatives to written communication for parents. These are especially helpful in reaching families who do not read. Computers can help improve children's academic achievement and bring families and schools together. Many parent centers include computer classes for parents to improve their education and job skills. The number of families who use the Internet is also rapidly growing, and several aspects of Internet services are becoming dedicated to families.

Make school visits easier

Free transportation and child care can especially encourage families in low-income and unsafe neighborhoods to attend school functions. Native speakers of languages other than English, interpreters, and materials translated into their own language can help non-English-speaking parents participate in the schools more fully. A variety of techniques including letters, phone calls, and visits by program staff may be needed to recruit low-income parents and parents who lack confidence in dealing with the schools (Goodson, Schwartz, & Millsap 1991; Moles 1993).

Establish a home-school coordinator

A parent liaison or home-school coordinator can develop parental involvement programs without adding to the workload of teachers. Personal contacts, especially from people in the community, are important in encouraging hard-to-reach families, including immigrants, to participate (Goodson, Schwartz, & Millsap 1991; Nicolau & Ramos 1990). Many of the most effective parent-school partnership programs combine multiple strategies. In order to expand opportunities for school-family contacts, these schools have developed resource centers for parents in schools, home visiting programs, and mentoring programs (Davies, Burch, & Johnson 1991).

Encourage family learning

Traditional homework assignments can be converted into more interactive ones involving family members. For example, students might interview family members on historical events or their daily work.

Give parents a voice in school decisions

The parental involvement goal explicitly states, "Parents and families will help to ensure that schools are adequately supported and will hold schools and teachers to high standards of accountability." Many parents, especially those who have limited proficiency in English or who distrust the schools, may be reluctant to get involved to this extent. But this kind of participation is an important component of efforts to increase parental involvement. Schools can give families the opportunity to support the improvement efforts of schools and teachers. In recent years, a number of school systems have established new governance arrangements. Thus many schools are creating new arrangements for working with parents, finding ways to make communication with families more personal and compatible with their needs, drawing on new technologies, and using parents in new ways in the schools. But these new family-school partnerships need continuing support from other members of the society, including community organizations, businesses, and government at all levels.

References

Anderson, R. C., Heibert, E. H., Scott, J. A., & Wilkinson, I. A. G. (1985). BECOMING A NATION OF READERS: THE REPORT OF THE COMMISSION ON READING.
Washington, DC: National Academy of Education. Andrus, Cecil D, Governor of Idaho. (1994). Personal correspondence. Barton, P.E., & Coley, R.J. (1992). AMERICA'S SMALLEST SCHOOL: THE FAMILY. Princeton, NJ: Educational Testing Service. Bastian, L. & Taylor, B. (1991). SCHOOL CRIME: A NATIONAL CRIME VICTIMIZATION SURVEY REPORT. Washington, DC: Bureau of Justice Statistics. Bauch, J. P. (1993). A SAMPLER OF PROJECTS AND RESULTS -- THE TRANSPARENT SCHOOL MODEL 1987-1993. Nashville, TN: Vanderbilt University, Betty Phillips Center for Parenthood Education. Baumrind, D. (1989). Rearing competent children. In W. Damon (Ed.), CHILD DEVELOPMENT TODAY AND TOMORROW. San Francisco: Jossey-Bass. Becher, R. (1984). PARENT INVOLVEMENT: A REVIEW OF RESEARCH AND PRINCIPLES OF SUCCESSFUL PRACTICE. Washington, DC: National Institute of Education. Bempechat, J. (1992). The role of parent involvement in children's academic achievement. SCHOOL COMMUNITY JOURNAL, 2(2), 31-4. Berla, N., Henderson, A., & Kerewsky, W. (1989). THE MIDDLE SCHOOL YEARS: A PARENT'S HANDBOOK. Washington, DC: National Committee for Citizens in Education. Berrueta-Clement, J. R., Schweinhart, L., Barnett, W., Epstein, W., and Weikart, D. (1984). CHANGED LIVES: THE EFFECTS OF THE PERRY PRESCHOOL PROGRAM ON YOUTHS THROUGH AGE 19. Ypsilanti, MI: High/Scope Educational Research Foundation. Bronfenbrenner, U. (1974). A REPORT ON LONGITUDINAL EVALUATIONS OF PRESCHOOL PROGRAMS, VOL. II: IS EARLY INTERVENTION EFFECTIVE? Washington, DC: Department of Health, Education, and Welfare. Office of Child Development. (ERIC Document Reproduction Service No. ED 093 501). Caplan, N., Choy, M. H., & Whitmore, J. K. (1992). Indochinese refugee families and academic achievement. SCIENTIFIC AMERICAN, 266(2). Carnegie Council on Adolescent Development. (1994). A MATTER OF TIME: RISK AND OPPORTUNITY IN THE NONSCHOOL HOURS. New York: Carnegie Corporation of New York. Chapin Hall Center for Children. (1992). COMMUNITY RESOURCES FOR STUDENTS: A NEW LOOK AT THEIR ROLE AND IMPORTANCE AND A PRELIMINARY INVESTIGATION OF THEIR DISTRIBUTION ACROSS COMMUNITIES. Chicago: Author. Children's Aid Society. (n.d.). THE WASHINGTON HEIGHTS COMMUNITY SCHOOLS PROJECT. PROGRESS REPORT, OCTOBER 1992 TO JUNE 1993. New York: Author. Children's Defense Fund. (1994). THE STATE OF AMERICA'S CHILDREN: YEARBOOK 1994. Washington, DC: Author. Chimerine, C. B., Panton, K. L. M., & Russo, A. W. W. (1993). THE OTHER 91 PERCENT: STRATEGIES TO IMPROVE THE QUALITY OF OUT-OF-SCHOOL EXPERIENCES OF CHAPTER 1 STUDENTS. Washington, DC: Policy Studies Associates. Choy, S. P., Henke, R. R., Alt, M. N., Hedrich, E. A., & Bobbitt, S. A. SCHOOLS AND STAFFING IN THE UNITED STATES: A STATISTICAL PROFILE, 1990-91. Washington, D.C.: National Center for Education Statistics, U.S. Department of Education. Clark, R. M. (1988). Parents as providers of linguistic and social capital. EDUCATIONAL HORIZONS, 66(2), 93-95. Clark, R. M. (1990). Why disadvantaged students succeed. PUBLIC WELFARE, pp. 17-23. Spring. Clinton, W. J. (1994). Memorandum for the heads of executive departments and agencies: Expanding family-friendly work arrangements in the executive branch. Washington, DC: The White House. July 11. Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). EQUALITY OF EDUCATIONAL OPPORTUNITY. Washington, DC: U.S. Government Printing Office. College Board. (1994). 
COLLEGE-BOUND SENIORS OF 1994: INFORMATION ON STUDENTS WHO TOOK THE SAT AND ACHIEVEMENT TESTS OF THE COLLEGE BOARD. New York, NY: Author. Comer, J. P. (1988). Educating poor minority children. SCIENTIFIC AMERICAN, 259(5), 42-48. Conference Board. (1994). Job sharing. WORK-FAMILY ROUNDTABLE, 4 (2). Cooper, H. (1989). HOMEWORK. White Plains, NY: Longman. D'Angelo, D. (1991). PARENT INVOLVEMENT IN CHAPTER 1: A REPORT TO THE INDEPENDENT REVIEW PANEL. Hampton, NH: RMC Research Corporation. Dauber, S. L., & Epstein, J. L. (1993). Parents' attitudes and practices of involvement in inner-city elementary and middle schools. In N. Chavkin (Ed.), FAMILIES AND SCHOOLS IN A PLURALISTIC SOCIETY (pp. 53-72). Albany: State University of New York Press. Davies, D. (1988). Benefits and barriers to parent involvement. COMMUNITY EDUCATION RESEARCH DIGEST, 2(2), 11-19. Davies, D., Burch, P., & Johnson, V. (1991). A PORTRAIT OF SCHOOLS REACHING OUT: REPORT OF A SURVEY ON PRACTICES AND POLICIES OF FAMILY-COMMUNITY-SCHOOL COLLABORATION. Boston, MA: Institute for Responsive Education. de Kanter, A., Ginsburg, A. L., & Milne, A. M. (1987). PARENT INVOLVEMENT STRATEGIES: A NEW EMPHASIS ON TRADITIONAL PARENT ROLES. Washington, DC: U.S. Department of Education. DESIGNS FOR CHANGE. (1993). Issues in Restructuring Schools. Author, 5. Fall. EDUCATION DAILY. (1994). NEA kicks off effort to get parents involved in school. Author. July 11. Elkind, D. (1993). PARENTING YOUR TEENAGER IN THE '90s. Rosemont, NJ: Modern Learning Press. Elam, S.M., Lowell, C.R., and Gallup, A.M. (1994). The 26th Annual Phi Delta Kappa/Gallup Poll of the Public's Attitudes Toward the Public Schools. PHI DELTA KAPPAN, September. Entwisle, D. & Alexander, K. (1992). Summer setback: Race, poverty, school composition, and mathematics achievement in the first 2 years of school. AMERICAN SOCIOLOGICAL REVIEW, 57, 72-84. Epstein, J. L. (1986). Parents' reactions to teacher practices of parent involvement. ELEMENTARY SCHOOL JOURNAL, 86, 277-294. Epstein, J. L. (1987). Parent involvement: What research says to administrators. EDUCATION AND URBAN SOCIETY, 19, 119-36. February. Epstein, J. L. (1991a). Effects on student achievement of teacher practices of parent involvement. In S. Silvern (Ed.), ADVANCES IN READING/LANGUAGE RESEARCH, VOL. 5. LITERACY THROUGH FAMILY, COMMUNITY AND SCHOOL INTERACTION. Greenwich, CT: JAI Press. Epstein, J. L. (1991b). Paths to partnership: What we can learn from federal, state, district, and school initiatives. PHI DELTA KAPPAN, 72 (5), 344-349. January. Epstein, J. L., & Salinas, K. C. (1992). MANUAL FOR TEACHERS: TEACHERS INVOLVE PARENTS IN SCHOOLWORK (TIPS) MATH AND SCIENCE INTERACTIVE HOMEWORK IN THE ELEMENTARY GRADES. Baltimore, MD: Johns Hopkins University, Center on Families, Communities, Schools and Children's Learning. ERIC. (1990). Guidelines for family television viewing. ERIC DIGEST. Urbana, IL: ERIC Clearinghouse on Elementary and Early Childhood Education. Families and Work Institute. (1994). EMPLOYERS, FAMILIES, AND EDUCATION: FACILITATING FAMILY INVOLVEMENT IN LEARNING. New York: Author. Family Service America. (n.d.). FAMILIES TOGETHER WITH SCHOOLS. Milwaukee, WI: Author. Fathernet information available from: Children, Youth, and Family Consortium, 12 McNeal Hall, 1985 Buford Ave., St. Paul, MN 55108, or via e-mail at [email protected] Finney, P. (1993). The PTA/Newsweek national education survey. NEWSWEEK. May 17. Freedman, M. (1994).
SENIORS IN NATIONAL AND COMMUNITY SERVICE: A REPORT PREPARED FOR THE COMMONWEALTH FUND'S AMERICANS OVER 55 AT WORK PROGRAM. Philadelphia: Public/Private Ventures. Fruchter, N., Galletta, A., & White, J. L. (1992). NEW DIRECTIONS IN PARENT INVOLVEMENT. New York: Academy for Educational Development. Furano, K., Roaf, P. A., Styles, M. B., & Branch, A. Y. (1993). BIG BROTHERS/BIG SISTERS: A STUDY OF PROGRAM PRACTICES. Philadelphia: Public/Private Ventures. Ferguson, S., Outreach coordinator for the National Information Center for Children and Youth with Disabilities. (1994). Personal correspondence. Goodson, B. D., Swartz, J. P., & Millsap, M. A. (1991). WORKING WITH FAMILIES: PROMISING PROGRAMS TO HELP PARENTS SUPPORT YOUNG CHILDREN'S LEARNING. Cambridge, MA: Abt Associates. Gorman, T. (1993). Parents help with reading -- but quit too soon. AMERICAN TEACHER. February. Gotts, E. E. (1982). SCHOOL-FAMILY RELATIONS PROGRAM. Final (annual) report. Charleston, WV: Appalachia Educational Laboratory. Hanson, S. L., & Ginsburg, A. (1985). Gaining ground: Values and high school success. Prepared for the U.S. Department of Education. Henderson, A. (1987). THE EVIDENCE CONTINUES TO GROW: PARENT INVOLVEMENT IMPROVES STUDENT ACHIEVEMENT. Columbia, MD: National Committee for Citizens in Education. Henderson, A. T., & Berla, N. (1994). A NEW GENERATION OF EVIDENCE: THE FAMILY IS CRITICAL TO STUDENT ACHIEVEMENT. Washington, DC: National Committee for Citizens in Education. Henderson, A. T., Marburger, C. L., & Ooms, T. (1986). BEYOND THE BAKE SALE: AN EDUCATOR'S GUIDE TO WORKING WITH PARENTS. Columbia, MD: National Committee for Citizens in Education. Heyns, B (1988). Summer programs and compensatory education: The future of an idea. In B. I. Williams, P.A. Richmond, & B.J. Mason, (eds.), DESIGNS FOR COMPENSATORY EDUCATION: CONFERENCE PROCEEDINGS AND PAPERS. Washington, DC: Research and Evaluation Associates. Hill, Alan T., President of the Corporation for Educational Technology. (1994). Personal correspondence. Hughes, E., Administrator for Education Support Services, McAllen Independent School District. (1994). Personal Correspondence. Ingrassia, M. (1993). Growing up fast and frightened. NEWSWEEK, November 22. Johnson, V. (1993). Parent centers send clear message: Come be a partner in educating your children. RESEARCH AND DEVELOPMENT REPORT. (September, No. 4). Baltimore, MD: Johns Hopkins University, Center on Families, Communities, Schools and Children's Learning. Kagan, S. L., & Neville, P. (1993). INTEGRATING SERVICES FOR CHILDREN AND FAMILIES: UNDERSTANDING THE PAST TO SHAPE THE FUTURE. New Haven, CT: Yale University Press. Keith, T. Z., & Keith, P. B. (1993). Does parental involvement affect eighth-grade student achievement? Structural analysis of national data. SCHOOL PSYCHOLOGY REVIEW, 22(3), 474-496. Lee, V. E., & Croninger, R. G. (1994). The relative importance of home and school in the development of literacy skills for middle-grade students. AMERICAN JOURNAL OF EDUCATION, 102 (3), 286-329. Leler, H. (1983). Parent education and involvement in relation to the schools and to parents of school-aged children. In R. Haskins & D. Adams (Eds.), PARENT EDUCATION AND PUBLIC POLICY. Norwood, NJ: Ablex. Levin, H. M. (1989) ACCELERATED SCHOOLS AFTER THREE YEARS. Stanford, CA: Stanford University, Center for Educational Research. Lewis, A. (1994). Director of Programs, National Urban League. Personal coorespondence. Liontos, L. B. (1992). AT-RISK FAMILIES AND SCHOOLS BECOMING PARTNERS. 
Eugene: University of Oregon, ERIC Clearinghouse on Educational Management. Louis Harris and Associates. (1987). THE METROPOLITAN LIFE SURVEY OF THE AMERICAN TEACHER: STRENGTHENING LINKS BETWEEN HOME AND SCHOOL. New York: Author. Louis Harris and Associates. (1993). METROPOLITAN LIFE SURVEY OF THE AMERICAN TEACHER 1993: VIOLENCE IN AMERICAN PUBLIC SCHOOLS. New York: Author. Lueder, D. C. (1989). Tennessee parents were invited to participate -- and they did. EDUCATIONAL LEADERSHIP, 47(2), 15-17. Maine Meeting Place information available from the Maine Meeting Place Project, c/o York County Parent Awareness, Inc., 150 Main St., Midtown Mall, Sanford, ME 04073, or via e-mail at: [email protected] Massachusetts Mutual Life Insurance Company. (1989). MASS MUTUAL FAMILY VALUES STUDY. Springfield, MA: Author. McLaughlin, M. (1994). URBAN SANCTUARIES. San Francisco: Jossey-Bass. Melaville, A. I., & Blank, M. J. (1993). TOGETHER WE CAN: A GUIDE FOR CRAFTING A PRO-FAMILY SYSTEM OF EDUCATION AND HUMAN SERVICES. Washington, DC: U.S. Government Printing Office. Mexican American Legal Defense and Educational Fund. Informational brochure on the Parent Leadership Program. National Headquarters, 634 South Spring, 11th Floor, Los Angeles, CA 90014. Michael, B. (1990). VOLUNTEERS IN PUBLIC SCHOOLS. Washington, DC: National Research Council. Moles, O. C. (1993). Collaboration between schools and disadvantaged parents: Obstacles and openings. In N. Chavkin (Ed.), FAMILIES AND SCHOOLS IN A PLURALISTIC SOCIETY. Albany: State University of New York Press. Morra, L. G. (1994). School-age children: Poverty and diversity challenge schools nationwide. Testimony before the Committee on Labor and Human Resources and the Subcommittee on Education, Arts, and Humanities, U.S. Senate. Washington, DC: U.S. Government Accounting Office. March 16, 1994. Mullis, I.V.S., Campbell, J.R., Farstrup, A.E. (1993). NAEP 1992 READING REPORT CARD FOR THE NATION AND STATES. Washington, DC: National Center for Education Statistics. Mullis, I.V.S., Dossey, J. A., Campbell, J. R., Gentile, C. A., O'Sullivan, C., Latham, A.S. (1994). NAEP 1992 TRENDS IN ACADEMIC PROGRESS. Washington, DC: National Center for Education Statistics. Murray, J., & Connberg, B. (1992). CHILDREN AND TELEVISION: A PRIMER FOR PARENTS. Boys Town, NE: Father Flanagan's Boys' Home. National Commission on Children. (1991). SPEAKING OF KIDS: A NATIONAL SURVEY OF CHILDREN AND PARENTS. Washington, DC: Author. National Education Commission on Time and Learning. (1994). PRISONERS OF TIME. Washington, DC: U.S. Government Printing Office. National Education Goals Panel. (1993). THE NATIONAL EDUCATION GOALS REPORT: BUILDING A NATION OF LEARNERS. Washington, DC: U.S. Government Printing Office. National Parent Information Network (NPIN) information available from: Lilian G. Katz, ERIC/EECE, University of Illinois at Urbana-Champaign, 805 W. Pennsylvania Ave., Urbana, IL, 61801-4897, 1-800-583-4135. National Research Council. (1993). UNDERSTANDING AND PREVENTING VIOLENCE. Washington, DC: National Academy Press. National Urban League. (n.d.). PARTNERS FOR REFORM IN SCIENCE AND MATH (PRISM). New York: Author. Nicolau, S., & Ramos, C. L. (1990). TOGETHER IS BETTER: BUILDING STRONG RELATIONSHIPS BETWEEN SCHOOLS AND HISPANIC PARENTS. New York: Hispanic Policy Development Project. Pelavin, S.H., & Kane, M.B. (1990). CHANGING THE ODDS: FACTORS INCREASING ACCESS TO COLLEGE. New York: College Entrance Board. Pelavin Associates, Inc. (1993). EQUITY 2000 NATIONAL IMPLEMENTATION REPORT.
Washington, DC: Author. Perez-Ortega, L. (1994). PARENT LEADERSHIP PROGRAM. Los Angeles: Mexican American Legal Defense and Educational Fund. Perry, N. (1993). School reform: Big pain, little gain. FORTUNE, 128, November 29 , 130-138. Peter D. Hart Research Associates, Inc. (1994). Internal AFT Survey of elementary and secondary school teachers' views on school reform. April. Pfannensteil, J., Lambson, T., & Yarnell, V. (1991). SECOND WAVE STUDY OF THE PARENTS AS TEACHERS PROGRAM. St. Louis: Parents as Teachers National Center. Powledge, T. (1994). Information highway without tollbooths: Maryland is the first state to offer free access to the Internet. WASHINGTON POST, June 23. Puma, M. J., Jones, C. C., Rock, D., & Fernandez, R. (1993). PROSPECTS: THE CONGRESSIONALLY MANDATED STUDY OF EDUCATIONAL GROWTH AND OPPORTUNITY. INTERIM REPORT. Bethesda, MD: Abt Associates. Radcliffe, B., Malone, M., & Nathan, J. (1994). TRAINING FOR PARENT PARTNERSHIP: MUCH MORE SHOULD BE DONE. Minneapolis: University of Minnesota, Hubert H. Humphrey Institute of Public Affairs, Center for School Change. Radin, N. (1969). The impact of a kindergarten home counseling program. EXCEPTIONAL CHILDREN, 3, 18-26. Radin, N. (1972). Three degrees of maternal involvement in a preschool program: Impact on mothers and children. CHILD DEVELOPMENT, 4, 1355-1364. Rich, D. (1988). MEGASKILLS: HOW FAMILIES CAN HELP CHILDREN SUCCEED IN SCHOOL AND BEYOND. Boston: Houghton Mifflin. Rioux, J. W., & Berla, N. (1993). INNOVATIONS IN PARENT AND FAMILY INVOLVEMENT. Princeton Junction, NJ: Eye on Education. Ripley, S. (1993). A PARENT'S GUIDE: ACCESSING PARENT GROUPS. Washington, DC: National Information Center for Children and Youth with Disabilities. Rutherford, B., Billig, S. H., & Kettering, J. F. (in press). Evaluating education reform: Parent and community involvement in the middle grades. Literature review. In B. Rutherford (Ed.), SCHOOL/FAMILY PARTNERSHIPS. Columbus, OH: National Middle School Association. Salganik, M. W. (1994). Making connections between families and schools. R&D PREVIEW, 9 (3), 2-3. Scott, R., & Davis, A. (1979). PRESCHOOL EDUCATION AND BUSING: DO WE HAVE OUR PRIORITIES STRAIGHT? Paper presented at the meeting of the National Urban Education Association. November. Scott-Jones, D. (1984). Family influences on cognitive development and school achievement. REVIEW OF EDUCATION RESEARCH, 11, 259-304. Shartrand, A., Kreider, H., & Erickson-Warfield, M. (in press). PREPARING TEACHERS TO INVOLVE PARENTS: A NATIONAL SURVEY OF TEACHER EDUCATION PROGRAMS. Cambridge, MA: Harvard Family Research Project. Singer, J. L., Singer, D.G., Desmond, R., Hirsch, B., Nichol, A. (1988). Family mediation and children's cognition, aggression and comprehension of television: A longitudinal study. JOURNAL OF APPLIED DEVELOPMENTAL PSYCHOLOGY, 9, 347. Snyder, T. D., & Fromboluti, C. S. (1993). YOUTH INDICATORS 1993. Washington, DC: U.S. Department of Education, National Center for Education Statistics. Solomon, Z. P. (1991). California's policy on parent involvement: State leadership for local initiatives. PHI DELTA KAPPAN, 72(5), 359-362. Sopris West. (1993). EDUCATION PROGRAMS THAT WORK. 19th edition. Longmont, CO: Author. St. Pierre, R., Swartz, J., Gamse, B., Murray, S., Deck, D., & Nickel, P. (1994). NATIONAL EVALUATION OF THE EVEN START FAMILY LITERACY PROGRAM. Washington, DC: U.S. Department of Education, Planning and Evaluation Service. Stevenson, D. L. & Baker, D. P. (1987). 
The family-school relation and the child's school performance. CHILD DEVELOPMENT, 58, 1348-1357. Stevenson, H. (1993). EXTRACURRICULAR PROGRAMS IN EAST ASIAN SCHOOLS. Ann Arbor, MI: University of Michigan. U.S. Bureau of the Census. (1994). EDUCATIONAL ATTAINMENT IN THE UNITED STATES: MARCH 1993 AND 1992. Current population reports, P20-476. Washington, DC: U.S. Government Printing Office. U.S. Department of Education. (1987). WHAT WORKS: RESEARCH ABOUT TEACHING AND LEARNING. Washington, DC: Author. U.S. Department of Education. (1990). GROWING UP DRUG FREE: A PARENT'S GUIDE TO PREVENTION. Washington, DC: Author. U.S. Department of Education. (1993a). REACHING THE GOALS: GOAL 6. Washington, DC: Author. U.S. Department of Education. (1993b). SUMMER CHALLENGE: MODEL SUMMER PROGRAMS FOR DISADVANTAGED STUDENTS. Washington, DC: Author. U.S. Department of Education. (1994a). Calculations based on information from the 1994 Condition of Education, the 1993 Digest of Education Statistics, and the 1993 Statistical Abstract of the United States. U.S. Department of Education. (1994b). Personal conversation with teachers in the shadow teacher program at the U.S. Department of Education, Washington, DC. University of Michigan. (1994). MONITORING THE FUTURE. Ann Arbor, MI: Author, Survey Research Center, Institute for Social Research. Utah Center for Families in Education. (n.d.). FAMILY EDUCATION PLAN TRAINING: UTAH NATIONAL DEMONSTRATION PROJECT. Salt Lake City: Author. Walberg, H. J. (1984). Families as partners in educational productivity. PHI DELTA KAPPAN, 65, 397-400. Walberg, H. J. (n.d.). Family programs for academic learning. Prepared for the Office of the Under Secretary, U.S. Department of Education. Wang, M. C., Haertel, G. D., & Walberg, H. J. (1993). Toward a knowledge base for school learning. REVIEW OF EDUCATIONAL RESEARCH, 63, 3. White, B. L. (1987). Education begins at birth. PRINCIPAL, 66 (5). White, B. L. (1988). EDUCATING THE INFANT AND TODDLER. Lexington, MA: D.C. Heath. White, V. (1994). Unpublished material from survey of state legislative activities on parental involvement. Denver: National Conference of State Legislatures. Wilson, J.Q. (1994). What to do about crime. COMMENTARY. 98 (3), 25-34. Wisconsin Department of Public Instruction. (1994). FAMILY-COMMUNITY PARTNERSHIP WITH THE SCHOOLS. Madison: Author.
Insomnia is the inability to sleep during a period in which sleep should normally occur. Sufficient and restful sleep is a human necessity. The average adult needs slightly more than eight hours of sleep per day, and only 35% of American adults consistently get this amount of rest. People with insomnia tend to experience one or more of the following sleep disturbances: (1) difficulty falling asleep at night, (2) waking too early in the morning, or (3) waking frequently throughout the night. Insomnia may stem from a disruption of the body's circadian rhythm, an internal clock that governs the timing of hormone production, sleep, body temperature, and other functions. While occasional restless nights are often normal, prolonged insomnia can interfere with daytime function, and may impair concentration, diminish memory, and increase the risk of substance abuse, motor vehicle accidents, headaches, and depression. Recent surveys indicate that at least one out of three people in the United States have insomnia, but only 20% bring it to the attention of their physicians.

Signs and Symptoms
Common symptoms of insomnia include:
- Not feeling refreshed after sleep
- Inability to sleep despite being tired
- Daytime drowsiness, fatigue, irritability, difficulty concentrating, and impaired ability to perform normal activities
- Anxiety as bedtime approaches

Causes
Insomnia is occasionally a symptom of an underlying medical or psychological condition, but it may also be caused by stress (from work, school, or family) or lifestyle choices, such as excessive coffee and alcohol consumption. About 50% of insomnia cases have no identifiable cause. Some conditions or situations that commonly lead to insomnia include:
- Substance abuse—consuming excessive amounts of caffeine, alcohol, recreational drugs, or certain prescription medications; smoking can cause restlessness and smoking cessation may also result in temporary insomnia
- Disruption of circadian rhythms—shift work, travel across time zones, or vision loss; circadian rhythms are regulated, in part, by release of melatonin from the brain
- Menopause—between 30% and 40% of menopausal women experience insomnia; this may be due to hot flashes, night sweats, anxiety, and/or fluctuations in hormones
- Hormonal changes during menstrual cycle—insomnia may occur during menstruation; sleep improves mid-cycle with ovulation
- Advanced age—biological changes associated with aging, underlying medical conditions, and side effects from medications all contribute to insomnia
- Medical conditions—gastroesophageal reflux (return of stomach contents into the esophagus; frequently causes heartburn), fibromyalgia or other chronic pain syndromes, heart disease, arthritis, attention deficit hyperactivity disorder, and obstructive sleep apnea (difficulty breathing during sleep)
- Psychiatric and neurologic conditions—anxiety, depression, manic-depressive disorder, dementia, Parkinson's disease, restless leg syndrome (a sense of indescribable uneasiness, twitching, or restlessness that occurs in the legs after going to bed), post-traumatic stress disorder
- Certain medications—decongestants, bronchodilators, and beta-blockers
- Excessive computer work
- Partners who snore

Risk Factors
The following factors may increase an individual's risk for insomnia:
- Age—the elderly are more prone to insomnia
- Stressful or traumatic event
- Night shift or changing work schedule
- Travel across time zones
- Substance abuse
- Asthma—bronchodilators occasionally cause insomnia
- Excessive computer work
Diagnosis
If you report symptoms of insomnia or sleep disorders to your physician, he or she will first obtain a detailed sleep history by asking questions about your sleep patterns and sleep quality. He or she will also ask questions to determine whether you snore, have any underlying medical conditions, take medications, or have recently undergone any significant life changes. Keeping a sleep diary (recording all sleep-related information) may help the physician determine the type of insomnia and how best to treat it. The primary care physician may recommend a sleep specialist or a sleep disorders center where brain waves, body movements, breathing, and heartbeats may be electronically monitored during sleep.

Preventive Care
The following lifestyle changes can help prevent insomnia:
- Exercising regularly—best when done before dinner; exercise can stimulate arousal so should not be done too close to bedtime
- Avoiding caffeine (especially after noon) and nicotine
- Getting regular exposure to late afternoon sun—stimulates release of melatonin which helps regulate circadian rhythm
- Practicing stress reduction techniques such as yoga, meditation, or deep relaxation
- Early treatment of insomnia may also help prevent psychiatric disorders such as depression

Treatment Approach
Behavioral techniques are the preferred treatments for people with chronic insomnia. Up to 80% of those with insomnia improve with these approaches, and, unlike many medications for insomnia, behavioral techniques do not carry significant risks and side effects. Studies also indicate that healthy sleep habits are necessary for treating insomnia, regardless of its cause, particularly in combination with mind/body therapies such as stimulus control therapy, bright-light therapy, and cognitive-behavioral therapy. Additionally, acupuncture and acupressure have a long tradition of treating insomnia successfully, particularly in the elderly; the herb valerian may be useful for certain individuals. Homeopathic remedies may also improve symptoms in some individuals. Generally, medications by prescription or over-the-counter (OTC) are helpful in promoting sleep, but they are not recommended for insomnia that persists for more than 4 weeks. Long-term use of some medications may cause addiction.

Lifestyle
Studies reveal that healthy sleep habits are essential for treating insomnia. The following healthy sleep habits (in addition to the steps mentioned in the Preventive Care section) may help treat the condition:
- Maintaining a consistent bed and wake time
- Establishing the bedroom as a place for sleep and sexual activity only, not for reading, watching television, or working
- Avoiding naps, especially in the evening
- Taking a hot bath about two hours before bedtime
- Keeping the bedroom cool, well-ventilated, quiet, and dark
- Avoiding looking at the clock; this promotes anxiety and obsession about time
- Avoiding fluids just before bedtime
- Avoiding television just before bedtime
- Eating a carbohydrate snack, such as cereal or crackers, just before bedtime
- If sleep does not occur within 15 to 20 minutes in bed, moving to another room with dim lighting

Medications
Generally, medications may be helpful for short-term insomnia, but they are not recommended for insomnia that persists for more than 4 weeks.
These medications include:
- Over-the-counter sleeping pills (such as diphenhydramine)—promote sleep if insomnia occurs only occasionally
- Antidepressants (such as trazodone)—may be prescribed in low doses at night to promote sleep
- Benzodiazepines (such as triazolam and lorazepam)—often very successful for resolving insomnia in the short term; long-term use may have serious side effects including daytime drowsiness, depression, sleep walking, and addiction; must not be used with alcohol
- Non-benzodiazepine short-acting hypnotics (such as zolpidem and zaleplon)—fewer side effects and less likely to cause addiction than benzodiazepines; particularly effective for elderly and depressed people; side effects may include nightmares and headaches; should not be used with alcohol

Nutrition and Dietary Supplements
A carbohydrate snack of cereal or crackers with milk before bed may help because foods rich in carbohydrates and low in protein and fat may boost the production of serotonin and melatonin, brain chemicals thought to promote sleep. The following dietary supplements may also be helpful in promoting sleep:

L-tryptophan and 5-hydroxytryptophan (5-HTP)
Medical research indicates that supplementation with 1 g L-tryptophan before bedtime can induce sleepiness and delay wake times. L-tryptophan is thought to bring on sleep by raising levels of serotonin, a body chemical that promotes relaxation. This supplement should be used with caution, however, as it may adversely interact with certain anti-depressants (including selective serotonin reuptake inhibitors [SSRIs] and monoamine oxidase inhibitors [MAOIs]) and cause serious negative side effects. Reports of eosinophilia myalgia syndrome (EMS; an autoimmune disorder characterized by fatigue, fever, muscle pain and tenderness, cramps, weakness, hardened skin, and burning, tingling sensations in the extremities) from contaminated L-tryptophan supplements surfaced in 1989, and isolated incidents of EMS continue to be reported on occasion. Studies also suggest that 5-hydroxytryptophan, made from tryptophan in the body or available in supplement form, may be useful in treating insomnia associated with depression. Like tryptophan, however, reports of EMS have been associated with its use.

Melatonin
Melatonin supplements appear to be most useful for inducing sleep in certain people, particularly those with disrupted circadian rhythms (such as from jet lag or shift work) or those with low levels of melatonin (such as some people with schizophrenia). In fact, a recent review of scientific studies found that melatonin supplements help prevent jet lag, particularly in people who cross five or more time zones. A few studies suggest that melatonin is significantly more effective than placebo in decreasing the amount of time required to fall asleep, increasing the number of sleeping hours, and boosting daytime alertness. Although research suggests that melatonin may be modestly effective for treating certain types of insomnia, few studies have investigated whether melatonin supplements are safe and effective over the long term. More research is needed in this area. Generally, when melatonin is used, 1 to 3 mg of the supplement is recommended for sleep, but as little as 0.3 mg has been used successfully.

Herbs
Valerian (Valeriana officinalis)
Studies have shown that valerian acts as a mild sedative and improves both the ability to fall asleep and the quality of sleep.
In one trial, 166 people were randomly assigned to receive valerian extract, an herbal mixture containing valerian, hops (Humulus lupulus), and lemon balm (Melissa officinalis), or placebo. The participants who received either valerian alone or the herbal mixture reported that sleep quality and the ability to fall asleep improved. Other studies have reported similar results. Valerian should not be combined with barbiturates, which currently are rarely prescribed for insomnia. A typical dose of valerian ranges from 150 to 450 mg per day.

Kava kava (Piper methysticum)
Short-term clinical studies suggest that kava kava is effective for insomnia. According to a recent study, kava kava and diazepam (one of the benzodiazepines) induce similar changes in brain wave activity. Although quite rare, kava may cause skin reactions and liver failure (when used at very high doses for a prolonged period). This herb should not be used at the same time as benzodiazepines.

Other herbs that a professional herbalist may use to treat insomnia include:
- Passionflower (Passiflora incarnata)
- Hops (Humulus lupulus)
- Jamaica dogwood (Piscidia erythrina/Piscidia piscipula)
- Lemon balm (Melissa officinalis)
- Lavender flower (Lavandula angustifolia)
- German chamomile (Matricaria recutita)
- Motherwort (Leonurus cardiaca)
- Gotu kola (Centella asiatica)
- Skullcap (Scutellaria lateriflora)

Homeopathy
There have been few studies examining the effectiveness of specific homeopathic remedies. A professional homeopath, however, may recommend one or more of the following treatments for insomnia, based on his or her knowledge and clinical experience. Before prescribing a remedy, homeopaths take into account a person's constitutional type. In homeopathic terms, a person's constitution is his or her physical, emotional, and intellectual makeup. An experienced homeopath assesses all of these factors when determining the most appropriate remedy for a particular individual.
- Aconitum — for insomnia that occurs as a result of illness, fever, or vivid, frightening dreams; commonly used for children
- Argentum nitricum — for impulsive children who are restless and agitated before bedtime and cannot fall asleep if the room is too warm
- Arsenicum album — for insomnia that occurs after midnight due to anxiety or fear; this remedy is most appropriate for demanding individuals who are often restless, thirsty, and chilly
- Chamomilla — for insomnia caused by irritability or physical pains; sleep may be disturbed by twitching and moaning; this remedy is appropriate for infants who have difficulty sleeping because they are teething or colicky; older children may demand things, then refuse them when they are offered
- Coffea — for insomnia due to exciting news or sudden emotions; this remedy is most appropriate for individuals who generally have difficulty falling asleep and tend to be light sleepers; often used to counteract the effects of caffeine, including in infants exposed to caffeine by way of breastfeeding
- Ignatia — for insomnia caused by grief or recent loss; this remedy is most appropriate for individuals who yawn frequently or sigh while awake
- Kali phosphoricum — for night terrors associated with insomnia; this remedy is most appropriate for individuals who are easily startled and restless, often with fidgety feet; anxiety is often caused by both nightmares and events in the individual's life
- Nux vomica — for insomnia caused by anxiety, anger, irritability, or use of caffeine, alcohol, or drugs; this remedy is most appropriate for individuals who wake up early in the morning, and for children who often have dreams of school or fights and may be awakened by slight disturbances; nux vomica may also be used to treat insomnia that occurs as a side effect of medications
- Passiflora — for the elderly and young children, whose minds are often overactive
- Pulsatilla — for women and children who are particularly emotional and do not like sleeping alone; sleeping in a warm room tends to worsen insomnia and the individual may cry due to the inability to fall asleep
- Rhus toxicodendron — for restlessness and insomnia caused by pains that occur when the individual is lying down

Acupuncture
Some reports suggest that acupuncture may have a nearly 90% success rate for the treatment of insomnia. Through a complex series of signals to the brain, acupuncture increases the amount of certain substances in the brain, such as serotonin, which promote relaxation and sleep. Studies of elderly people with sleep disturbances suggest that acupressure enhances sleep quality and decreases awakenings during the night. An acupressure practitioner works with the same points used in acupuncture, but stimulates these healing sites with finger pressure, rather than inserting fine needles.

Chiropractic
No well-designed studies have evaluated the effect of chiropractic on individuals with insomnia, but chiropractors report that spinal manipulation may improve symptoms of the condition in some individuals. It is speculated that, in these cases, spinal manipulation may have a relaxing effect on the nervous system.

Massage and Physical Therapy
Massage has long been known to enhance relaxation and improve sleep patterns. While massage alone is an effective method for relaxation, studies suggest that massage with essential oils, particularly lavender (Lavandula angustifolia), may result in improved sleep quality, more stable mood, increased mental capacity, and reduced anxiety.
In one recent study, participants who received massage with lavender felt less anxious and more positive than participants who received massage alone.

Mind/Body Medicine
A variety of behavioral techniques have proved helpful in treating insomnia. These methods, used with the guidance of a sleep specialist or a sleep specialty team, may be applied singly to treat insomnia, but they may also be combined with other methods of treatment. These methods include:

Sleep Diary
Keeping a daily/nightly record of sleep habits (including the amount of sleep, how long it takes to fall asleep, the quality of sleep, the number of awakenings throughout the night, any disruption of daytime behaviors, attempted treatments and how well they worked, mood, and stress level) can help a person understand and, consequently, overcome his or her insomnia.

Stimulus Control Techniques
This technique involves learning to use the bedroom only for sleeping and sexuality. Individuals using this technique learn to go to bed only when tired and leave the bedroom when not asleep. They must also wake up at the same time every day, including weekends and vacations, regardless of the amount of sleep they had.

Sleep Restriction
This method involves improving sleep "efficiency" by attempting to spend at least 85% of time in bed asleep. The time spent in bed is decreased each week by 15 to 20 minutes until the 85% goal is achieved. Once accomplished, the amount of time in bed is increased again on a weekly basis.

Relaxation Training Techniques
Progressive relaxation, meditation, yoga, guided imagery, hypnosis, or biofeedback can break the vicious cycle of sleeplessness by decreasing feelings of anxiety about not being asleep. Studies indicate that these therapies significantly reduce the amount of time it takes to fall asleep, increase total sleep time, and decrease the number of nightly awakenings.

Cognitive-Behavioral Therapy
This therapy is intended to re-establish healthy sleep patterns by helping an individual cope with his or her sleep problem. One cognitive-behavioral approach, called paradoxical intention, helps to retrain an individual's fears of sleep by doing the opposite of what is causing the anxiety. For example, a person with insomnia worries long before going to bed about not being able to sleep and the difficulty he or she will have at bedtime. Rather than preparing to go to sleep, therefore, the person prepares to stay awake. Another cognitive-behavioral technique, called thought stopping, allows a person with insomnia a certain period of time to repeatedly and continuously think about going to bed. This technique helps "wear out" the anxiety associated with going to bed, and decreases the likelihood that he or she will obsess about falling asleep at other times.

Traditional Chinese Medicine
Many methods have been used historically in Traditional Chinese Medicine to treat insomnia, including herbal remedies, acupuncture, Chinese massage (tui na), and qi gong. Acupuncture is considered to be the most effective.

Pregnancy
- Insomnia usually occurs in the later months of pregnancy when the mother's size and need to urinate disrupt sleep.
- Benzodiazepines should be avoided during pregnancy and while breastfeeding.

Warnings and Precautions
- Alcohol should be avoided in those who are taking prescription medications or OTC sleeping pills
- Discontinuing prescription medications or OTC sleeping pills can lead to rebound insomnia

Prognosis and Complications
Most people who have insomnia with no underlying medical conditions tend to recover within a few weeks.
For those who develop insomnia from a traumatic event (such as those with posttraumatic stress disorder), sleep disruptions can continue indefinitely. People who become dependent on sleeping pills and prescription medication for sleep often have the most difficulty overcoming insomnia.
Liberty is a concept that the average American uses routinely in his daily affairs. A lexical definition of liberty states that it refers to the freedom to believe or act without the restriction of an unnecessary force. As far as the individual is concerned, liberty is the capacity of a person to act according to his will. But do we really know the history of America's liberty? Do we really understand the historical events that have shaped the liberty that we know of and enjoy in these contemporary times? In this paper, I will examine the roots of American liberty from the founding era to the modern debates surrounding the concept of liberty. I will also look into the proponents of liberty and those who have played a significant role in defining and upholding liberty as we know it today.

The Founding Era

Hundreds of years before today, America was an entirely different place. Long before the creation of the Constitution, different European countries had already established their own settlements across America. The Spaniards and the French were among the early colonizers until the time of the British. During the rule of the British Empire, a severe shortage of human labor resulted in the enslavement and indentured servitude of the natives. In the years that followed, conflicts broke out between the Native Americans and the English settlers. It should be noted, however, that Virginia already had black indentured servants in 1619 after being settled by Englishmen in 1607 ("Virginia Records Timeline: 1553-1743," http://memory.loc.gov/ammem/collections/jefferson_papers/mtjvatm3.html), thereby suggesting that the attainment of genuine liberty from the colonizers was yet to be realized. It is perhaps when the English pilgrims came to Plymouth, Massachusetts in 1620 and established their colonies that the concept of liberty came about, not least in the context of the pre-Constitution history of America. As Mark Sargent writes in his article "The Conservative Covenant: The Rise of the Mayflower Compact in American Myth," some of the passengers on the Mayflower "who were not travelling to the New World for religious reasons would insist upon complete freedom when they stepped ashore" since the New World was already "outside the territory covered in their patent from the [British] crown" (Sargent, p. 236). After the Seven Years War between the British forces and the alliance of French and American Indian forces ended in 1763, the British Empire enforced a series of taxes on the Americans so as to cover a portion of the cost of defending the colony. Since the Americans considered themselves subjects of the King, they understood that they had the same rights as the King's subjects living in Great Britain. However, the Sugar Act and Currency Act—both passed in 1764—the Stamp Act of 1765, and the Townshend Act of 1767, to name a few, compelled the Americans to take drastic measures to send the message to the British Empire that they were being treated as though they were less than the King's subjects in Great Britain (Jensen, p. 186). Moreover, the taxes were enforced despite the lack of representation of the American colonists in the Westminster Parliament. One of the most famous protests taken up by the Americans was the Boston Tea Party of 1773, in which numerous crates of tea belonging to the British East India Company were destroyed aboard ships in Boston Harbor.
As a result, the British government passed a series of acts in 1774 popularly known as the Intolerable Acts, further fanning the colonists' sense of oppression. The American Revolution then ensued, beginning as early as 1775 when British forces moved to confiscate arms and arrest revolutionaries in Concord, thereby sparking the first hostilities after the Intolerable Acts were passed (Jensen, p. 434). From 1775 to 1783, the colonies that had formed their own independent states fought together as the Thirteen Colonies of North America. Lasting roughly eight years, the American Revolutionary War ended with the ratification of the Treaty of Paris, which formally recognized the independence of America from the British Empire. During these years, the colonies underwent several changes which constitute part of the developments toward the framing of the Constitution (Bobrick, p. 88). One of these changes was the shift toward the acceptance of notable republican ideals, such as liberty and inalienable rights as core values, among many members of the colonies. Moreover, the republican ideals of the time saw corruption as the greatest of all threats to liberty. In essence, the concept of liberty during the founding era revolved around the liberation of the American colonies from the British Empire and the growing oppression it imposed on the colonists through taxation burdens and a series of repressive acts. For the American colonists, liberty meant severing their ties with the British government and creating their own independent nation recognized by other countries.

The writing and ratification of the Constitution

On the fourth of July in 1776, the Second Continental Congress signed and officially adopted the United States Declaration of Independence, which established the separation of the thirteen American colonies—the colonies that had been at war with Great Britain since 1775—from the British Empire. Although others say that the founding moment of America was not July 4 but two days earlier (Groom, http://independent.co.uk/arts-entertainment/books/review/the-fourth-of-july-and-the-founding-of-america-by-peter-de-bolla-455878.html), it remains a fact that there came a point in time when America finally declared its independence. The evolution of American political theory—especially that which is concerned with liberty—can be better understood through the confrontation over the writing and the ratification of the Constitution. In fact, the Declaration asserts that people have unalienable rights which include life, liberty, and the pursuit of happiness. The Articles of Confederation served as the constitution which governed the thirteen states as part of their alliance called the "United States of America". After being ratified in 1781, the "United States of America" was brought together as a political union under a confederate government in order to better defend the liberties of the people and of each state. In other words, each state retained its independence and sovereignty despite being politically held together as part of the union. However, the Articles were not without opposition and criticism from several notable political thinkers of the time. For example, James Madison saw several alarming flaws in the Articles of Confederation, flaws that threatened the very existence and purpose of the Articles.
For one, Madison was concerned about the dangers posed by divided republics, or "factions," given that their interests may stand in conflict with the interests of others. Madison argues in The Federalist, specifically in "Federalist No. 10," that in order to guard the citizens from the dangers posed by these individuals with conflicting interests, a large republic should be created, a republic that will safeguard the citizens from the possible harms brought by other states. It is likewise important to note that the union is not a homogenous group of citizens with the same political inclinations. Madison also argues that for the government to become effective it needs to be a hybrid of a national and a federal constitution. The government should be balanced in the sense that it should be federal in some aspects and republican in others instead of giving more weight to each separate state over the larger republic. In "Federalist No. 39," Madison proposes and describes a republican government guided by three fundamental principles: the derivation of the government's legitimate power from the consent of the people, representatives elected as administrators in the government, and a limitation on the length of the terms of service rendered by the representatives (Kobylka and Carter, p. 191). Madison also pointed out in "Federalist No. 51" that there should be checks and balances in the government, specifically among the judicial, legislative, and executive branches. The judiciary, therefore, is on par with the other two inasmuch as each of the other two is on par with the other. Giving one of the three branches more power prevents the other two from checking whether that branch is still functioning within its proper limits. As a result, the more powerful branch becomes a partisan branch, which consequently creates dangers to the liberties of the people. Another important part of the evolution of American political theory is the contention raised by Patrick Henry. In a letter sent to Robert Pleasants on January 18, 1773, Patrick Henry saw the relationship between the new government and the institution of slavery as a contradiction, precisely because while the new government was said to be founded on liberty, the evil of slavery persisted under it. During those times, slavery had not yet been abolished, and the new government was unable to meet the challenge of living up to its roles and foundations, failing to address the institution of slavery and demolish it altogether. Moreover, Henry understood that the effort to secede from England was a matter of freedom or slavery, which can also be looked upon as a question of either freedom from, or a continuation of, tax slavery under the British. While Madison was part of the "Federalists" who supported the ratification of the Constitution, the "Anti-Federalists" argued against its ratification. It was Patrick Henry who led the group in criticizing the contents of the proposed Constitution. For instance, Henry argued that the phrase "We the People" in the Preamble of the Constitution was misleading primarily because it was not necessarily the people who agreed to and created the proposed Constitution but the representatives of each participating state. Thus, Henry argued that the Preamble should instead read "We the States," since it was the states that delegated power to the union.
Another argument of the Anti-Federalists is the claim that the central government and, therefore, the central power might result in a revival of the monarchic type of rule reminiscent of the British Empire which the Patriots fought. The fear was that, by delegating a considerable amount of power to the central government, the liberties of the individual states and the people would be weakened. Nevertheless, the Constitution was adopted on September 17, 1787 and later ratified in each of the state conventions held. The Anti-Federalists played a significant role in strengthening some of the points of the Constitution through the succeeding amendments. The first ten amendments to the Constitution are popularly known as the Bill of Rights; it was largely influenced by the arguments of the Anti-Federalists. For the most part, the Bill of Rights aimed to guarantee that Congress shall not create laws which stand against the rights and liberties of the citizens of the nation. In effect, the Bill of Rights limits the power of the federal government in order to secure the liberties of the people in the United States. In "Federalist No. 84," Alexander Hamilton argues against the Bill of Rights on the grounds that American citizens would not have to surrender their rights as a result of the ratification of the Constitution and that, thus, the protection of those rights through the Bill was unnecessary. Moreover, Hamilton also argues that creating a Bill of Rights would effectively limit the rights of the people since those that are not listed in the Bill would not be considered rights. In response to this argument, the Ninth Amendment to the Constitution was introduced and later ratified. The amendment specifically states that the rights of the people are not to be limited to those which are listed in the Constitution. As can be observed, the period before and during the ratification of the Constitution and the succeeding amendments reflects how the people at the time sought to protect the liberties that they had realized and gained after the American Revolution and the defeat of the British Empire. Moreover, the debates at that time revolved around the issue of what to do with the liberties gained and how to secure them for the coming generations. One side—the Anti-Federalists—argued that the central government weakened the independence and sovereignty of the states as well as the rights and liberties of the people. The other side—the Federalists—argued that the Constitution would help preserve and strengthen the Union.

Modern debates

In the years that followed, debates over the interpretation of the Constitution, the role of the government, and the place of the individual in American society escalated. In his essay "Resistance to Civil Government" (popularly known as "Civil Disobedience"), first published in 1849, Henry David Thoreau asserts that the people should not simply remain passive and allow the government to be an agent of injustice. Much of Thoreau's political thought follows that same philosophy. In his work Walden, published in 1854, Thoreau attempts to live a life of solitude in a cabin, away from the reach of society. During one of his days at Walden, Thoreau was arrested on the charge of not paying his taxes. His defense was that he refused to pay federal taxes to a government that tolerated slavery.
In essence, the fact that Thoreau chose to stay in solitude for approximately two years (although the contents of Walden were made to appear as though all the events happened within a single year) signifies his decision not to conform to the dictates of society. On the contrary, Thoreau lived a life of liberty, free to do whatever he chose without the institutions of society restraining him. The same sentiment—non-conformity, or disobedience to the dictates of society and especially the government—echoes through Thoreau’s other work, “Civil Disobedience”. Thoreau asserts that “the only obligation which I have a right to assume is to do at any time what I think [is] right” (Thoreau, http://sniggle.net/Experiment/index.php?entry=rtcg#p04). That passage, along with the rest of “Civil Disobedience” and its theme in general, implies that people have an inherent liberty: the liberty to do at any time what they think is right. Taken together in the context of liberty, Thoreau seems to suggest that people ought to disobey a government that oppresses other people, since each individual has inalienable rights that nobody can take away, not even the government. In the face of oppression such as slavery (which was still very much a part of America decades after the ratification of the original Constitution, the issue having been a delicate and contentious matter during the Philadelphia Convention), Thoreau even suggested that Abolitionists should not confine themselves to the mere thought of abolishing slavery but should resist the instructions of the government, such as by refusing to pay taxes. Thus, as a reading of Thoreau’s works suggests, to have liberty is to act upon crucial issues instead of passively allowing contentious actions of the government to thrive and continue.

I cannot help but think that Thoreau’s concept of liberty is absolute, which I take to mean that it is bounded only by one’s own disposition rather than limited by the government. Moreover, since Thoreau suggests that liberty is doing at any time what one thinks is right, an individual should first know whether what he or she thinks is indeed right rather than wrong. Charles Madison notes that Thoreau was heavily concerned with the “ever pressing problem of how one might earn a living and remain free” (Madison, p. 110), and I cannot help but think that Thoreau was attempting to embody and enact his individualistic beliefs. As Leigh Kathryn Jenco argues, “The theory and practice of democracy fundamentally conflict with Thoreau’s conviction in moral autonomy and conscientious action” (Jenco, p. 355); democracy is essentially the rule of the majority, which consequently ignores the decisions of the minority. However, I think that much of Thoreau’s thought was heavily influenced by the circumstances of his time. His aversion to the government’s taxation policy stemmed from the fact that the government of that time still tolerated slavery, which is directly opposed to an individual’s liberty. Thoreau’s insight into the perceived conflict between the liberties upheld by the Constitution and the actual state of the government of his day points us to the ideal that the people are sovereign, because the people are the ultimate source of the government’s power.
If it is indeed the case that the Constitution upholds the rights of individuals, including the right to liberty, it seems appropriate to ask why slavery was not abolished entirely at the time the Constitution was ratified. In fact, it was only in 1865, under the Thirteenth Amendment—about 80 years after the original Constitution was adopted—that slavery was legally abolished and Congress was given the power to enforce abolition. In the period after the original Constitution was ratified and before slavery was abolished, it can be said that not all people living in America enjoyed full liberties; many still labored as slaves to their American masters. That is perhaps an often neglected piece of history that undermines the spirit of creating a Constitution and a government meant to uphold the rights of the people.

The pre-Revolutionary period, the founding era, the ratification of the original Constitution, the creation of the Bill of Rights, and the succeeding amendments to the Constitution all stand as testimonies to the evolution of American political thought. The concept of liberty has played an important role in the development of the federal government and the Constitution. Although the history of American political thought reveals that the attainment of liberty has never been a smooth journey, contemporary America has reaped great benefits from the sacrifices and ideas of the Founding Fathers and of all the people who lived and died in those times. Some might even argue that liberty has yet to be truly attained in today’s American society. But if liberty has yet to be attained in practice, how is it that people have the right to air their grievances before the government? How is it that people have the liberty to do as they please so long as what they do is legal? In any case, the present American Constitution guarantees the liberty of the people, and there are institutions that seek to promote and guard that important right. Had the early Americans swallowed everything that the British Empire threw their way, and had the Founding Fathers abandoned the creation and amendment of the Constitution, the United States of America would not have become the land of the free and the home of the brave.

Works Cited

Bobrick, Benson. Fight for Freedom: The American Revolutionary War. 1st ed. New York, NY: Atheneum, 2004.

Groom, Nick. “The Fourth of July and the Founding of America, by Peter De Bolla”. 2007. Independent.co.uk. October 16, 2008. <http://www.independent.co.uk/arts-entertainment/books/reviews/the-fourth-of-july-and-the-founding-of-america-by-peter-de-bolla-455878.html>.

Hamilton, Alexander, James Madison, and John Jay. The Federalist, on the New Constitution. 1787. October 18, 2008. <http://books.google.co.uk/books?hl=en&id=5jMTAAAAYAAJ&dq=the+federalist&printsec=frontcover&source=web&ots=A9c2bdwU7c&sig=k5wcg1Bfdq3We7mJ8jsQXjLsq1Q&sa=X&oi=book_result&resnum=3&ct=result#PPP3,M1>.

Jenco, Leigh Kathryn. “Thoreau’s Critique of Democracy.” The Review of Politics 65.3 (2003): 355-81.

Jensen, Merrill. The Founding of a Nation: A History of the American Revolution 1763-1776. Indianapolis, IN: Hackett Publishing Company, 2004.

Kobylka, Joseph F., and Bradley Kent Carter.
“Madison, The Federalist, & the Constitutional Order: Human Nature & Institutional Structure.” Polity 20.2 (1987): 190-208.

Madison, Charles. “Henry David Thoreau: Transcendental Individualist.” Ethics 54.2 (1944): 110-23.

Sargent, Mark L. “The Conservative Covenant: The Rise of the Mayflower Compact in American Myth.” The New England Quarterly 61.2 (1988): 233-51.

Thoreau, Henry David. “Resistance to Civil Government”. 1849. October 18, 2008. <http://www.sniggle.net/Experiment/index.php?entry=rtcg#p04>.

“Virginia Records Timeline: 1553-1743”. The Library of Congress. October 17, 2008. <http://memory.loc.gov/ammem/collections/jefferson_papers/mtjvatm3.html>.
You have been hearing the terms health and wellness for a long time, but have you ever thought about what the difference between health and wellness is? Is there any similarity, or are these different concepts? Let’s understand it in simple language. To do so, we first have to understand what health and wellness individually mean, and then we will look at their collective meaning. Let’s explore the concept step by step so that it is easier to understand.

The World Health Organisation defines health as follows: “Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” In other words, health has to do with our wellbeing, not only on a physical level but on a mental and social level too. As we all know, the human being is a social animal, and hence it is important to stay well not only on the individual level but also on the social level. A good physical, mental and social state helps us live a meaningful life instead of spending our time curing diseases. Further on in this article, we will look at ways to maintain good health.

Again, according to the WHO, wellness means “A state of complete physical, mental, and social well-being, and not merely the absence of disease or infirmity.” There is one more definition, from the US National Wellness Institute, which defines wellness as “A conscious, self-directed and evolving process of achieving full potential.” If health is a goal, then wellness is the way and path of keeping it in a good state. We always say that prevention is better than cure, and wellness plays a significant part in the prevention phase: it reduces the chances of something going wrong, and thus reduces the need to cure anything.

Concept of Health and Wellness

The concept of health and wellness is associated with our lifestyle. To live a good life, we have to be able to take full advantage of life and its experiences, and to experience anything, we must have well-maintained health. You cannot have the experience of trekking in the mountains if you weigh more than 400 pounds, because you will not even be able to walk in that case. That is just an example of physical health, and the same applies to the other areas, such as the mental and social levels. It is necessary to have good health to take in the full experience of life. The question “How can we maintain good health on a physical, mental and social level?” leads us to the term wellness. Wellness is nothing but the process of achieving and maintaining good health on all levels. It includes making systematic efforts, such as regularly practising meditation, reading and exercise, and it includes inculcating these good habits and making them part of our life. This way of achieving and maintaining good health in all areas is what we call wellness.

What Are the Similarities Between Health and Wellness?

Grounded in Well-Being
Both health and wellness are grounded in the same idea of well-being. The WHO defines each as “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity,” which means both have to do with our wellbeing, not only on a physical level but on a mental and social level too.
Associated With Individuality
Both health and wellness are associated with individuality. You cannot work out for someone else; everyone has to do their own workouts and exercises to maintain good individual health. Health and wellness take an individual approach, although group activities can help boost effort and quality. For example, if you can do 10 push-ups a day at home, then doing them with other fitness enthusiasts might give you the motivation to manage 15 or 20 push-ups in the group. However, your fitness-minded friends won’t be able to do the push-ups for you; they can only motivate and guide you.

Making Our Life Meaningful
Both health and wellness make our life meaningful by improving our quality of life and reducing the chances of any negative impact from the environment and external factors. For example, if you are fit, it means you have good health, and you don’t carry the risks of a person weighing more than 400 pounds. In this example, the good physical condition is the health, whereas regularly exercising and maintaining that fitness is the wellness. Hence both help us make our life meaningful and take full advantage of it.

Saving Money and Time
We all love savings, right? According to the data, the average cost of healthcare per person in the US is $13,722 per year, and there are medical expenses everywhere. If you have good health and you take proper care of your wellness to maintain that good health, you are saving this money. You are also saving precious time by reducing the chances of needing surgery due to bad health.

Primary Differences Between the Concepts of Health and Wellness

Let’s have a look at the primary differences between the concepts of health and wellness.

Health is a Vehicle, and Wellness is a Driver
If taking full advantage of our life is the goal, then our health is the vehicle for it. Wellness is how we drive the vehicle, and it decides how soon we reach our goal. A good driver not only takes good advantage of the vehicle but also ensures that it is well maintained.

The Difference in Approach
Health relates more to the state of mind, body and so on; precautions against, and recovery from, illness, disease and other health-related issues are the focus of health. Wellness, however, focuses on improving the quality of life by inculcating habits such as regular exercise and ensuring well-being.

The Different Work Environments
Health is more about maintaining a good state on the physical, mental and social levels, whereas wellness is about improving the quality of life. The work areas are divided, but they are complementary: health has more to do with state and condition, whereas wellness concerns the quality of all the factors of life in general.

How Important Are Health and Wellness?

Health and wellness are significantly important in our life because we are human beings, and we live to gain new experiences, knowledge, wisdom, happiness and feel-good moments. Life consists of such things, and they make our living meaningful. If we cannot get any of them in full capacity, then we will not be able to take full advantage of life.
As we saw in the example, if you love trekking in the mountains but have the barrier of a heavily overweight body, it will create trouble. Hence it is essential to have good health, and wellness is the way to achieve it. Health and wellness are essential because they allow us to take in experiences in their original form.

Importance of Health and Wellness in Life

Health and wellness are not a one-time thing. They form an important process that one should follow throughout life to maintain good health and vitality; that is what we mean by a healthy life. It not only increases the quality of life but also contributes significantly to the quantity of life. Health allows us to experience things in their original form, whereas wellness increases the lifespan and reduces the chances of health-related problems. Today we live in an era where we have to focus not only on physical health but also on emotional health. We have not yet studied the long-term disadvantages of things such as overuse of social media; the main reason for this lack of information is that these things are still relatively new and we are the first to experience them. Perhaps after 50 or 60 years we will be able to comment on such topics with proof, but that doesn’t mean we shouldn’t focus on our health and wellness now. Health and wellness were important 500 years ago, they are still important today, and I am sure their importance will grow manyfold in the coming years.

What Are the Five Components of Health and Wellness?

There are five primary components of health and wellness. These components will help you understand your level and state of health in each area. The first step of improvement is analysing the current state, and this part will help you understand and analyse it.

Physical Component
The physical component means our body: the organs, the stamina and the other factors directly associated with our physique. It is the structure through which we experience life. We can eat, smell, walk, speak, listen and see because the organs required for it are working fine. However, it is essential not only to be in the right physical condition but also to maintain it. A good digestive system, normal blood pressure and so on are the markers of internal physical health. To maintain good physical health, one should exercise regularly, eat and drink healthily, and keep a proper sleep schedule for quality sleep. These are the critical factors directly affecting health on the physical level.

Emotional Component
As a society, we used to pay little attention to emotional health and this aspect of life, and even today most people neglect this vital component of health and wellness. The emotional component means how we are feeling emotionally and what our emotional behaviour is. Even if you have good physical health, you won’t be able to use it to full capacity if the emotional component is not in a good state. For example, even if someone gave us the fastest rocket for landing on Mars, we wouldn’t be able to land on Mars simply because we don’t have the necessary information and skills to launch the rocket. Emotional health is a significant factor. To maintain good emotional health, exercises such as keeping a journal, appreciating others, keeping a positive approach and meditating regularly all help. Suppose you were swimming and couldn’t sense or feel the water: it would be odd, right?
That is exactly what happens when we try to take in new experiences while the emotional component is not in a good state.

Social Component
The social component of health means having a meaningful social life. The human being is a social animal, and we depend on each other to survive. To benefit from this system, we have to improve the social component of our health. Even though we refer to digital platforms as social media, I don’t think they are social in the real sense; they are an illusion, with their own advantages and disadvantages. I would use them, but not exclusively, and I would refrain from letting them replace my social life. Having one friend in real life is much more important than having 1,000 friends on Facebook, and a single deep face-to-face conversation is much more valuable than random chat with 100 friends online. To maintain an excellent social life you don’t always have to be an extrovert; even if you are an introvert, you can have a perfectly good social experience. There are specific activities, such as meetups, group tours and functions, where we can build new connections and take advantage of the social ecosystem. This is what the social component of health is all about.

Spiritual Component
Spirituality means understanding ourselves in a real sense. The first step of improvement is analysis, and we need information to analyse anything. If you want to explore yourself, you have to build a connection with your inner self. We have to start the spiritual journey that takes us inward, where we may discover new aspects of ourselves. If you haven’t read the earlier article on this, I would recommend it, because it gives a clear idea of what spirituality and the spiritual journey are all about.

Intellectual Component
Human beings evolved over time, and the most crucial factor in the evolution of humanity was discovering something new and passing that knowledge on to the next generation; this is the process of continuous development. The intellectual component of health is related to growth, and this growth is not limited to the individual level but extends to the human species as a whole. You learn something new from one person and share something with other people, a process that helps not only you but others as well. This transfer of value leaves a footprint with us, and we call it knowledge. To improve the intellectual component of health, reading is the best method, in my opinion. You can read books, informative articles on the internet and newspapers to enhance your knowledge and form your own opinions. Being intellectual means getting factual information, thinking about and experimenting with it, and sharing the value with others.

What Are the 12 Dimensions of Wellness?

There are 12 dimensions of wellness on which you can concentrate, improving your quality of health by understanding each of them.

Self Responsibility and Love
This is all about knowing yourself, understanding what actions are needed to improve your shortcomings, and loving yourself. If you can’t love yourself, you cannot expect others to do it. Everyone has their own timeline, so instead of comparing yourself with others, you should do your own self-analysis and love yourself.

Breathing
What is the one thing that differentiates a living person from a dead one? It’s the breath. As Albert Einstein said, “Energy cannot be created or destroyed. It can only be changed from one form to another.” Breathing is the source of new energy; it brings new vibes with every breath we take.
Every breath brings a unique opportunity, and at the same time it highlights the truth that our time is limited.

Sensing
How do we decide whether anything is excellent or terrible? It is our reaction to the information experienced by our senses. Sensing includes touch, sight, sound and so on; it is the instrument through which we actually receive an experience, and our senses help us decide whether we are going to keep seeking that experience or not.

Eating
We need energy to survive, and this comes from eating. There is a famous saying that we become what we eat. If you want a healthy life for the long term, then you should be eating healthy food. Continuously eating junk food, on the other hand, impacts our health negatively and creates health problems that can cause serious trouble in the long term. Hence a balanced diet is always recommended.

Moving
We are not trees that stay in one place from birth to death. Moving from one place to another is a fundamental characteristic of living. Life is a journey, and a journey means movement; it enriches us with new wisdom and experiences.

Feeling
Feeling something means you are emotionally alive, and that is one of the essential aspects of living. You experience many things throughout the day: sometimes you feel happy, sometimes sad, sometimes angry, sometimes calm. What if you felt the same way 24/7? It would make life monotonous, and hence a range of feelings is an essential part of our life.

Thinking
Everyone on this planet is different and has a different mindset. The same people travelling in the same car will not have the same experiences. Everyone has different experiences, knowledge and thoughts, and that makes each of us unique. Thinking makes us wiser because it allows us to analyse and reach a conclusion, and that conclusion helps us do things better the next time.

Playing & Working
We have lost the balance between personal and professional life. Our days are filled with the pressure of work, leaving no time for ourselves. And it’s not just adults: kids are not playing these days either, because they are mostly busy with online games. The right balance of playing and working can help us sustain good health over a long period without reducing our ability to earn.

Communicating
Communication is essential if you want to share and seek information with others. Being part of the social ecosystem, we have the advantage of learning from others, and it is also our responsibility to share and give back to society. If you haven’t read the in-depth guide on communication techniques, I would recommend it; it gives an idea of what communication is and how to improve it to make things easier in life.

Intimacy
As we have already seen, a meaningful conversation with one friend is worth more than a random chat with a hundred people on so-called social media. Intimacy helps us build relations based on emotional bonding that last for the long term. When intellect doesn’t work, intimacy helps.

Finding Meaning
There is a meaning to everything in this world; even as you are reading this now, there is a meaning to it. Life is all about finding that meaning and experiencing and exploring a meaningful life. The best way is to start from ourselves, and that will give you the direction to move forward.

Transcending
There is only one barrier to your growth in life, and that is your mindset. We must understand our potential and keep moving towards our goal. Do not let anyone tell you that you cannot do anything.
If you have a goal in mind and you believe in it, then you will achieve it at any cost. This is not just a motivational statement; there is a logic behind it. Whenever you believe something is possible, you give your best to achieve it. Even if there are real obstacles, you will cross them because you know the goal is achievable at the end of the journey. Take the first step, and you will realise that where there is a will, there is a way.

We have seen that health is all about the state or condition of our body, mind and social behaviour, while wellness is a process that helps to improve our health, a continuous process that at a certain point becomes a habit. An individualistic approach, making life meaningful and saving money on medical expenses are the similarities between health and wellness. The primary differences are that health is a state while wellness is a process, that health concerns particular areas while wellness is focused on improving life as a whole, and that health and wellness work in two different environments that are nonetheless complementary. Further, we have seen the five components of health and wellness—physical, emotional, social, spiritual and intellectual—along with how to improve our health in each of them. Then we looked at the 12 dimensions of wellness: Self Responsibility and Love, Breathing, Sensing, Eating, Moving, Feeling, Thinking, Playing & Working, Communicating, Intimacy, Finding Meaning and Transcending.

What do you think about health and wellness, and what is your regular practice for maintaining good health? I would like to know your opinion on this topic. Don’t forget to comment and share your opinion so that other readers can also get value from it.
Before January: the March towards the Invasion of France

– 7 NOVEMBER, 1813: THE END OF THE CAMPAIGN IN SAXONY
Reaching the fortress at Mainz on the river Rhine on the morning of 2 November, Napoleon remained there for six days trying to reconstitute what remained of the army (70,000 men, of whom thousands were sick with typhus and only 30,000 were properly under orders). As he noted in a letter to Cambacérès written that very day, “I'm attempting to rally, rest and reorganize the army” (meaning, “the army is dispersed, exhausted and disorganised”). With the Confederation of the Rhine collapsing (Duke Louis of Darmstadt and Frederick of Württemberg joined the allies on the same day), Napoleon headed for Paris incognito on 7 November to begin preparations for the continuation of the fight.

– 2 DECEMBER, 1813: “ORANJE BOVEN!” (‘Long Live the House of Orange'): William I of Orange, made Sovereign Prince of the Netherlands on 2 December, 1813
After nineteen years in exile, the Prince of Orange returned to the Netherlands on 30 November, 1813. He was received with a triumphant welcome in Scheveningen, a reception all the more cordial given the anti-French riots which had taken place in all of the main Dutch cities (see Bulletin n° 690), with support from the United Kingdom and Prussia. Upon his arrival in Haarlem on 1 December, the Prince of Orange declared: “I have come amongst you, determined to forgive and to forget the past.” In Amsterdam on the following day, 2 December – quite ironically (see the beginning of this Bulletin) – William I of Orange was proclaimed Sovereign Prince of the Netherlands by the interim Dutch government. The French evacuated the Netherlands quickly, though some garrisons held out in strongholds such as Breda or Berg-op-Zoom. In the meantime, the Grande Armée was preparing to leave for other fronts. Rapp had capitulated in Danzig on 29 November after an 11-month siege, and the allied armies were about to enter France, since Belgium was also slipping away from French control, following the Dutch example. On 4 December, the coalition signed the Declaration of Frankfort, in which the allies solemnly declared themselves to be at war with Napoleon and not with France.

– 9 DECEMBER, 1813: THE FIVE-DAY BATTLE (NIVE AND ST PIERRE) AROUND BAYONNE
After a long period of wet weather in the Pyrenees, the sun came out, and Wellington, confident in the sunshine and in the news of Napoleon's defeat at Leipzig, decided on 9 December, 1813 to advance upon the key fortress of Bayonne. With the roads however still waterlogged, he launched an indecisive attack on the French positions before Bayonne. Soult thought Wellington's position weak (the latter's troops were separated by the swollen river Nive) and launched a counterattack on the following day. This too foundered, not just because of the weather but also because of the chopped-up nature of the terrain – in the absence of a precise battle line, detachments found themselves either surrounded or attacked from behind. The British and allied troops held on, and the two days ended in stalemate, both sides having lost about 2,000 men. As Wellington attempted to consolidate his position around Bayonne, Soult once again tried to take advantage of the Anglo-Irish general's ‘false' position across the Nive and attacked the British right under Hill before St Pierre d'Irube, which was cut off from the rest of the army by the exceptionally high Nive – so high that an allied pontoon bridge had been swept away.
Once again, the British and allied troops held on in the face of a stiff French attack – some Portuguese divisions performed particularly well, saving some British bacon – and Soult was forced to fall back on Bayonne, having lost nearly 6,000 men over the five preceding days – only 800 more than his opponents. The writing was on the wall for the French Empire. With France close to being invaded, foreign contingents within the French army (notably German or Italian in the theatre around Bayonne) were either returning home or being sent back – this despite Napoleon's orders to have them disarmed and interned far from the front line.

– 7 JANUARY, 1814: THE KING OF NAPLES DEFECTS
At the end of August 1813, an Austrian army marched into Italy against Viceroy Eugène and his ill-equipped and inexperienced army of 56,000 men, a majority of whom were not French. Murat had returned to Naples on 5 November. Observing the state of agitation in areas openly expressing their anti-French sentiment, he immediately began negotiations with Austria and with Italian Nationalists. In fact, Murat was preparing for several possible outcomes. Whilst conferring with the Austrians, he was leaving open the possibility of supporting Italian independence within a unified kingdom under either Eugène… or himself. The state would not be hostile to the Emperor, but it would nevertheless be outside the Empire. This new status for Italy would theoretically have obviated any need for direct intervention by the coalition on Italian soil. Napoleon took no notice of Murat's letters presenting this plan. There were rumours that Murat had joined the enemy camp and that his men were disarming Eugène's troops in Rome. In December, Napoleon sent Fouché to inquire about the position of the king of Naples. Far from informing Napoleon of the current intrigues, Fouché delayed the negative reports. Once Napoleon received confirmation that Murat had indeed been negotiating with the Austrians, he fulminated: “the perpetrator of such infamous treason – has there ever been such? – will certainly have to face the consequences”. On 31 December, the Austrian envoy Neipperg, Marie-Louise's future second husband, made it a fundamental condition, if Murat was to keep his throne, that he should declare war on France. On 6 January, Murat's minister, Gallo, negotiated a treaty, which was signed during the night of 7 to 8 January and immediately ratified by the king. Murat was indeed eager to make a pact with Austria, since the British were actively working for a return of the Bourbons to both parts of the kingdom of the Two Sicilies. Although the text was dated 11 January, negotiations continued for another month. Murat was forced to give up Sicily, which remained under British influence (Britain and Naples negotiated an armistice at the same time, signed on 26 January), but was allowed to extend his territory northwards at the expense of the states around Rome. Murat then placed 30,000 Neapolitan troops at the disposal of the coalition – the soldiers that he had never sent to Eugène back in November – to operate in Northern Italy. The only restriction was that they should not enter French territory.

January 1814: First fights in Champagne

– 25 JANUARY, 1814: NAPOLEON LEAVES PARIS
The Emperor devoted the third week of January 1814 to preparations for the French Campaign. On 20 January, at 11am, Napoleon reviewed several cavalry regiments at the Louvre Carrousel.
On 23 January, he received the officers of the National Guard and presented his son to them. He also signed letters patent making Marie-Louise regent and received the final oaths of allegiance from civil servants. On the following day, 24 January, he appointed Joseph Lieutenant of the Empire, and gave one final goodbye kiss to his son and his wife before leaving Paris at six in the morning on 25 January; he arrived in Châlons late in the evening. He was never to see them again.

– 29 JANUARY, 1814: A CLOSE SHAVE AND THE RETAKING OF BRIENNE
At the end of January 1814, Blücher's army was, of all the allied forces, the most likely to reach the French capital. In the three days between 25 and 29 January, Napoleon therefore focused his attention on this army, vainly scouring the area around Saint-Dizier (near Troyes and Reims) in the hope of confronting the Prussians. The battle eventually took place near Brienne, the very town where the young Bonaparte had completed his military training. On the evening of 29 January, while the battle was raging, Napoleon was nearly killed in an ambush. As he was riding at the front of some of his troops, listening to a report by General Gourgaud, a Cossack suddenly emerged and attacked the Emperor with a lance. Gourgaud shot him dead at point-blank range. Later, Napoleon was to give the General one of his own swords to thank him for saving his life. However, on St Helena merely a few years later, Napoleon refused to credit Gourgaud with saving his life, something which Jacques Macé, who wrote a biography of General Gourgaud in 2006 (in French), described as a sort of “moral cruelty”. Blücher had to abandon the Brienne château in the face of a French attack and was forced to wait for Schwarzenberg's Austrian reinforcements whilst Napoleon attempted to consolidate his troops. Two days later, on 1 February, 100,000 Allied troops were to take on 40,000 Frenchmen at La Rothière.

– 1-3 FEBRUARY, 1814: FRENCH DEFEAT AT LA ROTHIÈRE AND RETREAT TO TROYES
The Battle of La Rothière was in fact the second day of the Battle of Brienne. It started at 1pm on 1 February but was lost in five hours by the French, who were outnumbered by more than 2 to 1 (40,000 against 100,000 combined allies). After his troops had fought heroically and suffered great losses, Napoleon ordered a retreat via the Lesmont bridge, which Schwarzenberg had not managed to take, and the French retired to the Brienne château towards 8pm. The Emperor left Brienne on 2 February, spent the night in Piney, and entered Troyes at about 3pm on Thursday the 3rd; he was to stay there three days. It was there he learnt that Blücher had decided to march on Paris without coordinating his movements with Schwarzenberg, whose shortcomings at La Rothière had greatly annoyed him. Napoleon happily seized upon this separation of the two armies, which in fact levelled the playing field for him.
On 14 February, in Vauchamp, Blücher attempted to retake the initiative, but the French army, then more numerous, were to win the day. Blücher was momentarily paralysed by the lack of supplies and the loss of nearly 20,000 men. Napoleon could now turn his attention to Schwartzenberg, whose forces had so far been held in check by Mortier. – MANEUVRES IN CHAMPAGNE AND NEGOTIATIONS IN CHÂTILLON On the evening of the victory at Vauchamp (14 February), Napoleon decided to stop pursuing Blücher and turned instead towards the Bohemian army which was threatening Paris. Once he had joined with Victor and Oudinot's men, Napoleon was able to launch his offensive towards Montereau, which he retook on 18 February. There Napoleon was told, on 20 February, the official news about the King of Naples' defection (see above, January 1814). At the same time, the Congress of Châtillon was taking place (it had started on 5 February). Caulaincourt had been given carte blanche to negotiate peace with the allies. The conditions the latter were now demanding were far harsher than those proposed in Frankfurt. It was no longer a question of preserving France's natural borders; now they demanded that France return to the frontiers of 1792. After the defeat in La Rothière, Caulaincourt had hardly any room for manoeuvre in the negotiations; and even that was curtailed by the fact that Razumovsky (following the Tsar's instructions) was secretly trying to overturn the negotiations. In fact, Razumovsky was successful in that he caused the talks to stop on 10 February – the period of Napoleon's first victories. Napoleon withdrew the full powers from Caulaincourt on 12 February. On 19 February, the French Emperor received a peace proposal from the Allies, which he officially and categorically rejected: “I am ready to cease hostilities and to let the enemies go safely back home, provided they sign the preliminaries based on the Frankfurt propositions”. The month of March was to prove that this attempt to find a diplomatic solution was in vain. Despite the disagreements, the Coalition – which included Austria, the dynastic links between the two empires being in the end of little importance – had already moved on: all it intended to put in place now was a Napoleon-free Europe. – PARADE OF PRISONERS AND ENEMY FLAGS IN PARIS Whilst the Emperor was campaigning in the region of Champagne in order to save the capital from capture, he was also thinking how to keep the morale of the people up and how to ensure the continued support of the population of Paris. This is why a parade of Russian prisoners took place on the Boulevard Saint-Martin in Paris after the victory at Montmirail, on 17 February. On the day after the victory in Montereau, on 19 February, 1814, Napoleon wrote a letter to the French Ministry of War, Clarke, suggesting there should be another parade of the same type: “It seems appropriate to me there should be a review of the national guard before a parade of flags and with military music. You should say that these flags were taken at the Battles of Montmirail, Vauchamps and Montereau.” On 27 February, when this second parade took place, Troyes had been recaptured from the Austrians three days previously. – TREATY OF CHAUMONT At the beginning of February 1814, Napoleon had sent Caulaincourt to Châtillon to negotiate a possible peace treaty with the Coalition, based on the Treaty of Frankfurt. But his enemies had other plans in mind. 
No doubt the allies were somewhat taken aback by the Emperor's unexpected victories during this first phase of the French Campaign. They were also facing internal dissent, as Blücher and Schwarzenberg's movements were not well coordinated. Moreover, the allies met in Chaumont at the end of February to examine the project of a new alliance against Napoleon, at the initiative of the British Minister Castlereagh. This treaty, signed on 9 March but dated 1 March, stipulated that each member of the Coalition – namely Prussia, Russia, Austria and the United Kingdom – vowed to continue its war effort and would refuse to sign any separate peace treaty. It also envisaged that, should France attack them again during the next twenty years, the Coalition would automatically be reactivated. – CRAONNE, LAON AND SOISSONS With Napoleon on his heels, Blücher halted at Oulchy-le-Château, in the hope of being able to take a stand there once he had received reinforcements from Bülow and Wintzingerode. This plan however came to nothing and he was forced to retreat because both Bülow and Wintzingerode, in spite of his orders to join the Silesian army, had begun the siege of Soissons. To everyone's surprise, the city capitulated almost immediately, on 5 March, allowing the allies to free passage across the river Aisne. Napoleon, furious at the loss of Soissons, nevertheless continued military operations and forced his way over the bridge of Berry-au-Bac (also on 5 March). Blücher then managed to reorganise his army and let it rest, making it possible for him to retake the offensive. The confrontation at Craonne took place on 7 March. In the end, it was only at nightfall, and after severe losses on both sides, that the allies abandoned the field, leaving Napoleon sole master of it. On 9 March, before Laon, the French Emperor came up against the allies who had adopted an extremely strong position, and he was forced into retreat towards Soissons to reorganise his army and to attempt to make up for his recent losses, which had been particularly severe for the men under Marmont after their attempt to take Athies. These successive defeats demoralised part of the army and caused a new wave of desertions from French ranks. – THE FALL OF THE KEY CITIES OF LYONS AND BORDEAUX The costly Battle of Orthez, 27 February, as planned by Wellington, put Marshal Soult in an exceedingly difficult position. Pushed away from Bayonne by Wellington towards Toulouse, Soult was furthermore deprived of half of his cavalry by the Emperor who requisitioned them for service near Paris. The road was now open for Wellington to send General Beresford to the strategic port of Bordeaux. The British general entered the almost unprotected town on 12 March – the inhabitants had already expressed their pro-Bourbon inclinations. Beresford was in fact welcomed by the locals as a liberator. On another front, Augereau had been trying to counter Austrian troop manoeuvres in the Ain region and the Rhone valley since January. On 18 March, he met his first clear defeat at Saint-Georges-de-Reneins, about forty kilometres north of Lyons. On 20 March, 1814, the battle of Limonest, on the outskirts of Lyons, sounded the death knell for any further hopes he may have had. The 24,000 troops under his command came up against 56,000 Austrians. Overwhelmed by the sheer numbers, Augereau's men were driven back. On 22 March, Austrian troops entered Lyons. 
After the defeat at Laon, Napoleon managed to reorganize his troops near Soissons and to retake the city of Rheims from the Prussians. He was then faced with a dilemma: he could either follow his initial plan and join the men he had left in Lorraine and Alsace to the east, or he could keep protecting Paris, for which he needed to stop the inexorable progress of the allies towards the capital. He chose the second option and came back towards Troyes in order to stop Schwarzenberg on his way to Paris. The allied army (100,000 men) and the French army (27,000 men under Napoleon's direct leadership) fought at Arcis-sur-Aube on 20 and 21 March. The French resisted bravely but only avoided total defeat thanks to Schwarzenberg's errors (he omitted to destroy the only bridge over the river Aube, which was indispensable to Napoleon's retreat). In fact this omission by the Allies (together with their negligence in pursuing their enemies) was the sign of a change in allied strategy, namely to concentrate solely on reaching Paris without worrying about the French Emperor's moves. Tsar Alexander of Russia had approved this strategic change: on 23 March, he had learned of the turmoil in Paris via an intercepted letter to the Emperor, supposedly from Savary. The decision to march on the city whatever the cost was approved on 24 March after an allied council of war at Sommepuis; thereupon the combined allied armies converged on Paris.

– THE CAPITULATION OF PARIS
On 28 March, with the allies' advance becoming extremely threatening, an extraordinary Regency Council met in the Tuileries with Empress Marie-Louise, supported by Joseph Bonaparte. Against the advice of the majority of the council – composed of the presidents of the legislature, the Senate and the ministers – Joseph decided to take the Empress and the Roi de Rome away from the capital, thus respecting the Emperor's earlier declared wishes. Indeed, Blücher and Schwarzenberg were at the gates of Paris, the former at St-Denis (north of Paris) and the latter at Bondy and Neuilly-sur-Marne (to the east). On 30 March, Moncey and his 40,000 men, bolstered with volunteer reinforcements, were to defend Paris against the 100,000 men of the allies. The Clichy Gate fell after a bitter struggle, with enemy troops overrunning the St-Denis plain. Around 4pm, Marmont attempted to negotiate a 24-hour truce; his plan was to wait for the Emperor, who was near Juvisy. The French capital, however, was to cave in to Alexander I's threat to ransack and pillage the city. The capitulation was signed at 2am on 31 March, and the remains of the Grande Armée that had stayed to defend the capital evacuated Paris. On 1 April, the Senate, under Talleyrand's influence, voted to depose Napoleon from the imperial throne. The Emperor, on learning the news of the fall of Paris barely twenty kilometres from the city, turned around and established camp at Fontainebleau. It was here that he was to be forced to negotiate a peace treaty, for which the allies would make no concessions and accept no conditions.
– ABDICATION, CHARTER, RESTORATION
On the night of 5-6 April, after long attempts at negotiation with the allies, Napoleon accepted defeat and wrote out a short text declaring his abdication as follows: “[…] the Emperor Napoleon, true to his oath, declares that he renounces for himself and his heirs the thrones of France and of Italy, and that there is no personal sacrifice, even that of his life, which he is not ready to perform in the interests of France.” He gave this to Caulaincourt to transmit to the allies, explaining that it should not be published until the treaty (the future Treaty of Fontainebleau, to be signed and published on 11 April) had established the rules for the abdication. On the same day (5 April), the provisional government headed by Talleyrand sent a document, ‘a constitutional charter', to the Senate. The text took its inspiration from the constitution of 1791, guaranteeing civil and political liberties, but importantly established a sharing of legislative power between the King and the Chambres. It also noted that “Louis Stanislas Xavier de France, brother of the last King” was “freely” called to the throne of France by the “French People”. The stage was being set for the return of the Bourbon monarchy to France.

– BATTLE OF TOULOUSE
On 10 April, the extremely bloody (and useless) Battle of Toulouse took place, described by the cavalryman George Woodberry, an eyewitness, as a “day of carnage for all”. Soult fought Wellington there because he refused to believe that Napoleon had abdicated. The combined French and British losses for the day, 8,000 dead or wounded, marked the military end of the First Empire.
Common Names: Angles, Farmboys, Backwaters, Neerlaender
Origins: Anglia, Regalian Archipelago
Social Classes: Farmers, Stage Writers, Actors, Writers, Shepherds, Archers, Hunters, Aldermen
Major Cities: Axford, Redford, Biddeton, Castleton, Maudberg, Heeresveen, Lammekastel, Oort, Heerhugowaardt

The Anglian Culture, sometimes also referred to as Alt-Anglian, is one of Regalia’s oldest Cultures, dating back to the Empire’s early formation, and some would say the heart of old Regalian peasant society. Over the centuries, however, while more conservative Cultures like the Wirtem centralized and entrenched themselves in their traditions and conformity, Anglian diversified to the point that what truly encompasses Anglian in the present era is actually a collection of collaborative and closely related cousin Cultures. As such, Anglian generally refers to a person “from the Anglian cultural sphere”; the word Anglian in itself does not mean anything and is merely a geographical classification. By far the largest subgroup of the Anglian Culture is the Akkerman Group, found in the agricultural heartlands, followed by the Axelland Group found in the major urban areas of Anglia itself, and then the Door-Inner Group found in Dorinn, which is roughly the same size as the Zuidvelde Group in Lokinge. By far the smallest subgroup is the Lower-Heere Group, mostly found in Moriss and the less populated areas, clinging to older cultural traditions and beliefs. Despite being distinct sub-Cultures, these are all still referred to as Anglian, as they come from the same region, speak (roughly) the same language, have a common ancestry, and share many beliefs in common.

The Anglian Culture is one of the oldest Cultures of the Empire, dating back to some of the earliest settlements of the Regalian Archipelago by seafaring Old Ceardians. In its earliest days, the Culture was not so distinctly different from the Velheim or Old Ceardian, as sustenance based on hunting, gathering and fishing was the main method of survival. Thanks to the warmer climate of the Archipelago, however, and the very mild winters of Anglia, what little goat-herding these early proto-Anglian groups did became easier to practice on a larger scale. Herds grew quickly and efficiently on the vast, fertile grass plains of Anglia, and within a relatively short time-span several strains of wheat and barley were domesticated. The speed at which these domestications occurred is often debated among scholars, but all point out how unlikely it is that mere coincidence or luck was responsible, given that it took other Cultures much longer to establish their more advanced dietary patterns. In Anglian folklore, the taming of the wildlands is often tied to the story of Adamme and Eevie, a prince and princess who were supposedly led into paradise, meaning the Anglian Wildland. There, with the guidance of the Crown Dragons, they tamed both animal and plant, and heeded the warnings of the guardians not to stray from the path and give in to the temptation of the Demons. It is said that Adamme and Eevie were the first Archblood Primacy in Anglia, but no actual evidence has ever been found that they truly existed. In the decades following the settlement of the Anglian countryside, the population boomed at a far more staggering rate than anywhere else in the Archipelago, due to the huge abundance of food.
While the Wirtemcaller Kingdom and other, more southern realms would contend with occasional famines, Anglia was always free of this kind of disaster, and the people found themselves in a comfortable rotation of serfdom and free time in which cultural traditions and hobbies were developed. To outsiders, Anglians can often appear lazy because of their tradition called “Middens-Uhr”, whereby all work simply ceases across the entirety of Anglia between noon and two hours past noon, during which most Anglians simply take a nap. On the flipside, however, Anglians wake up much earlier than most other Regalians and end up working much longer. Despite Anglian being a Heartland Culture, the more high-Cultured Ailor Cultures tend to look down on the Anglians as simple-minded or lacking any great cultural expression. This is largely because, over nearly 300 years, the Anglian Culture did not truly change much at all. Anglians continue to be very closed off to the outside world, and while many travellers have gone from Brissiaud to Calemberg to Girobalda and back to Vixhall, Anglia is often curiously missing from many an adventurer’s itinerary. Despite this, since the rise of House Kade to prominence as the new family to wear Imperial purple, the significance of many Anglian businessmen and nobles has also grown thanks to their ties with the new era of Regalian leadership.

Language and Dialects

Anglians naturally learn to speak one of a trio of Languages in the modern day, their collective and well-known progenitor tongue being Ald-Ang. This tongue, spoken before the rise of the Regalian Empire, is sometimes said to be the “missing link that pieces together the chain of North-Ailor languages”, which can be proven to a certain degree, given that Ald-Ang preserves Old-Ceardian grammatical constructs and words that disappeared in the other languages. Over time, while grammar and vocabulary remained largely unaltered, pronunciation shifted to a flatter palette, with tonal g’s and r’s that sound incredibly foreign, guttural even, in other Languages. This is not always consistent, however, as Anglia is rich in local dialects. In Lokinge, for example, the g’s are much softer, while they are short and rough in the north. In Dorinn, the locals speak with far more of a lisp, dragging out their s’s and l’s. The three main language groups to emerge from Ald-Ang, now an extinct language with few native modern speakers, are High-Anglian, East-Anglian and West-Anglian.

Anglian naming is incredibly simple and often too literal for outsiders to take seriously. There are plenty of records of fathers naming their firstborn son after their own father, with this tradition continuing ad infinitum. The case of Hendricus and Michiel is well known: one particular family was founded by a Hendricus van Malden, hailing from the small village of Malden, somewhere before the Cataclysm. His son was named Michiel, who in turn named his son Hendricus, whose own son was called Michiel, who named his son Hendricus in turn again. This flip-flopping continued for over 300 years, until in 289 a foreign wife of Michiel van Malden insisted that their child should be called Jesper. Besides showing the naming customs of the Anglians, this tale is often also used to warn Anglians against tying themselves to outsiders, who are often inclined to think poorly of Anglian traditions.
The concept of naming one’s children after one’s grandparents is in fact a very important form of filial homage in the Anglian Culture. Anglian first names are found in a very wide number of options, largely because of the large variety of dialect variants and sub-Culture norms. As such, each subgroup has a couple of names that are popular, but in reality, all names can be interchangeably used by any of the subgroups. - Akkerman Names: Aart, Aldert, Pim, Cassiaan, Berend, Lars, Elmo, Espen, Coen, Joost, Floris, Hiddie, Jelle, Kees, Bram, (previous all Male), Brechtje, Aleta, Doutzen, Fleur, Gwen, Tessa, Beatrix, Amalia, Doortje, Elsje, Eline, (previous all Female) - Axelland Names: Ært, Alderic, Pimric, Cassiaan, Beyrend, Larss, Eylmo, Espenric, Coethric, Just, Flythrin, Hiddric, Jelle, Carlo, Breothric, (previous all Male), Brychta, Aletta, Dudda, Flyrra, Gwyneth, Tyssa, Beatrix, Amalrianne, Dyrtha, Elsa, Elina, (previous all Female) - Door-Inner Names: Albar, Thrim, Casper, Dulf, Bet, Maan, Nesk, Nölke, (previous all Male), Lora, Greet, Ina, Betta, Neska, Nölkelle, (previous all Female) - Zuidervelde Names: Klaus, Frederik, Dietricht, Enno, Yvar, Gebhardt, Johannus, Karl, Marvic, (previous all Male), Christine, Malta, Heike, Gertrude, Fenja, Karja, Anna, (previous all Female) - Lower-Heere Names: Enno, Herbert, Hugo, Henrv, Hermann, Ingo, Julius, Nela, Reiko, One, (previous all Male), Inse, Frauke, Volanna, Adelheid, Silja, Siebbe, (previous all Female) Anglian law has been strongly influenced by Regalian Law, but is slightly different in a few aspects. Regalian Law is based on the idea that the perpetrator must be punished, where Anglian law aims to compensate the victim more than punish the perpetrator. Anglian law also does not abide by the Regalian Law structure of the judges. Rather, the law giving sector in Anglia is the council of Aldermen: town elders who gather to form a tribunal when presiding over a legal case. Anglia follows the State Law structure in theory, but in practice a very lax form of “Fezanten Huys Loove” is maintained, where local issues are dealt with on a local level without pulling in the Aldermen from the bigger cities. The crime rate is generally speaking very low in Anglia due to the relatively high standard of living, the intrinsic freedoms of the peasants, and the incredibly stable political situation. Anglians do fall under specific so called “Ruiten Wetten”, which are a specific set of laws that apply uniquely to Anglians, even elsewhere in the Empire. These laws are usually backed by Imperial Law within the Regalian state, but only rarely called upon by those familiar with the legal system to help them out, or to exact a greater punishment on those deserving. Some of these laws are incredibly obscure, but a couple of them are well known in public knowledge: - Angliesch Bloet Wet: Any person who is a member of House Kade who engages in treason against House Kade shall be punished by a slow drip of viper or other such animal poisons applied to the eyes over a period of many hours or until such a time the person has gone fully blind or their eyes have rotten away. - Paardenveld Wet: Any person who is Anglian and inflicts a fatal wound or conspires to the death of another Anglian’s domestic animal, herd, or cattle, shall be subjected to agricultural bondage, particularly forced to take care of any surviving or newly acquired cattle or animals on hands and feet. 
- Graantel Wet: Any Anglian person who offends or attacks another Anglian person can acquit themselves of any legal trial against them, so long as the attack or offence left no permanent marking (such as open wounds or scars), by paying the person their body weight in fine milled type-0 flour, weighed in linen bags. When this payment occurs, the other Anglian may no longer press charges and is expected to forgive the offender.
- Biettebier Wet: An Anglian who wields a weapon in one hand and any alcohol-containing liquid in a container in the other should be immediately arrested and spend the night in prison. Additionally, any Anglian pulling a weapon in any establishment that sells or dispenses alcoholic beverages should equally be arrested and spend the night in prison for offending the hospitality of beer.
- Angliesch Compact Wet: Any person who is of Anglian or half-Anglian descent can appeal to the Regalian Judiciary to have their entire trial performed in the Anglian language. Preference will be given to Axelland pronunciations and dialects, given their high-class standard. Should no judge be found capable of speaking Anglian, then the trial is postponed indefinitely until one can be found.
Lifestyle and Customs
Anglians still follow the old code of conduct when it comes to family planning. Most families in Anglia are agriculturally inclined families that focus their family planning around their farming lives. Marriages between children are often arranged between farmers who live close to one another, so they may swap pieces of land or form a communal farming community. Daughters are often sold with a dowry of cattle. The woman often has no choice in whom she wishes to marry, while the man has a small amount of voice in the matter. Romance between young couples exists before marriage, which is permitted as long as the woman does not become pregnant. A pregnancy is an immediate cause to get married, or the couple risks becoming social outcasts. This is a frequent tactic used by young couples who are in love to force their parents' hands to let them marry, even if the parents intended a different spouse for their child. Despite this sounding incredibly conservative, parents tend to be far more loving and well-intentioned among the Anglians than in other conservative, rule-bound households. While a Wirtem could reject their child for disobeying, an Anglian is far more likely to accept a child's wishes, as long as it fits within the lines of family planning. Elopement and running away from home occur far less in Anglia than they do anywhere else, despite the strict house rules, largely because most children supposedly had a very happy childhood in Anglia, and parental love and dedication are the norm. Anglian households tend to be very large compared to other Cultures. It is not out of the ordinary to see a married couple have up to seven or even ten children. Children are seen as a means to add extra production on the farm, as farms are inherited from fathers to sons, while daughters help expand the farm through marriages and help keep the household. Domestic animals are also fondly kept by families, and most families will have numerous dogs or cats, depending on their preference. Dogs in particular are greatly loved among the Anglians, who breed numerous different kinds of dogs and treat them like their own children.
Anglians are even known to sleep in the same bed as their dogs, using their body heat to warm up the feet-end, as opposed to using a far more fire-hazard prone heating pan like they do in Dragenthal, even if this domestic animal sleeping draws the ire of other Cultures who consider this dirty. Anglians also have an unusual practice called “Zondagsuitje” where the whole family goes on a trip together on Sunday, the day of rest. Anglians work from Monday to Saturday, but most serfdoms allow the peasants to have a free Sunday, which they usually take off with the whole family. Frequent visits to local lakes, playing tennis, walking or hiking with dogs, or just having a picnic are frequently observed. In line with the general Regalian customs, Anglian Culture is strongly influenced by patriarchal values that bleed into their gender roles. Women are the caretakers of the children— the weavers and the cooks—while the men work the fields and wage war. Men control all political positions, while women have effectively no say in how the household or various government institutions are run. In Anglian law, it is illegal for women to serve in the military (though many still find workarounds with mercenary groups or foreign military service). While emancipation is a big thing in other places of the Regalian Archipelago, the gender roles are still very strictly enforced in Anglia. Despite sounding very repressive to an outsider Ithanian, the gender norms (which appear to favor men), are actually more about creating supremacy of the gender in certain fields. For example, a man may rule the household and demand food provisions at certain hours, but he is always strictly forbidden from entering the kitchen, choosing what type of food is provided, and gives praise and homage to his wife for every meal provided. This only sometimes leads to conflict when a woman or man wishes to perform the role of the opposite gender, or interferes with the work of the other party. Children in Anglia live a very happy and carefree childhood. Anglian children are always born in very busy families, which may leave them somewhat starved of parental attention, but they always have siblings to play with. Anglian children enjoy a great deal of freedom, as their parents often work throughout the day, and children below the age of five aren’t required to do any work. Usually, young children are gathered in the village or town green, and tended to by a Juff-Mevrouw, an unmarried young woman who tends to the children in preparation of marriage of her own. The Juff-Mevrouw is not very invasive with her tending, she merely ensures no danger befalls the children, but takes a backseat role of just enforcing fairness among the children while they play. Children are allowed to roam the village green and adjacent streets without much minding, and generally speaking as long as they avoid the working houses and fields, they aren’t ever in any realistic harm. After early childhood, Anglian children are given a two year school curriculum which is actually legally forced on all children. While not necessarily making a child literate, this school curriculum (and the fact it is mandatory in Anglia), is an extremely rare thing among the Ailor people. This form of primary school affords the children basic social skills and learning what their expectations in life will be, after which they usually help out on their parent’s work around the age of 8 years old. 
This normally continues until the age of 13, where the children take an apprenticeship with any other adult working to learn the trade, joining guilds, or working their way up the ranks of employment. Children officially stop being children at the age of 16 in Anglia, where they are considered of marriageable age. Marriages usually don’t happen until the age of 22 among the common folk (nobles however do marry around 16 years old). April the 16th is national Dragon Festival Day in the Anglian Culture where Anglians bring offerings to the Crown Dragons and the ruling families of the land. Anglians living outside of Anglia itself often use this day as an excuse to get drunk in the pub and then bring some form of offerings to the Emperor; after which they proceed to start fights together in North Boxing clubs. September 29th is Haeksendag, a day where children dress up as witches and play around with sheep bones attempting to scare adults, especially the elderly. Families often make Haeksenpoppe during this period, dolls which are meant to look like witches to hang on the wall in their homes. These dolls are believed to indicate a house is welcoming of witches and their blessings against the evil spirits. Finally, December 5th is Holy Nichols Day, a day when the Anglians celebrate the life of Hero Nichols of the miners, who once saved eighty miners from a collapsed tunnel. During this time, Anglians cover their faces in charcoal soot and engage in communal singing with a candle each to provide the light and the way for the lost souls of the miners in their rural communities. One particular Holiday, rumored to have been the basis of the Imperial Culture, is the Andermans Dag. Andermans Dag is not tied to a specific day, rather, it can be called at any time by any Anglian ruler, after which a week of mandatory dress-up occurs. The people are expected to look into other Cultures, and dress up as their customs dictate, or appear as they would in their own Culture. For example, a blond Anglian may change their clothing to Wirtem styles, while blackening their hair with dye, and “trying out” living as if part of another Culture. The point of Andermans Dag is to form an appreciation for the values of other Cultures. Nobles tend to take Andermans very seriously, with a final day of reflection where they all write poems to each other about what was learnt. Most commoners however just pick whatever debauching Culture they are unfamiliar with and carouse their way into a drunken stupor. Religion in Anglian Culture is a mixture of Dragon Worship and Unionism. Much like the City of Calemberg, the capital of the Anglian lands, Axford, is a bastion of the Unionist faith, though the outlying lands and shepherd villages still maintain some Dragon Worship practices. Dragon faith and folklore had a major impact on Anglian history and tradition; even though some practices should be banned by the local Celate authorities, they are still maintained. Several Heroes and important figures in Unionism have come from the Anglian Culture, such as Heroes Ellora and Vlaas. A curious difference Anglians have from other Cultures, is their disposition to humanize Demons. The Anglians have two different names for Demons that they are familiar with as taught in Unionist gospel: Holle-Bolle-Kees and Boosdoener. Holle-Bolle-Kees takes the form of an extremely overweight peasant with a massive mouth that eats and destroys whole villages by consuming the buildings stone and thatch and all. 
He is generally depicted as a giant, walking the countryside and quenching his never-ending hunger on the people by eating them. Boosdoener instead is a thin and emaciated man who can only be seen in the shadows of others at night, rubbing his hands and laughing like a crow. Boosdoener supposedly waits until people are alone at night and only lit by a single candle, before strangling the life out of them. Why Anglians humanize Demons instead of depicting them as monstrous creatures isn’t known, but scholars have theorized that they engage in this practice to teach the people that Demons are not usually the monstrous corrupted creatures that Celates teach them to see, but that Demons can be found in anyone, and will take the form of more familiar people to make their victims lower their guard. Anglians remain pious in their Unionist faith, but maintain many of the old traditional expressions of worship, some of which have really taken to the forefront again since the appearance of the Imperial Dragon, which the Anglians are fanatically supportive of. Literature and Folklore Anglians are infamous for their Zondespael and Frondespael literature. Zondespael is a form of literature where the author makes a public dissent of something, like a flower or a horse, but to an extreme manner. The author will attempt to convince the world of how utterly terrible this otherwise seemingly insignificant object or animal is. The only true rule of Zondespael is that it must never cover people or institutions. Frondespael is a bit more unconventional than Zondespael however and often banned in many other lands. Frondespael is racy literature that often involves vulgar or eroticized scenarios described in metaphorical detail. For an otherwise austere people, these forms of literature are often seen as a means to offload the pressure of cordial living. Most Anglian families have at least one book of either style in their household, and some famous writers from Anglia have made a name in Regalia selling these styles as either comedic expression or an alternative to courtesanship. Anglia has a very strong relation with the Regalian Empire, and maintains a prominent Feudal identity. The lands have been ruled by the same Kade family for the past three centuries who control everything from the lowest production chain to the highest political offices. The passage of the Emperorship from the Ivrae family to the Kade family also provided an extra boon to Imperial loyalty, due to the fact that the Emperor is now “one of the Anglians”. It can often be said that the Anglians are on the forefront of loyalty to the Feudal system and the Emperor, and are understood as a land that will always support the Emperor’s cause. This was seen in the fact that the Anglians were the first to rebel against the Usurper Andrieu Anahera during the dictatorship of the Protectorate. Anglians also make up the vast majority of both the Imperial Guard and the Imperial Tenpenny Army. Anglians have a rich folklore that stems back to the days of the Dragon Worship. One major aspect of the Dragon Worship faith that has remained is the principle of Haeksen, or witches, though the custom is fundamentally different from the Wirtem belief in witches. In contrast, Anglian witches are considered good beings that scare away evil spirits and demons with their magical hexes. They are often said to live in the swampy areas bordering the Anglian Morass to the north and collect lamb bones and skulls to perform defensive rituals. 
It is indeed true that some ritualist communities still exist deep in the border swamps where Unionist missionaries have not yet ventured. It is theorized that Phantasma actually owe their colloquial Witchblood name to this Anglian belief, and that Witchblood are actually looked on very fondly in Anglian tradition and custom, as green eyes are considered a blessing. Another common Anglian tradition are pyres and summer burnings to ward off bad luck for the harvest and the new year. Many of the Anglian townships compete with one another for the highest pyre they can build. These traditions are not wholly different from the Velheim Solstice and Wintertide pyres, which can sometimes cause a very rare level of cooperation and cohesion between Velheimers and Anglians during these times of year, where both mingle and drink together, forgetting the centuries of warfare between Drixagh and Anglia. Anglian artists are counted among the artistic conservatives of the Ailor Cultures, although they do appreciate modern art styles like impressionism. Anglians prize paintings of agricultural landscapes and coastal picturesque towns. Imagery of dead people is almost strictly forbidden, though it is not entirely clear why this is. When any person dies, any pictures or statues belonging to them are immediately taken down, unless they were a high political person, or declared a Hero by Unionist leadership. Anglians are very avid performers; many Regalian stage writers and actors come from Anglia (though more specifically Dorinn). Stage performances are often held in the open air in a village square or town theater. These so-called Tenpenny performances are particularly popular among the common folk of Regalia and the poor district inhabitants due to their low cost and general comedic value. More racy and vexatious stage plays are made in Dorinn, where the art of attacking your political rivals through theatre is a major export. Most of the Dorinn performers are well known for their sharp tongues and wit, weaving insulting monologues and dialogues into their theatre displays to make a statement about individuals or organizations. This is in stark contrast to literature which specifically forbids attacking individuals or organizations. Anglian music is often based on lutes, flutes, and simple drums. The melodies are always upbeat and incite dancing, though vocals are a rarity. Another popular form of music production is the so called Vermaekspael. Vermaekspael can generally be referred to as bardic singing, but unlike bards from other lands, Anglian bards specialize in the art of ridicule rhyme. Their bardic songs often produce contrived and metaphorical insults against individuals, aimed at making the audience laugh. These Vermaekspael are often taken in good sport by everyone, because the idea is that anyone can pick up a lute, play a basic tune and string a few sentences of insults together with some rhyme to retaliate and call it settled. A more private means of producing music for Anglians is the sideways flute. Anglian plays often feature beautiful melodic long flute compositions, and these can in fact be the only truly emotional pieces of music audible from the Anglian Culture, filled with sadness and longing. This often strongly contrasts with the general happy and comedic overtones of all other musical productions. Anglians dress like the produce of the land, simple, and sometimes very colorless. 
There is no real upper class Anglian fashion; this is all imported from Calemberg, though the Anglian nobility often tries to appear down to earth, and will dress based on local customs unless a foreign dignitary is visiting. All Anglian clothing is either made from hemp weaving or sheep wool, with the occasional outfit sporting cowhide here and there. Anglian clothing is often very conservative and closed. Women cover themselves up in dresses, and many of them even wear peasant veils. Men wear woven trousers, and woven shirts with a sheep hide vest over that. Headwear is also very popular among the Anglians, though it is often restricted to simple hoods, sacks and caps. Anglian fashion remains true to its peasant roots and has remained practically unchanged for the past three centuries. Anglians are often pitied by their Regalian cultural counterparts as they almost exclusively live in plaster houses with thatch roofs. In spite of this, Anglia isn’t without its unique architecture. The Anglian cloisters are famous for their flat roofs with spiked ornaments and imposing stone structures. Many archways in Anglian architecture use little to no mortar, raising the art of counter-balance to a true skill. Only important buildings in the Anglian lands are actually made of stone; most buildings make do with wooden timber frames and compressed mud, with a layer of plaster over it. House fires are very frequent in Anglia, which often decimate large numbers of houses. Still, Anglians hold on to a sense of history with their architectural style, as it has been unchanged since the early days of the Empire. Anglian castles are often considered the most boring but the heaviest and most fortified structures in the Regalian Archipelago. They are often large, thick-walled, and feature few windows or luxuries, but rarely built with Anglian architecture in mind, as Anglians usually hire the Breizh to design their fortifications for them. Anglian Cuisine is extremely simple, and in other parts of the realm, it is referred to as pauper cuisine. The Anglian staple food is grain, and it is used in practically every dish. Bread is the breakfast and lunch, and porridge may often be dinner. Roast beefs and lambs in bread baskets are very common due to the cattle industry in Anglia. Anglians don’t really have much in the ways of sweets, though blueberry and strawberry pies are very popular due to the exquisite flavor of their fruits. Apricot and apple jam are also extremely popular as export products; Anglians make ample use of the favorable orchard climate around Axford to produce delicious jams. One particularly rare food in Anglia is Haggelslaage, which is a low-sugar and low-milk variant of chocolate shavings, usually eaten on bread with a thin spread of butter. In past centuries, the mostly grain based Anglians were some of the shortest people in the Archipelago due to their low protein diet, but recent decades have seen a massive leap in average height, seeing Anglians become some of the tallest people in the Archipelago due to their extremely varied and rich diet with many nutrients. Like most things in their Culture, Anglian sports are often very simple. The most popular pastime sport Anglians engage in is Ballespeylen. In this game, competitors attempt to throw an iron cast ball into a pole marked with various colored segments to indicate points. Every person gets ten throws, and the person with the most amount of points wins. Swimming, hiking and even horse riding are also seen as sports in Anglia. 
Another major sport in Anglia is recreational shooting of the Anglian Longbow. This is so common that the shooting glove used in Anglian Longbow shooting can often be seen as a general clothing accessory worn by women and men alike, even if they have no functional use for the latching glove in everyday work, and it can even be inconvenient to write with. Wapnbog longbow archers make frequent appearances at such recreational target shoots, eager to show off their considerable skills. Another popular leisure pastime is North Boxing, a Regalian variant of Northern Mud Wrestling. Anglian leisure and sports are often very physically intensive activities that involve heavy competition. Fist fights are extremely common during competitions, and this generally rowdy sports culture, combined with the heavy labor of the land, often makes Anglians very burly individuals. Anglians have a surprisingly high level of literacy compared to peasants from other realms. As such, reading by a fireplace or what the locals call "Lantefanteren" (just lazing about) are very common forms of leisure. Generally speaking, because Anglian life is physically intensive (compared to other Cultures), most of their pastimes and hobbies involve incredibly sedentary activities, or simply doing nothing at all, which is called "Niksen". This can sometimes be as simple as lying on a sofa staring at the ceiling for hours, which is done to promote clearing the head and resting at ease from an otherwise over-active lifestyle. The main symbol of Anglia is the Crown Dragon, also the sigil animal of the ruling Kade family. These creatures used to live in the border mountain chains of Dorinn and Nordwalle, symbiotically with the farmers living in the river delta valleys below. The Dragon had been extinct since the early decades after the Cataclysm (re-appearing recently in the form of the Imperial Dragon), but the animal strongly represents Anglia in its unwavering vigilance and loyalty to its kind. Another strong symbol for Anglia is the wheat and rye plants; food produce is the major export of Anglia and, as such, the majority of the population works in the agricultural sector. Lambs are also frequently used to symbolize Anglians, as are the three wingless Crown Dragons in a circle, which represent the Three Son Ideology. This ideology dictates that the oldest son must always inherit all, while the next two sons are bred to support the oldest son.
- Despite being austere, simple, and even cordial people internally, Anglians are considered greedy and shut-in by outsiders, even within the Regalian Archipelago. They often artificially change bread prices to suit themselves, even if it causes a famine elsewhere.
- Becoming overweight is a major issue in some of the Anglian cities. Anglians are often fed so well with large quantities of food that weight becomes a major problem.
- Anglians are fairly homogenous with the Wirtem. Very few blondes live in Anglia, while foreigners are generally looked at with a modicum of distrust.
- The Turaal Order, long a strange bastion of racial and gender equality, has gained major traction in recent years, as the greater awareness of the Knightly Orders has demanded, and often drawn out, Turaal Blades to the public eye.
The French conquest of Algeria took place between 1830 and 1847. Using an 1827 diplomatic slight by Hussein Dey, the ruler of the Ottoman Regency of Algiers, against its consul as a pretext, France invaded and quickly seized Algiers in 1830, and rapidly took control of other coastal communities. Amid internal political strife in France, decisions were repeatedly taken to retain control over the territory, and additional military forces were brought in over the following years to quell resistance in the interior of the country. Algerian resistance forces were divided between forces under Ahmed Bey at Constantine, primarily in the east, and nationalist forces in Kabylie and the west. Treaties with the nationalists under `Abd al-Qādir enabled the French to first focus on the elimination of the remaining Ottoman threat, achieved with the 1837 capture of Constantine. Al-Qādir continued to give stiff resistance in the west. Finally driven into Morocco in 1842 by large-scale and heavy-handed French military action, he continued to wage a guerilla war until the Moroccan government, under French diplomatic pressure following its defeat in the First Franco-Moroccan War, drove him out of Morocco. He surrendered to French forces in 1847. The conquest of Algeria was initiated in the last days of the Bourbon Restoration by Charles X as an attempt to increase his popularity amongst the French people, particularly in Paris, where many veterans of the Napoleonic Wars lived. He believed he would bolster patriotic sentiment and turn eyes away from his domestic policies by "skirmishing against the dey". The territory now known as Algeria was only partially under the Ottoman Empire's control in 1830. The dey ruled the entire Regency of Algiers, but only exercised direct control in and around Algiers, with Beyliks established in a few outlying areas, including Oran and Constantine. The remainder of the territory (including much of the interior), while nominally Ottoman, was effectively under the control of local Arab and Berber tribal leaders. The dey acted largely independently of the Ottoman Emperor, although he was supported by (or controlled by, depending on historical perspective) Turkish Janissary troops stationed in Algiers. The territory was bordered to the west by the Sultanate of Morocco and to the east by the Ottoman Regency of Tunis. The western border, nominally the Tafna River, was particularly porous since there were shared tribal connections that crossed it. The Fan Affair In 1795-1796, the French Republic had contracted to purchase wheat for the French army from two Jewish merchants in Algiers, and Charles X was apparently uninterested in paying off the Republic's debt. These merchants, who had debts to Hussein Dey, the Ottoman ruler of Algiers, claimed inability to pay those debts until France paid its debts to them. The dey had unsuccessfully negotiated with Pierre Deval, the French consul, to rectify this situation, and he suspected Deval of collaborating with the merchants against him, especially when the French government made no provisions for repaying the merchants in 1820. Deval's nephew Alexandre, the consul in Bône, further angered the dey by fortifying French storehouses in Bône and La Calle against the terms of prior agreements. After a contentious meeting in which Deval refused to provide satisfactory answers on 29 April 1827, the dey struck Deval with his fan. 
Charles X used this slight against his diplomatic representative to first demand an apology from the dey, and then to initiate a blockade against the port of Algiers. The blockade lasted for three years, and was primarily to the detriment of French merchants who were unable to do business with Algiers, while Barbary pirates were still able to evade the blockade. When France in 1829 sent an ambassador to the dey with a proposal for negotiations, he responded with cannon fire directed toward one of the blockading ships. The French then determined that more forceful action was required. Following the failure of the ambassador's visit, Charles appointed as Prime Minister Jules, Prince de Polignac, a hardline conservative, an act that outraged the liberal French opposition, which was then in a majority in the Chamber of Deputies. Polignac opened negotiations with Muhammad Ali of Egypt to essentially divide up North Africa. Ali, who was strongly under British influence (in spite of nominally being a vassal of the Ottomans), eventually rejected this idea. As popular opinion continued to rise against Polignac and the King, they came to the idea that a foreign policy victory such as the taking of Algiers would turn opinion in their favour again. Invasion of Algiers Admiral Duperré took command in Toulon of an armada of 600 ships and then headed for Algiers. Following a plan for the invasion of Algeria originally developed under Napoleon in 1808, General de Bourmont then landed 34,000 soldiers 27 kilometres (17 mi) west of Algiers, at Sidi Ferruch, on 14 June 1830. To face the French, the dey sent 7,000 janissaries, 19,000 troops from the beys of Constantine and Oran, and about 17,000 Kabyles. The French established a strong beachhead and pushed toward Algiers, thanks in part to superior artillery and better organization. On 19 June the French defeated the dey's army at the battle of Staouéli, and entered Algiers on 5 July after a three-week campaign. The dey accepted capitulation in exchange for his freedom and the offer to retain possession of his personal wealth. Five days later, he went into exile in Naples with his family. The Turkish Janissaries also quit the territory, leaving for Turkey. The dey's departure ended 313 years of Ottoman rule of the territory. While the French command had nominally agreed to preserve the liberties, properties, and religious freedoms of the inhabitants, French troops immediately began plundering the city, arresting and killing people for arbitrary reasons, seizing property, and desecrating religious sites. By mid-August, the last remnants of Turkish authority were summarily deported without opportunity to liquidate significant assets. One estimate indicates that more than fifty million francs of assets were diverted into private hands during the plunder. This activity had a profound effect on future relations between the French occupiers and the natives. A French commission in 1833 wrote that "we have sent to their deaths on simple suspicion and without trial people whose guilt was always doubtful ... we massacred people carrying safe conducts ... we have outdone in barbarity the barbarians". One important side effect of the expulsion of the Turks was that it created a power vacuum in significant parts of the territory, from which resistance to French occupation immediately began to arise. 
Hardly had the news of the capture of Algiers reached Paris than Charles X was deposed during the Three Glorious Days of July 1830, and his cousin Louis-Philippe, the "citizen king", was named to preside over a constitutional monarchy. The new government, composed of liberal opponents of the Algiers expedition, was reluctant to pursue the conquest begun by the old regime. However, the victory was enormously popular, and the new government of Louis-Philippe only withdrew a portion of the invasion force. General Bourmont, who had sent troops to occupy Bône and Oran, withdrew them from those places with the idea of returning to France to restore Charles to the throne. When it was clear that his troops were not supportive of this effort, he resigned and went into exile in Spain. Louis-Philippe replaced him with Bertrand Clauzel in September 1830. The bey of Titteri, who had participated in the battle at Staouéli, attempted to coordinate resistance against the French with the beys of Oran and Constantine, but they were unable to agree on leadership. Clauzel in November led a French column of 8,000 to Médéa, Titteri's capital, losing 200 men in skirmishes. After leaving 500 men at Blida he occupied Médéa without resistance, as the bey had retreated. After installing a supportive bey and a garrison, he returned toward Algiers. On arrival at Blida, he learned that the garrison there had been attacked by the Kabyles, and in resisting them, had killed some women and children, causing the town's population to rise against them. Clauzel decided to withdraw that garrison as the force returned to Algiers. Clauzel introduced a formal civil administration in Algiers, and began recruiting zouaves, or native auxiliaries to the French forces, with the goal of establishing a proper colonial presence. He and others formed a company to acquire agricultural land and to subsidize its settlement by European farmers, triggering a land rush. Clauzel recognized the farming potential of the Mitidja Plain and envisioned the production there of cotton on a large scale. During his second term as governor general (1835–36), he used his office to make private investments in land and encouraged army officers and bureaucrats in his administration to do the same. This development created a vested interest among government officials in greater French involvement in Algeria. Commercial interests with influence in the government also began to recognize the prospects for profitable land speculation in expanding the French zone of occupation. Over a ten-year period they created large agricultural tracts, built factories and businesses, and bought cheap local labor. Clauzel also attempted to extend French influence into Oran and Constantine by negotiating with the bey of Tunis to supply "local" rulers that would operate under French administration. The bey refused, seeing the obvious conflicts inherent in the idea. The French foreign ministry objected to negotiations Clauzel conducted with Morocco over the establishment of a Moroccan bey in Oran, and in early 1831 replaced him with Baron Berthezène. Berthezène was a weak administrator opposed to colonisation. His worst military failure came when he was called to support the bey at Médéa, whose support for the French and corruption had turned the population there against him. Berthezène led troops to Médéa in June 1831 to extract the bey and the French garrison. 
On their way back to Algiers they were continually harassed by Kabyle resistance, and driven into a panicked retreat that Berthezène failed to control. French casualties during this retreat were significant (nearly 300), and this Kabyle victory fanned the flames of resistance, leading to attacks on colonial settlements. The growing colonial financial interests began insisting on a stronger hand, which Louis-Philippe provided in Duke Rovigo at the end of 1831. Rovigo regained control of Bône and Bougie (present-day Béjaïa), cities that Clauzel had taken and then lost due to resistance by the Kabyle people. He continued policies of colonisation of the land and expropriation of properties. His suppression of resistance in Algiers was brutal, with the military presence extended into its neighborhoods. He was recalled in 1833 due to the overtly violent nature of the repression, and replaced by Baron Voirol. Voirol successfully established French occupation in Oran, and another French general, Louis Alexis Desmichels, was given an independent command that gained control over Arzew and Mostaganem. On 22 June 1834, France formally annexed the occupied areas of Algeria, which had an estimated Muslim population of about two million, as a military colony. The colony was run by a military governor who had both civilian and military authority, including the power of executive decree. His authority was nominally over an area of "limited occupation" near the coast, but the realities of French colonial expansion beyond those areas ensured continued resistance from the local population. The policy of limited occupation was formally abandoned in 1840 for one of complete control. Voirol was replaced in 1834 by Jean-Baptiste Drouet, Comte d'Erlon, who became the first governor of the colony, and who was given the task of dealing with the rising threat of `Abd al-Qādir and continuing French failures to subdue Ahmed Bey, Constantine's ruler.
The rise of `Abd al-Qādir
The superior of a religious brotherhood, Muhyi ad Din, who had spent time in Ottoman jails for opposing the bey's rule, launched attacks against the French and their makhzen allies at Oran in 1832. In the same year, tribal elders in the territories near Mascara chose Muhyi ad Din's son, twenty-five-year-old `Abd al-Qādir, to take his place leading the jihad. Abd al-Qādir, who was recognized as Amir al-Muminin (commander of the faithful), quickly gained the support of tribes in the western territories. In 1834 he concluded a treaty with General Desmichels, who was then military commander of the province of Oran. In the treaty, which was reluctantly accepted by the French administration, France recognized Abd al-Qādir as the sovereign of territories in Oran province not under French control, and authorized Abd al-Qādir to send consuls to French-held cities. The treaty did not require Abd al-Qādir to recognize French rule, something glossed over in its French text. Abd al-Qādir used the peace provided by this treaty to widen his influence with tribes throughout western and central Algeria. While d'Erlon was apparently unaware of the danger posed by Abd al-Qādir's activities, General Camille Alphonse Trézel, then in command at Oran, did see it, and attempted to separate some of the tribes from Abd al-Qādir. When he succeeded in convincing two tribes near Oran to acknowledge French supremacy, Abd al-Qādir dispatched troops to move those tribes to the interior, away from French influence.
Trézel countered by marching a column of troops out from Oran to protect the territory of those tribes on 16 June 1835. After exchanging threats, Abd al-Qādir withdrew his consul from Oran and ejected the French consul from Mascara, a de facto declaration of war. The two forces clashed in a bloody but inconclusive engagement near the Sig River. However, when the French, who were short on provisions, began withdrawing toward Arzew, al-Qādir led 20,000 men against the beleaguered column, and in the Battle of Macta routed the force, killing 500 men. The debacle led to the recall of Comte d'Erlon. General Clausel was appointed a second time to replace d'Erlon. He led an attack against Mascara in December of that year, which Abd al-Qādir, with advance warning, had evacuated. In January 1836 he occupied Tlemcen, and established a garrison there before returning to Algiers to plan an attack against Constantine. Abd al-Qādir continued to harry the French at Tlemcen, so additional troops under Thomas Robert Bugeaud, a veteran of the Napoleonic Wars experienced in irregular warfare, were sent from Oran to secure control up to the Tafna River and to resupply the garrison. Abd al-Qādir retreated before Bugeaud, but decided to make a stand on the banks of the Sikkak River. On July 6, 1836, Bugeaud decisively defeated al-Qādir in the Battle of Sikkak, losing fewer than fifty men to more than 1,000 casualties suffered by Abd al-Qādir. The battle was one of the few formal battles al-Qādir engaged in; after the loss he restricted his actions as much as possible to guerilla-style attacks. Ahmed Bey had continuously resisted any attempts by the French or others to subjugate Constantine, and continued to play a role in resistance against French rule, in part because he hoped to eventually become the next dey. Clausel and Ahmed had tangled diplomatically over Ahmed's refusal to recognize French authority over Bône, which he considered to still be Ottoman territory, and Clausel decided to move against him. In November 1836 Clausel led 8,700 men into the Constantine beylik, but was repulsed in the Battle of Constantine; the failure led to Clausel's recall. He was replaced by the Comte de Damrémont, who led an expedition that successfully captured Constantine the following year, although he was killed during the siege and replaced by Sylvain Charles, comte Valée.
Al-Qādir's resistance renewed
In May 1837, General Thomas Robert Bugeaud, then in command of Oran, negotiated the Treaty of Tafna with al-Qādir, in which he effectively recognized al-Qādir's control over much of the interior of what is now Algeria. Al-Qādir used the treaty to consolidate his power over tribes throughout the interior, establishing new cities far from French control. He worked to motivate the population under French control to resist by peaceful and military means. Seeking to again face the French, he laid claim under the treaty to territory that included the main route between Algiers and Constantine. When French troops contested this claim in late 1839 by marching through a mountain defile known as the Iron Gates, al-Qādir claimed a breach of the treaty, and renewed calls for jihad. Throughout 1840 he waged guerilla war against the French in the provinces of Algiers and Oran; Valée's failure to deal adequately with these attacks led to his replacement in December 1840 by General Bugeaud.
Bugeaud instituted a strategy of scorched earth, combined with fast-moving cavalry columns not unlike those used by al-Qādir himself, to progressively take territory from him. The troops' tactics were heavy-handed, and the population suffered significantly. Al-Qādir was eventually forced to establish a mobile headquarters that was known as a smala or zmelah. In 1843 French forces successfully raided this camp while he was away from it, capturing more than 5,000 fighters and al-Qādir's war chest. Al-Qādir was forced to retreat into Morocco, from which he had been receiving some support, especially from tribes in the border areas. When French diplomatic efforts to convince Morocco to expel al-Qādir failed, the French resorted to military means with the First Franco-Moroccan War in 1844 to compel the sultan to change his policy. Eventually hemmed in between French and Moroccan troops on the border in December 1847, al-Qādir chose to surrender to the French, under terms that he be allowed to enter exile in the Middle East. The French violated these terms, holding him in France until 1852, when he was allowed to go to Damascus.
- A Global Chronology of Conflict: From the Ancient World to the Modern Middle ..., by Spencer C. Tucker, 2009, p. 1154
- A Global Chronology of Conflict: From the Ancient World to the Modern Middle ..., by Spencer C. Tucker, 2009, p. 1167
- "Algeria, Colonial Rule". Encyclopædia Britannica. p. 39. http://www.britannica.com/eb/article-220553/Algeria#487751.hook. Retrieved 2007-12-19.
- Abun-Nasr, Jamil, p. 249
- Abun-Nasr, p. 250
- Ruedy, p. 47
- Ruedy, p. 48
- Ruedy, p. 49
- Ruedy, p. 50
- Ruedy, p. 52
- Wagner, p. 235
- Wagner, pp. 237-239
- Wagner, p. 240
- Wagner, pp. 241-243
- Abun-Nasr, Jamil (1987). A history of the Maghrib in the Islamic period. Cambridge University Press. ISBN 978-0-521-33767-0.
- Priestley, Herbert Ingram (1966). France overseas: a study of modern imperialism. Routledge. ISBN 978-0-7146-1024-5. http://books.google.com/books?id=BOopmtvrsOAC&lpg=PA33&dq=Berthezene%20Rovigo&pg=PA34#v=onepage&q=Berthezene%20Rovigo&f=false.
- Ruedy, John Douglas (2005). Modern Algeria: the origins and development of a nation (second ed.). Bloomington, Indiana: Indiana University Press. ISBN 978-0-253-21782-0.
- Wagner, Moritz; Pulszky, Francis (translator) (1854). The Tricolor on the Atlas: or, Algeria and the French conquest. London: T. Nelson and Sons. http://books.google.com/books?id=xXYoAAAAYAAJ.
The accumulated evidence from recent excavations at Miqne and other sites and current research on the material culture of the Philistines and other Sea Peoples make the time ripe for a reassessment of the initial appearance and settlement in Canaan of this enigmatic people. Critical to any such reassessment is the understanding that cultural change during the transitional period from the Late Bronze Age to early Iron Age I was not uniform or simultaneous throughout the country. Rather, this period was characterized by a complex process in which indigenous Canaanite, as well as Egyptian, Philistine and Israelite cultures at times overlapped. Several recent articles dealing with the end of the Late Bronze Age and the beginning of the Iron Age in Canaan are based predominantly on the assumption that cultural change in the period was both uniform and simultaneous.a This conclusion distorts the true nature of this transitional period.1 This transitional period in Philistia can be better understood in light of the recent excavations at Tel Miqne-Ekron. At this site, ceramic and architectural evidence from secure stratigraphic contexts makes it possible to distinguish important stylistic developments within the monochrome Mycenaean III C:1b repertoire and to assess its connection to, and impact on, later Philistine bichrome ware. In addition, the archaeological finds from Miqne-Ekron provide a fresh context in which to try to determine the absolute chronology for both Philistia and greater Canaan. We will be talking about two principal Iron I phases—the first, stratum VII, is characterized by a style of monochrome pottery known as Mycenaean III C:1b. The second phase, represented by stratum VI, is characterized by a style of pottery known as Philistine bichrome ware. We will be trying to understand the changes, the transitions—first from the Late Bronze Age to the Iron Age and then within Iron Age I, from stratum VII to stratum VI. The change from the Late Bronze Age to the first Iron Age settlement is clear-cut and distinct. As we noted, the Late Bronze Age was characterized by extensive international trade. In the Late Bronze Age we find Mycenaean and Cypriote pottery throughout the eastern Mediterranean—the result of this trade. The cessation of such imports is a hallmark of the termination of the Late Bronze Age in Canaan, as well as elsewhere in the eastern Mediterranean. That is precisely what we found in Ekron. The absence of Mycenaean and Cypriote imports signals the end of the Canaanite settlement at the site. In stratum VII, which dates to the first third of the 12th century B.C.E., we find locally made Mycenaean III C:1b ware. This marks the beginning of the Iron I city at Ekron. The Mycenaean III C:1b ware is typically decorated and painted with dark brown to reddish monochrome designs and occasionally decorated with a stylized bird or fish motif. In stratum VI, the second Philistine stratum, a new kind of pottery appears—Philistine bichrome ware. While it is obviously related to the early Mycenaean III C:1b monochrome pottery, bichrome ware is decorated not only in two-color designs but also with fish and bird motifs. The transition from Mycenaean III C:1b to Philistine bichrome ware is gradual, unlike the clear-cut division between the Late Bronze Age, on the one hand, and the earliest Iron I city, on the other. All this attention to the fine details of pottery changes may sound less than exciting. 
But it paid off—in stratum VII it allowed us to identify a new ethnic group at the site—the Sea Peoples. The tip-off was that in stratum VII Mycenaean and Cypriote imports disappeared; instead we found locally made Mycenaean III C:1b ware. But the distinctly Mycenaean characteristics of this locally made pottery show the Sea Peoples’ strong inclination to recreate in Canaan—at least in their pottery—the home environment of the Aegeanworld they came from. With this background, we can look in more detail at the evidence as it came from the ground in the various excavation fields. This evidence, especially the pottery, will flesh out the cultural transitions we have identified. In what follows, however, we will be looking not only at pottery, but at fortifications, architecture, industrial activity, cult practices and even city planning. Let us begin with fortifications. A mudbrick wall over 10 feet thick protected the first Iron Age I city (stratum VII). We found extensions of this wall in both the upper and lower cities (along the northeastern and southern crests). This indicates that the Iron I city occupied the entire 50 acres of the tell. We identified two fortification phases of this city wall. The first was associated exclusively with Mycenaean III C:1b pottery; the second—a reinforcement of the first—was associated with the first appearance of Philistine bichrome pottery. In the upper city, next to the city wall, we excavated a number of square and horseshoe-shaped kilns, indicating a large industrial area. An enormous quantity of Mycenaean III C:1b pottery was found in this area. A recently developed, highly sophisticated process for measuring trace elements of various chemicals in the clay from which ancient pottery was made enables us to determine whether the pottery was locally made. We performed this test—known as neutron activation analysis—on this Mycenaean III C:1b pottery and determined that it was indeed locally made.2 Locally made pottery of this type, associated with kilns of the early Iron Age, has also been found on the coastal plain, at Ashdod3 and Acco4 in Israel, as well as in the north, at Sarepta in Lebanon.5 This type of pottery can be followed around the Mediterranean coast—from Sardinia in the west, to Sicily, to Greece, down the coast of Anatolia (Tarsus), to Ras Ibn Hani in Syria, to Crete, Rhodes and Cyprus. The wide distribution of this locally made pottery indicates not trade, but settlement of people of the same cultural background—the Sea Peoples. This is a focal point for understanding Mediterranean history at this time. The locally made pottery signals the cessation of Late Bronze Age trade and the arrival and settlement of the Sea Peoples in Early Iron I. The locally made Mycenaean III C:1b at Ekron, with its similarities in ware, form and decoration to the pottery manufactured in Cyprus and the Aegean during the same period, reflects the firsthand know-how the new settlers brought with them. They used their skills to manufacture fine tableware, such as bell-shaped bowls, kraters with horizontal handles and jugs with strainer-spouts; the Ekron settlers decorated these vessels in monochrome with many variations of spirals and related motifs, all reflecting their Aegean origins. The forms of their undecorated vessels, incidentally, can also be traced to the Aegean. A deep, V-shaped bowl with horizontal handles, known as a lekane or kalathos, made of well-levigated clay and in some cases decorated with plain bands is the most common of this type. 
A small, plain, rather delicate globular cooking pot with one or two handles is clearly not a continuation of a local Canaanite tradition, but is known from Cyprus and the Aegean. On the other hand, the Canaanite ceramic tradition did continue in other forms, such as store jars, juglets, bowls, lamps and cooking pots that were found with the Mycenaean III C:1b pottery. In the kiln area at Ekron, the new ceramic style associated with the arrival of this new ethnic group makes up 60 percent of the pottery assemblage. We have become accustomed to finding cultic objects in industrial areas of Iron Age cities, a phenomenon we don't entirely understand. In any event, the phenomenon made its appearance in the kiln area of Ekron. Indeed, we found a number of objects of a cultic nature in the kiln area, including painted animal figurines and a stylized head with a spreading headdress and birdlike facial characteristics. The head foreshadows the famous Ashdoda, a female figurine—first found at Ashdod, in Israel—with a birdlike head and a body in the shape of a chair. The Ashdoda is a hallmark of the mother goddess in the Aegean cult.6 In the next stratum (stratum VI), which dates to the last two-thirds of the 12th century B.C.E., the character of the kiln area changed. A building consisting of four rooms displays special features, including a stone pillar base and a pit with a cow's shoulder blade, or scapula, and a kalathos, the familiar Aegean large krater with horizontal handles. These features clearly identify a cultic shrine (which continues in a different form into stratum V). Around the small shrine of stratum VI, which lay on the periphery of the city, was a rich assortment of cultic items from the different phases of the shrine, including miniature vessels, clay figurines of the Ashdoda type, kernos fragments and a lion-headed rhyton.7 The rhyton is remarkably similar to one found in the temple favissa (a repository for discarded cultic objects) at Tell Qasile, a Philistine site uncovered in modern Tel Aviv. (Objects like these have a long history in the Aegean world.) Several incised cow shoulder blades, long known from shrines in Cyprus, were also found at our Ekron shrine. These scapulae are associated with the cultic ritual of divination in which the god delivers a message or gives advice. The cow was the chief sacrificial animal used in this ritual.8 The earliest scapulae at Ekron, found in stratum VI, may mark this shrine as one of the first cultic installations of the Sea People/Philistines established in Philistia, and it may indicate that from its inception this building complex functioned as a cultic installation. As we continued to excavate, new kilns appeared; next to one we found a beautifully worked, ivory ring-shaped pommel handle. This ivory handle, with a suspension hole and traces of an iron blade, was found near a ritual burial; a decapitated puppy had been interred with the head placed between the hind legs. We have no idea what this signifies. Three other knives were later found at the site, two in a cultic context. In stratum VI Philistine bichrome pottery appeared for the first time. Therefore, we could confidently date the stratum to the final two-thirds of the 12th century B.C.E. As we have noted, Philistine bichrome ware differs from the earlier Mycenaean III C:1b pottery. Philistine bichrome ware is characterized by red and black decoration, divisions into metopes (discrete decorated areas) and the use of fish and bird motifs in a highly stylized manner.
This new pottery has close affinities with the elaborate style of Mycenaean III C:1b pottery that was just then appearing in Cyprus, so its appearance at Ekron may mark a second influx of settlers at our site. Its appearance at this time may also correspond with the first historical mention of the Philistines in the Egyptian annals, dating from the eighth year of the reign of Ramesses III (1191 B.C.E., according to the high chronology, and 1175 B.C.E. according to the low chronology). Toward the end of this phase, Philistine bichrome pottery predominates; the amount of Mycenaean III C:1b pottery of the earlier period gradually diminishes and finally disappears. Let us turn now to the evidence from the lower city (in our fields III and IV). We have already mentioned the fortification wall found here. In addition to the wall itself, we revealed a massive fortification with rooms attached; this may have been a gate. The heavy white plastered mudbrick walls of these fortifications and rooms are typical of all buildings on the site during the late-12th through 11th centuries B.C.E. Near these fortifications was a huge installation lined with hamra (a red, sandy plaster). In this installation we found a crucible with traces of silver on it. Perhaps a metal industry existed here. Remember that on the upper tell as well, we found an industrial area located on the periphery of the city. The locations of these industries may reflect town planning policy that considered ecological factors—and placed industrial facilities as far away as possible from the center of the city. (We found much the same “planning” in the Iron Age II city.) We also uncovered some very special artifacts in this area—for example a gold, double-coiled ring for the hair of a Philistine maid. As might be expected, the ring has close analogies in the Aegean world.9 Another unusual find is a beautifully worked ivory knife handle. Let us leave the industrial area on the periphery of the city and go now to the city center (our field IV). Here we are in the heart of the site, what we call the elite zone. This was undoubtedly the administrative center of the city. Here stood well-planned, monumental buildings—possibly palaces or temples. Plastered mudbrick walls, well preserved, still stand to a height of 3.5 feet. We will concentrate our attention here on two buildings, one built partially on top of the other. One we call building 350; the other, building 351—very prosaic names for two very exciting structures. The earlier building—that is the lower one—is building 351. We have still not finished excavating it, so this must be considered only a preliminary account. Our work has been drastically slowed down because of numerous technical problems, not the least of which was the nature of the soil we were excavating: It was extremely moist, almost wet, suggesting we were very close to the water table. In the coming season, we want to lower the water table—a large-scale project for which we hope to use modern hydraulic equipment. But it is imperative to continue the excavation of building 351 because its history will tell us a great deal about the initial phase of the Philistine settlement in the elite zone of the city. Despite the fact that the excavation is incomplete, it is clear that building 351 was a public building. In the earliest phase so far uncovered (our stratum VIA), it is a large, well-planned mudbrick structure, partially damaged by the later construction of palace/temple 350.
It consists of a large hall (26 feet by 33 feet) on the west and a number of small rooms on the east. So far we have not found any evidence of pillar bases or interior walls that would have supported a roof, so we don’t yet know whether this area was a roofed hall or an open courtyard. The walls were constructed of mudbricks laid lengthwise. Traces of white plaster can still be seen on the walls, a feature that repeats itself in building 350. The floor of this large hall in building 351 is composed of beaten earth covered by ashes, charcoal and pottery sherds. The only hint we have of what went on in this hall is the presence of a number of huge open vats, of which we found large, thick fragments. In the southwestern corner of the large hall, we found a small, white-plastered, stepped “niche.” The function of this niche is still unclear, but it is obviously cultic. One of the small rooms on the east yielded a very large number of restorable vessels, mostly storage jars. The pottery includes elaborate Philistine bichrome ware and only a few Mycenaean III C:1b potsherds, so we date this phase near the end of the 12th century B.C.E. But there is surely an earlier phase yet to be uncovered. On top of building 351 is building 350—another large hall, with smaller rooms (three of them) on the east. Here we are in stratum V—the 11th century B.C.E. Near the southeast corner of the large hall, just below the floor level, we discovered a foundation deposit that included a lamp inside two bowls. One bowl was upturned on top of the other, with the lamp nested inside in an upright position. The bowls were decorated with concentric circles, and the lamp showed no burn signs. Similar deposits at other sites—from Gezer in the north to Deir el-Balah10—have been connected with the ceremonial founding of a new building. The massive, 4-foot-wide foundation of building 350 and the boulder-size stones used for it suggest that this building had more than one story, although only the foundation and part of the first floor have survived. The above-ground 4-foot-thick walls were made of white-plastered mudbrick. Several layers of plaster could be detected, indicating frequent replastering. Small fallen fragments of blue-colored plaster seem to indicate that at least parts of the walls were painted. Architectural features as well as the artifacts found indicate that the building was used for cultic purposes. It was either a temple or a palace/temple. The main hall and each of the three side rooms on the east display unusual features that are as yet only incompletely understood. The middle of the three small eastern rooms contained a plastered, mudbrick bamah (offering platform) that was preserved to a height of 3 feet. On it were two bowls and a flask with red concentric circles. Near the bottom of the bamah was a bench that ran around its base. Such bamot (the plural of bamah) are part of the local Canaanite tradition seen at Tel Mevorakh and Tell Qasile, but they are also known from Cyprus and the Aegean, at such sites as Enkomi, Kition, Philakopi and Mycenae.11 In the Canaanite tradition (for example, at Tell Qasile) temples with bamot existed as independent shrines. In the Aegean (as at our site), the bamot and the shrines were part of a larger building complex. We now need to consider the Aegean and Levant influences on shrines and bamot and what these influences tell us about interconnections among these regions.12 In the next phase of this room, in stratum V, we found two bamot and a bench. 
The floor from this level proved to be a treasury of finds: a broken ivory knife handle; a broken faience ring; a gaming piece of faience in the shape of a chess pawn; various pottery vessels, including chalices; and a fang of a wild pig. Especially interesting were three bronze wheels with eight spokes each. These were undoubtedly part of a square cultic stand on wheels, a design known from Cyprus in the 12th century B.C.E.13 We also found a corner of this stand, and a bud that hung down from the stand as a decoration, all made of cast bronze. A basin, or laver, would be set on top of the square stand, which, in effect, provided a supporting frame. The offering was placed in the basin. This cult stand—in its shape, workmanship and decorative repertoire—is reminiscent of the Biblical description of the mechonot, the laver stands made for Solomon’s Temple in Jerusalem by Hiram, king of Tyre. As with our stand, lavers were placed on the frame of the stand: “[Hiram] made the ten laver stands of bronze. Each stand was four cubits long, four cubits wide and three cubits high. This is how the stands were constructed: They had panels and on the panels within the frames were lions, oxen and cherubim. In the frames, both above and below the lions and oxen were wreaths of hammered metal. Each laver stand had four bronze wheels and [two] bronze axles” (1 Kings 7:27–30). Our Ekron example is the first wheeled cult stand found in Israel. It is also the closest in time (11th century B.C.E.) to Solomon’s Temple (mid-tenth century B.C.E.). Other rooms in building 350 also contained extensive finds, many of them associated with cultic practices. Architectural features in these rooms, such as benches, also indicate that the rooms were used for cultic rituals. The northernmost of the small rooms actually had three superimposed floors. A plastered, funnel-shaped installation was set into the upper floor. We are not sure how it functioned, but, taking into account the other finds in this building, we assume that it had some cultic purpose. On the middle floor, a mudbrick bench was built next to the eastern wall. The other finds on this middle floor included 20 lumps of unbaked clay objects, biconical or rounded in shape. Similar objects, designated as “loom weights,” were found in large quantities at Ashkelon in 12th- and 11th-century B.C.E. contexts. They are also known from Kition and Enkomi on Cyprus. But what they are or what purpose they served is still a mystery. Stacked alongside the eastern wall on this middle floor, we found a cache of unusual vessels: a bottle with an elaborate style decoration, including a dotted scale and triangles; a horn-shaped, red-slipped, burnished bottle; an elongated bottle with horizontal red stripes; and a red-slipped, black-decorated, highly burnished carinated beer jug. The decorative styles called red slip and red burnished slip seem to appear at the beginning of the 11th century B.C.E., alongside the elaborate Philistine bichrome decorative style. Finally, on the highest floor of this room, we found a large, ivory, Egyptian-style earplug. The earplug was used as an earring inserted in the lobe of the ear. The southernmost of the three small rooms contained another small bamah. Its top and two sides were covered with a thin layer of plaster. On top of the bamah was an iron object that resembled an ingot; it may represent something that still escapes us. 
One of the most important artifacts found on the floor of this room, a complete iron knife, had an ivory handle and bronze rivets fixing the blade into the handle. Not far from the iron knife lay a bronze linchpin. Originally part of a real chariot, this linchpin secured one of the chariot’s wheels to its axle. The length of the linchpin would fit a normal-sized wheel, not the miniature wheels on the laver stand we described above.14 The entrance to the large, elongated main hall of building 350 was in the building’s northern wall. Inside, three entrances led from the main hall to the small rooms on the east. On the north-south central axis of the main room, we discovered two pillar bases (and possibly a third), one located exactly in the center of the hall. This configuration resembles that in the Philistine temple at Tell Qasile, where two support pillars stood about 6 feet apart. These two pillars, of course, also recall the pillars in the Philistine temple mentioned in the famous Bible story in Judges 16. Chained and blinded, Samson brings a Philistine temple down on himself by pushing two pillars apart. The two pillars in the Ekron building were 7.5 feet apart. The floor of the main hall, a laminated, beaten earth surface, contained many fish bones, animal bones, ashes and charcoal. Three superimposed hearths15 in the northeastern part of the hall may explain why fish and animal bones, ashes and charcoal appear in the floor material. Each of these round hearths, about 3 feet in diameter, was paved with hundreds of small wadi pebbles. On top of these pebbles lay a thick layer of ashes and charcoal mixed with animal bones. Nearby we found chicken bones—a unique phenomenon in archaeological excavations in Israel. Hearths are not known in the Canaanite building tradition; the only other hearth known in Canaan comes from Tell Qasile, which was also a Philistine city. On the other hand, hearths are an important feature in the building tradition of Cyprus and the Aegean, particularly in the plan of buildings we call megarons. A megaron is a large, long building with a central hall, which features a hearth, side chambers and an open-fronted porch. Indeed, in a megaron, the hearth is a central element. Again at Ekron we find reflected the Aegean background of the Philistines. A word about the iron objects that came to light in this building. We already mentioned one complete iron knife with an ivory handle. We also recovered three other ivory handles belonging to iron knives—all dating to the 12th and 11th centuries B.C.E. The elegant craftsmanship of these iron knives and the context in which they were found attest to their cultic and ceremonial significance.16 Similar knives found in the Aegean also make the discovery of the Ekron knives important. Current research increasingly points to evidence of European influence in the development of this type of knife with ring pommel handles. In addition, we mentioned a large iron ingot found on a small bamah. These all add to the growing inventory of iron objects found at Philistine sites in Iron Age I. And this inventory raises anew the question of the Philistines’ role in the introduction of ironworking technology. In stratum IV (late 11th to early tenth century B.C.E.), the building we have just described in such detail retained the same architectural plan. Neither the walls nor anything else in the building was changed. The fill, intentionally placed to level and raise the floor, helped to preserve the walls to a height of 3 feet.
The cultic function of the building in stratum V continued in stratum IV. In the main hall, a bamah was still in use. The rich finds—pottery, ivories, faience and stone artifacts—found on the last floors of the building also point to the special character of this structure. The hearth, on the other hand, a central feature in the earlier strata, was not rebuilt; this Aegean tradition was no longer significant. The finds from this stratum, including ivories and faience pieces, reflect strong Egyptian ties. This indicates a turning point in Philistine material culture: New features reflect the impact of Egyptian and Phoenician culture on the Philistine world. Thus we come to the end of Iron Age I at Ekron—about 1000 B.C.E. Our excavations at Ekron have given us a glimpse into the history of a large urban center with a rich material culture—from its initial settlement, associated with the arrival of the Sea Peoples/Philistines, to its fortification and development into an important member of the Philistine pentapolis. The city featured industrial areas, unique cultic installations and a distinctive material culture, all reflecting strong Aegean ties. Ekron reached its peak of development in the 11th century B.C.E. in Iron Age I. However, this progress went hand in hand with a loss of distinctiveness of the Philistines’ material culture. The quality of the Philistine bichrome pottery degenerated as Egyptian and Phoenician influences had their effects on Philistine material culture. In the early tenth century B.C.E., Ekron was destroyed and for the most part abandoned. The bulk of the city lay barren for 270 years, until it was resettled in the seventh century B.C.E. Who destroyed the city? Perhaps King David. Contemporaneous strata (stratum X at Ashdod and stratum X at Tell Qasile) were also destroyed. Someone was obviously pressing the Philistines. If it was not David, perhaps it was the Egyptian pharaoh Siamun. What military, political or economic reasons can account for the sudden abandonment of most of a major urban center like Ekron? The answer is probably to be found in the changing geopolitics of the region. In short, as we shall see, the Philistines were no longer able to control the land that had been their home for 200 years. The second part of this article, by Seymour Gitin, covering Ekron’s resurgence in the eighth century B.C.E., will appear in our next issue.
Milena Apostolovska Stepanoska, Hristina Runceva Tasev
NATIONAL IDENTITY VS. EUROPEAN IDENTITY: PARTNERS OR RIVALS?
Milena Apostolovska Stepanovska, PhD, Assistant Professor, Ss. Cyril and Methodius University, Faculty of Law “Iustinianus Primus”, Skopje, Republic of Macedonia. Hristina Runcheva Tasev, PhD, Assistant Professor, Ss. Cyril and Methodius University, Faculty of Law “Iustinianus Primus”, Skopje, Republic of Macedonia.
This paper aims to analyze whether the European identity can be equated with the national identity. The authors attempt to define the European identity through the markers of national identity given by the famous theorist and researcher of national identity Anthony Smith. The paper presents an overview of Anthony Smith's theory of national identity and the standpoints of the most relevant authors and theorists in the field of European identity. The authors come to the conclusion that the European identity cannot be theorized with the definitions of national identity, and that the existence of national identity is not an obstacle to the formation of a European supranational identity.
Key words: European identity, national identity, culture, supranational identity.
I. NATIONAL IDENTITY
National identity is a special type of collective identity. Different authors give different definitions of what national identity is. Some authors claim that national identity represents the totality of social and cultural characteristics of the nation that help the nation integrate from within and on the basis of which it differs from other nations. National identity is inconsistent and changing in different socio-historical circumstances, especially in circumstances that arise with globalization. According to Anthony Smith, the term "national identity" implies some sense of political affiliation, however weak that feeling may be. The political community comprises common institutions and a code of rights and duties for all its members, who live in a certain, precisely marked territory with which they identify. The French philosophers were in the same line when they defined the nation as a community of people who obey the same laws and institutions within a given territory. It must be pointed out that this definition is typical of the West. However, the Western experience has exerted a powerful, in fact major, influence on the understanding of the nation. A new kind of politics - the rational state - and a new kind of community - the territorial nation - are put in close correlation for the first time. The Western or civil model of the nation is most often called the territorial conception. Anthony Smith lists the following as important symbols of national identity (Smith, 1998, p. 15): 1. Historical territory - homeland; 2. Common myths and historical memories; 3. Shared mass public culture; 4. Rights and duties of all members of the nation; 5. Joint economy and territorial mobility of the members of the nation. The first feature of national identity means that there must be a compact, self-contained, defined territory. The people and the territory must belong to each other. But that territory cannot be just any territory, nor can it be located just anywhere. It must be the "historic land" or the "historic home" of the nation. The "historical country" is the country where the nation has lived over several generations.
The homeland becomes a repository for historical memories and associations, a place where their sages, saints and heroes worked, fought, prayed and lived. All this makes the homeland unique. The homeland is a community of laws and institutions, united by a single political will. It entails at least some social rules of conduct which are the expression of common political feelings and goals. Parallel to the growing sense of legal and political community, one can follow the development of a feeling of legal equality among the members of that community. Its full expression is the different types of "rights based on citizenship": civil, political, economic, social, cultural, etc. It involves a minimum of mutual rights and obligations for the members of the nation and the corresponding exclusion of non-members. It is considered that the legal equality of the members of the political community in its delimited territory is a joint value and tradition among its residents. In other words, nations must have a common culture and a certain civic ideology, common notions and aspirations, feelings and ideas that link citizens to their homeland. The task of providing a common public culture is delivered to the agencies of popular mass socialization, particularly the public education system and the mass media. According to the Western model of national identity, nations are perceived as cultural communities whose members are united by common historical memories, myths, symbols and traditions. National identity is a kind of collective identity that, according to Karl Deutsch, represents a group of people who aspire to gain power through the mechanism of coercion, strong enough to be able to apply regulations, to avoid arbitrariness and to practice alignment with them. "But in order to achieve this, there must be unity among members of different social groups." In that case, "national identity then indicates the alignment of the low and middle class with regional centers and social groups through communication channels of social and economic discourse center". The theory of Karl Deutsch can be helpful in explaining the process of creating the European identity. If there is a European national identity, it was created by people who associate across national borders and pool their experience in a positive way. The development of the European economic, social and political fields contributes to the daily interaction of members of different societies. They are the ones who see themselves as Europeans, involved in a European national project. They can recognize the similarities that exist in other countries, and connect with them as part of a large group of Europeans (Fligstein, 2009, p. 132).
1. Functions of national identity
The national identity and the nation are complex structures composed of multiple interconnected components - ethnic, cultural, territorial, economic, legal and political. They indicate the links of solidarity between members of communities that are united by common memories, myths, traditions and more. The nation appears as a mixture of two different components - civil, territorial, ethnic or other - that vary from case to case.
In fact, the multidimensional character of national identity is its enduring force in modern life and politics, successfully connecting the national identity with other powerful ideologies and movements. The multiple power of the national identity can be illustrated by examining the functions that the identity has for groups and individuals. In accordance with the above-mentioned dimensions, the functions of national identity are divided into external and internal. The external functions of the national identity are territorial, economic and political: The territorial characteristic refers to the social space where the holders of the national identity live and work; In economic terms, the national identity embraces the pursuit of territorial control over resources, including the workforce. In political terms, the national identity relies on the state and its organs. It includes the selection of the political staff, the regulation of political behavior and the appointment of the government, based on the standards of national interest, which presumably reflects the national will. Accordingly, the most important political function of the national identity is its identification with the common rights and obligations of legal institutions, which in turn determine the values and character of the nation through the centuries. Today, the reference to national identity is the main identification of social origin and belonging. The national identities also have interior features important for the members of the nation. The most obvious one is the socialization of the members of the nation as "nationals" and "citizens". Today this is achieved through mandatory, standardized, public mass education systems. In this manner, state officials expect to instill national commitment and a particularly homogeneous culture, an activity that most regimes perform with much energy, influenced by nationalist ideals of cultural authenticity and unity. Also, the national identity establishes a social link between members of different classes, connecting them on the basis of common values, symbols, traditions etc. Symbols, flags, coins, anthems, uniforms and monuments remind the members of the nation of their common heritage and past. Their sense of common identity and belonging encourages and uplifts the members of the nation. Finally, the sense of national identity is a powerful tool for identifying and locating individuals in the world through the prism of culture (Smith, 1998, p. 31).
II. THEORIES OF EUROPEAN IDENTITY
Although this is a new field of interest, several different theories that explain the nature and creation of the European identity have been developed so far. In this context, the first important theory is the theory of social constructivism introduced by Thomas Risse. "It is based on a sociological ontology, which assumes that people cannot exist independently of their social environment and their joint collective system of meanings (culture in a broad sense)" (Risse, 2004, p. 160). He further argues that "social identities first of all contain the ideas that describe and categorize individual membership in the social group or community, including emotional, affective and evaluative components" (Risse, 2004, p. 167). This author also suggests three potential forms of European identity: 1. Nested; 2. The crossroads model; 3. "Marble cake" (Risse, 2004, p. 168).
It is necessary to emphasize that the theory of social constructivism is of greater assistance when we have to explain why we need to study the European identity than when we have to explain what the European identity is. The forms of identification introduced by Thomas Risse are of great help for the research of the relationship between the European and the national identity. Bernhard Giesen uses the so-called procedural model of collective identity to explain the European identity. This author also uses the term "Verfassungspatriotismus", or "constitutional patriotism", introduced by J. Habermas. This theorist tries to explain the European identity as a "constitutional patriotism", which according to him does not mean someone's love for or commitment to his homeland, but love for or commitment to its constitution. He claims that the collective identity, as described in his procedural model, is "an attempt to link politics with political tradition". It is evident that this model gives priority to the process and practice of EU membership. Giesen concludes that the European identity is based on a common system of political ideals, not on a common culture or history (Giesen, 2003, p. 22). Richard Munch develops a theory of European identity on multiple levels. He claims that there is not one monolithic European identity, but many trans-European identities based on narrow individual interests. Furthermore, this author claims that the EU has not been established by the sovereign nations, but that the Union is based on the strong social support of its citizens, which has been supported by more than a dozen associates. European society is developing into a society composed of several levels (Munch, 2003, p. 58). His model of the European identity underlines the multiplicity of identities, which together create a meta-identity, or the European identity. These identities are not equal in their interactions; some are more developed than others, some are given more weight than others, and that is how more levels of identities are created (Munch, 2003, p. 61). The claim that European identity means different things to different people is true, but claiming that it is a set of identities means denying the civil elements found in its roots. Thomas Risse says that there is "identification with the EU as a separate civil and political entity," although he also claims that there is identification with "wider Europe as a cultural and historical social space." Here he claims that the existence of the civil European identity should not be confused with the lack of a European cultural identity. However, the civil European identity grows at the expense of the European cultural identity and, according to Risse, dominates in the current discourse. He argues that "thanks to the connection with the new Europe, the European constitutional patriotism became dominant" (Risse, 2004, p. 170).
Bernhard Giesen claims that the European identity is built on a staggered traumatic past. He argues that "the nations of Europe have been affected by the collective torture or guilt, especially for the Second World War and the Holocaust, during the past decade". He also argues that this identity of collective trauma and guilt has cultural foundations (Giesen, 2003). From the above-mentioned theories of the European identity, we can conclude that there are two approaches: the constructivist approach argues that collective identities can be created on an elementary level. In contrast to this theory, the essentialist theory negates the ability to create identities and indicates that there are only limited opportunities. The essentialist opinion is dominant in everyday life and among the common people, while those dealing with research on identity usually accept the constructivist theory. Samuel Eisenstadt and Bernhard Giesen came to three conclusions from the constructivist perspective, referring to the European identity: 1. Its appearance is possible; 2. The national and the European identity are compatible; 3. The European identity can be encouraged by providing access to those resources that allow the creation of supranational identities (Eisenstadt and Giesen, 1995). Jürgen Habermas in this context asks "why it would not be possible for an identity to be created beyond the national borders, in the same way that the European countries in the 19th century created the national identity". Having reviewed all the important concepts of national identity, the question arises whether they can be applied in the context of the European identity. If we consider the markers of national identity which according to Anthony Smith are important in determining the nation - a named population (which has a common area called homeland), common myths and historical memories, a mass public culture, a common economy and common rights and obligations that apply equally to all the members of the nation - and make an effort to fill in the content of the European identity, we will come to the conclusion that it is mission impossible. Namely, what corresponds to the national identity does not correspond to the European identity. We can see this, for example, in the first integral element of the national identity: the "named population". When we try to put this element in the context of the European identity, a problem appears. That is because in the case of the EU we are talking about the Europeans, but the question is what they are. When it comes to national identity, we refer to the Dutch, the Germans, the British etc. The identity marker identifies members as members of these nations. In each of these cases there is a national language and a native country, where the national identity is original. Europeans do not possess a common language or a fixed historical territory. The European lingua franca, as the Latin language was considered in the past, cannot be put into use again. The three languages most widely spoken across Europe (English, French and German) still have limited use - the English language is increasingly applied in the commercial and scientific sense, while the German language is more commonly used on the territory of Central Europe.
However, none of these languages can be accepted as a lingua franca. Some researchers argue that in the case of the European identity the second element, "common territory", is also disputable in a European context. It primarily concerns the Eastern European borders: in certain historical periods Eastern Europe and Russia belonged to Europe, and in other periods they did not. Language and territory are visual markers of the national identity which cannot be applied in the case of the European identity. According to some authors, one more marker of the national identity defined by Anthony Smith is controversial when set in the context of the European identity. It concerns the legal norms that should apply to all members of the community. In 2007, the results of a survey were presented, conducted in several member states (the Netherlands, Belgium, Germany, Italy, Spain, Sweden, Greece, Hungary, Poland, the Czech Republic and Great Britain), which concerned the legal rules regarding non-governmental organizations, the legal restrictions on their operations, the rules for establishing NGOs, etc. The results of the survey showed that different legal rules applied in the various Member States of the Union. Thus, for example, in all Member States there was an obligation to register an NGO, but in each of the Member States the procedure was different. Another marker of the national identity given in the definition of Anthony Smith cannot be applied when it comes to the European identity. That marker consists of common myths and history. The defenders of the European identity emphasize the fact that the Europeans have a common European heritage, Christianity, which has been an important factor in identity building in Western Europe in the past. But what about the Western Balkan countries aspiring to EU membership, which for many years in the past were part of the Ottoman Empire and under the influence of Islam? Here it is necessary to ask the question: Does the theory of European identity oppose the traditional theories of identity? If the European identity is defined as a collective identity based on a common system of political values, it is because it is associated with a unique political organization, and that is the European Union. Just as the national identity is linked to the nation-state, the European identity is associated with the EU. While the national identity is closely associated with the nation-state, the historical examples of identities associated with multinational political organizations are those of empires. However, we cannot compare the EU with an empire, because the Member States are there on a voluntary basis. There is only one European identity, unlike the most often conflicting national identities which existed in the past. We come to the conclusion that we cannot give a definition of the European identity through the definition of national identity because these two identities cannot be compared. The European identity is a special kind of identity that is a product of the existence of a special type of organization such as the European Union.
Some authors say that it is a sui generis organization, and this leads to the conclusion that the European identity is a sui generis identity that does not deny the existence of the national identity. The European identity and the national identity are not rivals but counterparts, and they can coexist.
REFERENCES
1. Eisenstadt, Shmuel Noah and Giesen, Bernhard. "The Construction of Collective Identity". European Journal of Sociology, 36, 1995, pp. 72-102.
2. Fligstein, Neil. "Who Are the Europeans and How Does This Matter for Politics?", in European Identity, eds. Jeffrey T. Checkel and Peter J. Katzenstein, Cambridge University Press, New York, 2009.
3. Giesen, Bernhard. "The Collective Identity of Europe: Constitutional Practice or Community of Memory?", in Europeanisation, National Identities and Migration, eds. Willfried Spohn and Anna Triandafyllidou, London, Routledge, 2003.
4. Munch, Richard. "Democracy without Demos", in Europeanisation, National Identities and Migration, eds. Willfried Spohn and Anna Triandafyllidou, London, Routledge, 2003.
5. Risse, Thomas. "Social Constructivism and European Integration", in European Integration Theory, eds. Antje Wiener and Thomas Diez, Oxford, Oxford University Press, 2004.
6. Smith, Anthony. Nacionalni identiteti, Biblioteka XX vek, Beograd, prev. Slobodan Djorgjevic, 1998.
In this chapter:
- 10 keys to being a good communicator
- Communicating with parents
- Communicating with league administrators
- Communicating with opponents and referees
As a coach, you're called on to do a lot of communicating. You address players, parents, other coaches, league administrators, and referees. You communicate in person, on the phone, in writing, one on one, and within group settings. How well you communicate with these groups significantly influences how successful your season is, how enjoyable it is, and how much your players learn. Of course, you've been communicating all your life. It can't be that hard, right? Right and wrong. If you haven't coached or taught before, and if you aren't used to instructing and leading youngsters, you are entering uncharted territory. Consider this chapter your roadmap to help you chart that territory. The 10 keys, presented first, will help you hone your communication skills as a coach. These keys are written with players in mind, but they apply to all groups with which you will communicate. Following the keys, we'll focus on the specifics of communicating with parents, league administrators, opponents, and referees.
10 Keys to Being a Good Communicator
Most people tend to think only of the verbal side of communication. That's important, but there's so much more to being a good communicator. Here are 10 keys to good communication:
- Know your message.
- Make sure you are understood.
- Deliver your message in the proper context.
- Use appropriate emotions and tones.
- Adopt a healthy communication style.
- Be a good listener.
- Provide helpful feedback.
- Be a good nonverbal communicator.
- Be consistent.
- Be positive.
Know Your Message
Coach Caravelli gathers his players near a basket at the practice court and says, "All right, guys, today we're going to learn how to box out." He tells David to help him demonstrate, and asks Alex to put up a shot. Alex shoots, and as he does, Coach Caravelli spreads his legs and arms wide and sticks his rear out, trying to find David, but he keeps his eyes on the ball and the basket. David easily slips by him, untouched, and grabs the rebound. "Just a lucky bounce," Coach mutters. "But Coach, my dad says you're just supposed to find your man first, and then box out," one player says. Coach Caravelli considers this a moment before saying, "Actually, let's just focus on shooting today. You guys like to shoot, right? Who wants to box out, anyway?" The player was right; Coach Caravelli didn't know the technique for boxing out. He didn't really know his message. Three issues are involved in knowing your message. You need to
- Know the skills and rules you need to teach.
- Read situations and respond appropriately.
- Provide accurate and clear information.
Know the Skills and Rules
Coach Caravelli didn't know how to teach the skill of boxing out. He might be a smooth, coherent, and clear speaker, but that's not going to help his players learn how to box out. Smoothness doesn't make up for lack of knowledge. You have to know the skills and rules.
Read the Situation
As Coach Caravelli teaches his players how to correctly execute screens, Kenny and Sam are quietly goofing off, not paying attention. But Coach Caravelli doesn't address the situation because they're not really disrupting his instruction and he's a little behind schedule. As his players begin to practice screens, Kenny and Sam are not executing as instructed. Kenny is not stationary when he sets screens, and Sam leaves a wide berth when running by the screener.
So Coach Caravelli stops the action and tells them how to properly execute screens. Then he lets them proceed. Coach Caravelli delivered an important part of the message—Kenny and Sam need to know how to execute screens—but that was only part of the message he should have delivered. The real issue here was that the players weren’t paying attention, and Coach Caravelli didn’t correct the situation when it was occurring. He should have corrected that on the spot. Barring that, he should have told Kenny and Sam that the reason they didn’t know how to execute a screen was because they weren’t listening when he was teaching how to do so, and that they need to listen to his instruction the first time around. Sometimes knowing your message goes beyond understanding the content. You have to read the situation as well and tailor your message accordingly. Provide Accurate and Clear Information Knowing the content of your message isn’t enough. You need to be able to deliver that content clearly and accurately. Imagine a portion of a coach’s preseason letter to parents reading like this: "I’m really looking forward to coaching your child this season. Our first practice is next Monday at 6 p.m. See you then!" Too bad the coach didn’t remember to note where the first practice is being held. As a result of not being clear in his letter, he’ll have to spend a lot of time on the phone calling parents to deliver the information. The same goes for teaching skills. Perhaps you know the proper technique for shooting, but your instruction is so technical and confusing that your players are worse off than if they had received no instruction at all! They’re confused, you’re frustrated, and no one learns how to shoot. Know what information you need to deliver, and deliver it clearly so that all concerned understand. That’s sometimes easier said than done. Make Sure You Are Understood As you can imagine, if you are not clear with your directives, you can create a lot of confusion. Take the following example: "Okay, Dion," Coach Hagan says, "the next time you’re in that situation, make a crossover dribble and you’ll shoot right past your defender. All right? Let’s try it again." Dion gives Coach Hagan a puzzled look, but Coach Hagan, in the midst of conducting a drill, doesn’t notice. He’s already getting the drill going again. Dion just hopes he’s not in that same situation, because he has no idea what a crossover dribble is. Just because something is clear to you doesn’t mean it is clear to whomever you’re delivering your message to, be it a player, a parent, an administrator, or anyone else. You need to watch for understanding and be ready to clarify your message if the person on the receiving end is confused. When you state your message clearly and simply, you increase your chances of being understood. But don’t count on that; instead, watch your players’ facial expressions and read their body language. If they look confused or unsure of what to do, state your instruction again, making sure you use language they understand. And watch how you say things: When you tell a player to "move to the vacated spot," she might not know that you mean to rotate to the open area that her teammate just left. Likewise, shouting out "Pick and roll! Pick and roll!" doesn’t help if your players don’t know what a pick and roll is. Speak in language your players understand, and watch for their understanding. 
Deliver Your Message in the Proper Context In the first game of the season, Karim has just put up an awkward shot, using poor form. The ball is rebounded by the opponents, and a foul is called. As Karim moves downcourt before the ball is inbounded, Coach Grantham cups his hands to his mouth. "Hey, Karim! Use your fingers, not your palm! And square up your shoulders and hips to the basket! Remember to bend your knees to get a little momentum for your shot! And bend your shooting arm elbow to 45°. Don’t forget to follow through!" What’s wrong with this? First, it’s probably humiliating for Karim to have everyone in the gym witness his coach trying to instruct him on how to shoot. Second, it’s not the time or place to give such detailed instruction. That should come in practice, not in games. The instruction itself wasn’t incorrect; the timing of it was. Consider your context for delivering your message. Give brief reminders of tactical or skill execution during games, but save the teaching for practices. Use Appropriate Emotions and Tones Emotions are a natural part of basketball. Both you and your players (and their parents) can expect to experience a range of emotions throughout the season. In terms of communicating with others, your emotions can significantly affect your message. How? Let’s look at a few examples: Situation: Devon, your point guard, is stationary, dribbling near the top of the key as his teammates are moving and cutting to get open. Jeff cuts toward the basket and is wide open for a moment. Devon is late with his pass, though, and the ball is knocked away and stolen. Response #1: "Come on, Devon! Jeff was wide open! You can’t fall asleep out there!" Response #2: "That’s all right! Let’s get back on defense! Hold them, now. Let’s get it back!" Don’t ever berate a player, publicly or privately. Remember that even National Basketball Association players make plenty of mistakes. Your players are going to make mistakes; what they need is instruction if they’re not sure how to make a play, and encouragement, regardless. Help them to keep their focus on the game, not on how well they’re pleasing you. Situation: You are moments away from beginning the game that will decide your league championship. Response #1: "All right, this is it, guys! There’s no tomorrow. We’ve been playing to get to this game all year long. Show them what you’re made of. I want to feel that championship trophy in my hands at the end of the game. How about you? Are you ready to go out and win?" Response #2: "Okay, let’s play basketball like we know how. Keep your focus on the fundamentals. Let’s move the ball around, look for the open guy, play tough defense, and box out on the boards. Let’s go have some fun, all right?" Pep talks are better saved for the movies. Such talks often backfire because they get kids so sky high that they can’t perform well. Your players need to focus on playing sound, fundamental basketball. Situation: While practicing free throws, Terrell awkwardly slings the ball toward the basket, not using his legs at all. Response #1: "Hey, Terrell, you look like you’re shot-putting the ball up there! This isn’t track and field, this is basketball!" Response #2: "Use your legs, Terrell. Bend your knees to get a little momentum and strength. You can do it." Sarcasm will get you nowhere. Terrell doesn’t need sarcasm, or any type of humor. He needs instruction and encouragement. 
Adopt a Healthy Communication Style A lot of what you’ve been reading has to do with your communication style—whether you over-coach during games, offering too much instruction; whether you keep your emotions in check, or are too excitable or high-strung; what your tone is as you communicate; and so on. But there is more to consider concerning your communication style. It has to do with the bigger picture, with how you communicate on a daily basis. It has more to do with personality, outlook, and attitude than with reacting to a specific moment. And some styles are more effective than others. Here are a few of the less-effective styles some coaches fall into: Always talking, never listening—Some coaches feel if they’re not constantly talking, they’re not providing the proper instruction their players need. Carried to the extreme, some feel that their players have nothing to say. Coaches who always talk and never listen tend to have players who stand around more in practice because their coach is talking, and those coaches don’t get to know their players, thus missing out on one of the real joys of coaching basketball. Deliver the messages you need to deliver, but don’t feel you have to be talking throughout the entire practice. Always in control, too directive—Some coaches run practices like drill sergeants, snapping orders at players, exerting their authority, and squelching fun wherever it begins to appear. When practice doesn’t go exactly as they have choreographed it, they become irked. When players don’t progress according to schedule, it drives them crazy. Be in control of practice, yes, but don’t squelch the fun and don’t obsess over things you can’t control. Not in control, too passive—Other coaches take the opposite tack, either because they’re unsure of themselves or they’re too laid-back and give the impression that no one is in charge. They don’t provide the guidance or discipline players need. Not comfortable in the spotlight, they avoid it, and discipline problems begin to crop up. If you’re a quiet or laid-back person, don’t change your personality but do exert your authority as coach. You can be in charge and provide instruction without being loud and obnoxious. Seeking perfection—There’s a fine line between seeking to improve and seeking perfection. When coaches cross over the line into perfectionism, they are rarely satisfied with anything. Their forwards get rebounds, but their blocking out is not quite right. Shots go down, but there are flaws in the shooting mechanics. Even the gyms are not adequately lit or swept, at least in these coaches’ eyes. Players are on edge when they play for a perfectionist coach; their focus turns from playing the game to pleasing the coach. Help your players improve their skills, but allow them margin for mistakes. You can strive for improvement without putting added stress on the kids. Celebrate improvement even if it’s still not picture-perfect. Not in control of emotions—Some coaches throw up their hands in frustration when players are trying hard but having difficulty learning a skill. They shout in anger at a questionable call made by a volunteer referee. Their voices drip with sarcasm when players ask them something they feel the players should know. They respond with overzealous enthusiasm when their team scores a basket late in a game they are in control of, and this response is interpreted by all as unsporting behavior. The point is not to suppress all your emotions, but to be in control of them. 
Consider the message you send with the emotion you show. Do suppress any urge to show your frustration toward kids who are trying to learn the skills, as well as any desire to express your anger on the court. Maintain your respect for the people involved in all situations. Your players need you to be steady and need to know what to expect from you. Not aware of nonverbal communication—Some coaches watch what they say but not what they do. They express their frustration or anger nonverbally, and if someone confronts them about that expression, they likely will say, "What? I didn’t say anything." Remember that you’re communicating every second, whether verbally or nonverbally. Keep your nonverbal communication in line with your verbal communication, and make sure that both are positive, instructive, and encouraging. Buddy-buddy with the players—It’s good to be friendly with players, but it’s inappropriate to try to be their friend. Coaches who do this show a lack of maturity as they try to impress their players with how cool they are. Have fun with your players, but maintain the coach-player relationship. You’re there to help them become better ballplayers, not to become their pal. So, what should be your communication style? You should provide the instruction your players need in a way that helps them improve their skills. To do this, you need good listening skills as well as good speaking skills, and you need to be encouraging and positive as you instruct and correct. Maintain respect for your players as you communicate with them. Be friendly and open with them, but don’t try to become their friend. Create an enjoyable learning environment, maintain control over your emotions, and watch your nonverbal communication. When you adopt this type of communication style, you’re paving the way for your players to learn the game, improve their skills, and enjoy the season. A common mistake of new coaches is to assume that their sole role in communicating is to talk. Athletes are there to receive instruction, to be coached. Their focus should be on listening to you, on soaking in your instruction, on carrying out your commands. There’s plenty of truth in those statements, but they don’t reflect the whole truth. Give your players room to speak, to ask questions, to voice opinions or concerns. In doing so, you can get to know them better and are better tuned in to their needs. Thus, you are more likely to pick up on issues and problems you need to deal with; see the following sidebar, "Dealing with Issues As They Arise." Work at not only sending messages, but receiving them as well. As you talk to players, if you notice that their eyes are wandering or their bodies are turned partially away from you, they’re sending you a message ("We’re not really listening"). If their shoulders are slumped, their heads are down, or they’re dragging their feet, they’re sending one or more messages ("I’m tired"; "I’m discouraged"; "I’m bored"). If they’re giving you a blank stare or have a dazed look, they’re telling you they are tuning you out or are confused. Don’t ignore these signals. Handle them on a case-by-case basis. Each player will respond differently. Tune in, address the issues that need to be addressed, clarify instruction, and provide encouragement as needed, and keep your players on as even a keel as possible. Provide Helpful Feedback Tyler has been having trouble learning how to be a good defender. 
He tends to lunge for the ball, wanting to make a steal every time, and as a result he commits a lot of fouls. After one such foul, Tyler and his teammates return to the bench during a timeout. "Tyler, you need to stay on your man and play good defense," Coach Dixon says. Is Coach Dixon telling Tyler something he doesn't already know? Hardly. Is he helping Tyler improve his defensive abilities? No. His feedback isn't helpful at all; if anything, it just adds to the pressure Tyler undoubtedly already feels. Coach Dixon should focus on giving specific, practical feedback that will help Tyler improve his defense. You'll learn about this type of feedback in Chapter 6, "Player Development." For now, know that such feedback is one of your duties in communicating with your players, and when it's given properly, it can reap great dividends in terms of player improvement.
Be a Good Nonverbal Communicator
Studies have shown that up to 70% of communication is accomplished nonverbally. You just read about the importance of reading nonverbal cues—watching facial expressions and body language. You also have to pay attention to the nonverbal cues you send: "Way to go, Alex!" Coach Dintiman says, clapping his hands and smiling. "Way to go, Alex!" Coach Garner says, arms crossed tightly across his chest and a scowl on his face. The same words were used, but Coach Garner sent a vastly different message than Coach Dintiman. Nonverbal messages are being sent constantly—both with and without words. Consider your facial expressions during practices and games. Sometimes it's appropriate to show that you're frustrated—for example, when kids are goofing off. But when kids are exerting themselves on the court and not executing well, keep your frustration in check. Consider what messages your expressions and body language are sending, and make sure those messages are what you want to be sending.
Be Consistent
Your players need consistency from you in three ways. They need consistency
- In the messages you send
- In how you treat them
- In your temperament and style
If you hear different messages from the same person on the same topic, what happens? You begin not to trust that person. The same happens if one week your players hear you say, "We're hurrying our shots! I want to see at least three passes each time before we put up a shot," only to hear you follow that the next week with, "Terry, you were wide open for that shot! You've got to take that opportunity when you get it!" (This latter advice came after Terry received the first pass and was dutifully looking to pass.) Confusing? You bet. If you do this often, the players will not know what to believe, no matter what you say. Be sure you send consistent messages. Make sure you treat all your players in similar fashion. If Dan breaks a team rule one week and you discipline him accordingly, and the next week Zach breaks the same rule but you overlook it because he's one of your best players, what message does that send to your team? That it's okay to break the rules if you're good enough? Likewise, if you spend more of your time with your average and good players in hopes of turning them into good and great players, respectively, what does that say to the lesser-skilled players? That they don't matter because they can't shoot or defend as well as their teammates? All your players need your attention and guidance to improve. They need to adhere to the same team rules and be treated the same way if they break those rules.
And they all need to know that they are equally valued by you, regardless of their playing abilities. Know that after your season starts and you name your starters, players (and parents) will feel that the substitutes are not quite as valued as the starters. At the younger levels, you might rotate starting responsibilities from game to game and thus avoid this dilemma, but at older levels, you’ll be starting your best players. So how do you handle this? First, make sure you give equal attention and help to all your players in practice. They not only deserve this attention, but they need it to contribute in their substitute roles. It helps your team when everyone improves, not just your starters. Second, let players know how the middle and end of the game is just as important as the beginning. If you have 5 or 10 or 15 minutes to play, no matter what segments of the game those minutes come in, the team needs every player to contribute. Third, emphasize that not everyone is going to be a scoring machine, and reward players for all the other things—big and small—that contribute to wins: rebounds, tough defense, steals, assists, and so on. Find ways to tangibly reward substitutes who play well, doing the "little things" that often go unnoticed. Don’t let them go unnoticed on your team! They also need to know what to expect from you. If you are patient and encouraging one practice and moody or volatile the next, the learning environment suffers (as do the players). We all have mood swings, and we’re not robots. But do strive to be even-keeled and consistent in your approach from practice to practice, setting aside any personal issues that might affect your mood and your communication with your players on any given day. Kids learn best in a positive environment. Give them sound instruction, consistent encouragement, and plenty of understanding. Note, however, that being positive doesn’t mean letting kids run all over you, and it doesn’t mean having a Pollyanna attitude where you falsely praise a player for almost getting a rebound if, by using good technique, she should have easily gotten the rebound. It means you instruct and guide your players as they learn and practice skills and give them the sincere encouragement and praise they need as they work to hone their abilities. You’ll learn more about how to use praise in Chapter 6.
For citation, please use: Safranchuk I.A., 2022. Empathy Is the Best Strategy for Diplomacy. Russia in Global Affairs, 20(2), pp. 54-64. DOI: 10.31278/1810-6374-2022-20-2-54-64 Historically, the fundamental function of diplomacy is to represent one sovereign before another and ensure communication necessary for that. Diplomacy as a practice has rich traditions. Its two most important features—the privilege of being received by the sovereign and personal security—have naturally transformed into the high status of an ambassador. During the Renaissance, diplomatic missions became permanent in Europe. With time this practice spread to the whole world and the legal basis of diplomatic activity expanded (Zonova, 1999, pp. 1-3; Hamilton and Langhorne, 2011, pp. 37-60). However, diplomacy is not only a practice, but also a function. The key question here is not how international communication occurs, but why, that is, what its purpose is. General explanatory models, applied to different areas of human activity, answer the “why” question differently. From the idealistic and humanistic standpoint, communication between sovereigns and peoples is a natural wish of civilized actors. Therefore, as civilization developed, international communication expanded. From the economy-centered point of view, the development of trade is a beneficial phenomenon that requires the broadening of external relations and international communication. From the standpoint of rational interest understood the Machiavellian way, external communication gives a sovereign greater opportunities and additional tools. Taken individually and described in historical retrospect, these factors and their combinations bring one to the conclusion that since the late Middle Ages and the Renaissance the need for international communication and related diplomatic practice expanded almost continuously. There were attempts at self-isolation or forced isolation, but all of them ended in a “reunification” with the outside world. Moreover, in the second half of the 20th century, forced disconnection from the international environment began to be perceived as a punishment and turned into a tool—a full-fledged policy of sanctions. Recognition of the fact that international communication has been constantly expanding for centuries does not fully explain why it is so necessary. So, it would be appropriate to highlight some features of international communication in different historical periods. DIPLOMACY AND BALANCE OF POWER In modern times, diplomacy was tightly linked with the questions of war and the balance of power (Zonova, 1999, p. 8; Mallett, 1981). Explaining the title of his fundamental work Diplomatic History of Europe, Antonin Debidour admitted the possibility of a broad interpretation of diplomacy understood as all relations that governments can maintain among themselves and all issues on which they wish to reach an agreement through negotiations. However, he devoted his work to what he considered to be the centerpiece of diplomacy, namely, what promotes or breaks peace and to the relationships that emerge around the balance of power (Debidour, 1891, pp. 1-2). The absence of a hegemon required constant maneuvering by all countries. Threatening to use force, formalizing the results of its use, creating alliances for building up one’s own strength, and disrupting the enemy’s attempts to act likewise—all these efforts required communication with other actors. 
The 19th century can be considered the heyday of such a system of international communication, based on diplomatic missions of various levels and large congresses convened at the turning points in history (Nesmashny, Zhornist, Safranchuk, 2022, pp. 9-10). However, in modern times, along with the practical function of diplomacy, tied to issues of war and the balance of power, there was also a philosophical dream of peace, which resonated with the aspirations of the enlightened part of society (Angell, 1910). The beginning of World War I made intellectuals think that the balance of power concept and corresponding diplomatic maneuvering had failed. The war was now seen as a very wasteful and extremely inhuman phenomenon. The establishment of an order based on agreements looked like a sound alternative, and diplomacy began to be increasingly associated with such activity. Its perception as an alternative to war grew stronger. Practical steps along these lines had little or no effect in the 1920s and the 1930s, but World War II strengthened the peace-centered diplomatic tradition, or so it seemed at that time. The victorious powers declared their intention to create a world order that would be universal and fair, and also to bear responsibility for it. This is precisely how the unanimity regarding the spirit and letter of the UN Charter can be interpreted. However, in practice, the world turned towards the Cold War, the beginning of which is generally associated with Winston Churchill’s Fulton Speech. Each of the newly established warring camps was an alliance cemented by political, economic, and ideational factors. Creating and maintaining such alliances and establishing a solid regulatory framework for them, that is, forming their own hegemonic orders, was an important task for the diplomatic services of the United States and the Soviet Union. At the same time, relations between the two camps and their leaders followed the tradition of the balance of power and international maneuvering, but this tradition had two significant limitations. Firstly, the awareness of the destructive potential of nuclear weapons limited power politics. Needless to say, the balance of power theory was also applied to nuclear weapons and formed the basis for the Soviet Union’s and the United States’ gradual transition to diplomatic contacts on arms control issues. However, it was not the kind of international maneuvering that is designed to gain an advantage, traditional for the balance of power approach, but contractual recognition of the “nuclear stalemate,” that is, the impossibility of a rational nuclear war. In the 1970s and 1980s, the United States made several attempts to get out of the nuclear stalemate with the help of military-technical innovations, but each time the Soviet Union forced it back to square one at a new, higher level of quantitative and qualitative balance of nuclear weapons. Secondly, the two warring camps in the Cold War were keen to assert the universality of their economic and value-philosophical models. Each of them believed that it was on the side of Truth and History. Therefore, avoiding defeat was rather a tactical objective, but winning was the main goal, and this meant not just the disappearance of the geopolitical rival, but its total ideological debunking. Attempts at détente and peaceful coexistence changed the forms and temporal horizons of rivalry, but not its essence.
The determination to achieve universality and eventually total victory, and not just relative gains, temporary recognition of one’s opponent as an actor—all this is incompatible with the balance of power doctrine. The latter implies not a final battle between “good” and “evil” (although moral and ethical factors are necessary to mobilize the masses), but endless rivalry for relative advantages in pursuit of rational interests. The impossibility of a direct victory in the Cold War and the existential rejection of the opposing side prompted the natural choice in favor of destruction from within: trying to find the opponent’s weaknesses and putting pressure on its sore spots in various ways, including by supporting the “fifth column” in order to undermine its system. Accordingly, diplomatic communication was needed not only for conducting traditional negotiations with the adversary’s leaders, but also for gaining access to its society for propagandistic (in fact, even subversive) purposes. At the same time, as the rivalry dragged on and had to be conducted in such a way as to prevent uncontrolled escalation, the sides agreed to develop international law and the system of UN institutions and form not only hegemonic orders for the like-minded, but also a kind of global order. DIPLOMACY DETACHED FROM NATIONAL INTEREST The end of the Cold War produced such a high degree of material and ideological dominance of the United States that for some time it seemed that the whole world had really become Pax Americana. This factor distorted the idea of rational international communication and its goals and objectives. In the U.S., many began to think that traditional foreign policy was no longer relevant and a system of relations with the United States was all that everybody needed. By the early 2000s this delusion had vanished (Kissinger, 2002). However, the strongest trend towards globalization had a far greater and lasting impact on the ideas of international communication at the end of the 20th and at the beginning of the 21st centuries. The phenomenon that by the end of the last century—after several decades of discussions about the growing interdependence of the world and its new (or not quite new) quality—began to be called globalization, can be considered in a wider context as a convergence of two interrelated processes that continued over the previous two or three centuries. Namely, an increase in the physical interconnectedness of the world, that is, material globalization, and its growing ideational integrity, that is, ideological homogeneity and universalization. These processes were not linear: there often happened events that divided peoples, disrupted their physical ties, and antagonized them ideologically. But in the long term, all most significant political, economic and social upheavals, be it the consequences of colonialism or of big wars, contributed to the emergence of a common global economic and political system and increased the interconnectedness of the world (Chanda, 2008). By the beginning of the 21st century, this long historical trend had gained an incredible momentum. Liberal intellectuals believed that over time the influence of the global environment would outweigh American power, and in the future the United States would have to join it on an equal footing with everybody else, delegating power to international institutions (Nye, 2003). In the ideal model of a globalized world there supposedly would be no room for traditional international communication. 
In such an extreme form as a “flat world” (Friedman, 2007) the system would cease to be an external environment for individual actors but would turn into a global network where everyone is connected to everyone by many horizontal links, with the hierarchical structures losing all relevance. There would be nothing external anymore—no actors and no environment, all becoming parts of one whole. Naturally, far from everyone believed that such an ideal form would ever be possible, but recognition of the global system’s expansion and its significant influence on each individual state prevailed. So, the exceptional importance of relations with the external environment was deemed immanent. The work of diplomats as communication professionals was reconfigured. Priority was attached to participation in building a regulatory and institutional framework for the international environment (Neumann, 2008; Jönsson, 2008), and to the establishment of relations with it in such a way as to obtain the maximum practical benefits. This could be understood as a relationship between universality and particularity (Jönsson and Hall, 2005, pp. 33-34). At the same time, diplomats ever more often found themselves in the same company with representatives of the transnational businesses and the non-governmental sector, whose interests, even if they retained priority connections with some jurisdictions, were not determined by the national considerations of individual states. This produced the widespread thesis that the world was witnessing an erosion of the borderline between “external” and “internal” affairs (Putnam, 1988). This was true in the sense that, as mentioned above, the dependence on the external environment grew and international communication professionals were breaking away from the national interests of their states to drift closer to the “global world” than to their own peoples and domestic politics. Such a bias in favor of the external environment and the global agenda triggered a backlash. In the 1990s and early 2000s, it manifested itself in the developing countries, where it was mainly used by the left and ultimately had little effect. However, in the developed countries ordinary citizens grew increasingly critical of their globalized elites, who were no longer quite national. At first, this protest was exploited by right-wing populists, but gradually it penetrated the political mainstream. Security and economic development agendas began to be nationalized (Popov and Sundaram, 2017; Safranchuk and Lukyanov, 2021b, pp. 15-18). Elected politicians increasingly adjusted their countries’ foreign relations to “domestic” interests, both national and electoral. Apparently, international communication professionals were required not only to prioritize the national interests formulated by the elite, but also to pursue them in a way that would be consonant with public sentiment at home, so that the country’s foreign policy would be an asset in elections and not a liability of elected politicians. International communication was reconfigured. The connection between the “external” and the “internal” remained, but its qualitative parameters had changed. Whereas at the previous stage, when the emphasis was on globalization, the task was to maximize participation in the “external” domain, to influence it, and to derive the greatest possible benefits from it, now it was more important to make foreign policy a natural extension of domestic affairs. 
WITH NO ORDER FOR A LONG TIME
In material terms, the world remains global, even despite its certain fragmentation in the last decade. But the desire for ideational homogeneity and a common value space has been completely lost. The process of universalization has come to a stop. A materially global but ideationally non-universal world, one moving towards further heterogeneity, is the modern reality, and it contains an immanent contradiction. For the existing level of material globality, the world is too diverse, while for the growing level of non-universality, it is excessively interconnected materially on a global scale. Actors that are too different are too closely connected to each other (Safranchuk, 2020; Safranchuk and Lukyanov, 2021a, pp. 62-64). For the time being, the temptation to “correct” this reality in either direction remains strong. Liberal intellectuals are dreaming of restarting ideational re-universalization, while realists are pushing for material deglobalization. However, the former is hardly possible in any way other than by means of brutal coercion, while the United States as the only strong agent harboring such aspirations no longer has the capability to do so. As for material deglobalization, especially a deep one, it is rejected by the major part of the international community (although in different ways in different countries), because most countries’ socioeconomic development models provide for material globalization. Therefore, a combination of material globality and ideational non-universality, and an imbalance between the two, may become a long-term structural reality determining the nature of international communication. The attitude to the external environment will also change. For a long time, its importance grew. Some tried to use it, to derive benefits, or even to radically remake it. But in any case, the focus on the maximum involvement in it prevailed. Now, however, the emphasis will shift from maximizing the impact on this environment to minimizing the environment’s feedback. At the same time, the international environment itself is likely to become more uncertain, turning into an inevitable risk, not an opportunity. In such conditions, two main functions of diplomacy, which have steadily manifested themselves in different historical periods, are in demand. The first one is international maneuvering aimed at gaining relative advantages over rivals. The second one is the preparation of agreements setting the rules for such maneuvering in order to reduce the risk of unintended escalation. However, the effectiveness of these efforts in the new conditions may be low. Due to material interconnectedness, it will be difficult for the rival parties to achieve and formalize the balance of power, while the growing ideational disunity does not let trust become strong enough for setting the “rules of the game.” All these factors promise an extremely volatile and unpredictable international environment in which the desire to protect oneself is combined with the inability to do so by fencing oneself off. Attempts to fence somebody off by coercion will be interpreted as an act of aggression and evoke a harsh response. In the past, the main response to uncertainty and risks was an order guaranteed either by the hegemon or by “common” international institutions. Both options have been used up.
A likely alternative may be not an order of some type but strategic empathy, that is, the ability to understand and recognize the needs of another party without giving up one’s own views or trying to change the opponent. Angell, N., 1910. The Great Illusion: A Study of the Relations of Military Power in Nations to Their Economic and Social Advantage. New York and London: G. P. Putnam’s Sons. Chanda, N., 2008. Bound Together: How Traders, Preachers, Adventurers, and Warriors Shaped Globalization. New Delhi: Penguin Viking. Debidour, A., 1891. Histoire diplomatique de l’Europe depuis l’ouverture du Congrès de Vienne jusqu’à la clôture du Congrès de Berlin. Paris: F. Alcan. Friedman, T.L., 2007. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus and Giroux. Jönsson, C., 2008. Global Governance: Challenges to Diplomatic Communication, Representation, and Recognition. In: A.F. Cooper, B. Hocking and W. Maley (eds.) Global Governance and Diplomacy: Worlds Apart? Houndmills and New York: Palgrave Macmillan, pp. 29-38. Jönsson, C. and Hall, M., 2005. Essence of Diplomacy. Houndmills and New York: Palgrave Macmillan. Hamilton, K. and Langhorne, R., 2011. The Practice of Diplomacy: Its Evolution, Theory and Administration. 2nd Ed. London and New York: Routledge. Kissinger, H., 2002. Does America Need a Foreign Policy? Toward a Diplomacy for the 21st Century. Moscow: Ladomir. Mallett, M.E., 1981. Diplomacy and War in Late Fifteenth Century Italy. Proceedings of the British Academy, 67, pp. 267-288. Nesmashnyi, A.D., Zhornist, V.M., and Safranchuk, I.A., 2022. International Hierarchy and Functional Differentiation of States: Results of an Expert Survey. MGIMO Review of International Relations. doi.org/10.24833/2071-8160-2022-olf2 Neumann, I.B., 2008. Globalisation and Diplomacy. In: A.F. Cooper, B. Hocking and W. Maley (eds.) Global Governance and Diplomacy: Worlds Apart? Houndmills and New York: Palgrave Macmillan, pp. 29-38. Nye, J., 2003. The Paradox of American Power. Why the World’s Only Superpower Can’t Go It Alone. New York: Oxford University Press. Popov, V. and Sundaram, J., 2017. Convergence? More Developing Countries Are Catching Up. In: V. Popov and P. Dutkiewicz (eds.) Mapping a New World Order: The Rest beyond the West. London: Edward Elgar Publishing, pp. 7-23. Putnam, R.D., 1988. Diplomacy and Domestic Politics: The Logic of Two-Level Games. International Organization, 42(3), pp. 427-460. Safranchuk, I., 2020. Globalisation and the Decline of Universalism: New Realities for Hegemony. In: P. Dutkiewicz, T. Casier, and J.A. Scholte (eds.) Hegemony and World Order. New York: Routledge, pp. 65-77. Safranchuk, I.A. and Lukyanov, F.A., 2021a. The Modern World Order: Structural Realities and Great Power Rivalries. Polis. Political Studies, 3, pp. 57-76. Safranchuk, I.A., and Lukyanov, F.A., 2021b. The Contemporary World Order: The Adaptation of Actors to Structural Realities. Polis. Political Studies, 4, pp. 14-25. Zonova, T.V., 1999. Komparativny analiz stanovleniya rossiyskoi i evropeiskoi diplomaticheskoi sluzhby. [Comparative Analysis of the Emergence of Russian and European Diplomatic Service]. Rossiiskaya diplomatiya: istoriya i sovremennost’. Materialy nauchno-prakticheskoi konferentsii. [Russian Diplomacy: History and Modernity. Academic Conference Proceedings], 29 October. MGIMO University.
| Elevation | 13 ft (4 m) |
| Ethnicities | Bamar, Mon, Shan, Burmese Chinese, Burmese Indians, Kayin |
| Time zone | UTC+06:30 (MMT) |
Bago (formerly spelt Pegu; Burmese: ပဲခူးမြို့; MLCTS: pai: khu: mrui., IPA: [bəɡó mjo̰]), formerly known as Hanthawaddy, is a city and the capital of the Bago Region in Myanmar. It is located 91 kilometres (57 mi) north-east of Yangon. The Burmese name Bago (ပဲခူး) is likely derived from the Mon language place name Bagaw (Mon: ဗဂေါ, [bəkɜ̀]). Until the Burmese government renamed English place names throughout the country in 1989, Bago was known as Pegu. Bago was formerly known as Hanthawaddy (Burmese: ဟံသာဝတီ; Mon: ဟံသာဝတဳ Hongsawatoi; Pali: Haṃsāvatī; lit. "she who possesses the sheldrake"), the name of a Burmese-Mon kingdom. An alternative etymology from the 1947 Burmese encyclopedia derives Bago (ပဲခူး) from Wanpeku (Burmese: ဝမ်းပဲကူး) as a shortening of Where the Hinthawan Ducks Graze (Burmese: ဟင်္သာဝမ်းဘဲများ ကူးသန်းကျက်စားရာ အရပ်). This etymology relies on the non-phonetic Burmese spelling as its main reasoning. Various Mon language chronicles report widely divergent foundation dates of Bago, ranging from 573 CE to 1152 CE,[note 1] while the Zabu Kuncha, an early 15th century Burmese administrative treatise, states that Pegu was founded in 1276/77 CE. The earliest possible external record of Bago dates to 1028 CE. The Thiruvalangadu plates describe Rajendra Chola I, the Chola Emperor from South India, as having conquered "Kadaram" in the fourteenth year of his reign (1028 CE). According to one interpretation, Kadaram refers to Bago. More modern interpretations understand Kadaram to be Kedah in modern day Malaysia, instead of Bago. The earliest reliable external record of Bago comes from Chinese sources that mention Jayavarman VII adding Pegu to the territory of the Khmer Empire in 1195. The earliest extant evidence of Pegu as a place dates only to the late Pagan period (1212 and 1266),[note 2] when it was still a small town, not even a provincial capital. After the collapse of the Pagan Empire, Bago became part of the breakaway Kingdom of Martaban by the 1290s. The small settlement grew increasingly important in the 14th century as the region became the most populous in the Mon-speaking kingdom. In 1369, King Binnya U made Bago the capital. During the reign of King Razadarit, Bago and the Ava Kingdom were engaged in the Forty Years' War. The peaceful reign of Queen Shin Sawbu came to an end when she chose the Buddhist monk Dhammazedi (1471–1492) to succeed her. Under Dhammazedi, Bago became a centre of commerce and Theravada Buddhism. In 1519, António Correia, then a merchant from the Portuguese casados settlement at Cochin, landed in Bago, then known to the Portuguese as Pegu, looking for new markets for pepper from Cochin. A year later, Portuguese India Governor Diogo Lopes de Sequeira sent an ambassador to Pegu.
Toungoo Dynastic Capital
The city remained the capital until the kingdom's fall in 1538. The ascendant Toungoo dynasty under Tabinshwehti made numerous raids against which the much larger kingdom could not muster its resources. While the kingdom would have a brief resurgence for two years in the 1550s, Tabinshwehti's successor Bayinnaung would firmly come to control Bago in 1553. In late 1553, Bago was proclaimed the new capital, with the commissioning of a new palace, the Kanbawzathadi Palace, and Bayinnaung's coronation itself in January 1554. Over the next decade, Bago gradually became the capital of more and more territory and eventually of the largest empire in Indochina.
A 1565 rebellion by resettled Shans in Pegu burned down major swaths of the city, including the entire palace complex. Bayinnaung faced no new rebellions for the next two years (1565–1567) and had the capital and the Kanbawzathadi Palace rebuilt. The new capital had 20 gates, each named after the vassal who built it, and each gate had a gilded two-tier pyatthat and gilded wooden doors.
Plan of the gates of the newly built Hanthawaddy Pegu, 1568
The newly rebuilt Kanbawzathadi Palace was officially opened on 16 March 1568, with every vassal ruler present. Bayinnaung even gave upgraded titles to four former kings living in Pegu: Mobye Narapati of Ava, Sithu Kyawhtin of Ava, Mekuti of Lan Na, and Maha Chakkraphat of Siam. As a major seaport, the city was frequently visited by Europeans, among them Gasparo Balbi and Ralph Fitch in the late 1500s. The Europeans often commented on its magnificence. Pegu also established maritime links with the Ottomans by 1545. The Portuguese conquest of Pegu, following the destruction caused by the kings of Toungoo and Arakan in 1599, was described by Manuel de Abreu Mousinho in the account called "Brief narrative telling the conquest of Pegu in eastern India made by the Portuguese in the time of the viceroy Aires de Saldanha, being captain Salvador Ribeiro de Sousa, called Massinga, born in Guimarães, elected as their king by the natives in the year 1600", published by Fernão Mendes Pinto in the 18th century. The 1599 destruction of the city and the crumbling authority of Bayinnaung's successor Nanda Bayin saw the Toungoo Dynasty flee their capital to Ava. The capital was looted by the viceroy of Toungoo, Minye Thihathu II of Toungoo, and then burned by the viceroy of Arakan during the Burmese–Siamese War (1594–1605). Anaukpetlun wanted to rebuild Hongsawadi and restore the glories of Bago, which had been deserted since Nanda Bayin had abandoned it. He was only able to build a temporary palace, however.
The Fall of Toungoo and Konbaung Dynasty
Bago was rebuilt by King Bodawpaya (r. 1782-1819), but by then the river had shifted course, cutting the city off from the sea. It never regained its previous importance. After the Second Anglo-Burmese War, the British annexed Bago in 1852. In 1862, the province of British Burma was formed, and the capital moved to Yangon. The substantial difference between the colloquial and literary pronunciations, as with other Burmese words, was one reason for the British corruption "Pegu". In 1911, Hanthawaddy was described as a district in the Bago (or Pegu) division of Lower Burma. It lay in the home district of Yangon, from which the town was detached to make a separate district in 1880. It had an area of 3,023 square miles (7,830 km2), with a population of 48,411 in 1901, showing an increase of 22% in the past decade. Hanthawaddy and Hinthada were the two most densely populated districts in the province. Hanthawaddy, as it was constituted in 1911, consisted of a vast plain stretching up from the sea between the mouth of the Irrawaddy River and the Pegu Range. Except for the tract of land lying between the Pegu Range on the east and the Yangon River, the country was intersected by numerous tidal creeks, many of which were navigable by large boats and some by steamers.
The headquarters of the district was in Rangoon, which was also the sub-divisional headquarters. The second sub-division had its headquarters at Insein, where there were large railway works. Cultivation was almost wholly confined to rice, but there were many vegetable and fruit gardens. Today, Hanthawaddy is one of the wards of Bago city. As of 2019, the city has 220,387 people based on the General Administration Department's estimates. 88.73% of the Township is Bamar, with significant Karen, Mon, Palaung and Burmese Indian populations. Buddhists make up 94.2% of the city, with Christianity the second largest religion at 4.2%. There are 749 monasteries, 92 nunneries and 134 stupas of various sizes, including the tallest pagoda in Myanmar, the Shwemawdaw Pagoda. The city also has 9 churches, 6 mosques, 16 Hindu temples and 3 Chinese Mahayana temples.
Economy and Transport
The main industries of Bago Township are agriculture and service sector employment. Bago city has an industrial zone with several factories, mostly in textiles and shoe-making. Smaller factories and workshops within the city also produce food products, plastics, electric meters, motors, wood products, tea and halwa. Bago also has a small but thriving tourism industry, with many tourists from nearby Yangon. The Bago Development Committee manages 11 markets around the city. There are no airports within the township, and the city is served mostly by Yangon International Airport, but the proposed Hanthawaddy International Airport serving Yangon and Bago may be located within Bago Township. There are 2 rail lines that pass through Bago, one going to Mandalay and another south to Mawlamyine. Bago also has several bus depots on its outskirts, with intercity buses providing regular service. Bago is served by the Yangon–Mandalay Expressway as well as the old highways going to Taungoo and Myeik. Bago has 7 major bridges crossing the Bago River in and around the city. Bago has a tropical monsoon climate (Köppen Am), similar to most of coastal Myanmar, with a hot, dry season from mid-November to mid-April and a hot, extremely humid, and exceedingly rainy wet season from May to October.
[Climate data table for Bago, Myanmar (1981–2010), giving monthly average high and low temperatures (°C/°F) and rainfall (mm/inches); source: Norwegian Meteorological Institute]
Places of interest
- Shwethalyaung Buddha (Reclining Buddha)
- Shwemawdaw Pagoda
- Kyaikpun Buddha
- Kanbawzathadi Palace site and museum
- Kalyani Ordination Hall
- Mahazedi Pagoda
- Shwegugyi Pagoda
- Shwegugale Pagoda
- Bago Sittaung Canal
- Butterfly lake (Lake-pyar-kan)
Bago has a 400 meter football field and 1 public fitness center.
- Grand Royal Stadium
- Bago General Hospital (500-bed public hospital)
- Bago Traditional Medicine Hospital
Bago also has 9 high schools and a university. Bago's larger high schools have branches within the city. There are 28 monastic schools within the Township. Bago has a school attendance rate of 99.82% and a 33% attendance rate for university. Overall, the literacy rate is 99.55%.
- A version of the 18th century chronicle Slapat Rajawan as reported by Arthur Phayre (Phayre 1873: 32) states that the settlement was founded in 1116 Buddhist Era (572/573 CE). But another version of the Slapat, used by P.W. Schmidt (Schmidt 1906: 20, 101), states that it was founded on 1st waxing of Mak (Tabodwe) 1116 BE (c. 19 January 573 CE), which it says is equivalent to year 514 of "the third era", without specifying what the era specifically was.
However, per (Phayre 1873: 39), one of the "native records" used by Maj. Lloyd says that Pegu was founded in 514 Burmese (Myanmar) Era (1152/1153 CE). If the year 514 is indeed the Burmese Era, then the Slapat's 1st waxing of Tabodwe 514 would be 27 December 1152, equivalent to 1st waxing of Tabodwe 1696 BE (not 1116 BE). - (Aung-Thwin 2005: 59) cites the inscription found at the Min-Nan-Thu village near Bagan, which as shown in (SMK Vol. 3 1983: 28–31) was donated by daughter of Theingathu, dated Thursday, 7th waxing of Nanka (Wagaung) 628 ME (8 July 1266), and lists Pegu as Pe-Ku. (Aung-Thwin 2017: 200, 332) updates by saying that the earliest extant inscriptions that mention Pegu date to 1212 and 1266 but does not provide the source of the 1212 inscription. It must be a recent discovery as none of the inscriptions listed in the Ancient Burmese Stone Inscriptions (SMK Vol. 1 1972: 93–102) for years 573 ME (1211/1212) or 574 ME (1212/1213) shows Pe-Ku or Pegu. - Chisholm, Hugh, ed. (1911). Encyclopædia Britannica. Vol. 21 (11th ed.). Cambridge University Press. p. 58. . - Burma Translation Society (1947). Myanma Swesone Kyan မြန်မာ့ စွယ်စုံကျမ်း [Burmese Encyclopedia]. Vol. 6. London: BStephen Austin & Sons. - Aung-Thwin 2017: 332 - Sastri, K. A. Nilakanta (2000) . The Cōlas. Madras: University of Madras. - Majumdar, R. C. (1937). Ancient Indian colonies in the Far East. Vol. 2: Suvarnadvipa. Dacca: Ashok Kumar Majumdar. pp. 212–218. - Chatterji, B. (1939). JAYAVARMAN VII (1181-1201 A.D.) (The last of the great monarchs of Cambodia). Proceedings of the Indian History Congress, 3, 380. Retrieved September 2, 2020, from www.jstor.org/stable/44252387 - Nilakanta Sastri, K. A. (1955) [reissued 2002]. A history of South India from prehistoric times to the fall of Vijayanagar. New Delhi: Indian Branch, Oxford University Press. ISBN 978-0-19-560686-7. - Luís Filipe Tomás (1976). "A viagem de António Correia a Pegu em 1519" (PDF) (in Portuguese). Junta de Investigações do Ultramar, [Lisboa]. Retrieved 2014-08-05. - Malekandathil, Pius M C (2010-10-26). "ORIGIN AND GROWTH OF LUSO-INDIAN COMMUNITY in Portuguese Cochin and the maritime trade of India, 1500-6663" (PDF). Pondicherry University. Retrieved 2014-08-05.| - Harvey, G. E. (1925). History of Burma: From the Earliest Times to 10 March 1824. London: Frank Cass & Co. Ltd. p. 153-157, 171. - Kala, U (1724). Maha Yazawin (in Burmese). Vol. 1–3 (2006, 4th printing ed.). Yangon: Ya-Pyei Publishing. - Casale, Giancarlo (2010-01-28). The Ottoman Age of Exploration. Oxford University Press. ISBN 978-0-19-537782-8. - Rajanubhab, D., 2001, Our Wars With the Burmese, Bangkok: White Lotus Co. Ltd., ISBN 9747534584 - British Museum collection - "On This Day: The 1930 Earthquake Which Flattened Bago". The Irrawaddy. 2019-05-05. Retrieved 2020-10-14. - "Myanmar coup: 'Dozens killed' in military crackdown in Bago". BBC News. 2021-04-10. Retrieved 2021-04-11. - "Bago Township Report" (PDF). 2014 Myanmar Population and Housing Census. October 2017. - Myanmar Information Management Unit (December 19, 2019). Bago Myone Daethasaingyarachatlatmya ပဲခူမြို့နယ် ဒေသဆိုင်ရာအချက်လက်များ [Bago Township Regional Information] (PDF) (Report). MIMU. Retrieved March 2, 2022. - "Oversea Major Project". SUNJIN Engineering & Architecture. Retrieved 23 June 2012.[permanent dead link] - "Myanmar Climate Report" (PDF). Norwegian Meteorological Institute. pp. 23–36. Archived from the original (PDF) on 8 October 2018. Retrieved 30 November 2018. - Aung-Thwin, Michael A. (2005). 
The Mists of Rāmañña: The Legend that was Lower Burma (illustrated ed.). Honolulu: University of Hawai'i Press. ISBN 9780824828868. - Aung-Thwin, Michael A. (2017). Myanmar in the Fifteenth Century. Honolulu: University of Hawai'i Press. ISBN 978-0-8248-6783-6. - Nyein Maung, ed. (1972–1998). Shay-haung Myanma Kyauksa-mya [Ancient Burmese Stone Inscriptions] (in Burmese). Vol. 1–5. Yangon: Archaeological Department. - Pan Hla, Nai (1968). Razadarit Ayedawbon (in Burmese) (8th printing, 2005 ed.). Yangon: Armanthit Sarpay. - Phayre, Major-General Sir Arthur P. (1873). "The History of Pegu". Journal of the Asiatic Society of Bengal. Calcutta. 42: 23–57, 120–159. - Phayre, Lt. Gen. Sir Arthur P. (1883). History of Burma (1967 ed.). London: Susil Gupta. - Schmidt, P.W. (1906). "Slapat des Ragawan der Königsgeschichte". Die äthiopischen Handschriften der K.K. Hofbibliothek zu Wien (in German). Vienna: Alfred Hölder. 151.
Year: 2018 | Volume: 19 | Issue: 2 | Page: 27-34
An overview of psychosocial impacts of disaster
Preethi T Louis
Sr. Demonstrator, College of Nursing, CMC, Vellore, India
Date of Web Publication: 5-Jun-2020
Source of Support: None, Conflict of Interest: None
A disaster or calamity causes widespread destruction and distress. For any catastrophe, the degree of suffering is determined by its size, scale, impact and the probability of recurrence. Stress and emotional pain can have a significant effect on individuals and communities. Recovering from the impact of the calamity and regaining a sense of control is a vital focus in delivering psychosocial interventions. The present article attempts to explore the psychosocial profiles of victims of floods, droughts, storms, and cyclones that have occurred in the Indian subcontinent over the last decade and the immediate and long-term implications of these disasters. We discuss essential methods of providing relief and rehabilitation in this paper. Finally, although many South East Asian countries have been deliberating on several useful models for disaster management, further research on understanding the psychosocial impact of calamities is recommended.
Keywords: disaster, psychosocial impact, victims, rehabilitation, intervention
How to cite this article:
Louis PT. An overview of psychosocial impacts of disaster. Indian J Cont Nsg Edn 2018;19:27-34
India’s Disaster Profile
On account of its geographic location and climatic conditions, India is among the most disaster-prone zones of the world. Around 58.7% of the total land mass experiences seismic tremors. About 40 million hectares of Indian land (12%) is recurrently affected by floods, and 68% of the area is prone to drought (Mohandas, 2009). Tsunamis have repeatedly affected India since 2004 (Ministry of Home Affairs, 2011). Around 76% of the coastline is exposed to cyclones each year. In the Himalayas, landslides have been a noteworthy and disastrous occurrence, causing enormous destruction to life and property. Cold and heat waves in different parts of India are common, and numerous people are affected by these events.
Aims and Objectives
This article provides an account of disasters over the past ten years and the associated psychosocial issues. We also explore psychosocial interventions that may be applicable for people encountering loss and distress. The paper examines a wide variety of issues seen in disasters over the last decade and draws implications for promoting mental well-being. In short, this article aims to:
- Provide a brief overview of the disasters that occurred over the past decade
- Review the psychosocial profile of survivors
- Provide an account of psychosocial interventions
Disaster and Mental Health
The manner in which an individual perceives a catastrophe varies from person to person. It depends on the type of calamity, the level of adversity, the person’s strategy for managing stress, and the resources available. It also depends on the lifestyle of that society, and even on the country’s economic and political structure. To a certain degree, people may not have the ability to adjust or adapt suitably to the conditions and experiences of the calamity, and they may experience signs of distress and mental health concerns. It is crucial to understand that not all psychological and emotional reactions are adverse; some can build endurance and increase the chances of survival.
Traumatic experiences can transform the way we perceive events.
General Psychological Concerns
The mental health of an individual may improve when he or she understands the need to adapt to the demands posed. Overwhelming reactions can emerge after a disaster, and people react differently depending on their experiences and traits. However, there are some typical responses experienced by affected individuals (Beigel & Berren, 1985). They are as follows:
- Emotional problems (freezing, panic, shock, fear, irritability, anger, sadness and guilt)
- Psychosomatic symptoms (sleep deprivation, eating problems, physical complaints, muscle tension, palpitations, migraines, loose bowels, and breathing difficulties)
- Cognitive problems (recalling unpleasant memories, reliving the event, nightmares, confusion, flashbacks, difficulty concentrating, memory problems, and losing the capacity to focus)
- Behavioral and attitudinal concerns (disturbances in social relationships and friendships, poor motivation, rumination, lethargy, despondency, and loss of interest)
Stress, Post Traumatic Stress Disorder (PTSD), and Depression
PTSD is a psychiatric condition which can arise after any traumatic, catastrophic life event. A systematic review of post-disaster PTSD showed that symptoms of PTSD in victims were seen mostly in the initial one to two years after natural disasters. First responders such as fire fighters and police officers also had PTSD symptoms (Neria, Nandi, & Galea, 2008). The tsunami that struck the Indian coastal areas on 26th December 2004 was caused by an earthquake in the Indian Ocean. In February 2005, scientists revised the magnitude of the earthquake to 9.0 (McKee, 2005). Studies revealed that 78.5% of women had symptoms of PTSD compared with 61.7% of men. Women from rural regions and lower financial strata had higher risks for PTSD, being 6.35% more likely to be at risk compared to men (Pyari, Kutty, & Sarma, 2012). The experience of Oxfam, an NGO which worked in the tsunami-affected territories of South India, also showed that women experienced significantly more stress (MacDonald, 2005). Depression was the most common psychological effect, and anxiety was seen more in women after the tsunami in Tamil Nadu (Nambi, Desai, & Shah, 2007). In the Kanyakumari district, nearly 43% of male survivors had significant mental distress, and 31% had abnormal or severe levels of distress (George, Sunny, & John, 2007). Moreover, an investigation of long-term mental health outcomes (Kar, Krishnaraaj, & Rameshraj, 2014) demonstrated depression (33.6%), anxiety (23.1%), PTSD (70.9%) and associated comorbidities (44.7%) in the sample studied. PTSD was also seen in many after the Chennai, South India floods in 2016 (Fernandes, Borah, & Shetty, 2016).
Sleep Deprivation and Somatoform Disorders
In 2007, Bihar witnessed floods that the United Nations described as the worst in the living memory of Bihar. They were believed to be the worst floods in Bihar in the last 30 years. The flooding affected an estimated 10 million individuals in Bihar. People experienced severe epidemics, distress, and anxiety. Hundreds had fever and diarrhoea, and these contributed to nearly 1,287 deaths (Jha & Raghavan, 2008). During the 2013 Uttarakhand tragedy, survivors experienced mental trauma, depression, insomnia, and numerous other problems.
Individuals who lost their dear ones had nightmares, episodes of anger, grief and suicidal thoughts. Women reporting depression and anxiety far outnumbered men. Women survivors had sleep deprivation, and men found it hard to refrain from worrying. The mental well-being of those affected was thus of significant concern (Channaveerachari et al., 2015). Among children and adolescents, anxiety, adjustment disorders, shock, somatoform disorders, and PTSD were present. Of the adolescents studied, 18% continued to experience psychological distress and 13% had stress-related psychiatric symptoms one month after the catastrophe (Aneelraj et al., 2016). Studies of the elderly population revealed that nearly 16.13% experienced intermittent flashbacks following the disaster. There were 14.52% who experienced difficulties with sleep, while 12.9% had recurrent memories of the catastrophe. About 6.45% suffered a loss of their sense of well-being and security, while 20.97% experienced restlessness and heightened physiological arousal. The geriatric group experienced more prominent physical problems compared to adults, whereas adults reported more mental health concerns (Chandran et al., 2015). The 2016 Chennai floods were another emotionally exhausting experience. The rains and flooding left behind permanent damage to life and property. It was hard to measure the decline in mental and psychological well-being, but it was not hard to understand that there would be both immediate and indirect impacts. Individuals experienced flashbacks. They relived memories of the excruciating event, which led to physical responses such as rapid heartbeat, loss of awareness, perspiration and extreme fear. They reported intense anxiety about whether the calamity would recur. Individuals experienced irritability, intense uneasiness, mood swings, and grief. Victims reported loss of memory, suicidal thoughts, symptoms of sickness, tics, gastrointestinal problems, migraines, and chest pain (Fernandes et al., 2016). They experienced a lack of concentration, difficulty in decision-making, and trouble sleeping and eating. According to reports from the Institute of Mental Health, victims of the recent Cyclone Gaja in South India stated that they could not sleep even for two hours. Fear for survival was also present (Ezhilarasan, 2018).
Substance Abuse
Male victims often responded to disaster by taking up or escalating substance abuse. After the 2004 tsunami, substance abuse became common among men (Nambi et al., 2007). Depression, panic and fear led to alcohol abuse, which was a great concern after the Chennai, South India floods of 2016 (Fernandes et al., 2016).
Figure 2: Uttarakhand Floods: Devastated Pithoragarh, Chamoli Regions Look to Limp Back To Normalcy (2016)
The devastating Cyclone Gaja in 2018 affected 12 districts in Tamil Nadu and claimed 63 human lives. It destroyed 3.4 lakh homes, of which 2.8 lakh were huts. The poorest people were enormously affected. A few people used the money to build themselves roofs with gunny sacks; unfortunately, these could not withstand the severe rains. Loss of jobs and property left numerous individuals mentally distressed. Individuals became desperate and began pressing Government officials for help. Numerous survivors reported feelings of restlessness and palpitations (Ezhilarasan, 2018).
A vast majority are unable to overcome the horrible experiences caused by the cyclone. Women whose spouses took to liquor became highly anxious, as they were now the sole providers. Some of the symptoms that victims had were restlessness, palpitations, fretfulness and crying spells.
Figure 3: Cyclone Gaja Leaves at Least 45 Dead in Tamil Nadu and Puducherry: Relief Works are in Full Swing (2018)
Self-Destructive Behaviors
Post Traumatic Stress Disorder (PTSD), if left untreated, can lead to severe distress and despair (Feng et al., 2007). A 19-year-old committed suicide on discovering his mark sheets destroyed, and a 54-year-old hanged himself on seeing his damaged house (Bidhuri, 2018). The 2018 Kerala, South India floods destroyed most of what individuals had built over the years, and survivors are now attempting to recover from their losses. They are struggling to adapt to the circumstances. Even as special teams do their best to control disease and restore psychological wellness, the affected cannot be disregarded. After the floods and landslides, many Keralites began complaining of nightmarish flashbacks of the events (George, 2018). About 200 individuals reported unusual mental conditions in Chenganoor and Thiruvalla, two of the zones most affected by the floods, where about 220 lives were lost. Large numbers were reported to be contemplating suicide and felt deeply discouraged about their losses.
Figure 4: Tripura government announces to donate Rs. 1 crore for Kerala flood victims (2018)
Debt-Induced Deaths
Maharashtra experiences drought once in every 5 years (United Nations Office for Disaster Risk Reduction [UNISDR], 2009). Since 80% of the water resources come from groundwater, scarcity of rainfall forced the Government to supply water through tankers and borewells, resulting in enormous costs (Ministry of Agriculture & Farmers Welfare, 2016). The State government revealed that around 7,896 villages were enduring drought, of which 3,299 belonged to the Marathwada region. The 2012 drought severely affected the agriculture of the State. According to the India Oilseeds and Products Update, there was a 21%, 5% and 18% decrease in cereal, pulse and food grain production, respectively, for the year 2012-13. Inland fisheries also suffered a 16% loss due to the drought. Crop failure, reduced employment for labourers and rising costs significantly affected the economy. Consequently, farmers were forced to borrow money from moneylenders and banks at high interest, which affected their financial status and additionally lowered their emotional well-being and social life (Udmale, Ichikawa, Manandhar, Ishidaira, & Kiem, 2014). The number of suicides among farmers continues to be a growing concern. Studies have confirmed that similar conditions exist in other drought-prone nations (Kiem, 2013).
Rioting and Aggression
While disasters can bring a sense of community togetherness, the feelings of destruction and devastation can also provoke the opposite emotions and reactions in individuals and communities. The 2010 Eastern Indian storm struck parts of Bangladesh, Bihar, Assam, and West Bengal. In India, the storm destroyed more than 91,000 residences (Singh, 2010). Approximately 300,000 homes were affected and almost 500,000 individuals were left homeless (Bhalla, 2010). In West Bengal, people were furious about the relief rendered.
They demanded more materials like food, water, and clothing, and individuals were seen fighting with authorities. In Karandighi, outrage sparked, and the villagers, seeing that the supply would not be sufficient for everybody in need, chose to take matters into their own hands. Looting of goods began. One victim said, "What else would I be able to do? The administration is not helping us, so we need to help ourselves" (Strapped for relief, villagers strike - Anger in storm-hit North Dinajpur, 2010). People began expressing hostility and turned to aggressive acts.
Vulnerable Populations
Studies have demonstrated that females experienced greater levels of mental health problems, for example, anxiety, depression, and PTSD (Math, Nirmala, Moirangthem, & Kumar, 2015). Gender was a risk factor for developing anxiety and PTSD, and studies reveal that women are at a higher risk for PTSD (Kumar et al., 2007). Participants with a lower income or salary demonstrated higher levels of anxiety, depression, and PTSD (Maj et al., 1989). Being female, being married, having lower pay and living in temporary lodgings were factors that led to mental health problems. The majority of the women affected by the Bihar floods (2007) in flood-prone zones did not receive the early warning. When trapped by floods, it was the women who chose to sleep on empty stomachs. Men decided what to leave at home and what to take along at the time of relocation. The daily wage paid to women labourers was only 15 rupees (Jha & Raghavan, 2008). Women were forced to work twice as much when their spouses relocated in search of work, and they carried the entire responsibility for the family. Children and adolescents are also vulnerable to psychological distress. Children and adolescents who were primary survivors (exposed directly to the tsunami and earthquake) and secondary survivors (those with close family and personal ties to primary survivors) were found to have adjustment disorder, depression, panic disorder, somatoform disorder, schizophrenia and other disorders (Aneelraj et al., 2016; Math et al., 2008).
Psychosocial Interventions
Psychosocial support can be tailored specifically to disaster circumstances and can enable people to respond effectively to mental and physical needs. These interventions help victims accept the situation and adapt to it. Catastrophes have severe psychosocial outcomes, and the emotional injuries may be more severe than the material destruction; it takes far longer to recuperate from emotional distress than to recover material losses. Early help and adjustment are required (Murthy, 2009). Social impacts are felt due to sudden bereavement, separation, the experience of adversity, and vulnerability. Preventive medicine experts regularly discuss the six R’s in rehabilitating disaster victims: Readiness, Response, Relief, Rehabilitation, Recovery, and Resilience. Common psychological issues experienced in calamities are anxiety, sadness, and acute stress responses. The presence of these symptoms frequently depends upon the person’s vulnerability and coping abilities. In an acute stress response, the individual might be in a bewildered or numbed state, but this frequently subsides within a couple of days. Some of the techniques used in psychosocial interventions are discussed below.
1. Psychological Debriefing
This technique is characterized by exchanges that happen within 48-72 hours after a disaster and is commonly referred to as 'mental de-briefing.'
By and large, these sessions encourage participants to describe and share both the factual and the emotional aspects of their experiences (Norris, Friedman, & Watson, 2002). The aim is to enable the person to cognitively restructure the perceived disaster event in a less traumatic manner.
2. Cognitive Behavioural Therapy (CBT)
Cognitive Behaviour Therapy (CBT) refers to a class of interventions based on the premise that psychological distress and mental disorders result from cognitive factors (Beck, 1970). The focus of this treatment approach is to deal with maladaptive thoughts that influence emotions and behaviors (Ellis, 1962). When CBT was used for victims of the Wenchuan earthquake, it turned out to be effective in lowering the extent of psychopathology experienced after the catastrophe (Zhang, Feng, Xie, Xu, & Chen, 2011). Further studies suggest that disaster-focused CBT delivered within three months is promising in relieving many symptoms among disaster victims (Roberts, Kitchiner, Kenardy, & Bisson, 2009).
3. Community-Based Interventions
This method of rehabilitation focuses on organizing day-to-day activities. It enables people to recover by rebuilding their homes, participating in social and religious ceremonies, paying attention to the family, talking, and overcoming self-blame (Math et al., 2008). Children are engaged through painting, singing and playing. This type of intervention focuses on enabling survivors to participate in activities such as cooking, cleaning, and rebuilding.
Conclusion
Aside from the quantifiable and visible damage and loss, there is also the loss of life due to the catastrophic circumstances. Research so far in India reveals that the insufficient number of mental health professionals, the absence of institutional resources and financial constraints make it challenging to address the concerns of disaster victims on a large scale, and doing so is also a time-consuming exercise. The South East Asian nations are continuing to develop fruitful models of emotional well-being and care that may be beneficial after a calamity. Regular experience-sharing opportunities in this area would empower everyone to overcome numerous difficulties and to accomplish their goals.
Conflicts of Interest: The author has declared no conflicts of interest.
References
Aneelraj, D., Kumar, C. N., Somanathan, R., Chandran, D., Joshi, S., Paramita, P., … & Math, S. B. (2016). Uttarakhand disaster 2013: A report on psychosocial adversities experienced by children and adolescents. The Indian Journal of Pediatrics, 53(4), 316-321. doi.org/10.1007/s12098-015-1921-1
Beck, A. T. (1970). Cognitive therapy: Nature and relation to behavior therapy. Behavior Therapy, 7(2), 184-200. doi.org/10.1016/S0005-7894(70)80030-2
Beigel, A., & Berren, M. R. (1985). Human-induced disasters. Psychiatric Annals, 15(3), 143-150. doi.org/10.3928/0048-5713-19850301-05
Chandran, D., Roopesh, N., Raj, A., Channaveerachari, N., Joshi, S., Paramita, P., … & Badamath, S. (2015). Psychosocial impact of the Uttarakhand flood disaster on elderly survivors. Indian Journal of Gerontology, 29.
Channaveerachari, N. K., Raj, A., Suvarna Joshi, P. P., Somanathan, R., Chandran, D., Kasi, S., … & Math, S. B. (2015). Psychiatric and medical disorders in the aftermath of the Uttarakhand disaster: Assessment, approach, and future challenges.
Cyclone Gaja Leaves at Least 45 Dead in Tamilnadu and Puducherry: Relief Works are on in Full Swing. (2018, November 19). Tamilnadu. ICN National
Feng, S., Tan, H., Benjamin, A., Wen, S., Liu, A., Zhou, J., … & Li, G. (2007). Social support and posttraumatic stress disorder among flood victims in Hunan, China. Annals of Epidemiology, 17(10), 827-833. doi.org/10.1016/j.annepidem.2007.04.002
Fernandes, E., Borah, H., & Shetty, S. (2016). Mainstream disaster health as a policy priority: Experiences from Chennai floods and a cross sectional study during disaster relief phase. International Journal of Community Medicine and Public Health, 3(6), 1589-1592.
George, C., Sunny, G., & John, J. (2007). Disaster experience, substance abuse, social factors and severe psychological distress in male survivors of the 2004 tsunami in South India. Indian Journal of Psychiatry, 49
Jha, M. K., & Raghavan, V. (2008). Disaster in Bihar: A report from the TISS assessment team. Mumbai, India: Tata Institute of Social Sciences.
Kar, N., Krishnaraaj, R., & Rameshraj, K. (2014). Long-term mental health outcomes following the 2004 Asian tsunami disaster: A comparative study on direct and indirect exposure. Disaster Health, 2(1), 35-45. doi.org/10.4161/dish.24705
Kiem, A. S. (2013). Drought and water policy in Australia: Challenges for the future illustrated by the issues associated with water trading and climate change adaptation in the Murray-Darling Basin. Global Environmental Change, 23(6), 1615-1626. doi.org/10.1016/j.gloenvcha.2013.09.006
Kumar, M. S., Murhekar, M. V., Hutin, Y., Subramanian, T., Ramachandran, V., & Gupte, M. D. (2007). Prevalence of posttraumatic stress disorder in a coastal fishing village in Tamil Nadu, India, after the December 2004 tsunami. American Journal of Public Health, 97(1), 99-101.
Lessons of thirst. (2019, February 8). Maharashtra. The Indian Express
MacDonald, R. (2005). How women were affected by the tsunami: A perspective from Oxfam. PLoS Medicine, 2(6), e178.
Maj, M., Starace, F., Crepet, P., Lobrace, S., Veltro, F., De Marco, F., & Kemali, D. (1989). Prevalence of psychiatric disorders among subjects exposed to a natural disaster. Acta Psychiatrica Scandinavica, 79(6), 544-549. doi.org/10.1111/j.1600-0447.1989.tb10301.x
Math, S. B., Nirmala, M. C., Moirangthem, S., & Kumar, N. C. (2015). Disaster management: Mental health perspective. Indian Journal of Psychological Medicine, 3
Math, S. B., Tandon, S., Girimaji, S. C., Benegal, V., Kumar, U., Hamza, A., … & Nagaraja, D. (2008). Psychological impact of the tsunami on children and adolescents from the Andaman and Nicobar Islands. Primary Care Companion to the Journal of Clinical Psychiatry, 10(1), 31
McKee, M. (2005). Power of tsunami earthquake heavily underestimated. New Scientist, 9
Mohandas, E. (2009). Roadmap to Indian psychiatry. Indian Journal of Psychiatry, 51(3)
Murthy, R. S. (2009). Introduction to Psychiatry. Indian Journal of Psychiatry, 57(1), 72.
Nambi, S., Desai, N. G., & Shah, S. (2007). Mental health morbidity and service needs in tsunami affected population in coastal Tamil Nadu. Indian Journal of Psychiatry, 49(Suppl), 2-3.
Neria, Y., Nandi, A., & Galea, S. (2008). Post-traumatic stress disorder following disasters: A systematic review. Psychological Medicine, 38(
Norris, F. H., Friedman, M. J., & Watson, P. J. (2002). 60,000 disaster victims speak: Part II. Summary and implications of the disaster mental health research.
Psychiatry: Interpersonal and Biological Processes, 65(3), 240-260.
Pyari, T. T., Kutty, R. V., & Sarma, P. S. (2012). Risk factors of post-traumatic stress disorder in tsunami survivors of Kanyakumari District, Tamil Nadu, India. Indian Journal of Psychiatry, 54(1), 48.
Roberts, N. P., Kitchiner, N. J., Kenardy, J., & Bisson, J. I. (2009). Multiple session early psychological interventions for the prevention of post-traumatic stress disorder. Cochrane Database of Systematic Reviews, (3), CD006869.
Tamil Nadu remembers 2004 tsunami victims. (2017, December 26). Tamilnadu. The Hindu
Tripura government announces to donate Rs. 1 crore for Kerala flood victims. (2018, August 21). Tripura: Indiablooms
Udmale, R., Ichikawa, Y., Manandhar, S., Ishidaira, H., & Kiem, A. S. (2014). Farmers' perception of drought impacts, local adaptation and administrative mitigation measures in Maharashtra State, India. International Journal of Disaster Risk Reduction, 10
Uttarakhand floods: Devastated Pithoragarh, Chamoli regions look to limp back to normalcy. (2016, July 4). Uttarakhand. Financial Express
Zhang, Y., Feng, B., Xie, J. P., Xu, F. Z., & Chen, J. (2011). Clinical study on treatment of the earthquake-caused post-traumatic stress disorder by cognitive-behavior therapy and acupoint stimulation. Journal of Traditional Chinese Medicine, 31(
Brief communication: Post-wildfire rockfall risk in the eastern Alps
- 1Geoconsult Holding ZT GmbH, Hölzlstraße 5, Wals bei Salzburg, 5071, Austria
- 2Department of Israel Studies, University of Haifa, 199 Abba Khoushy Ave., Haifa, 3498838, Israel
- 3Geological Survey of Israel, 32 Yesha'ayahu Leibowitz St., Jerusalem, 9692100, Israel
- 4Department of Geography and Environmental Studies, University of Haifa, 199 Abba Khoushy Ave., Haifa, 3498838, Israel
Correspondence: Sandra Melzner (email@example.com)
In the eastern Alps, no previous research has focused on the impact of wildfires on the occurrence of rockfalls. The investigation of wildfires and post-wildfire rockfalls gains new importance with respect to changes in weather extremes and rapid social developments such as population growth and tourism. The present work describes a wildfire that occurred in August 2018 in a famous world heritage site in Austria. Indicators of fire severity and rockfall occurrence during and after the fire are described.
Many areas in the eastern Alps are prone to rockfalls endangering settlements and infrastructure, causing several fatalities every year. In recent years, wildfires in the Alps and their impact on the environment have gained new importance with respect to climate change and rapid social developments such as population growth and tourism. Most research on the impact of wildfires has been done in the USA and the Mediterranean-climate region (Cerdà, 1998; Cerdà and Doerr, 2005; Parise and Cannon, 2012). Although post-wildfire risk from debris flows has been studied by various authors (Marxer et al., 1998; Conedera et al., 2003; Calcaterra et al., 2007; Cannon et al., 2010; Santi et al., 2013), rockfalls associated with wildfires have been poorly studied (Swanson, 1981; De Graff and Gallegos, 2012; Santi et al., 2013; De Graff et al., 2015). De Graff et al. (2015) showed that, out of 16 wildfires in California (USA), seven wildfire-affected areas experienced significant rockfall occurrence within days after the burn. Slope steepness, underlying lithology and block size were reported, with rockfall blocks measuring 0.30–1.85 m in their largest dimension and 0.5 m on average. Furthermore, all rockfalls were generated from steep slopes (over 39∘) of metasedimentary or granitic lithology experiencing moderate-to-high soil burn severity. According to Keeley (2009), the term “fire intensity” is defined as the energy output from fire, whereas the terms “fire severity” and “burn severity” are used interchangeably for the aboveground and belowground organic matter consumption from fire. The term “ecosystem response” is defined as the functional processes that are altered by fire, including regeneration, recolonization by plants and animals, and watershed processes. According to Keeley (2009), “soil burn severity” is often used interchangeably with “fire severity”. In the USA, it is the preferred term (applied to soils) used in post-fire Burned Area Emergency Response assessments (Parson et al., 2010). Fire severity, however, is a more comprehensive term that also references “vegetation burn severity”. The aim of this work is to describe the impact of a wildfire which occurred in August 2018 at a steep rock wall in the heavily toured world heritage site “Hallstatt” in the Salzkammergut region in Upper Austria (47∘33′27.00′′ N, 13∘38′37.03′′ E). In order to assess the impact of the wildfire on the recent and future rockfall activity in the area, a helicopter flight and field survey were carried out.
The survey was conducted by Sandra Melzner of the Geological Survey of Austria as part of the project “Georisks Austria” (GEORIOS). The focus of the inspection was on the identification of possibly changed potential rockfall areas and loosening of the rock due to the strong heat effect (Melzner, 2018) with regard to the rockfall hazard analysis conducted in 2014 (Melzner, 2015). The area was revisited in May 2019 to record the temporal post-wildfire changes to the ecosystem.
2.1 Area settings
The wildfire site is situated on the southwest-exposed rock walls of a glacially over-steepened Alpine trough valley. The valley is characterized by almost vertical rock walls several hundred metres high, which are mainly made of Mesozoic limestone (Dachstein Formation). The limestone is characterized by predominantly thick bedding, sudden changes in the joint mass structures and the presence of dominant fault systems. In the wildfire-affected part of the rock wall, the bedding dips predominantly at medium-steep angles of 35 to 45∘ in the direction of the rock wall (from northeast to northwest) (Melzner, 2015, 2018). The bedding planes form preferred locations for trees to grow and are usually covered with a thin layer of debris (Fig. 1). A fixed-rope climbing tour is installed in the rock wall, which is frequently used by numerous climbing tourists. The talus slope below the rock wall is relatively short and has an inclination between 30 and 40∘. The soil type consists of scree (Ø ≲10 cm) or medium compact soil with small rock fragments and some larger blocks. The scree is covered by a very thin layer of soil and organic matter which can be classified as brown rendzina and brown earth (Fig. 2). The vegetation is characterized by coniferous trees, mainly spruces and larches, and broad-leaved trees such as beech. The pre-fire vegetation was composed of medium-old forest and an understorey of sparse bushes on the rockfall talus slope. The forest on the talus slope beneath the rock wall is designated as a protection forest for the houses on the valley floor. The annual precipitation is about 1743 mm. The highest 1 d precipitation amount since 1901 was measured on 12 August 1959 to be 118 mm, and the maximum annual precipitation measured was 2085 mm (1954). There are 20 to 30 convective summer thunderstorm days per year. Precipitation as snow normally occurs between November and April, during which snow cover can reach thicknesses of up to a few metres.
2.2 Event description
On 21 August 2018 at 09:30 LT a wildfire was presumably initiated by a carelessly discarded cigarette or the reflection of a broken glass bottle at the foot of the rock wall. At that time there were three groups of about 20 climbers on the via ferrata. Since the fire could only be extinguished from the air by helicopters, the via ferrata had to be evacuated to protect the climbers from falling rocks and branches caused by the downwash of the helicopters while fighting the fire. The fire rapidly spread up the rock wall (area size of about 3 ha), affecting the trees growing mainly on the bedding planes of the limestone (Fig. 1). The protection forest beneath the rock wall was not affected by the fire (Fig. 2). During the night of 21 to 22 August the first evacuations of the houses beneath the rock wall took place as burned trunks, rootstocks and rock blocks were falling down the rock wall, the latter approaching two houses. Sixteen mapped rockfall boulders which reached the settlement area had volumes smaller than 0.3 m3 (Fig. 3).
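To put these block volumes into rough perspective, a purely illustrative order-of-magnitude estimate of the energy such a block could deliver at the valley floor can be made; this sketch is not part of the original survey, and the rock density and fall height used below (a generic limestone density of about 2700 kg m-3 and a nominal relief of 300 m) are assumptions rather than measured site data:

m = \rho V \approx (2700\,\mathrm{kg\,m^{-3}})(0.3\,\mathrm{m^{3}}) \approx 8\times10^{2}\,\mathrm{kg}

E \le m\,g\,h \approx (8\times10^{2}\,\mathrm{kg})(9.81\,\mathrm{m\,s^{-2}})(300\,\mathrm{m}) \approx 2.4\,\mathrm{MJ}

Much of this potential energy is of course dissipated in impacts and fragmentation along the talus slope, but the order of magnitude helps explain why even blocks smaller than 0.3 m3 can damage buildings, and why the temporary embankments and fences described below were installed as part of the emergency response.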
With the exception of minor damage to one building, no severe damage to buildings occurred, and local inhabitants were not injured. In the following days the fire brigades tried to extinguish the fire from above the rock wall with fire hoses and from the air with helicopters carrying water containers and buckets. In total, four police and military helicopters were flying during the days of the fire, refilling the buckets with water from the nearby Lake Hallstatt every two minutes. During the 4 d of the firefighting operation, up to 100 people (firefighters, police officers, military personnel and mountain rescue team members) were on duty every day. Unusually low wind conditions and rainfall (starting on 24 August 2018) prevented the spread of the fire towards the village of Hallstatt. The official end of the firefighting mission was on 28 August 2018. A rockfall hazard and risk assessment conducted by the Geological Survey of Austria (Melzner, 2015) formed an important part of the wildfire emergency response. Preventive rockfall hazard actions by the Austrian Torrent and Avalanche Control (WLV) after the wildfire included (i) the establishment of temporary rockfall protection measures (embankments and simple rockfall fences) in order to be able to clear the wildfire area, (ii) clearance of the wildfire area (removal of loose stones, boulders, trees at risk of falling, etc.), (iii) repair of pre-existing rockfall protective structures damaged by the rockfall, and (iv) sowing of seeds in the wildfire-affected scree and soil.
3.1 Loss and decomposition of organic matter
Indicators of fire severity (Figs. 1 and 2) are the colour of the trees and the degree of decomposition of the leaves and needles. Unaffected trees have a green and unaltered colour, whereas burned or heated trees are easily recognizable by their brown colour. Varying degrees of consumption of the needles, leaves and organic matter can be related to different classes of fire severity. According to the classification of Keeley (2009), the trees in the affected area show moderate or severe surface burn. This is visible in that most of the burned trees still have needles, but all understorey plants and the pre-fire soil organic layer (apart from a post-wildfire needle cover) were consumed. In the transition area between the burned and unburned areas, the vegetation shows indicators of light fire severity, expressed by green needles, although the stems may be scorched, and the understorey plants and soil organic layer are largely intact. At the foot of the rock wall we observed a burning tree falling down the rock wall carrying a large rock, which burst into several rockfall boulders on its first impact with the ground.
3.2 Changes in soil and rock mass structure
The vertical relief of the rock walls, the anabatic winds and the patchy vegetation pattern caused an upward jumping of the fire, resulting in a spotty fire pattern (Fig. 1). Thus, the residence time of the fire and the heating duration were reduced, leading to a less direct influence of the high temperatures on the rock mass structure. Fire-induced rock surface alteration and cracking due to thermal shock are typical rock-weathering processes occurring during a wildfire (Dorn, 2003; Shtober-Zisu et al., 2015). Thermal shock takes place when the thermally induced stress event is of sufficient magnitude to make the material unable to adjust quickly enough to accommodate the required deformation and accordingly fail (Hall, 1999).
As a result, the surface failure takes the form of cracking or exfoliation due to the compression and the shear stress it induces (Yatsu, 1988). Moreover, rocks composed of several minerals, each with different coefficients of thermal expansion, may experience stresses resulting from the minerals' differential thermal response to the heating and cooling cycles (McFadden et al., 2005). Spalling or the formation of exfoliation fissures (caused by insolation weathering) may be less severe in such exposed-terrain conditions compared to more gentle slopes (Blackwelder, 1927; Zimmerman et al., 1994; Shakesby and Doerr, 2006; Shtober-Zisu et al., 2015). In the course of the wildfire, abundant small rock fragments had come to rest directly at the base of the rock wall. The rockfall boulders which were detached from the rock wall during the wildfire could be easily identified in the field, as they usually have at least one black (scorched) side (Fig. 2). Some smaller rockfall boulders with volumes of <0.3 m3 have reached the valley floor (Fig. 3). The slope in the uppermost part of the rock wall is covered by gravel, stones and blocks with a matrix composed of fine clastic material and ash. It could be mobilized in the form of a debris slide or flow in a heavy precipitation event. Such an event has not been documented thus far in this area. As the organic material mantling the scree slope in the upper part of the rock wall was consumed completely (Figs. 1 and 2), we observed that the ash covers the surface. Ash has a kind of “sealing effect” which reduces the infiltration, accelerates the splashing effect and increases the surface runoff (Brook and Wittenberg, 2016). It can be assumed that future frost and thaw cycles will further weaken the rock or that the loose slope debris in the upper-rock-wall area will be remobilized by heavy precipitation events. In forests, wildfire usually generates a mosaic of different levels of burn severity (Neary et al., 2005). In sites affected by fire of light-to-moderate severity, needle cast occurs when leaves from the scorched trees fall down and blanket the surface, thus protecting the soil from further erosion (Cerdà and Doerr, 2008; Robichaud et al., 2013). There are numerous studies addressing the effect of ash deposits on runoff and erosion processes, rates, and quality (Bodí et al., 2011). Results, however, are inconclusive; while many suggest that ash temporarily reduces infiltration, either by clogging soil pores or by forming a surface crust (Onda et al., 2008), others indicate that ash and specifically the black char produced during light-to-moderate fires might increase infiltration by storing rainfall and protecting the underlying soil from sealing (Wittenberg, 2012). The ash layers may also protect the burned soil against raindrop impact and related splash erosion, and its leachates may reduce soil erodibility by promoting flocculation of the dispersed clays (Woods and Balfour, 2008). Ash particles penetrate, accumulate and shelter under the rock spalls formed during the fire, even for several decades (Shtober-Zisu et al., 2018). Increased rockfall activity of rather smaller rock blocks is recognizable during as well as after the wildfire. The destabilization of small rock blocks and the burn of tree roots may also cause the destabilization of larger rock masses (Fig. 1). These would pose a significant risk to the houses and infrastructure. 
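Returning to the thermal-shock mechanism outlined above, a simple back-of-the-envelope estimate (not taken from the cited studies) illustrates why wildfire heating can be sufficient to spall rock; the elastic constants used below are generic textbook values for limestone, not measurements from the Hallstatt rock wall. For a thin surface layer heated by \Delta T while constrained by the cooler rock behind it, the thermoelastic stress is approximately

\sigma_{\mathrm{th}} \approx \frac{E\,\alpha\,\Delta T}{1-\nu} \approx \frac{(50\,\mathrm{GPa})(8\times10^{-6}\,\mathrm{K^{-1}})(300\,\mathrm{K})}{1-0.25} \approx 160\,\mathrm{MPa}

where E is Young's modulus, \alpha is the coefficient of thermal expansion and \nu is Poisson's ratio. Stresses of the order of 10^2 MPa are comparable to the compressive strength of many limestones and an order of magnitude above their tensile strength (roughly 5–25 MPa), which is consistent with the fire-induced cracking and exfoliation described above.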
Above the steep rock wall, some larger boulders in and on top of the scree slope can be remobilized as secondary rockfalls by falling trees or undercutting erosional processes (Fig. 4). The wildfire probably had a superficial impact on the rock mass structure of the vertical rock walls. According to Thomaz and Doerr (2014), moderate temperatures (<400 ∘C) had the greatest effect on soil chemical properties. That study was conducted using a set of thermocouples placed at 0–2 cm soil depth. Even relatively low temperatures at the surface of the soil can trigger mineralogical changes. The burned roots in the joints and deep fractures accelerate physical weathering processes. The chemical weathering of rocks will speed their eventual transformation into secondary clay minerals, causing slope instability because the weathered material has a lower strength than the unweathered rock. The swelling potential of these secondary clays induces significant vertical overpressure, thus reinforcing subsequent progressive rockfall failure. According to Bierman and Gillespie (1991), wildfires increase a rock's susceptibility to weathering through several mechanisms: (1) uneven heating and thermal expansion, along with the vaporization of endolithic moisture, induces spalling; (2) intense heating increases the rate of thermal diffusion significantly and accelerates the loss of gases such as argon, helium and neon from the rock; and (3) heating causes the microfracturing of rock and could cause the loss of chlorine-rich fluid from inclusions. Additionally, if the temperatures reached during the burning are high enough, decarbonation in the limestone may occur, enhancing decomposition and further erosion. If calcrete coats the rock surface, its laminar structure substantially decreases the rocks' tensile strength and the threshold magnitude of the thermal stress needed to weather them. Thus, the laminar structure of the calcrete plays a key role in all types of physical weathering, specifically in the exfoliation process that occurs along the bedding planes between the laminae. The development of empirical relationships for predicting the location, magnitude and frequency of increased post-wildfire rockfall activity requires further research and the collection of more data. Although the mechanism of the direct and indirect impact of wildfire on debris flows has been examined in numerous past studies, knowledge about post-wildfire rockfalls is limited and is completely absent in the Alpine region. The observations in the present study imply that falling trees and burned roots might have a significant impact on rockfall occurrence during and after a wildfire event, but this issue requires further investigation. Rockfalls during the fire may also be triggered by human activities such as firefighting or by winds caused by helicopters during firefighting operations. Vegetation recovery plays an important role in mitigating post-fire dynamics and increasing land stability. Rates and patterns of post-fire vegetation regeneration have been extensively studied in the Mediterranean; however, Alpine vegetation has gained relatively little attention (Camac et al., 2013). In Austria, a study that documented patterns of post-fire land recovery indicated that 60 years after a fire trees covered only 40 % of the burned area, whilst grassland and exposed rock and debris areas had expanded and remained active. Moreover, it was suggested that the slope would not reach its former condition before 2070.
This extreme window of disturbance of more than 120 years is attributed to the steepness of the slope and to the shallow soils and dolomitic bedrock that were severely damaged by the fire (Malowerschnig and Sass, 2014). In the eastern Alps, no work on wildfires and post-wildfire rockfall activity has been published so far. The August 2018 Hallstatt wildfire shows clearly that wildfires can have a significant impact on ecosystems and pose a high risk to settlements in the Alpine area. Wildfires in steep Alpine valleys behave differently than those in flat areas or on moderately inclined slopes. The vertical rock walls, the anabatic winds and patchy vegetation pattern caused an upward jumping of the fire resulting in a spotty fire pattern. This most probably results in spatially varying fire intensities and consequently highly heterogenic changes in the soil and rock mass structure. It makes it very difficult to predict future rockfall occurrences and estimate the associated risk. The rockfall hazard and risk assessment conducted in 2014 enabled fast decision-making as part of an emergency response during and after the wildfire catastrophe in terms of the identification of possibly endangered houses as well as the planning of preliminary rockfall preventive measures. Future research activities should focus on the study of wildfire behaviour in Alpine valleys. A national wildfire database in combination with a forest inventory map would help to plan forest management strategies for wildfires in the Alpine region. The development of tools to identify the days of high wildfire risk supported by the meteorological survey would enable a fire hazard rating system. Despite the logistical difficulties in the highly exposed relief, there is a practical need to understand the wildfire-induced rock surface alteration and cracking due to thermal shock in order to improve the prediction of potential post-fire rockfall problems and associated hazards and risks. The compound impact of fire and snow cover on future rockfall and debris slide and flow activity would be a very important future research topic. Data are available upon request to the corresponding author. SM conceptualized the study, collected data, prepared three of the figures and wrote the first draft of the paper. LW contributed to the drafting of the paper. NSZ prepared one of the figures and contributed to the drafting of the paper. OK edited the paper. The authors declare that they have no conflict of interest. The authors would like to thank the editor Mario Parise, the reviewer Jerome De Graff and a second anonymous reviewer for their constructive comments and suggestions regarding the paper. This paper was edited by Mario Parise and reviewed by Jerome De Graff and one anonymous referee. Bierman, P. and Gillespie, A.: Range fires: A significant factor in exposure-age determination and geomorphic surface evolution, Geology, 19, 641–644, 1991. Blackwelder, E.: Fire as an Agent in Rock Weathering, J. Geol., 35, 134–140, https://doi.org/10.1086/623392, 1927. Bodí, M. B., Mataix-Solera, J., Doerr, S. H., and Cerdà, A.: The wettability of ash from burned vegetation and its relationship to Mediterranean plant species type, burn severity and total organic carbon content, Geoderma, 160, 599–607, https://doi.org/10.1016/j.geoderma.2010.11.009, 2011. Brook, A. and Wittenberg, L.: Ash-soil interface: Mineralogical composition and physical structure, Sci. Total Environ., 572, 1403–1413, https://doi.org/10.1016/j.scitotenv.2016.02.123, 2016. 
Calcaterra, D., Parise, M., Strumia, S., and Mazella, E.: Relations between fire, vegetation and landslides in the heavily populated metropolitan area of Naples, in: Proceedings 1st North American Landslide Conference, Vail, Colorado, 3–8 June 2007, edited by: Schaefer, V. R., Schuster, R. L., and Turner, A. K., AEG Special Publication 23, 1448–1461, 2007. Camac, J. S., Williams, R. J., Wahren, C. H., Morris, W. K., and Morgan, J. W.: Post-fire regeneration in alpine heathland: Does fire severity matter?, Austral Ecology, 38, 199–207, 2013. Cannon, S. H., Gartner, J. E., Rupert, M. G., Michael, J. A., Rea, A. H., and Parrett, C.: Predicting the probability and volume of postwildfire debris flows in the intermountain western United States, GSA Bulletin, 122, 127–144, https://doi.org/10.1130/B26459.1, 2010. Cerdà, A.: Changes in overland flow and infiltration after a rangeland fire in a Mediterranean scrubland, Hydrol. Process., 12, 1031–1042, https://doi.org/10.1002/(SICI)1099-1085(19980615)12:7<1031:AID-HYP636>3.0.CO;2-V, 1998. Cerdà, A. and Doerr, S. H.: Influence of vegetation recovery on soil hydrology and erodibility following fire: an 11-year investigation, Int. J. Wildland Fire, 14, 423–437, https://doi.org/10.1071/WF05044, 2005. Cerdà, A. and Doerr, S. H.: The effect of ash and needle cover on surface runoff and erosion in the immediate post-fire period, CATENA, 74, 256–263, https://doi.org/10.1016/j.catena.2008.03.010, 2008. Conedera, M., Peter, L., Marxer, P., Forster, F., Rickenmann, D., and Re, L.: Consequences of forest fires on the hydrogeological response of mountain catchments: a case study of the Riale Buffaga, Ticino, Switzerland, Earth Surf. Proc. Land., 28, 117–129, https://doi.org/10.1002/esp.425, 2003. De Graff, J. and Gallegos, A.: The Challenge of Improving Identification of Rockfall Hazard after Wildfires, Environ. Eng. Geosci., 18, 389–397, https://doi.org/10.2113/gseegeosci.18.4.389, 2012. De Graff, J., Shelmerdine, B., Gallegos, A., and Annis, D.: Uncertainty Associated with Evaluating Rockfall Hazard to Roads in Burned Areas, Environ. Eng. Geosci., 21, 21–33, https://doi.org/10.2113/gseegeosci.21.1.21, 2015. Dorn, R. I.: Boulder weathering and erosion associated with a wildfire, Sierra Ancha Mountains, Arizona, Geomorphology, Mountain Geomorphology – Integrating Earth Systems, Proceedings of the 32nd Annual Binghamton Geomorphology Symposium, 55, 155–171, https://doi.org/10.1016/S0169-555X(03)00138-7, 2003. Hall, K.: The role of thermal stress fatigue in the breakdown of rock in cold regions, Geomorphology, 31, 47–63, 1999. Keeley, J. E.: Fire intensity, fire severity and burn severity: a brief review and suggested usage, Int. J. Wildland Fire, 18, 116–126, https://doi.org/10.1071/WF07049, 2009. Malowerschnig, B. and Sass, O.: Long-term vegetation development on a wildfire slope in Innerzwain (Styria, Austria), J. Forestry Res., 25, 103–111, 2014. Marxer, P., Conedera, M., and Schaub, D.: Postfire runoff and soil erosion in the sweet chestnut belt of southern Switzerland, in: Fire Management and Landscape Ecology, edited by: Trabaud, L., International Association of Wildland Fire, Washington, 51–62, 1998. McFadden, L. D., Eppes, M. C., Gillespie, A. R., and Hallet, B.: Physical weathering in arid landscapes due to diurnal variation in the direction of solar heating, GSA Bulletin, 117, 161–173, https://doi.org/10.1130/B25508.1, 2005.
Melzner, S.: Einschätzung des Gefahrenpotentials durch Sturzprozesse als Folge des Waldbrandes (20.8.–25.8.2018) im Bereich der Echernwand/Hohe Sieg, Geological Survey of Austria, Vienna, Technical report, 16 pp., 2018. Melzner, S.: Analyse des Gefahrenpotentials durch primäre Sturzprozesse (Steinschlag/Felssturz) – Gemeindegebiet Hallstatt, Geological Survey of Austria, Vienna, Technical report, 156 pp., 2015. Neary, D. G., Ryan, K. C., and De Bano, L. F.: Wildland fire in ecosystems: effects of fire on soils and water, U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Ogden, UT, Gen. Tech. Rep. RMRS-GTR-42-vol.4, 250 pp., https://doi.org/10.2737/RMRS-GTR-42-V4, 2005. Onda, Y., Dietrich, W. E., and Booker, F.: Evolution of overland flow after a severe forest fire, Point Reyes, California, CATENA, 72, 13–20, https://doi.org/10.1016/j.catena.2007.02.003, 2008. Parise, M. and Cannon, S. H.: Wildfire impacts on the processes that generate debris flows in burned watersheds, Nat. Hazards, 61, 217–227, 2012. Parson, A., Robichaud, P. R., Lewis, S. A., Napper, C., and Clark, J. T.: Field guide for mapping post-fire soil burn severity, U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Gen. Tech. Rep. RMRS-GTR-243, 49 pp., https://doi.org/10.2737/RMRS-GTR-243, 2010. Robichaud, P. R., Wagenbrenner, J. W., Lewis, S. A., Ashmun, L. E., Brown, R. E., and Wohlgemuth, P. M.: Post-fire mulching for runoff and erosion mitigation Part II: Effectiveness in reducing runoff and sediment yields from small catchments, Catena, 105, 93–111, https://doi.org/10.1016/j.catena.2012.11.016, 2013. Santi, P., Cannon, S., and DeGraff, J.: 13.16 Wildfire and Landscape Change, in: Treatise on Geomorphology, edited by: Shroder, J. F., Academic Press, San Diego, 13, 262–287, https://doi.org/10.1016/B978-0-12-374739-6.00365-1, 2013. Shakesby, R. A. and Doerr, S. H.: Wildfire as a hydrological and geomorphological agent, Earth-Sci. Rev., 74, 269–307, https://doi.org/10.1016/j.earscirev.2005.10.006, 2006. Shtober-Zisu, N., Tessler, N., Tsatskin, A., and Greenbaum, N.: Accelerated weathering of carbonate rocks following the 2010 wildfire on Mount Carmel, Israel, Int. J. Wildland Fire, 24, 1154–1167, https://doi.org/10.1071/WF14221, 2015. Shtober-Zisu, N., Brook, A., Kopel, D., Roberts, D., Ichoku, C., and Wittenberg, L.: Fire induced rock spalls as long-term traps for ash, CATENA, 162, 88–99, https://doi.org/10.1016/j.catena.2017.11.021, 2018. Swanson, F. J.: Fire and geomorphic process, Fire Regime and Ecosystem Properties, USDA Forest Service General Technical Report WO-26, 1981. Thomaz, E. L. and Doerr, S. H.: Relationship between fire temperature and changes in chemical soil properties: a conceptual model of nutrient release, EGU General Assembly 2014, Vienna, Austria, 27 April–2 May 2014, EGU2014-166, 2014. Wittenberg, L.: Post-Fire Soil Ecology: Properties and Erosion Dynamics, Isr. J. Ecol. Evol., 58, 151–164, 2012. Woods, S. W. and Balfour, V. N.: The effect of ash on runoff and erosion after a severe forest wildfire, Montana, USA, Int. J. Wildland Fire, 17, 535–548, https://doi.org/10.1071/WF07040, 2008. Yatsu, E.: "The nature of weathering, An Introduction", Sozosha, Tokyo, 1988. Zimmerman, S. G., Evenson, E. B., Gosse, J. C., and Erskine, C. P.: Extensive Boulder Erosion Resulting from a Range Fire on the Type-Pinedale Moraines, Fremont Lake, Wyoming, Quaternary Res., 42, 255–265, https://doi.org/10.1006/qres.1994.1076, 1994.
The United States is the richest and most powerful country on the planet. Yet despite this, the poison of racism remains an integral part of America. Blacks, together with the other racial minorities, remain the most exploited section of society, mostly employed in the lowest-paid and menial jobs. Racism remains an everyday part of their desperate existence. Today, despite all the "reforms" of the last thirty-odd years, blacks continue to suffer from lynchings and violence at the hands of the state, racist organisations and individuals, as well as being forced to live under conditions of mass poverty and oppression. The recent gruesome murder of a black man in Texas who was dragged to death behind a truck is a vivid reminder of American racism. Black youth are faced with daily harassment and intimidation by the police. Thirty years ago, a commission headed by Otto Kerner, the governor of Illinois, found that America was "moving towards two societies, one black, one white, separate and unequal." Today, despite all the promises from successive Administrations, a follow-up report claims the situation has grown far worse for the mass of blacks. The new report, which comes from the Milton S. Eisenhower Foundation, while conceding that the black middle class has grown, and that black high-school graduation rates have risen, points to the fact that unemployment in a large number of black inner-city neighbourhoods is at "Depression levels" of 50% or more. Unemployment amongst blacks is twice the rate for whites. America's child-poverty rate is four times higher than Western Europe, and the rate of incarceration for black men is four times higher than in the days of apartheid South Africa. Figures from the Justice Department show that between 1985 and 1995, as the number of white men sentenced to more than a year in gaol rose by 103%, the number of black male convicts grew by 143%. In 1997, the number of black Americans in poverty was 9.1 million while the number of poor Hispanics was 8.3 million. For children, the situation is horrific. Black infant mortality is twice that of whites. 45% of black children live below the poverty line compared with 16% of white children. These are the kind of figures you would expect in a third world country. In the US, blacks earn only 58% of whites' earnings. In 1979, a black worker was likely to earn 10.9% less than a white in a similar job, but by 1989 that differential had grown to 16.4%. According to the book "The State of Working America 1992-93" by Mishel and Bernstein, "This 'black-white earnings gap' jumped up 50 percent from 1979 to 1989... Education-wise, the greatest increase in black-white earnings gap was among college graduates, with minimal 2.5 percent differential in 1979 exploding to 15.5 percent in 1989." While the black middle class has grown, affirmative action and quotas have not prevented this deterioration for the mass of blacks. At the same time, the class divide has never been greater. The rich got richer, while the position of the majority has deteriorated. Corporate America has made a bonanza. Bill Gates has an income equal to the combined income of 115 million Americans. The poison of racism is deliberately fostered by the ruling class as a means of keeping the working class divided, and diverting attention away from the real problems of American capitalism. This policy of "divide and rule" on racial, national or religious lines, has been a common feature of the ruling class internationally. 
As the Black Panther Bobby Seale correctly wrote: "Racism and ethnic differences allow the power structure to exploit the masses of workers in this country, because that's the key by which they maintain their control. To divide the people and conquer them is the objective of the power structure..." This situation also confirms the words of Malcolm X, "You cannot have capitalism without racism." Behind the ruling class's use of racism also lies its fear of the rise of a powerful black working class and of its inherent tendency to unite in action with its fellow white workers. Thus the working class as a whole is facing deteriorating living standards and attacks from big business. The 80% of the workforce that hold working class jobs saw their real weekly income decline by 18% from 1973 to 1995. With the migration of blacks to the north (between 1940 and 1970, four million blacks left the countryside for the towns), they played a major role in the building of the trade unions. By 1983, 27% of black workers were union members compared with 19% for whites. Years of racism, police harassment and terrible social conditions have produced an explosive mix within the inner cities, especially amongst the black and Latino youth. This has periodically erupted in riots, most recently in Los Angeles, one of the richest cities in the USA. But riots have no perspective and arise spontaneously out of poverty conditions. If the labour leaders offered a real fighting alternative, then the energies of these youth could be harnessed in a positive direction. In the 1950s and 1960s, the revolt of the blacks against their discrimination and social position shook the ruling class to its foundations. Despite the oppression and the violence unleashed against the civil rights movement, the black revolt defeated the Jim Crow laws. This movement, if it had been linked to the struggle of the working class as a whole, could have been a massive force for social change. Unfortunately, the labour leaders, who looked to the pro-capitalist Democratic Party, were incapable of leading this movement against racism and oppression and of uniting all workers on a class basis. As a result, the ruling class, in order to control the situation, made some concessions on voting rights and civil rights in the south. It sought to contain the movement within the confines of capitalism by moving in the direction of affirmative action and the quota system. This strategy went hand in hand with the murder of Martin Luther King, Malcolm X and a whole number of Black Panther leaders, who sought to go beyond capitalism and the Democratic Party. Since that time, while the position of the majority of blacks has grown worse, a substantial section of the black middle class has prospered. They have done well out of affirmative action. They have managed to further their careers and carve out a niche for themselves. A layer of political careerists has ended up in the Democratic Party, and some even among the Republicans, such as J. C. Watts, the conservative black congressman from Oklahoma. Meanwhile, others have promoted black nationalism. This idea has a long history amongst American blacks. It became a mass movement in the 1920s under Marcus Garvey, who advocated that blacks return to Africa. In the 1930s, Oscar C. Brown launched a movement for the establishment of the "Forty-Ninth State".
Before the war, the American Communist Party took up the idea of a separate black state, and came forward with the slogan of the right of Negro self-determination in the south. During the height of the black revolt in the 1960s, Stokely Carmichael, one of the Black Panther leaders, first raised the slogan of "Black Power" as a rallying cry for blacks to unite and challenge white society. In so far as it represented a break at the time from the white liberals of both the Democratic and Republican parties, it was a step forward. As the black population made up only 13% of the population as a whole, it was clear that blacks by themselves could never transform society. Malcolm X, who began as a black nationalist, came to the conclusion that an alliance with white workers was the only way forward. He was murdered before this idea was fully developed. But it was the Black Panthers that arrived at even clearer ideas on class unity and the struggle to transform society. According to Bobby Seale: "We fight racism with solidarity. We do not fight exploitative capitalism with black nationalism. We fight capitalism with basic socialism. And we do not fight imperialism with more imperialism. We fight imperialism with proletarian internationalism." The only way in which the socialist transformation of America can come about is through the united struggle of black and white workers and youth, and the establishment of a mass workers' party based on the trade unions and committed to a socialist programme. This does not mean that blacks have to wait before engaging in struggle. However, a revolutionary black movement needs to appeal for a united struggle with sections of radicalised white workers. Black liberation is inseparable from the liberation of the working class as a whole. Marxism has a responsibility to offer a perspective and a way forward for the movement at each stage, explaining its weaknesses and reinforcing its strengths. Unfortunately, there are those on the American left, some of whom even purport to be Marxists, who raise all kinds of confusions in relation to the black question. Some, like the American Socialist Workers Party (SWP), simply bowed before black nationalism, advocating self-determination for blacks and the need for the creation of a separate black party. Rather than class unity, they promote racial separation in an attempt to reinforce black nationalism. Another similar group "gives uncompromising support to Black nationalism and the right of the oppressed to self-determination. We place no conditions on the social movements of oppressed people... The point is that it is up to Black people to decide what their future will be." It then goes on to call for "Black control of the Black community!" The mistakes of these groups can be traced to a misrepresentation of the writings of Leon Trotsky on black nationalism. These are based upon discussions between Trotsky and the American SWP in the 1930s. Here Trotsky drew upon the rich theoretical heritage of Bolshevism in regard to the national question. Lenin himself fought a battle to defend the right of nations to self-determination as a means of winning the confidence of the oppressed nationalities that made up the tsarist empire. This did not mean that he advocated separation; on the contrary, he wanted the closest union of peoples, but on a voluntary basis. This can be defined as a socialist federation. At the same time, Lenin fought against the influence of bourgeois nationalism in the workers' movement.
He emphatically opposed the idea of splitting up the workers' organisations on national lines. The Bolsheviks wanted the maximum unity of the workers and therefore waged a campaign against any taint of nationalism within the movement. They stood for one unified workers' party and trade union organisation throughout the Russian empire. The idea that Marxists would advocate a separate party for blacks would have been considered a crime. A national minority constitutes a nation with the right of self-determination, if it constitutes a majority in a certain territory, with a common language, national culture and consciousness. The right of self-determination does not apply to groups, religious minorities, races or individuals. It only applies to nations or to those which have the potential to develop into nations. But when Trotsky discussed with the SWP in the 1930s, three-quarters of American blacks lived in the twelve southern states. In 189 counties of this area, blacks accounted for more than half the population. In two states, Mississippi and Alabama, they comprised more than 50%. This was the so-called 'Black Belt'. At that time, the American Communist Party put forward the slogan of the right of Negro self-determination in the 'Black Belt'. This idea was originally opposed by the SWP leaders, but Trotsky explained that it was possible, if the fascist movement began to grow in the United States, which would persecute the blacks, that the blacks would demand a separate state in the south. In such conditions Trotsky explained that the Marxists would stand for the right of self-determination of blacks, and this meant their right to form a separate state if they so wished. He explained that "the Negroes are a race, nations grow out of racial material under definite conditions." However, Trotsky was very careful in his analysis, making it clear that such a development was not at all certain. He also criticised the Communist Party for putting forward this demand when there was no sentiment for it within the black population. In fact, the demand, under those circumstances, could be interpreted as being in favour of segregation. Trotsky's method and conclusions were absolutely correct at the time. But some of those groups who cling to his formulations today, without considering the colossal changes that have taken place since then, are drawing fundamentally false conclusions. With migration of the black population to the north, together with their absorption into the working class, the tendency towards a separate black state, and a "national" consciousness, has been completely cut across. In 1890, 80% of all blacks and 85% of all southern blacks lived in rural areas. By 1960 the percentage of the black urban population was 72.2% in the US as a whole, 58.4% in the south, and 95.2% in the north and west. By the 1950s and 1960s, the majority of blacks were living in the north. According to the 1960 population census of the five southern states (Mississippi, South Carolina, Louisiana, Alabama and Georgia), whites numbered 67.4% and non-whites 32.6%. Blacks were dispersed throughout the cities of the United States, drawn into the workplaces alongside white workers. Indeed, in 1970, blacks were more urbanised than whites. "These population movements have produced baffling problems not only for the cities but for black nationalism", states Theodore Draper (The Rediscovery of Black Nationalism). 
"If the internal black migration has been from South to North and from countryside to the cities, where is the 'black nation' in the United States?" In other words, the idea of a separate black state in the USA - which is the only form self-determination can take - has become completely unviable. Therefore the demand for the right of self-determination for black people is no longer relevant. It is impossible for the blacks in Detroit, Harlem, Los Angeles, etc., to link together in a separate state or nation. It is under present conditions a false idea from beginning to end. The belief that these ghettos could separate themselves off from the rest of American society is both ludicrous and reactionary. "The black ghettos have no viable economic existence apart from their predominantly white hinterlands; they are separated from one another, often by hundreds of miles..." states Draper. The migration to the north has not solved the problems of blacks. There they face new horrors in the ghettos: racism, police brutality, poverty, unemployment and slum conditions. The problems of black workers are the problems of the working class as a whole, only in a far more acute form. They form a specially oppressed substratum of the working class. The struggle against the double oppression of blacks and other oppressed minorities must be linked to the struggle of the working class as a whole. The only way the American blacks can achieve their emancipation is through the socialist transformation of society. When the ghettos exploded in the 1960s, the movement led to the rise of the Black Muslims, the Black Panthers, the League for Revolutionary Black Workers, including the demand for black power. These movements sprang out of the brutal conditions faced by blacks. They were also inspired by the unfolding colonial revolution. Their determination to find a solution to their problems showed the revolutionary potential amongst the most oppressed layers of American society. Many, especially the Panthers, became open to the ideas of Marxism and favoured the creation of a new workers' party. In a short space of time they evolved from a largely black nationalist movement to a revolutionary movement. Unfortunately, the Panther's lack of clear perspectives or a programme served to derail the movement. Subject to vicious state repression, the Panthers went into crisis, and suffered a whole series of splits. On top of the policy of state repression, the ruling class made a series of concessions which served to undermine the movement. These became known as affirmative action policies, which set quotas for the number of blacks to be employed in jobs. This system, in reality, has helped only a small minority of blacks, mainly from the middle class. The conditions of the mass of black people have deteriorated, as the above figures testify. Many on the left support affirmative action as a step forward. It is regarded as a "practical" measure to overcome years of discrimination. The problem with affirmative action is it attempts to solve a problem within the confines of capitalism. That is why Clinton can give his support for it. It does not challenge the rule of big business, seeking only a fairer division of existing jobs between the working class. Concretely, it serves to divide workers along lines of race and sex and keeps the movement within the limits of capitalism. For example, the school board Piscataway, New Jersey, used the quota system to cut a member of staff. 
It fired a white teacher to maintain the racial balance. The school board recently agreed to an out-of-court settlement to pay the teacher, who had taken the board to court, $433,500. Affirmative action takes the issue and puts it in the hands of lawyers, courts and bureaucrats who are controlled by big business and relish the in-fighting over the crumbs from the capitalists' table. The quota system cannot show any way forward. On the contrary, it is used by the labour leaders as an excuse for not taking effective action. In practice, affirmative action has not worked. During this period, real wages and living standards have declined and the jobs market has shrunk. The position of black workers is no better than before - in fact, it is worse. However, the recent court attacks against affirmative action in Texas, Colorado and Maryland, as well as at a federal level, mean that the American capitalists want total flexibility of labour, to fill any job with whomever they choose. While we have no illusions in affirmative action, these attacks are part of the general attack by big business on the working class, and therefore must be opposed as such. The problem of jobs is a central issue. Does the labour movement simply ignore discrimination at work or elsewhere? Absolutely not! It must fight against discrimination over jobs, but link that fight to the struggle against unemployment and for better wages as a whole. We must fight for a class alternative to affirmative action, one that can draw the ranks of the working class together in common struggle. Discrimination against minorities in hiring must be fought through trade union control over hiring and firing. The labour movement must make it clear at all times that it is not prepared to stand for discrimination against blacks or other minorities. Labour must fight for equal employment prospects, wages and conditions for all workers. But the special oppression of blacks and other minorities must be linked to the oppression and exploitation of all workers. The bosses' strategy of keeping a pool of cheap labour helps to divide and weaken the working class as a whole. This situation must not be opposed simply by words, but must be challenged by a programme of action. For a 32-hour, four-day week with no loss of pay! A crash programme of public works! A living wage for all workers! Union control over hiring and firing! Mobilise the labour movement to combat racism! These must be linked to the creation of a workers' party committed to a socialist programme, as the basis for class unity. A workers' government would take over the corporate monopolies, banks and finance houses under workers' control and management. A socialist planned economy could unleash the resources to give everyone a job, a decent wage, a house, and a real education and future for their children. The struggle of blacks and the oppressed minorities for a better life cannot take place in isolation from the working class as a whole and the need to transform society on socialist lines. The general crisis of American capitalism bears down heavily on the blacks and other racial minorities. But the Million Man March and the Million Youth March, despite their leadership, indicate the stirring once again of the black population. With the deepening crisis, it will be the class issues that will inevitably come to the fore. The American working class will take the road of struggle in the same tradition as the mighty battles surrounding the foundation of the CIO.
The black working class, like all the oppressed racial minorities, constitutes the most courageous and determined section of the class. It is destined to play a vital role - along with its white brothers and sisters - in the future struggles to transform American society on socialist lines.
Part Five (of 22) – Music in Ramayana

1.1. After the Music of Sama comes the singing of Akhyana or ballads, narrating a story in musical forms. Of all the Akhyana-s, the Ramayana of the Adi Kavi Valmiki is the most celebrated one. It is a divine ballad (Akhyanam Divyam) narrating the history of ancient times (Itihasam puratanam).

1.2. It is believed that the Ramayana had its origins in folklore; and was preserved and spread as an oral epic (Akhyana) for a very long time. It is suggested that the poet Valmiki rendered the folklore into a very beautiful, sensitive and lyrical epic poem by about the 7th century BCE. Thereafter, in age after age, the Suthas narrated and sang the glory of Rama and Sita, in divine fervour; and spread the epic to all corners of the land and beyond. Even to this day, the tradition of devout groups of listeners gathering around a Sutha to listen to the ancient story of chaste love between Rama and his beloved, and their unwavering adherence to Dharma amidst their trials and tribulations, is still very much alive. What characterizes the Dharma in Ramayana is its innocence, purity and nobility. The Indian people prefer listening with joy to the rendering of Ramayana as a musical discourse, rather than reading the epic themselves.

1.3. The Ramayana of Valmiki is a renowned Kavya, an epic poem in the classic style. It is also the Adi-Kavya, the premier Kavya; the most excellent among the Kavyas (Kavyanam uttamam); and the best in all the three worlds (Adikavyam triloke). The Epic of Valmiki is at the very core of Indian consciousness; and is lovingly addressed variously as: Sitayasya-charitam-mahat; Rama-charitam; Raghuvira-charitam; Rama-vrttam; Rama-katha; and Raghu-vamsa-charitam.

1.4. The great scholar-philosopher Abhinavgupta (ca. 11th century) hailed Valmiki as a Rasa Rishi, one who created an almost perfect epic poem adorned with the poetic virtues of Rasa, Soundarya (beauty of poetic imagery) and Vishadya (lucid expression and comfortable communication with the reader); all charged and brought to life by Prathibha, the ever-fresh intuition.

2.1. Ramayana is more closely associated with music than the other epics. That might be because Ramayana is rendered in verse; and its poetry of abiding beauty melts into music like molten gold, with grace and felicity. Further, the epic has a certain lyrical lustre to it. The epic itself mentions that the Rama tale was rendered in song by the two minstrels Kusi and Lava to the accompaniment of the Veena, Tantri-laya-samanvitam (I.20.10), during the Asvamedha.

2.2. There are innumerable references to Music in Ramayana. Music was played for entertainment and in celebration at weddings and other auspicious occasions (II.7.416-36; 48.41.69; III.3, 17; 6.8; IV 38.13; V.53.17; VI.11.9; 24.3; 75.21 etc.). Music was also played in palaces and liquor parlors (IV 33.21; V.6.12; X.32; 37.11.4; Vi.10.4). Soulful songs were sung to the accompaniment of instruments at religious services and in dramas. Music was played at festivities, and to welcome and see off the guests. The warriors fighting on the battlefield were lustily cheered and enthused by stout drum beats and the piercing blasts of conchs, horns and trumpets. There is also mention of those who took to music as a profession. Besides, there were court (state) sponsored musicians. Music was thus a part of the social fabric of society as described in Ramayana.

2.3. There are numerous events narrated in Ramayana where Music was sung or played.
The word Samgita in Ramayana is a composite term covering Gana (vocal music), Vadya (instrumental music) and Nritya (dance). Samgita or Music was referred to as Gandharva-vidya. There is also a mention of Karna sung to the accompaniment of the Veena (R. VII.71.5). Samgita was also Kausika (kaisika), the graceful art of singing and dancing (gana-nrtya-vidya), the delightful art of singing and dancing in groups (kausika-charya) to the accompaniment of instruments.

:- The sage Valmiki, the author of the epic, says at the commencement that the Ramayana he composed is well suited to musical rendering in melodious (madhuram) tunes (Jatis) having all the seven notes (Svaras), in three registers (vilambita, Madhyama and Druta), with proper rhythm (laya), to the accompaniment of string instruments (tantrī laya samanvitam).

:- Describing the glory and the beauty of Ayodhya, it is said that the city, resounding with the rhythmic drum beats of Dundubhi, Mrudanga and Panava, and with the melodious tunes of string instruments like the Veena, was indeed unique, and undoubtedly the best city on earth – dundubhībhiḥ mṛdangaiḥ ca vīṇābhiḥ paṇavaiḥ tathā | nāditām bhṛśam atyartham pṛthivyām tām anuttamām (R.1.5.18).

:- And, in the hermitage of Rishyasrnga, the girls sent by King Lomapada sang and danced – tāḥ citra veṣāḥ pramadā gāyaṃtyo madhura svaram (R.I.10.11).

:- When Sri Rama and his three brothers took birth, the Gandharvas in great jubilation sang cheerfully; the celestial nymphs, the Apsaras, danced with great delight; the Devas played on the drums enthusiastically, while the heavens showered flowers; and with that there was a great festivity in Ayodhya among its joyous people who had thronged in celebration – jaguḥ kalam ca Gandharvā nanṛtuḥ ca Apsaro gaṇāḥ | deva duṃdubhayo neduḥ puṣpa vṛṣṭiḥ ca khāt patat utsavaḥ ca mahān āsīt ayodhyāyām janākulaḥ (R. 1-18-17).

:- Sri Rama himself is said to have been proficient in Music (Gandharve ca bhuvi sresthah).

:- As Lakshmana enters the inner court of the Vanara king Sugriva, he hears singing and the ravishing strains of the music of the Veena and other string instruments.

:- As Hanuman flew over the sea towards Lanka, he heard a group of musicians singing songs (kausika-charya).

:- Hanuman, as he entered the city of Lanka, while going from one building to another, heard a sweet song decorated by svaras from the three sthanas – Mandra, Madhya and Tara – sung by love-lorn women, like Apsara women in heaven.

:- Hanuman, while wandering at night through the inner courts of Lanka, heard melodious and sweet songs adorned with Tri-sthana and Svara; and the songs had regular Taala (sama-taala) and aksara (words) – Śuśrāva madhuram gītam tri sthāna svara bhūṣitam | strīṇām mada samṛddhānām divi ca apsarasām iva (R. 5-4-10).

:- Hanuman heard musical notes coming from stringed instruments which were comforting to the ears.

:- Hanuman found the huge palace of Ravana, vast like the legendary mansions of Kubera, encircled by many spacious enclosures, filled with hundreds of the best women, and resounding with the deep sounds of percussion on Mrudangas – mṛdanga tala ghoṣaiḥ ca ghoṣavadbhir vināditam (R. 5-6-43).

:- Silently wandering through the inner courts of Ravana, in the middle of the night, the bewildered Hanuman came upon sleeping groups of women, adorned with rich and sparkling ornaments (R. 5.10.37-44).
These women, skilled in dance and music, tired and fast asleep, lying in various postures, were each clutching or hugging a musical instrument. Thus:

Hanuman sees a lady of the court, tired and asleep, clutching her Veena, like a cluster of lotuses entwining a boat moored on the banks of a stream – kācid vīṇām pariṣvajya prasuptā samprakāśate | mahā nadī prakīrṇā iva nalinī potam āśritā (R. 5-10-37).

One woman with black eyes, sleeping with an instrument called Madduka under her armpit, shone like a woman lovingly carrying an infant boy – Maḍḍukena asita īkṣaṇā | prasuptā bhāminī bhāti bāla putrā iva vatsalā (5-10-38).

A woman with beautiful features and beautiful breasts slept tightly hugging an instrument called Pataha, as though hugging a lover regained after a long time – paṭaham cāru sarva angī pīḍya śete śubha stanī | cirasya ramaṇam labdhvā pariṣvajya iva kāminī (5-10-39).

Another woman with lotus-like eyes, hugging a vaṃśam (flute), slept like a woman holding her lover in secret – kācid vaṃśam pariṣvajya suptā kamala locanā | rahaḥ priyatamam gṛhya sakāmeva ca kāminī (R. 5-10-40).

Another woman, skilled in dance, fell asleep holding a Vipanchi, an instrument like the Veena, in tune with it like a woman together with her lover – vipañcaiim parigṛhyānyā niyatā nṛttaśālinī | nidrā vaśam anuprāptā saha kāntā iva bhāminī (R. 5-10-41).

Another woman with lusty eyes slept hugging a percussion instrument called Mridanga – Anya kanaka … mṛdangam paripīḍya angaiḥ prasuptā matta locanā (R. 5-10-42).

Another tired woman slept clutching an instrument called Panava between her arm and armpit – bhuja pārśva antarasthena kakṣagena krśa udarī | paṇavena saha anindyā suptā mada krta śramā (R. 5-10-43).

Another woman slept with an instrument called Dindima near her, in the same way as a woman hugging her husband and also her child – ḍiṇḍimam parigrhya anyā tathaiva āsakta ḍiṇḍimā | prasuptā taruṇam vatsam upagūhya iva bhāminī (R. 5-10-44).

And another woman with eyes like lotus petals slept pressing an instrument called Adambara against her shoulders – kācid āḍambaram nārī bhuja sambhoga pīḍitam | kṛtvā kamala patra akṣī prasuptā mada mohitā (R. 5-10-45).

Some excellent women slept hugging strange instruments – ātodyāni vicitrāṇi pariṣvajya vara striyaḥ (6.10.49).

:- Some versions of Ramayana mention that Ravana was a reputed Saman singer, and that music was played in his palace. He, in fact, suggests to Sita that she could relax like a queen listening to music in his palace, instead of sitting tensely under the tree – mahārhaṇi ca pānāni śayanānyāsanāni ca | gītam nṛttaṃ ca vādyaṃ ca labha maṃ prāpya maithili (R. 5-20-10).

:- According to some versions of the Ramayana, Ravana was a well-known player of a Veena called Ravana-hastaka (an instrument played with a bow).

:- As Ravana's soldiers prepare for the war, they hear the sounds of the Bheri played by Rama's monkey-army. Sarama asks Sita to listen to and rejoice in the Bheri sounds, resembling the thundering rumble of the clouds – Samanahajanani hesya bhairava bhiru bherika / Bherinadam ca gambhiram srunu toyadanihsvanam (6-33-22).

:- Ravana compared the battlefield to a music stage; his bow (the weapon for firing arrows) to his Veena; his arrow to his musical bow; and the tumultuous noise of the battle to music – jyā śabda tumulām ghorām ārta gītam ahāsvanām | nārā catalasam nādām tām mamā hita vāhinīm | avagāhya maha raṅgam vādayiṣyāntagan raṇe – (R. VI.24.43-44).
:- As the battle ended with victory for Rama, the Apsaras danced to the songs of the Gandharvas, such as Narada the king of the Gandharvas (Gandharva-rajanah), Tumbura, Gopa, Gargya, Sudhama, Parvata, and Suryamandala (R.6.92.10). Tumbura sang in divine Taana (divya-taaneshu).

:- The triumphant Rama, the foremost among men, on his return was greeted and loudly cheered by the people of Ayodhya, accompanied by the sounds of conchs (shankha) buzzing in the ears and the tremendous sounds of the Dundubhi – Śankha śabda praṇādaiśca dundubhīnān ca nisvanaiḥ | prayayū puruṣavyāghrastāṃ purīn harmyamālinīm (R. 6-128-33).

:- Rama drove to his palace surrounded by musicians cheerfully playing on cymbals, Swastika and other such musical instruments, singing auspicious (mangalani) songs – Sa purogāmi abhistūryaistālasvastikapāṇibhiḥ | pravyāharadbhirmuditairmaṅgalāni yayau vṛtaḥ (6-128-37).

:- On that auspicious and most joyous occasion of the coronation of the noblest Sri Rama, the Devas and the Gandharvas sang gracefully, and the troupes of Apsaras danced with great delight – Prajagur deva-gandharvā nanṛtuśc āpsaro gaṇāḥ | abhiṣeke tadarhasya tadā rāmasya dhīmataḥ (6-128-72).

3.1. Ramayana is not a thesis on music; it is an epic poem rendering the story of chaste love between a husband and his wife. The music, or whatever musical elements are mentioned therein, is incidental to the narration of the story. And yet Valmiki accorded importance to music and the elements of music in his work. He crafted situations where music could be introduced naturally. More importantly, his verses have a very high lyrical quality and can be rendered into music quite easily. All these speak of Valmiki's love for music and his aesthetic refinement.

3.2. Many music terms are mentioned in Ramayana, indicating the state of music prevailing during the time of its composition (not necessarily during the event-period).

:- Valmiki mentions that Kusi and Lava sang in the Marga style – Marga-vidhana-sampada (R. I.4.35); in seven melodic modes called Jatis (jatibhih saptabhiyuktam) that were pure (shuddha); to the accompaniment of a musical instrument like the veena – tantri-laya-samanvitam (R. I.4.8.34).

:- Valmiki endorsed the use of sweet-sounding words with simple and light syllables, and advised against harsh words loaded with heavy syllables (R. IV.33.21).

:- The music of Kusi-Lava was Baddha, structured into stanzas, with apt rhythm (Taala), tempo (Laya) and words (Pada), and with alamkaras – pathye geye cha madhuram (R.I.4.8).

:- Valmiki mentions that Kusi-Lava were well versed in Murchana and Tri-Sthana (sthana-murcchana-kovidau); in the art of Gandharva (tau tu gandharva-tattvajnau; bhrataran svara-sampannau gadharva viva-rupinam); as also in the rhythmic patterns – Laya, Yati – in three speeds. Tri-Sthana might refer either to the three voice registers (Mandra, Madhyama and Tara) or to the three tempos (Vilamba, Madhyama and Druta).

:- Lava and Kusi were said not to fall away from Raga. Here, the term Raga is said to mean sweetness of voice (kanta-madhurya).
:- Lava and Kusha used to sing the Ramayana-gana with the application of kaku (variations of the vocal sound for expressing aesthetic rasas) – Tam sa shushrava kakusthah purvacharya vinirmitam | Apurvam pathyajatim cha geyena samalamkritam.

From these it is evident that Lava and Kusa were well trained in the Gandharva type of music, sung with the seven shuddha jati-ragas (shadji, arshabhi, gandhari, madhyami, panchami, dhaivati and naishadi) having seven svaras, murcchana, sthana or register, rhythm and tempo, and aesthetic ornamentation (alamkara) and mood (rasa and bhava) – rasair-yuktam kavyametad gayatam.

Here are some terms that perhaps need short explanations:

:- Marga or Gandharva is regarded as the music fit for the gods. It is said to have been derived from the Sama Veda, and constituted of Pada (the text), Svara (notes) and Taala (rhythm). Marga was rather somber and not quite flexible either. In the later centuries, Marga or Gandharva gave place to the free-flowing Desi, the music derived from the folk and the regions.

:- Baddha is a song format that is well structured into stanzas. It contrasts with Anibaddha, unstructured music without the restrictions of Taala; the latter is analogous to the present-day Aalap and to the rendering of Ragamalika, Slokas etc. The Baddha–Anibaddha distinction is observed even today, just as in Valmiki's time.

:- Grama (group) was the basic gamut of notes employed in the early music tradition. The ancient tradition is said to have employed three Grama-s, beginning from the Shadja, Madhyama, or Gandhara note. Later, the third Grama, based on Gandhara, reportedly went out of vogue, as it required moving in an unusually high range of notes.

:- Jaati refers to the classification of musical compositions according to their tones. The Jaati-s were built on the seven primary Svaras, such as Shadja, Rshabha etc., of the octave (pathya-jati). Ana is said to be a drag note, generally called ekasruti. This means that Kusi and Lava rendered the verses in several melodies. However, since the raga concept was then yet to evolve, there might not have been much depth and variation in their rendering.

:- Murchhana was the ancient mode of extending the available tonal framework by commencing ascents and descents, ranging over the full (purna) seven notes, each time from a new note. This mode gave place to the Mela system around the 15th–16th century.
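The murcchana idea just described – deriving seven different series by commencing the same seven-note sequence from each successive note – is easy to illustrate. The following is only a minimal sketch in Python; the svara labels are the familiar solfège names chosen here for illustration, and the subtler sruti (microtonal) distinctions of the ancient Grama system are deliberately not modeled.

```python
# Seven svaras of one saptaka (octave), in ascending order.
SVARAS = ["Sa", "Ri", "Ga", "Ma", "Pa", "Dha", "Ni"]

def murcchanas(svaras):
    """Return the seven rotations of a seven-note series,
    each one commencing its ascent from a new note."""
    return [svaras[i:] + svaras[:i] for i in range(len(svaras))]

for i, m in enumerate(murcchanas(SVARAS), start=1):
    print(f"murcchana {i}: {' '.join(m)}")
```

Each rotation keeps the same notes but changes the starting point, which is all the sketch is meant to convey; the historical murcchana-s of the Shadja- and Madhyama-gramas differed further in their internal sruti structure.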
4.1. Valmiki's Ramayana mentions varieties of musical instruments. The term Atodhya denoted instrumental music. The musical instruments of the time were categorized, broadly, as those played by hand (hastha-vadya) and those played by mouth (mukha-vadya) (R. II.65.2). The string and percussion instruments came under the former category, while the wind instruments were among the latter. Instrumental music was primarily individualistic, not orchestrated. It appears instruments were used mainly as accompaniments (not solo) and depended on vocal music. Group music – vocal with instruments – appears to have been popular.

4.2. Among the string instruments, Ramayana mentions several kinds of Veena: Vipanchi (a fingerboard-plucked one with nine strings, like the Veena as we know it); Vana or Vallaki (a multi-stringed harp); and Kanda-Veena (made by joining reeds). In fact, till about the 19th century, string instruments of all kinds were called Veena: harps like the Chitra; fingerboard-plucked ones like the Vipanchi, the Rudra Veena, the Saraswati Veena and the Kacchapi Veena; and bowed ones such as the Ravana-hastaveena and the Pinaki Veena.

4.3. As regards the percussion instruments, the epic refers to quite a large number of them:
- Panava (a kind of Mridanga which had a hole in the middle, with strings laid from one side to the other);
- Madduka (a big drum with two faces of twelve and thirteen angula – finger-lengths – respectively);
- Dundubhi (Nagaara);
- Dindima (resembling the Damaru but smaller in size);
- Muraja (a bifacial drum, the left face of eight fingers and the right of seven fingers);
- Adambara (a sort of kettle drum made of Udumbara wood);
- Bheri (a two-faced metal drum of conical shape, its leather kept taut by strings; the right face was struck with a Kona and the left by hand, striking terror in the hearts of the enemy);
- Pataha (resembling the Dholak); and
- Dundubhi (a drum made of hollow wood covered with hide), played during wedding ceremonies as also for welcoming the winning warriors.

Gargara was another drum, used during wars. All these were leather or leather-bound instruments. They were played with metal or wooden drumsticks with their ends wrapped in leather. There is also a mention of the Bhumi-Dundubhi, where the lower part of a huge drum is buried in a pit while the exposed upper part, covered with animal hide, is beaten with big metal or wooden drumsticks to produce loud booming sounds. It was played during battles to arouse the warriors, to celebrate victory, or in dire emergency. The Bhumi-Dundubhi was also played at the time of the final offering (Purna-Ahuthi) at the conclusion of a Yajna. The other instruments for keeping rhythm (Taala) were the Ghatam and cymbals. Aghathi was a sort of cymbal used while dancing.

4.4. The instruments played by mouth (mukha-vadya), that is the wind instruments, mentioned in Ramayana include:
- Venu or Vamsa (flute);
- Shankha (conch), blown on auspicious occasions and at the time of war;
- Tundava (a wind instrument made of wood);
- Singa (a small blower made of deer horn, producing sharp and loud sounds); and
- Kahale or Rana-bheri (a long curved war-trumpet).

The flute was also used for maintaining the Aadhara-Sruthi (fundamental note). [Tambura or Tanpura did not come into use till about the 15th–16th century.]

State of Music

5.1. It is evident that during the period in which Ramayana was composed (say the 7th century BCE), music was fairly well developed, and the basic concepts were in place. However, a full-fledged musicology and elaborate theories of music were yet to develop. The Marga system was prevalent; Desi, with its Ragas, was yet centuries away.

5.2. The singing of well-known texts of poetry in public appears to have been the standard practice. Instruments were used for accompaniment and not for solo performances. Group singing with instrumental support appears to have been popular. Music was very much a part of social and personal life.

[As compared to Ramayana, there is relatively less information about music in Mahabharata. Yet music (Gandharva) did occupy an important place in the life of its people. There are references to music played on various occasions, including welcoming and seeing off guests. Along with singing (Gita), such musical instruments as the Panava, Vamsa and Kansya Tala were played. The musical instruments were broadly covered under the term Vaditra, denoting the four-fold group of Tata, Vitata, Ghana and Sushira Vadyas. In Shanti-parva, there are references to the Veena and Venu.
The string instrument (Tantri-Vadya) Veena was played during religious ceremonies like Yajnas, and for relaxation by the ladies of the Queen's court – vīṇā-paṇava-veṇūnāṃ svanaś cāti manoramaḥ / prahāsa iva vistīrṇaḥ śuśruve tasya veśmanaḥ – 12,053.005.

In Drona-parva, there are references to drum-class instruments such as the Mridanga, Jharjhara, Bheri, Panava, Anaka, Gomukha, Adambara, and Dundubhi (paṇavānaka-dundubhi-jharjhar-ibhiḥ – 07,014.037). And in Virata-parva, there is a reference to the Kansya (a solid brass instrument), the cymbal, as well as to the Shankha (conch) and Venu (flute), the wind instruments (Sushira-Vadyas). Gomukha was perhaps a cow-faced horn or trumpet – śaṅkhāś ca bheryaś ca gomukhā-ḍambarās tathā – 04,067.026.

The known musical instruments of the Mahabharata period could thus be grouped under the four-fold classification mentioned above (Tata, Vitata, Ghana and Sushira).]

Continued in Part Six – Gandharva or Marga Music
Astrology (jotisattha or nakkhattavijjā) is the belief that the position of the planets and the stars in relation to each other has a significant influence on human character and destiny. This belief contradicts the most basic Buddhist doctrine that our character and destiny are the result of the sum total of our intentional thoughts, speech and actions, i.e. our kamma. The Buddha said monks and nuns must not practise astrology or any other form of fortune-telling, all of which he called ‘base arts’ (tiracchānavijjā, D.I,8). In the Jātaka he tells a story which highlights the foolishness of relying on astrological predictions (Ja.I,258). Logically, all astrological calculations up to 1930 must have been incorrect, because they did not allow for the planet Pluto, which was only discovered in that year, and all calculations since 1930 must also have been incorrect because they did not account for the new planet discovered in 2005. Ironically, many people in Buddhist countries believe in astrology, and monks commonly make astrological predictions.

Astrology consists of belief systems which hold that there is a relationship between astronomical phenomena and events in the human world. In the West, astrology most often consists of a system of horoscopes that claim to explain aspects of a person's personality and predict future events in their life based on the positions of the sun, moon, and other planetary objects at the time of their birth. Many cultures have attached importance to astronomical events, and the Indians, Chinese, and Mayans developed elaborate systems for predicting terrestrial events from celestial observations.

Among Indo-European peoples, astrology has been dated to the 3rd millennium BCE, with roots in calendrical systems used to predict seasonal shifts and to interpret celestial cycles as signs of divine communications. Through most of its history, astrology was considered a scholarly tradition. It was accepted in political and academic contexts, and was connected with other studies, such as astronomy, alchemy, meteorology, and medicine. At the end of the 17th century, new scientific concepts in astronomy and physics (such as heliocentrism and Newtonian mechanics) called astrology into question, and subsequent controlled studies failed to confirm its predictive value. Astrology thus lost its academic and theoretical standing, and common belief in astrology has largely declined.
Astrology has been rejected by the scientific community as having no explanatory power for describing the universe. Scientific testing of astrology has been conducted, and no evidence has been found to support any of the premises or purported effects outlined in astrological traditions. Where astrology has made falsifiable predictions, it has been falsified. There is no proposed mechanism of action by which the positions and motions of stars and planets could affect people and events on Earth that does not contradict well understood, basic aspects of biology and physics.

The word astrology comes from the early Latin word astrologia, deriving from the Greek noun ἀστρολογία, 'account of the stars'. Astrologia later passed into meaning 'star-divination', with astronomy used for the scientific term.

Principles and practice

Advocates have defined astrology as a symbolic language, an art form, a science, and a method of divination. Although most cultural systems of astrology share common roots in ancient philosophies that influenced each other, many have unique methodologies which differ from those developed in the West. These include Hindu astrology (also known as "Indian astrology" and in modern times referred to as "Vedic astrology") and Chinese astrology, both of which have influenced the world's cultural history.

Western astrology is a form of divination based on the construction of a horoscope for an exact moment, such as a person's birth. It uses the tropical zodiac, which is aligned to the equinoctial points. Western astrology is founded on the movements and relative positions of celestial bodies such as the Sun, Moon, and planets, which are analyzed by their movement through signs of the zodiac (spatial divisions of the ecliptic) and by their aspects (angles) relative to one another. They are also considered by their placement in houses (spatial divisions of the sky). Astrology's modern representation in Western popular media is usually reduced to sun sign astrology, which considers only the zodiac sign of the Sun at an individual's date of birth, and represents only 1/12 of the total chart. The names of the zodiac correspond to the names of the constellations originally within the respective segment and are in Latin.

Along with tarot divination, astrology is one of the core studies of Western esotericism, and as such has influenced systems of magical belief not only among Western esotericists and Hermeticists, but also belief systems such as Wicca that have borrowed from or been influenced by the Western esoteric tradition. Tanya Luhrmann has said that "all magicians know something about astrology," and refers to a table of correspondences in Starhawk's The Spiral Dance, organized by planet, as an example of the astrological lore studied by magicians.

Indian and South Asian

Hindu astrology originated with western astrology. In the earliest Indian astronomy texts, the year was believed to be 360 days long, similar to that of Babylonian astrology, but the rest of the early astrological system bears little resemblance. Later, the Indian techniques were augmented with some of the Babylonian techniques.

Chinese and East Asian

Chinese astrology has a close relation with Chinese philosophy (the theory of the three harmonies: heaven, earth and man) and uses concepts such as yin and yang, the Five Phases, the 10 Celestial Stems, the 12 Earthly Branches, and shichen (時辰, a form of timekeeping used for religious purposes).
The early use of Chinese astrology was mainly confined to political astrology, the observation of unusual phenomena, identification of portents and the selection of auspicious days for events and decisions. The constellations of the Zodiac of western Asia and Europe were not used; instead the sky is divided into Three Enclosures (三垣 sān yuán), and Twenty-eight Mansions (二十八宿 èrshíbā xiù) in twelve Ci (十二次). The Chinese zodiac of twelve animal signs is said to represent twelve different types of personality. It is based on cycles of years, lunar months, and two-hour periods of the day (the shichen). The zodiac traditionally begins with the sign of the Rat, and the cycle proceeds through 11 other animals signs: Complex systems of predicting fate and destiny based on one's birthday, birth season, and birth hours, such as ziping and Zi Wei Dou Shu (simplified Chinese: 紫微斗数; traditional Chinese: 紫微斗數; pinyin: zǐwēidǒushù) are still used regularly in modern day Chinese astrology. They do not rely on direct observations of the stars. The Korean zodiac is identical to the Chinese one. The Vietnamese zodiac is almost identical to Chinese zodiac except the second animal is the Water Buffalo instead of the Ox, and the fourth animal is the Cat instead of the Rabbit. The Japanese zodiac includes the Wild Boar instead of the Pig, and the Japanese have since 1873 celebrated the beginning of the new year on the 1st of January as per the Gregorian Calendar. The Thai zodiac includes a Naga in place of the Dragon and begins, not at Chinese New Year, but either on the first day of fifth month in the Thai lunar calendar, or during the Songkran festival (now celebrated every 13–15 April), depending on the purpose of the use. Astrology, in its broadest sense, is the search for meaning in the sky. It has therefore been argued that astrology began as a study as soon as human beings made conscious attempts to measure, record, and predict seasonal changes by reference to astronomical cycles. Early evidence of such practices appears as markings on bones and cave walls, which show that lunar cycles were being noted as early as 25,000 years ago; the first step towards recording the Moon’s influence upon tides and rivers, and towards organizing a communal calendar. Agricultural needs were also met by increasing knowledge of constellations, whose appearances change with the seasons, allowing the rising of particular star-groups to herald annual floods or seasonal activities. By the 3rd millennium BCE, widespread civilizations had developed sophisticated awareness of celestial cycles, and are believed to have consciously oriented their temples to create alignment with the heliacal risings of the stars. There is scattered evidence to suggest that the oldest known astrological references are copies of texts made during this period. Two, from the Venus tablet of Ammisaduqa (compiled in Babylon around 1700 BCE) are reported to have been made during the reign of king Sargon of Akkad (2334–2279 BCE). Another, showing an early use of electional astrology, is ascribed to the reign of the Sumerian ruler Gudea of Lagash (c. 2144 – 2124 BCE). This describes how the gods revealed to him in a dream the constellations that would be most favorable for the planned construction of a temple . However, there is controversy about whether they were genuinely recorded at the time or merely ascribed to ancient rulers by posterity. 
The oldest undisputed evidence of the use of astrology as an integrated system of knowledge is therefore attributed to the records of the first dynasty of Mesopotamia (1950–1651 BCE). The system of Chinese astrology was elaborated during the Zhou dynasty (1046–256 BCE) and flourished during the Han Dynasty (2nd century BCE to 2nd century CE), during which all the familiar elements of traditional Chinese culture – the Yin-Yang philosophy, theory of the five elements, Heaven and Earth, Confucian morality – were brought together to formalise the philosophical principles of Chinese medicine and divination, Medieval Islamic world Astrology was taken up by Islamic scholars following the collapse of Alexandria to the Arabs in the 7th century, and the founding of the Abbasid empire in the 8th. The second Abbasid caliph, Al Mansur (754–775) founded the city of Baghdad to act as a centre of learning, and included in its design a library-translation centre known as Bayt al-Hikma ‘Storehouse of Wisdom’, which continued to receive development from his heirs and was to provide a major impetus for Arabic-Persian translations of Hellenistic astrological texts. The early translators included Mashallah, who helped to elect the time for the foundation of Baghdad, and Sahl ibn Bishr, (a.k.a. Zael), whose texts were directly influential upon later European astrologers such as Guido Bonatti in the 13th century, and William Lilly in the 17th century. Knowledge of Arabic texts started to become imported into Europe during the Latin translations of the 12th century, the effect of which was to help initiate the European Renaissance. By the 17th century, in England, astrology had reached its zenith. Astrologers were theorists, researchers, and social engineers, as well as providing individual advice to everyone from monarchs downwards. Among other things, astrologers could advise on the best time to take a journey or harvest a crop, diagnose and prescribe for physical or mental illnesses, and predict natural disasters. This underpinned a system in which everything - people, the world, the universe - was understood to be interconnected, and astrology co-existed happily with religion, magic and science. Astrology saw a popular revival starting in the 19th century as part of a general revival of spiritualism and later New Age philosophy , and through the influence of mass media such as newspaper horoscopes and astrology software. Early in the 20th century psychologist Carl Jung developed some concepts concerning astrology, which led to the development of psychological astrology. In the West there have been occasional reports of political leaders consulting astrologers. Louis de Wohl worked as an astrologer for the British intelligence agency MI5, after it was claimed that Hitler used astrology to time his actions. The War Office was "interested to know what Hitler's own astrologers would be telling him from week to week". In fact de Wohl's predictions were so inaccurate that he was soon labelled a "complete charlatan" and it was later shown that Hitler considered astrology to be "complete nonsense". After John Hinckley's attempted assassination of U.S. President Ronald Reagan, first lady Nancy Reagan commissioned astrologer Joan Quigley to act as the secret White House astrologer. However, Quigley's role ended in 1988 when it became public through the memoirs of former chief of staff, Donald Regan. 
In India, there is a long-established and widespread belief in astrology. It is commonly used for daily life, particularly in matters concerning marriage and career, and makes extensive use of electional, horary and karmic astrology. Indian politics has also been influenced by astrology. It remains considered a branch of the Vedanga. In 2001, Indian scientists and politicians debated and critiqued a proposal to use state money to fund research into astrology, resulting in permission for Indian universities to offer courses in Vedic astrology. In February 2011, the Bombay High Court reaffirmed astrology's standing in India when it dismissed a case which had challenged its status as a science.

[Figure: Birth and death rates of Japan since 1950, showing the sudden drop in births during the hinoeuma year (1966).]

In Japan, a strong belief in astrology has led to dramatic changes in the fertility rate and the number of abortions in the years of the "Fire Horse". Women born in hinoeuma years are believed to be unmarriageable and to bring bad luck to their father or husband. In 1966, the number of babies born in Japan dropped by over 25% as parents tried to avoid the stigma of having a daughter born in the hinoeuma year.

Astrology has not demonstrated its effectiveness in controlled studies and has no scientific validity, and as such is regarded as pseudoscience. The majority of professional astrologers rely on performing astrology-based personality tests and making relevant predictions about the remunerator's future. Those who continue to have faith in astrology have been characterized as doing so "in spite of the fact that there is no verified scientific basis for their beliefs, and indeed that there is strong evidence to the contrary". Astrophysicist Neil deGrasse Tyson commented on astrological belief, saying that "part of knowing how to think is knowing how the laws of nature shape the world around us. Without that knowledge, without that capacity to think, you can easily become a victim of people who seek to take advantage of you".

The former astrologer and scientist Geoffrey Dean and psychologist Ivan Kelly conducted a large-scale scientific test involving more than one hundred cognitive, behavioral, physical and other variables, but found no support for astrology. Furthermore, a meta-analysis was conducted pooling 40 studies consisting of 700 astrologers and over 1,000 birth charts. Ten of the tests, which had a total of 300 participants, involved subjects picking the correct chart interpretation out of a number of others which were not the astrologically correct chart interpretation (usually 3 to 5 others). When the date and other obvious clues were removed, no significant results were found to suggest there was any preferred chart. A further test involved 45 confident astrologers, with an average of 10 years' experience, and 160 participants (out of an original sample size of 1,198 participants) who strongly favored certain characteristics in the Eysenck Personality Questionnaire to extremes. The astrologers performed much worse than merely basing decisions on the individuals' ages, and much worse than 45 control subjects who did not use birth charts at all.

Science and non-science are often distinguished by the criterion of falsifiability. The criterion was first proposed by philosopher of science Karl Popper. To Popper, science does not rely on induction; instead, scientific investigations are inherently attempts to falsify existing theories through novel tests.
If a single test fails, then the theory is falsified. Therefore, any test of a scientific theory must prohibit certain results which will falsify the theory, and expect other specific results which will be consistent with the theory. Using this criterion of falsifiability, astrology is a pseudoscience. Popper regarded astrology as "pseudo-empirical" in that "it appeals to observation and experiment", but "nevertheless does not come up to scientific standards". In 1953, sociologist Theodor W. Adorno conducted a study of the astrology column of a Los Angeles newspaper as part of a project examining mass culture in capitalist society. Adorno concluded that astrology was a large-scale manifestation of systematic irrationalism, where individuals were subtly being led to believe that the author of the column was addressing them directly through the use of flattery and vague generalizations. Some of the practices of astrology were contested on theological grounds by medieval Muslim astronomers such as Al-Farabi (Alpharabius), Ibn al-Haytham (Alhazen) and Avicenna. They said that the methods of astrologers conflicted with orthodox religious views of Islamic scholars through the suggestion that the Will of God can be known and predicted in advance. For example, Avicenna’s 'Refutation against astrology' Risāla fī ibṭāl aḥkām al-nojūm, argues against the practice of astrology while supporting the principle of planets acting as the agents of divine causation which express God's absolute power over creation. Avicenna considered that the movement of the planets influenced life on earth in a deterministic way, but argued against the capability of determining the exact influence of the stars. In essence, Avicenna did not refute the essential dogma of astrology, but denied our ability to understand it to the extent that precise and fatalistic predictions could be made from it. Ibn Qayyim Al-Jawziyya (1292–1350), in his Miftah Dar al-SaCadah, also used physical arguments in astronomy to question the practice of judicial astrology. He recognized that the stars are much larger than the planets, and argued: And if you astrologers answer that it is precisely because of this distance and smallness that their influences are negligible, then why is it that you claim a great influence for the smallest heavenly body, Mercury? Why is it that you have given an influence to al-Ra's and al-Dhanab, which are two imaginary points [ascending and descending nodes) —Ibn Qayyim Al-Jawziyya Belief in astrology is incompatible with Catholic beliefs such as free will. According to the Catechism of the Catholic Church: All forms of divination are to be rejected: recourse to Satan or demons, conjuring up the dead or other practices falsely supposed to "unveil" the future. Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums all conceal a desire for power over time, history, and, in the last analysis, other human beings, as well as a wish to conciliate hidden powers. They contradict the honor, respect, and loving fear that we owe to God alone. —Catechism of the Catholic Church St. Augustine believed that astrology conflicted with church doctrine, but he grounded his opposition with non-theological reasons such as the failure of astrology to explain twins who behave differently although are conceived at the same moment and born at approximately the same time.
The 1936 Olympics have become a mere footnote in history, remembered mostly for the heroics of Jesse Owens. The events that followed in Germany, namely the Holocaust and World War II, overshadowed the Berlin games. However, it is very important to note that a world gathering like the Olympics could take place in a country that was in the process of eliminating an entire race of people. These games were used by the Nazis as a huge propaganda effort, a way for Germany to show the rest of the world that it had again become a powerful nation under the leadership of Adolf Hitler. In this regard the games were a huge success: the Nazi regime was able to fool the world and to prove to Germany that it was everything the Nazis had said. But did the Olympic Games have any effect on the chain of events that led up to the Holocaust and World War II?

Germans became quite obsessed with sport in the 1870s following the end of the Napoleonic wars in Europe. Friedrich Ludwig Jahn popularized gymnastics, which became a staple of the German education system. At this time gymnastics was not the sport we think of today, but rather a display of mass strength meant to promote national unity in the newly formed Germany at the end of the nineteenth century. These ideas were very popular, and every German youth was required to participate in them as part of their education. Along with promoting sporting programs in schools, the Germans played a major role in the reinstatement of the Olympics. Men like Jahn and Ernst Curtius went around the country giving speeches on the subject. The goal was to create a powerful state like that of the old Greeks, and the holding of annual athletic Olympics was a big part of this idea. With the help of the Germans, as well as many other European nations, the Olympics were reinstated in 1896, with the first Olympics being held in Athens, Greece.

The Germans waited patiently and were extremely happy when they were awarded the VIth Olympiad, scheduled to take place in Berlin in 1916. By the time 1916 arrived most of Europe was involved in the "Great War", which was entirely blamed on Germany, and these games were canceled, to the great disappointment of the German sports officials. During the next three Olympics – Antwerp, Belgium in 1920; Paris in 1924; and Amsterdam in 1928 – the Germans were not even invited to compete. During this time Germany's sports program was almost non-existent; the only countries they competed with were their World War One allies, and this only sparingly.

During this time the Weimar Republic was beginning to rebuild itself in the eyes of the world, and when the International Olympic Committee met in 1933 to decide who would be granted the 1936 games, it had only two proposals, one from Spain and one from Germany. At this time most of the world was mired in a deep depression, and Germany was more confident about its economic situation than the rest of the world because many of the National Socialists' plans were working. The main reason the Germans were awarded the games was that they already had most of the buildings and equipment built from their preparations for the 1916 Olympics. The IOC was confident that the Germans would be able to put on the games financially. Just months after the games were awarded to Berlin, Hitler and the Nazi party began their astonishing political ascent in Germany. In July, just two months after the IOC met, the Nazi party became the largest party in the Reichstag.
In January of the following year Hitler was named Chancellor of Germany, as the leader of the largest party in the Reichstag. On February 27th, 1933, just a month after he became Chancellor, the Reichstag burned down. In March he got the legislature to pass the Enabling Act, which suspended the Weimar Constitution and the personal liberties it guaranteed for four years. This set the table for his dictatorship, which he gained on August 2nd, 1934, when President Hindenburg died.

Anti-Semitism was rising in Germany even before the death of Hindenburg. Jewish people were already having their houses, apartments and synagogues ransacked. Anti-Jewish publications like “Der Stürmer” were very popular throughout Germany, and Nazi propaganda from Joseph Goebbels calling for the mistreatment of Jews was prevalent. When Hitler became dictator, the anti-Semitism was escalated by the Nazis; Jews had their citizenship taken away from them when the Nuremberg Laws were passed in November 1935. These laws said that “a Reich citizen is only that subject of German or kindred blood who proves by his conduct that he is willing and suited loyally to serve the German People and the Reich.” This law took civil liberties away from many Jews in Germany, including athletes.

After the enactment of the Nuremberg Laws, the Reichssportführer, Captain Hans von Tschammer und Osten, gave this order to all German athletic clubs and associations: “Anyone who sets himself up as a defender of Jewry no longer has any place in our associations. Every personal contact with Jews is to be avoided. There is absolutely nothing for any Jew in German men's associations. Let us take as our example the heroic struggle that Julius Streicher, the Gauleiter of Franconia, has been waging for many years against the Jews. We too, with our societies, must help him on to final victory. It is the obvious duty of our associations to give the defense movement against Jewry our energetic support.”

As a result of this statement, no Jewish people or Mischlings were allowed to compete on German sports teams. This led to the dumping of some very good talent. Alex Natan, Germany's fastest sprinter, defected to Great Britain. Dr. Daniel Prenn, Germany's best tennis player, was kicked off the Davis Cup team by the German Lawn Tennis Association when it announced that no “non-Aryans” would be allowed to compete. The most celebrated example of this discrimination was Helene Mayer, who was born to a Christian mother and a Jewish father, making her a Mischling under the Nuremberg Laws. Mayer was the most famous female fencer in the world when she was expelled from her fencing club and told she would not be allowed to compete in the upcoming Olympics.

Even though there was a lot of racial violence going on in Germany, there was no serious moral outcry for a boycott from most of the nations. Tentative movements for boycotting the Berlin games occurred in Sweden, the Netherlands, and Czechoslovakia. In the end, though, the only country to boycott the games was Ireland. In Great Britain, Walter Maclennon wrote a pamphlet called “Under the Heel of Hitler: The Dictatorship over Sport in Nazi Germany”, in which he called for protests against the nazification of German sport. Maclennon was right that sports were under the regulation of the Nazi party, but his outcry gained very little support in Great Britain, and there was never a real push for Britain to boycott the games.
For the most part Britain took the view stated by Lord Aberdeen, who said, “Britain should have no problem attending the Berlin Olympics, since it had so few Jewish citizens.” Maclennon was absolutely right, though: sports in the Reich had been taken over completely by the Nazi Party, with Reichssportführer Tschammer und Osten in control. The state would even pay women to have the children of great German athletes. The foreword of the ‘Proficiency Book for German Youths’ outlined the role athletics would take in Nazi Germany: “Physical training is not the private concern of the individual. The National Socialist movement orders every German to place his whole self at its service. Your body belongs to your country, since it is to your country that you owe your existence. You are responsible to your country for your body. Fulfil the demands of this manual, and you will fulfil your duty to the German people.”

In Germany this meant that if you showed talent in a specific sport you would be shipped off to its training location, where you would live and train, much like what the Soviet Union did in the second half of the twentieth century. Sport was clearly a part of the Nazis' plans for creating a strong nation and promoting Aryan domination. The Berlin games would be the pinnacle of this plan. But whether the whole world would be there was, at times, a huge question mark.

America saw the biggest movement for a boycott of the games. Both the Amateur Athletic Union and the American Olympic Committee were upset at the fact that Jews were not allowed to compete for a spot on the Olympic squad. No mention was ever made of the treatment of ordinary Jews by the Nazi party. This is because many of the Nuremberg laws copied the old Jim Crow laws of the South, which were still in effect at that time. However, blacks were allowed to compete for a spot on the American Olympic team, so we were able to speak about this. Germany, which knew its Olympics could not be a success without America, quickly gave in to our wishes and invited 21 Jewish athletes to its Olympic training facilities. Of these only one made the team, the previously mentioned fencer Helene Mayer. It is interesting to note that the Nazis took the Mischling tag off her, saying in German papers that they had made a mistake earlier and that in fact she was a full Aryan. What they said was that her Christian mother had had an affair with a Christian man while married to her Jewish father. So, in actuality, the German team had no Jews on it.

In any case this appeased the Americans, who had sent representatives to Germany to examine the situation. Among the Americans sent to Germany was General Sherrill, a member of both the American and International Olympic Committees. When he came back from his trip he gave this statement to the AAU and the AOC: “I went to Germany for the purpose of getting at least one Jew on the German Olympic team and I feel that my job is finished. As for obstacles placed in the way of Jewish athletes or any others in trying to reach Olympic ability, I would have no more business discussing that in Germany than if the Germans attempted to discuss the Negro situation in America.” Again we see the excuse from America that we have no right to speak out when we are doing the same, an excuse that would be used throughout the 1930s when discussing the Jewish situation in Germany. Another member of the AOC took a far more racist view: “Germans are not discriminating against Jews in their Olympic tryouts.
The Jews are eliminated because they are not good enough as athletes. Why, there are not a dozen Jews in the world of Olympic caliber.” This statement shows the other prevalent view in America: the same racism directed against the Jewish community existed both in Germany and in America.

Avery Brundage, who was the president of the AAU, addressed a crowd at a German-American Day rally at Madison Square Garden after he returned from Germany. In his statement to the crowd he said, “We can learn much from Germany. We too, if we are to preserve our institutions, must stamp out communism. Germany has progressed as a nation out of her discouragement of five years ago into a spirit of confidence in herself. No country since ancient Greece has displayed a more truly national interest in the Olympic spirit than you find in Germany today.” This was the deathblow for the boycott movement in America. Brundage was the most powerful and respected official on the amateur athletic scene, both in America and in the world, and when he delivered this speech he effectively made the decision that America would not boycott. As a result of the reports brought back by its members, the AAU and the AOC voted not to boycott the games by an almost unanimous decision.

Though crippled by the decisions of the AAU and the AOC, the boycott movement was kept alive by the Jewish community around the world, which had the full support of the NAACP; but neither black nor Jewish organizations had any real power to keep their athletes from going to Berlin to compete. All they could do was speak their minds, which they did. The Governor of New York, Al Smith, said, “Germany's pagan putsch makes its acceptance of the real Olympic oath either an impossibility or a hypocrisy.” Also, the Maccabi World Union, an international organization of Jewish sporting clubs, gave this heartfelt plea to the Jews of the world: “We cannot as Jews accept lightly the situation created by the Olympic games being held in Berlin. I, in common with all other Jews and many non-Jews, look upon the state of affairs in Germany from the point of view of general humanity and social decency. We certainly do urge all Jewish sportsmen, for their own self respect, to refrain from competing in a country where they are discriminated against as a race and our Jewish brethren are treated with unexampled brutality.”

A few famous Jewish athletes did skip the games in Berlin. Judith Deutsch, an Austrian swimmer who had won gold in Los Angeles in 1932, and Albert Wolff, a French fencer who had also won gold at the L.A. Olympics, both refused to compete in the Berlin games. These Jews, however, were the minority. Most Jewish athletes did not have the luxury of already having fulfilled their dreams as these two did. On the American team there were five Jewish athletes: Sam Stoller and Marty Glickman, who were members of the 400-meter relay team (along with Jesse Owens); David Mayer, a weightlifter; Sam Balter, a basketball player; and Huyman Goldberg, a baseball player. It is easy to look at these individuals, as well as all the other Jews who participated in the games, as traitors to their heritage, which is how they were labeled by many Jewish organizations. But they had no idea of the events that would take place in Germany after the games concluded.
This was the lifelong dream of these men, and to place blame on them for fulfilling their athletic destiny is not fair. They proudly represented both America and their Jewish brethren on athletics' ultimate stage.

When the time finally came, the Germans welcomed their guests with open arms. They went to any extreme necessary to give each country a lavish welcome, even if that meant waiting by a dock at 2 a.m. to do so. Over thirty million dollars was spent by Germany on its Olympic games, three times as much as had been spent on any other Olympics to date (L.A. 1932, at 10 million, was the previous high). The main stadium, the Reichssportfeld, was the biggest in the world, with a seating capacity of 110,000. With all the money spent, the Nazis had created exactly what they had planned: the greatest athletic spectacle ever.

The Nazi preparations did not stop at the sports arenas; they went to great lengths to clean up the city and eliminate all signs of anti-Semitism from it. All known criminals were rounded up around the city and jailed for the duration of the games. “Der Stürmer” and other anti-Semitic publications were removed from newsstands, signs barring Jews from buildings were removed, anti-Semitic graffiti was scrubbed off city walls, and Jew-baiting was ordered to cease by Himmler. Finally, Der Angriff, a Nazi journal, instructed its readers, “We must be more charming than the Parisians, more easy-going than the Viennese, more vivacious than the Romans, more cosmopolitan than London, and more practical than New York.” At a rally the night before the Olympics, propaganda minister Goebbels announced to a crowd of Nazi supporters, “Every one of you must be a good host; the future of the Reich will depend upon the impressions that are left upon our guests.” Berlin was truly cleaned up for the occasion, with the sole intention of convincing Germans and their foreign guests that National Socialism was a success and that Germany was once again a powerful nation, and they succeeded.

The propaganda of Goebbels did not stop with the people of Germany. In a speech given at the pre-games press meeting Goebbels pronounced, “If all unavoidable differences of opinion were to be fought out with clean and decent weapons of the mind and with proper respect for the other's honest conviction, there would be created in political debate the same atmosphere that has become natural in respect to sporting events.” All the hard work by the Nazi party and the city of Berlin paid off. They truly had the world fooled, especially the sporting press, which painted a wonderful picture of the 1936 Berlin games, starting with its reply to the comments of Goebbels: “Jewish sentiment and all German opposition toward the Nazi regime are perforce silent in Germany. But if they could be heard they would undoubtedly be unanimous for Dr. Goebbels' idea, and eager for him to put it into effect as soon as possible.”

Unfortunately, this was just another falsehood from the Nazi propaganda machine. But it does set the scene for the show the Nazis would put on for the world over the next two weeks, starting on August first with the opening ceremonies. The opening ceremonies were a lavish, well-orchestrated event that focused on the Führer. A blond-haired, blue-eyed marathon runner was the last of the 3,000 Olympic torch bearers.
“Bearing the flame high above his head in a silver torch, a tall blond runner raced swiftly through the stadium today; with even glides and perfect grace he dipped the flame before Adolf Hitler and sprinted up the stairs to the Olympic altar.” After the lighting of the flame, to the cheers of a roaring crowd, Adolf Hitler pronounced, “I declare the games of Berlin in celebration of the 11th Olympiad of modern times have opened.” Many of the nations gave Hitler the Nazi salute as they passed by in the flag procession, among them France and Canada. The Americans simply doffed their hats to him, which caused them to be booed by the 110,000 fans. This, however, was the only non-roar heard out of the crowd; this was a very successful, large-scale Nazi rally that proved to the world that the Germans were once again capable of being a world power.

It was Hitler's full sponsorship of the games that made them such a huge success. Both Hitler and Goebbels viewed the games “as a grand opportunity to raise gigantic monuments and to stage civically beneficial pomp and ceremony”. Hitler remained in his seat throughout the two weeks of the games, always flanked by both Goebbels and Himmler. Parties were thrown every night by Nazi leaders to prove how powerful and majestic Germany had become since the Nazis took over political control. In American papers you could not find a bad word spoken about Adolf Hitler; in fact he was referred to as the Caesar of our generation by the Associated Press.

After the opening ceremonies the games themselves were almost anticlimactic. The Americans surged out early with huge wins in track and field. The hero of the games was Jesse Owens, who won four gold medals and gave the Nazi theory of Aryan domination a slap in the face; American papers proclaimed his feats the next day. After the track and field events the Americans held a lead of 95 points. As the games continued, though, the lead narrowed; at the end of competition the Germans had pulled off a huge upset by defeating the Americans by 57 points in the final point tally. At the end of the games, with their victory in hand, the German crowd shouted “Sieg Heil unser Führer Adolf Hitler Sieg Heil” after he proclaimed the XIth Olympiad over. The next day every German newspaper read “we won”. Aside from the German victory, Italy scored more points than France and took third, and Japan scored more points than Great Britain for the first time in Olympic history.

As a result of these outcomes, high praise of totalitarianism was also seen in all of the German newspapers: “The preparations rested on the totality of the nationalist art of government and its fundamental idea of the community of the whole people. The world stands in honest admiration before this work because it has totalitarian character. Without unitary will, that which today has astonished the world would have been impossible. It is the supreme achievement of the totalitarian state.” American sportswriters were also singing the praises of Germany. The New York Times reported, “at the conclusion of the games the Reich has more reason than ever to sit back and admire the athletic miracle that has just happened to them.” The victory also caused sportswriters to question the AAU and the AOC: “America must work out a new method of choosing and training its teams to meet the fiercely nationalistic feelings of a number of countries.” The Nazi Olympics were truly a huge worldwide success; the question then was what the Nazis would do with this success.
Things in Germany went back to normal almost immediately; the persecution of the Jews picked back up, heading on a course toward mass destruction. The Olympic flame had hardly grown cold when Hitler made this racist comment about the Americans at a Nazi rally: "The Americans should be ashamed of themselves for letting their medals be won by Negroes. I would never shake hands with one of them." Things in Germany were back to normal, only now Hitler and the Nazi party had an even firmer platform on which to stand. They had used the athletes of Germany to prove their theories about Aryan domination. The success of the eleventh Olympiad gave Hitler an enormous boost, both to the morale and to the political mood of Germany. The world had come to Berlin with doubts and left overwhelmed by the show it had just seen.

Hitler, however, was happy with the success of the Olympics only for so long, until his giant ego and visions of grandeur got the better of him. In the spring of 1937, Hitler announced that Germany would begin holding National Socialist sporting meets that were to be much like the original Olympics of the ancient Greeks. He then wrote this note to the IOC: "In 1940 the Olympic games will take place in Tokyo. But thereafter they will take place in Germany for all time to come, in this stadium. And then we will determine the measurements of the athletic field." The stadium Hitler was talking about was not the Berlin Reichssportfeld that had served as the main stadium for the Olympic games. Instead, he was speaking of the stadium designed by architect Albert Speer, who tried to warn Hitler that the stadium was an impossibility (Hitler would not listen). The stadium was to be called the Nuremberg Reichssportfeld and was to have a seating capacity of 400,000. The lofty plan failed; not one brick was ever laid for the stadium.

Obviously the Berlin Games were a huge success for Hitler and the Nazi party, but the question we have to ask is whether they in any way led to the horrific events in Germany over the next nine years. This is an impossible question to answer, because the Jewish persecutions already seemed to be headed down a horrifying road even before the games began. Yes, the Nazis were able to fool the world during these games and make it believe they were giving Jews as much freedom as the Americans were giving African Americans. However, even if the games had not been held there that year, the persecutions would have continued down the same path to the Holocaust. I do, however, believe that this was the world's one chance to stop the Nazis' actions against the Jews before they really got started. A serious boycott effort by the powerful nations of the world, like the United States, would have been damning to the Nazi effort, which needed the whole world to be there. A boycott would have crushed all of the Nazi plans, because it would have made them look bad in front of their own people and it would have made the games that year a disaster: thirty million dollars would have been wasted on an Olympics with no significant countries participating. That being said, the world can hardly be blamed for going to the Berlin Olympics, because who could have ever dreamed that within ten years these same gracious hosts would have murdered over six million Jews?
What is Just Intonation? Just intonation (hereinafter “JI”) is any system of tuning in which all of the intervals can be represented by ratios of whole numbers, with a strongly implied preference for the smallest numbers compatible with a given musical purpose. Unfortunately, this definition, while accurate, doesn't convey much to those who aren't already familiar with the art and science of tuning. The aesthetic experience of just intervals and chords, however, is unmistakable. The simple-ratio intervals upon which JI is based are what the human auditory system recognizes as consonance, if it ever has the opportunity to hear them in a musical context.1 The significance of whole-number ratios has been recognized by musicians around the world for at least 2,500 years. JI is not a particular scale, nor is it tied to any particular musical style. It is, rather, a set of principles that can be used to create a virtually infinite variety of intervals, scales, and chords that are applicable to any style of tonal music (or even, if you wish, to atonal styles). JI is not, however, simply a tool for improving the consonance of existing music; It is a gateway to a new and expanded palette of musical intervals and subtle distinctions hitherto unknown to Western composers. Ultimately, it is a method for understanding and navigating through the boundless reaches of the pitch continuum—a method that transcends the musical practices of any particular culture. In order to understand what “simple-ratio intervals” are and why they are musically significant, we must first examine some concepts of acoustics and psychoacoustics. When we speak of musical intervals, we are describing relations of pitch, a perceptual phenomenon, which is closely correlated with frequency, a physical phenomenon. Sounds that have definite pitch, i.e., those to which we can assign note names such as A, Bb, C#, and so on, correspond to periodic vibrations, waveforms that repeat at regular time intervals. A musical interval is a relation between two pitches, and hence, a relation between two periodic vibrations, which can be represented by numbers. Frequencies are measured in Hertz (abbreviated Hz), where 1Hz = one cycle per second. Periodic vibrations may be simple or pure tones (sine waves) but the tones of most musical instruments are composed of a number of simple tones (partials) that are whole-number multiples of the fundamental frequency of the tone (the lowest frequency component and the one that normally corresponds to the perceived pitch). This series of whole number multiples2 is known as the harmonic series. In other words, the harmonic series is the series of all integer (positive whole-number) multiples of some frequency f (f, 2f, 3f, 4f, 5f, …). Hence, the relationship between any two members of the series is an integer ratio. Figure 1 illustrates the first sixteen members of the series on the pitch C2, but what is significant is the pattern of intervals between the successive tones of the series, rather than the specific pitches of the series. Although the series is represented here in a variant of conventional staff notation, all of the intervals of the series, except the octave, deviate significantly from those of the equally tempered scale (or to put things in a more proper order, the intervals of the tempered scale deviate from those of the harmonic series). 
The precise deviations, expressed in cents, where 1 cent = 1/100 equal semitone or 1/1200 octave, are given in Table 1.

When the human auditory system encounters a group of pure tones with relative frequencies that correspond to low-numbered degrees of a harmonic series sounding simultaneously, it does not normally hear several separate tones with distinct pitches. Rather, it hears a single entity. This entity is perceived as a tone with a pitch corresponding to the repetition rate of the composite wave, which corresponds to that of the fundamental of the series. It is not necessary for the fundamental or one of its octave multiples to be among the tones actually being sounded for this response to occur, nor is it necessary for other conflicting tones to be absent. As few as two or three relatively low-numbered members of the series are sufficient to produce the sensation that the fundamental is being heard. The reason our auditory system is specially equipped to recognize the harmonic series should be obvious: all sustained, pitched sounds produced by the human voice consist of a portion of a harmonic series, with certain portions of the harmonic spectrum receiving a characteristic emphasis. The human auditory system is, of course, highly specialized for the recognition of human speech.

Although the harmonic series, like the series of integers, is theoretically infinite, the series of partials making up any complex musical tone is finite. The number of partials present in a given tone, their relative intensities, and the way their intensities vary over time are the primary determinants of the musical property known as timbre or tone color. The relative intensities of the different partials making up a given complex tone are referred to as the harmonic spectrum of that tone. As will be explained below, harmonic partials have a significant effect on the consonance of musical intervals, so the presence or absence of a particular partial in the spectrum of a given instrument will have an effect on the quality of certain intervals played on that instrument.

Coincident Or Beating Harmonics

The aspect of relationships between simultaneously sounded complex tones that has attracted the most attention among theorists is the coincidence of certain harmonics when pairs of complex tones are tuned in simple-ratio intervals, or conversely, the presence of beats resulting from the non-coincidence of these same partials when pairs of complex tones deviate from these simple intervals. Beats take place between two simple tones whose frequencies are near unison. These beats—regular variations in loudness—occur at a rate that is the difference in Hz between the two generating frequencies. Beats can be perceived clearly when the difference is less than 20–25Hz, but as the difference increases beyond this point the beats blend together, giving rise to a general sensation of roughness. This roughness gradually decreases as the difference increases, persisting until the difference exceeds an interval called the critical band, which, for most of the audio range, falls between a whole tone and a minor third. Beating will occur between the partials of complex tones when they fall in the near-unison range, as described above. When two complex tones with harmonic partials form a whole-number-ratio interval, the numbers of the ratio indicate the lowest pair of harmonics that will match between the two tones.
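The arithmetic behind Table 1 and the matching-harmonic rule is easy to verify in a few lines of code. The Python sketch below is illustrative only; the A220 reference frequency and the nearest-semitone deviation formula are my own choices, not taken from the article.

```python
import math

def cents(ratio: float) -> float:
    """Interval size in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

def deviation_from_12tet(ratio: float) -> float:
    """Offset, in cents, from the nearest equally tempered pitch."""
    return (cents(ratio) + 50) % 100 - 50

# Harmonics 1-16 above a fundamental, as in Figure 1 / Table 1.
for n in range(1, 17):
    print(f"harmonic {n:2d}: {cents(n):7.1f} cents ({deviation_from_12tet(n):+5.1f} from 12TET)")

def beat_rate(f_lower: float, f_upper: float, m: int, n: int) -> float:
    """Beats per second between the m-th harmonic of the lower tone and the
    n-th harmonic of the upper tone, the defining pair for an m:n interval."""
    return abs(m * f_lower - n * f_upper)

# A just perfect fifth (3:2) above A220 is beat-free; a slightly mistuned one is not.
print(beat_rate(220.0, 330.0, 3, 2))   # 0.0 Hz
print(beat_rate(220.0, 331.0, 3, 2))   # 2.0 Hz
```

The 31-cent shortfall printed for the seventh harmonic is the same deviation that reappears below in the discussion of the 7:4 interval.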
For any integer ratio of the form m:n, where m represents the higher tone, the mth harmonic of the lower tone and the nth harmonic of the higher tone will coincide. For example, in the case of the perfect fifth (ratio 3:2), the third harmonic of the lower tone matches the second harmonic of the higher tone. In the case of the perfect fourth (ratio 4:3), the fourth harmonic of the lower tone matches the third harmonic of the higher tone. Higher harmonics that are integer multiples of the lowest matching pair will, of course, also match: in the case of the perfect fifth, for example, 6:4, 9:6, 12:8, and so on. The simple-ratio forms of several of the principal consonances with their matching harmonics are illustrated in Figure 2. (The matching harmonics are represented by triangular noteheads joined by dotted horizontal lines.) In intervals that deviate from these simple-ratio forms, the mismatched harmonics form mistuned unisons, and beats are generated at a frequency corresponding to the difference between the frequencies of the harmonics in question. This fact provides an essential cue for tuning any of these simple consonances by ear on instruments producing harmonic or nearly harmonic partials. One simply listens for and eliminates the beats between the defining pair of harmonics.

Arthur H. Benade's "Special Relationships"

In his excellent book, The Fundamentals of Musical Acoustics3, physicist and flutist Arthur H. Benade describes an experiment that he frequently performed with his students, which gives an accurate picture of the role played by beats as cues for tuning simple-ratio intervals between sustained tones with harmonic partials. Benade used two audio oscillators that were constructed so as to produce tones with three or four exactly harmonic partials of appreciable strength. One oscillator is tuned to a fixed frequency somewhere in the range of 250–1,000Hz (C4–C6). A volunteer is then invited to tune the second oscillator up or down until he or she finds a setting that produces a "special relationship," which Benade describes as "a beat-free setting, narrowly confined between two restricted regions in which a wide variety of beats take place." According to Benade, experiments of this type consistently identified as special relationships those intervals listed in Table 2. As those who are familiar with theories of JI will recognize, the intervals in the table are precisely those that are almost universally regarded as consonances by advocates of JI. It is particularly notable that, in addition to intervals commonly regarded as consonances (albeit in their tempered versions) by conventional music theory, the table includes three ratios involving seven (7:4, 7:5, and 7:6) that are not recognized in conventional music theory. Indeed, 7:4 is in the vicinity of the tempered minor seventh (1,000 cents) and 7:5 is even closer to the tempered tritone (600 cents), both intervals that conventional theory identifies as dissonances. It is apparent, therefore, that the listeners are not simply picking out intervals that are familiar as consonances from their musical training but are really responding to the special physical properties of these particular whole-number-ratio intervals. Equally revealing is the reaction Benade got from his students when he retuned the variable oscillator from one of the special integer relationships to a nearby tempered interval normally accepted as a consonance.
When, for example, he substituted the tempered major third (400 cents, approximately 1.25992:1) for the just 5:4, all of the musicians in the room agreed that the interval was “an out of tune (sharp) major third.” His listeners “typically react with skepticism or dismay” when they are told that the out-of-tune interval they are hearing is a correctly tuned major third from the perspective of the culturally dominant system. “What,” they ask, “makes anyone think that those are acceptable tunings?” In addition to the absence of beating partials, there are other phenomena that distinguish simple-ratio intervals from all others. When two or more tones are sounded simultaneously with sufficient intensity, the nonlinear response of the ear may generate additional tones that are musically significant. These tones are known variously as combination tones, resultant tones, summation tones and difference tones, or intermodulation products. The most commonly heard of these tones is that known as the primary or first-order difference tone. For two tones with frequencies f1 and f2, the frequency of the first-order difference tone is (f2 – f1), where f2 is the higher frequency. Difference tones with the frequencies 2f1 – f2 or 3f1 – 2f2 may also sometimes be detected. Figure 3 illustrates the difference tones generated by the most important simple-ratio intervals. In every case, the difference tones correspond to lower degrees of a harmonic series to which the parent tones belong. In a sense, the difference tones identify the intervals as belonging to larger tonal sets. For example, the major third 5:4 (between middle C and E in the figure) generates as difference tones two Cs and a G, implying a strong C Major tonality. A third phenomenon, known variously as periodicity pitch, virtual pitch, subjective pitch, residue tone, or the missing fundamental, also reinforces the harmonic series identities of simple-ratio intervals. Unlike difference tones, which are produced by nonlinear responses in the middle and inner ear, periodicity pitch appears to be the result of higher-level neural processes. In many cases, the periodicity pitch of an interval is the same as its first-order difference tone. (Figure 4). The combination of the three properties just described, absence of beating harmonics and harmonically related difference tones and periodicity pitches, distinguish this small set of simple-ratio intervals from all other possible musical relationships. Although each interval is unique in its effect, all share common properties of clarity, purity, smoothness, and stability; these properties make them the foundation of JI. This is not to say that they are the only musically useful intervals in JI. On the contrary, adding, subtracting, and recombining these intervals in various combinations gives rise to large families of intervals that are essential for the construction of scales and melodic lines. A selection of significant intervals in 7-limit JI is shown in Table 3. How do the intervals of twelve-tone equal temperament (12TET), the predominant tuning system in the West over the past 250+ years, compare with the simple-ratio intervals I just described? The basic premise of temperament (any temperament) is that the number of pitches required to play in different keys can be reduced by compromising the tuning of certain tones so that they can perform different functions in different keys, whereas in JI a slightly different pitch would be required to perform each function. 
In other words, temperament compromises the quality of intervals and chords in the interest of simplifying instrument design, construction, and playing technique. 12TET takes advantage of the fact that the sum of twelve perfect fifths (3¹²:2¹²) is slightly greater than seven octaves (2⁷:1). The difference is the Pythagorean comma (531,441:524,288, about 23.5 cents). Each fifth in 12TET is flatted (narrowed) by 1/12 of the Pythagorean comma (approximately 1.96 cents), causing the fifths to form a closed circle. To achieve this, the starting frequency is multiplied twelve times in succession by the twelfth root of two (2^(1/12)), an irrational number that is approximately 1.05946. This ensures that none of the resulting intervals except the octave will be simple ratios. The problems with 12TET are not limited to its fifths being slightly narrow; chaining fifths, whether perfect or slightly flatted, does not result in good thirds or sixths. And, of course, 12TET, being a closed system, makes no provision for the admission or understanding of any additional intervals. It provides "acceptable" approximations of those intervals that most musicians in Europe c. 1750 considered useful and no more. Attempts by some twentieth-century composers and musicians to expand the resources of 12TET by subdividing it into smaller arbitrary intervals, such as quarter tones, third tones, etc., failed to solve its fundamental psychoacoustic problems.

Let us examine a few of the intervals of 12TET. As expected, the 12TET perfect fifth is not bad. In the case of the fifth between middle C and the G above, the third harmonic of the C and the second harmonic of the G beat at less than 1Hz (before the advent of electronic tuners, the standard method for tuning a piano in 12TET was to count the beats of the fifths with a stopwatch). The first- and second-order difference tones, (f2 – f1) and (2f1 – f2), are each less than one cent away from the C below middle C. The third-order difference tone is below the range of human hearing. The case of the perfect fourth is similar. If one wanted to make music in which only fifths, fourths, and octaves were consonant, 12TET would be adequate.

The 12TET major third fares much worse. At 400 cents, it is 13.68 cents wider than the just 5:4. In the case of the major third between middle C (261.6Hz) and the nearest E above it (329.6Hz), the fifth harmonic of the C and the fourth harmonic of the E beat at a frequency of approximately 10Hz. (The harmonics will beat at higher or lower frequencies as the pitches are transposed up or down.) The weaker tenth and eighth harmonics will beat at approximately 20Hz. The difference tones further complicate the picture. The first-order difference tone (f2 – f1) is 68Hz, closer to C# than C. The second-order difference tone (2f1 – f2) is 193.65Hz, a very flat G. The third-order difference tone (3f1 – 2f2) is 125.6Hz, about a third-tone above B. None of these difference tones, if present, reinforce the identities of either of the tones that comprise the interval or any other closely related tone. The quality of this interval, which is accepted by most listeners as one of the most important consonances in Western music, is murky and ambiguous when compared to the just 5:4. The minor third and the major and minor sixths yield similar results. Hence, 12TET is a very poor system for music that requires consonant thirds or triads (which is the purpose for which it is commonly used).
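Those figures for the major third are easy to reproduce. The sketch below is a minimal check, reusing the 261.6Hz middle C quoted above; everything else (variable names, the rounding) is mine. It compares the just 5:4 with the tempered third, prints the beat rate between the fifth harmonic of the C and the fourth harmonic of the E, and lists the first three difference tones for each tuning.

```python
import math

C4 = 261.6                       # Hz, middle C as used in the text
E_just = C4 * 5 / 4              # just major third, 5:4
E_tet = C4 * 2 ** (4 / 12)       # tempered major third, 400 cents

def difference_tones(f1: float, f2: float) -> tuple:
    """First-, second-, and third-order difference tones for a dyad (f1 < f2)."""
    return (f2 - f1, 2 * f1 - f2, 3 * f1 - 2 * f2)

# ~13.7 cents: how much wider the tempered third is than 5:4
print(1200 * math.log2(E_tet / E_just))

# ~10 Hz beat between the 5th harmonic of C and the 4th harmonic of the tempered E
print(abs(5 * C4 - 4 * E_tet))

# Just third: difference tones land on two Cs and a G (about 65.4, 196.2, 130.8 Hz)
print(difference_tones(C4, E_just))

# Tempered third: roughly 68, 193.6, and 125.6 Hz, none of which reinforce C or E
print(difference_tones(C4, E_tet))
```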
One would get even worse results in comparing the just harmonic seventh (7:4) with the 12TET minor seventh, but this does not seem like a fair comparison, since 12TET was never intended to approximate septimal, just intervals. Prime Numbers And Prime Limits Composers and theorists working in just intonation frequently classify intervals, scales, and chords in terms of prime numbers and “prime limits.” A prime number is an integer (positive whole number) that is evenly divisible only by itself and one. The prime numbers that have an obvious role in just intonation are 2, 3, 5, 7, 11, and 13, though some composers have used much higher primes. The highest prime number used as a factor in the ratios that comprise a scale or gamut is called its “prime limit.” Why are prime numbers important? Each prime number used in a just tuning spawns a family of intervals that can be generated by no other method. The first prime, 2, generates the octave and its multiples: Multiply or divide any frequency by 2 and powers of 2 as many times as you want and all you’ll get are octaves above or below your starting pitch. The octave is necessary, but it is just an empty frame. To add musically useful tones, you must use other prime numbers. The second prime, 3, in combination with 2, generates the perfect fifth (3:2) and perfect fourth (4:3). Making chains of fifths and/or fourths and “folding” them back into an octave (one of the oldest known tuning methods) can generate melodically useful scales, known as 3-limit or Pythagorean scales, but such scales lack consonant thirds and sixths and hence, consonant triads. To create consonant thirds and sixths requires the prime number 5. Combined with the previous two primes, this yields the major third (5:4), minor sixth (8:5), minor third (6:5), and major sixth (5:3). 5-limit just intonation is the closest to the conventional Western vision of harmony and melody, though with triads that are truly consonant and an expanded intervallic palette. The prime 7 adds three more consonant intervals, the subminor third (7:6), the septimal tritone (7:5), and the harmonic or subminor seventh (7:4), which lie outside of conventional Western music theory and practice. The next two primes, 11 and 13, yield intervals that are not easily classified, but are, in many cases, as remote as possible from the intervals of 12TET. Prime 11, in particular, yields melodic intervals such as neutral seconds and neutral thirds, which are more typical of Arabic, Persian, and Turkish scales. It is often useful to represent just tunings in the form of a lattice, such as that in Figure 5. Such a lattice normally uses one axis for each prime factor (2 is usually omitted, because each pitch is assumed to exist in all octaves). Unless one wants to draw hypercubes and the like, this technique is limited to tunings with three prime factors, excluding 2. The lattice in Figure 5 shows a portion of 7-limit tonal space centered (arbitrarily) around C 1/1. The horizontal axis represents the perfect fourth/fifth (prime 3), the vertical axis represents the major third/minor sixth (prime 5), and the diagonal axis represents the subminor seventh/supermajor second (prime 7). What about actual music? So far, we have been talking about the properties of musical intervals in isolation. This is much like talking about the properties of paint squeezed from a tube onto a palette. 
Musical intervals or painter's colors are not without aesthetic effect when encountered in isolation like this, but it requires the work of a skilled painter or composer to use these materials in a way that reveals their true significance. My first encounter with JI was via the music of Harry Partch (1901–1974). While a student at the Chouinard Art Institute in Los Angeles in 1969, when I was first beginning to think of composing, I saw two films involving Partch's music and instruments, Windsong and Music Studio, as part of a talk by Partch disciple Dean Drummond. Shortly thereafter, I purchased the Columbia LP The World of Harry Partch, which was one of the few commercial recordings of contemporary music in JI then available. At the time, I was more impressed by the timbres of Partch's instruments, the textures of his music, the visual beauty of the instruments, and the general air of the strange and exotic hovering over it all. I did not understand anything about JI theory beyond the fact that Partch's scale involved 43 tones and, of course, that this was not the conventional Western scale. For me, then, the initial experience of just intervals was indistinguishable from the experience of Partch's music. My current opinion is that, whatever its musical value, Partch's later music, such as I heard in 1969, is not the ideal introduction to JI. Whether one loves or hates Partch's work (and it usually elicits strong responses), much of it does not show the intervals of JI to good advantage, being dominated by rapid percussion figuration, complex inharmonic timbres, and microtonal glissandi. A simple consonance or consonant chord, sustained and unadorned, is seldom heard.

It was not until about five years later that I had a second and more revelatory encounter with JI. In the summer of 1975, I enrolled in a class called "Intonation in World Music," presented by Lou Harrison at the Center for World Music in Berkeley, California. In this course, I learned the fundamentals of JI, its relation to various musical cultures, and how to work with musical ratios. Equally important, I had the experience of JI in conjunction with a very different musical aesthetic. At the time, Lou and Bill Colvig were building their first American gamelan, later known as "Old Granddad" (the instruments used in the Suite for Violin and American Gamelan and La Koro Sutro). Students were invited to perform in a concert on these instruments at the end of the term, and several also composed for the ensemble (I did not). Old Granddad was tuned in a straightforward just D-major scale, about as different from Partch's scale as possible in the realm of JI. Several of the pieces presented were in a minimalist/proceduralist style, a compositional practice that interested me at that time. The simple intervals of 5-limit JI, heard in that context, were stunningly beautiful. That experience changed my life. I came away from that class and concert with both the tools necessary to understand JI and the germ of a compositional and instrument-building program that culminated in the creation of Other Music's American Gamelan in 1977–78.4

What is the point of this personal reminiscence? When I first became aware of JI and decided that I wanted to incorporate it into my music, examples of recorded music in JI were rare, and opportunities to hear it in performance were rarer. Books and articles on the subject were few in number and often difficult to understand.
The second edition of The Harvard Dictionary of Music (1969) described JI as "practically useless." And in order to work in JI, it was necessary to build or modify instruments, as Partch and, to a lesser extent, Harrison and Colvig had done, or to learn how to play in JI on an existing instrument (with few models, if any, to emulate). Things have changed enormously in the intervening 43 years. The first revolution came with the introduction of commercial synthesizers with user-programmable tuning. The first of these was the Sequential Prophet V, a digital/analog hybrid. Version 3 of this instrument, produced in the early 1980s, had the ability to store 12-tone tunings in patch memory. The Yamaha DX7 II digital FM synthesizer, introduced in 1983, brought this capability to a truly mass-market instrument, and many others followed. Anyone with an appropriate synth and a MIDI sequencer could begin exploring JI.5 It was these developments that motivated me and my colleagues to found The Just Intonation Network in 1984, so that those following this path would have access to accurate and understandable information. The second revolution was marked by the creation of the Scala file format for representing tunings and the availability of "soft" instruments ("plug-ins") that work in conjunction with a digital audio workstation (DAW) or MIDI sequencer. Many available soft instruments support the Scala format, making it even simpler for the novice microtonalist to explore and compare tunings on a properly equipped computer. An example of a Scala file for Lou Harrison's Incidental Music for Corneille's Cinna (1956) is shown in Figure 6.6 Even if one ultimately decides to commit to composing for acoustic instruments, these tools are invaluable for exploring and composing in different tunings. The advent of downloadable and streaming music services, though a bane to composers' and musicians' finances, allows the student to easily hear examples of almost any style of music imaginable by almost any composer. Thus, the paths now open to the beginner are both more numerous and less laborious than those I encountered in the 1970s.

How To (And How Not To) Compose In Just Intonation

There is no simple answer to the question "how do I go about composing in JI?" once you have acquired the basic resources needed to begin. There is no manual or course that will tell you how to do it. It is a question that individual composers must answer for themselves, depending on their musical taste and aesthetic preferences, and on how the music is ultimately to be performed. Lou Harrison, in his Music Primer, made a useful distinction: "After only a brief study of intervals it becomes clear that there are two ways of composing with them: 1) arranging them in a fixed mode, or gamut, & then composing within that structure. This is the Strict Style, & is the vastly predominant world method. However, another way is possible—2) to freely assemble, or compose, with whatever intervals one feels that he needs as he goes along. This is the Free Style…"7 I would add that there is a "middle way" (with apologies to Buddhists): begin with a fixed scale or mode and then add pitches as required for harmony, counterpoint, transposition, modulation, and so on. This is the approach I use in much of my work. Additionally, pieces composed within a sufficiently large gamut, such as Partch's 43-tone scale or Ben Johnston's hyperchromatic scales, may be indistinguishable from "free style" compositions.
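For readers who want to experiment along the lines sketched above, here is a minimal reader for Scala-style tuning data. It assumes the usual .scl conventions as I understand them (lines beginning with "!" are comments, followed by a description line, a note count, and one pitch per line, where a value containing a decimal point is read as cents and anything else as a ratio); the example scale is a generic 5-limit just major scale of my own, not the Cinna tuning shown in Figure 6.

```python
import math
from fractions import Fraction

def read_scl(text: str) -> list:
    """Tiny reader for Scala .scl data; returns scale degrees in cents.
    Assumes '!' comment lines, then a description line, a note count, and
    one pitch per line (cents if it contains '.', otherwise a ratio)."""
    lines = [ln.strip() for ln in text.splitlines()
             if ln.strip() and not ln.startswith("!")]
    count = int(lines[1])                 # lines[0] is the description line
    degrees = []
    for ln in lines[2:2 + count]:
        token = ln.split()[0]
        if "." in token:
            degrees.append(float(token))                       # cents value
        else:
            degrees.append(1200 * math.log2(Fraction(token)))  # ratio such as 3/2
    return degrees

example = """! just_major.scl (hypothetical example)
A 5-limit just major scale
7
9/8
5/4
4/3
3/2
5/3
15/8
2/1
"""
print([round(c, 1) for c in read_scl(example)])
# [203.9, 386.3, 498.0, 702.0, 884.4, 1088.3, 1200.0]
```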
The Strict Style, in addition to being the “vastly predominant world method,” is the method used in many works by composers including Lou Harrison himself, Terry Riley, La Monte Young, Michael Harrison, and many others. This approach is more or less necessary if one intends to compose for a fixed-pitch instrument, such as a retuned piano or harpsichord or a refretted guitar, or an ensemble such as the Harrison/Colvig American gamelan. (Composing in the free style is a subject for advanced students, and is beyond the scope of this article.) If you have some experience with composition, you already have some idea of what sorts of scales, modes, and harmonic progression interest you and what their expressive properties are. Try mapping different versions of these figures onto a JI lattice like that in Figure 5. (Learn to construct such lattices for yourself; they’re a vital tool for understanding JI.) You will discover that there are many different versions of familiar resources such as pentatonic or diatonic scales. In 12TET, there is only one whole tone and one semitone, which can be arranged in a limited number of ways to construct five- or seven-tone scales. (There are only eleven unique intervals smaller than an octave in 12TET, though they may, in some cases, be called by different names.) In contrast, JI offers a much larger variety of such intervals (see Table 3), which can be used in constructing scales that are at once familiar and novel. Different versions of familiar scales will have different expressive properties and tonal centricities. Once one has chosen a mode or gamut, what then? Take time to familiarize yourself with all of the tones and the intervals that connect them. By this I mean familiar both aurally and computationally. Which intervals feel stable and which are in need of resolution? What melodic or harmonic figures lead to or away from the stable intervals? When composing in 12TET, you never need ask yourself, “which Bb do I want here?” or “am I sure that I want a G# here rather than an Ab?” (and which G# or Ab?). In JI, such questions are meaningful and often critical. Referring to the lattice in Figure 5, you will see that there are three varieties or “flavors” of “Bb” in close proximity to C 1/1. B7b 7/4 is the harmonic seventh of C. Bb– 16/9 is a perfect fifth below F 4/3 and a 9:8 whole tone below C. Bb 9/5 is a minor third above G 3/2 and a smaller 10:9 whole tone below C. Any of these tones could be used melodically in relation to C, though with different effects. Harmonically, their uses are quite distinct. B7b 7/4 can be used to form a consonant dominant-seventh type chord with C, E, and G or a subminor triad with G and D. Bb 9/5 is the minor third of a minor triad on G 3/2. Bb– 16/9 is the root of a triad including D– 10/9 and F 4/3. All of these tones also perform other roles in tonalities more distant from C 1/1. One need not look far on the lattice to see many other pairs of tones near C 1/1 that are represented by a single tone in 12TET, such as D 9/8 and D– 10/9 or A 5/3 and A+ 27/16 or C 1/1 and C7+ 63/32. Recognizing such distinctions and understanding their melodic and harmonic implications is an essential aspect of understanding and working with just intonation. It is much easier to explain how not to compose in JI: Write a piece as you typically do in 12TET and then try to impose a just tuning after the fact. This will cause you no end of trouble.
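One concrete source of that trouble is the many-to-one mapping described above: several distinct just pitches collapse onto a single 12TET key. The short sketch below (mine, not the author's) prints the size in cents and the prime limit of the three "B-flats" discussed earlier, all of which 12TET renders as one pitch at 1000 cents.

```python
import math
from fractions import Fraction

def cents(r: Fraction) -> float:
    """Interval size in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(r)

def prime_limit(r: Fraction) -> int:
    """Largest prime factor appearing in the numerator or denominator."""
    n = r.numerator * r.denominator
    limit, p = 1, 2
    while n > 1:
        while n % p == 0:
            limit, n = p, n // p
        p += 1
    return limit

flavors = [("7/4  (harmonic seventh of C)", Fraction(7, 4)),
           ("16/9 (a 9:8 whole tone below C)", Fraction(16, 9)),
           ("9/5  (a minor third above G 3/2)", Fraction(9, 5))]
for name, r in flavors:
    print(f"{name:34s} {cents(r):7.1f} cents   {prime_limit(r)}-limit")
```

The three pitches differ from one another by roughly a comma, about 21 to 27 cents, which is why their melodic and harmonic roles remain distinct in JI even though 12TET cannot tell them apart.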
Chiro National Sorghum Research and Training Centre, P.O. Box 190, Chiro, Ethiopia

Maize (Zea mays L.) is the world's third most important cereal crop and has remarkable productive potential. The primary center of origin of maize is considered by most authorities to be Central America and Mexico, where many diverse types of maize are found. It is one of the leading economic crops of the world. Besides its uses as food and feed, maize is a priority and strategic crop for responding to the world's quest for alternative energy sources. In Ethiopia, it ranks first in total production and yield per unit area, and it is the staple crop for millions of people. Selection for high yield with desirable traits depends on the genetic variability in the existing germplasm. Successful breeding programs need adequate genetic variation for selection and improvement based on necessity. Knowledge of the magnitude of genetic variability, heritability, and genetic gain in the selection of desirable characters can assist the plant breeder in establishing the criteria to be used in breeding programs. Many studies of genetic variability, carried out with suitable biometrical tools such as variability, heritability, and genetic advance, give an idea of the extent of genetic variability present in a population. Heritability is a suitable measure for assessing the magnitude of the genetic portion of total variability, and genetic advance helps to achieve improvement in the crop by selection for various characters. This review paper was prepared to assess the genetic variability, heritability, and genetic advance of maize genotypes.

Keywords: Maize, Genetic variability, Heritability, Genotypes, Correlation

Maize (Zea mays L., 2n=2x=20), a member of the Gramineae (Poaceae), is one of the oldest cultivated crops. Maize is predominantly cross-pollinated by wind, but self-pollination is also possible. Maize is the most important crop worldwide and a basic trade product and recurring ingredient in the diets of millions of people in Sub-Saharan Africa. Currently, maize is widely grown in most parts of the world over a wide range of environmental conditions, between 50º latitude north and south of the equator. Maize has a wide range of adaptation and is an important cereal crop in Ethiopia as a source of both food and cash. Maize is one of the most important cereal crops in the world, following wheat and rice. It is widely used for food, feed, fuel, and fiber in many parts of the world. Maize has broad morphological variability and geographical adaptability owing to its cross-pollinated nature. According to World Food and Agriculture statistics, 197 million hectares of land were covered by maize and produced 1,134 million tons of maize grain in the 2017 production season. Maize is one of the main cereals that play a core role in Ethiopia's agriculture and food economy. It has the largest smallholder-farmer coverage and the greatest production and consumption compared to other cereals. According to the CSA (2017/18), maize exceeds teff, sorghum, and wheat by 58.9, 62.4, and 80.7%, respectively, with a total production of 8.4 million tons produced on 2.1 million hectares. About 11 million farmers contributed to maize production and productivity (3.9 tons/ha). Maize was introduced to Ethiopia by the Portuguese in the 16th or 17th century. Since its introduction, it has gained importance as a main food and feed crop.
In Ethiopia, maize-growing agro-ecologies are broadly classified into four major categories: mid-altitude sub-humid (1000-1800 m.a.s.l.), highland sub-humid (1800-2400 m.a.s.l.), lowland moisture-stress areas (300-1000 m.a.s.l.), and lowland sub-humid (<1000 m.a.s.l.). Currently, the national maize research program has three main breeding stations located in the first three of these major agro-ecologies, excluding the lowland sub-humid agro-ecology. Several improved OPVs and hybrids with resistance to certain biotic stresses have been released for large-scale production across different agro-ecologies by these breeding centers of the National Maize Research Program of the Ethiopian Institute of Agricultural Research (EIAR). The high-altitude sub-humid agro-ecology, including the highland transition and true highlands, is second only to the mid-altitude agro-ecology in maize area and production in Ethiopia. This agro-ecology covers an estimated 20% of the land dedicated to annual maize cultivation and includes more than 30% of the small-scale farmers who depend on maize production for their livelihoods.

Maize breeding in Ethiopia has been ongoing since the 1950s and has passed through three distinct stages of research and development. From 1952 to 1980, the main activities were the introduction and evaluation of maize materials from different parts of the world for adaptation to local conditions; from 1980 to 1990, the work focused on the evaluation of inbred lines and the development of hybrid and open-pollinated varieties. From 1990 to the present, the main activities have been (a) extensive inbreeding and hybridization, (b) development of early-maturing or drought-tolerant cultivars, and (c) collection and improvement of maize with adaptation to highland agro-ecologies. As a result, various improved hybrids and open-pollinated varieties have been released for large-scale production, especially for the mid-altitude zones. The highland maize breeding program was also started in 1998, together with the International Maize and Wheat Improvement Center. The main goal of all maize breeding programs is to obtain new open-pollinated varieties (OPVs) and inbred lines, and from them hybrids and synthetics, that will outperform the existing cultivars with respect to a range of traits. In working toward this goal, attention must be paid to grain yield as the most important agronomic trait.

Genetic diversity is the variability that exists among the genotypes of the individuals of a population belonging to the same species. The variation may occur at the level of the entire genome, chromosomes, genes, or nucleotides. Maize is both phenotypically and genetically diverse. Genetic variability among individuals in a population allows effective selection. Genetic diversity among maize lines can be examined based on morphological traits. Grain weight and grain yield; kernel weight and days to maturity; ear height, days to silking, percent tryptophan content, cob length, and 1000-seed weight; ear length and diameter; and days to 50% anthesis, days to 50% silk emergence, days to maturity, ear aspect, grain yield, plant height, ear height, and number of diseased cobs are variables that can contribute to genetic diversity assessment. Characterization of available maize genotypes based on phenotype is critical for utilizing these resources. The existing magnitude and nature of genetic variability among genotypes determine the choice of breeding approach for the genetic improvement of a crop.
Genetic diversity can also be described as the probability that two randomly sampled alleles are different. Genetic distance reflects the amount of genetic difference present among the genotypes. These measures can be calculated from morphological characteristics and/or molecular markers. Although phenotypic evaluation has useful attributes for grouping inbred lines and populations, phenotypic traits have limitations in distinguishing variation among highly related genotypes and elite breeding germplasm because of genotype-by-environment interaction (GEI). Advances in molecular technology have produced a shift towards detecting individual differences using molecular markers. The nature and magnitude of genetic variability of each elite maize inbred line is important information; however, only a limited number of highland maize inbred lines have been characterized so far, because relatively little research has been conducted for this agro-ecology.

Generally, knowledge of the nature and magnitude of variation in genotypes is of great importance for developing genotypes with high yield and other desirable traits. The magnitude of genetic variability, heritability, and genetic advance in the selection of desirable traits are pertinent issues that the plant breeder must consider when choosing traits for crossing in a breeding program. Monitoring of genetic advance in crop improvement programs is important to measure the efficiency of the program. Periodic measurement of genetic advance also allows the efficiency of new technologies incorporated into a program to be quantified. Estimation of genetic progress in variety development helps breeders make decisions about increments in productivity as well as about future breeding strategies. Therefore, the objective of this paper is to review the genetic variability and future trends of maize genotypes.

Origin, Distribution, and Adaptability of Maize

Maize originated under the warm, seasonally dry conditions of Mesoamerica and was converted by human selection from a low-yielding progenitor species into its modern forms, with a large rachis (cob) of the female inflorescence bearing up to 1,000 seeds. The primary center of origin of maize is considered by most authorities to be Central America and Mexico, where many diverse types of maize are found. The discovery of fossil maize pollen, together with other archaeological evidence in Mexico, indicates Mexico to be the native home of maize. It was the principal food crop of the American Indians when Columbus arrived and still remains the most important cereal food crop in Mexico, Central America, and many countries in South America and Sub-Saharan Africa. Two locations have been suggested as the possible center of origin for maize, namely the highlands of Peru, Ecuador, and Bolivia, and the region of southern Mexico and Central America. Today, maize is widely grown in most parts of the world, over a wide range of environmental conditions, between latitudes of 50° north and south of the equator. It grows from sea level to over 3000 m above sea level. It is believed that maize was introduced to West Africa in the early 1500s by Portuguese traders and reached Ethiopia in the 1860s. It spread around the world, particularly in temperate zones, after the European discovery of America in the 15th century. The Portuguese introduced maize to Southeast Asia from America in the 16th century.
Maize was introduced into Spain after the return of Columbus from America, and from Spain it spread to France, Italy, and Turkey. The Portuguese introduced maize to India during the seventeenth century; from India it spread to China, and it was later introduced in the Philippines. There is no evidence of maize cultivation in Africa until the 16th century, when it was introduced from America along the western and eastern coasts, gradually moving inland as a ration that traveled with the slave traffic. Before 1965, the increase of maize production in all African countries was propelled to a greater or lesser extent by the following driving factors: the agronomic suitability of maize; the British starch market; milling technology; the integration of Africans into the settler wage economy; and market and trade policies promoted by settler farm lobbies.

Taxonomy, Reproductive Biology, and Genetics of Maize

Maize belongs to the tribe Maydeae of the family Gramineae (Poaceae). "Zea" (zela) was derived from an old Greek name for a food grass. The genus Zea consists of five species: Z. diploperennis, Z. perennis, Z. luxurians, Z. nicaraguensis, and Z. mays. The species Zea mays is divided into four subspecies: huehuetenangensis, mexicana, parviglumis, and mays, of which the subspecies mays is economically important. The other three subspecies are teosintes, which are wild grasses of Mexico and Central America. An early hypothesis on the origin of maize proposed that maize was produced by natural hybridization between two wild grasses, a species of Tripsacum and a perennial subspecies of teosinte (Zea diploperennis). Further, teosinte was crossed with wild maize, and modern maize was produced as a result. Species of Zea have a chromosome number of 20, apart from Zea perennis, which has a total of 40 (Table 1).

Table 1: Taxonomy and classification of maize (Source: Verheye, 2010)
| Family  | Poaceae (Gramineae) |
| Tribe   | Maydeae             |
| Genus   | Zea                 |
| Species | Zea mays L.         |

Maize is a monoecious species, having separate female and male reproductive parts on the same plant. The ear or shoot is the female reproductive part of the plant; the silks are elongated stigmas, each growing from the cob. The tassel, which is found at the top of the plant, is the male reproductive part and produces pollen grains. Pollen grains are the microscopic bodies that contain the male germ cells of a plant. Maize is normally a cross-pollinated crop; about 95% of the ovules on an ear are cross-pollinated and 5% self-pollinated.

Production, Importance, and Utilization of Maize

Maize is cultivated throughout the year in almost every part of the world. About 875,226,630 tons of maize were produced in 2016 alone, and production has increased by 600 million metric tons since 1990. World maize production has grown at roughly the same rate as consumption. One mechanism that can be used to increase maize production is increasing the amount of land dedicated to producing it; the area of harvested maize has increased at a rate of 1.32% annually since 1990. Similarly, world maize yield increased at a rate of 1.3% per annum from 1990 to 2016. In addition to producing maize locally, many African countries import additional maize for food and feed consumption.
More than 75 you look after maize production in Africa is completed by small-scale farmers, while some large-scale farmers mainly work for global export . In Ethiopia, smallholder farms account for quite 95 attempts to use draft animals for land preparation and cultivation. Approximately 88% of maize produced in Ethiopia is consumed as food, both as green and dry grain . According to the Ethiopian Institute of Agricultural Research (EIAR) has while collaboration with the International Maize and Wheat Improvement Center-CIMMYT and it's developed a complete of quite forty improved maize varieties including hybrids and OPVs within the last four decades. OPVs are more common in the drought-prone areas through the farmers within the central valley and a number of other nitrogen use efficient maize varieties, namely, Melkassa II, III, IV and V were developed within the 1990s under the primary phase of the African Maize Stress (AMS) project, a joint undertaking of the International Maize and Wheat Improvement Center (CIMMYT) and national agricultural research institutes across Eastern and Southern Africa. Maize plays a crucial role within the livelihoods of many small farmers, who grow maize for food, animal feed and income. For example, about 9 million households in Ethiopia are currently engaged in maize cultivation . Maize is that the world’s favorite feed and is employed because the main source of calories in animal feed and feed formulation in both developed and developing countries. Approximately 60% of the maize produced globally is employed for animal feed. Everywhere on the planet, maize may be a major food source thanks to its excellent properties: it's easy to propagate from single plants or small nurseries to many hectares, and therefore the ears with their kernels are easy to reap. It’s one of the cereals that provide most of the calorie requirements within the traditional Ethiopian diet. it's prepared and used as matzo, roasted and boiled green ears, parched mature grain porridge, and in local drinks . Understanding the genetic variability, heritability and genetic advance of traits in any plant population is a crucial pre-requisite for a breeding program. Genetic improvement in traits of economic importance alongside maintaining a sufficient amount of variability is usually the specified objective in maize breeding programs observed considerable genotypic variability among various maize genotypes for various traits [27,28]. It also reported significant genetic differences for the morphological parameters for maize genotypes . This variability may be a key to crop improvement . Genetic diversity is that the existed variability within the genotypes of the individuals of a population that belongs to the same species. The variation could prevail within the entire genome, chromosomes, and gene or within the nucleotide levels. Maize is both phenotypically and genetically diverse. Genetic variability among individuals in population offers effective selection. Genetic diversity among maize lines is often examined supported by morphological traits. It conducted genetic diversity research on, twelve maize genotypes in the Humid Tropic of Ethiopia. The high phenotypic and genotypic coefficient of variations were observed from the number of ears per plant (45.44 and 41.77), and ear diameter (24.60 and 23.83) therein order. 
On the other hand, relatively moderate values of the phenotypic and genotypic coefficients of variation were recorded for grain yield per hectare (16.93 and 16.70), ear length (15.06 and 10.49), and number of rows per ear (14.29 and 13.38), in that order. A moderate phenotypic coefficient of variation was also observed for the number of kernels per row (10.49). In that study, the number of ears per plant and ear diameter had high phenotypic and genotypic coefficients of variation, and hence these traits provide a greater chance for effective selection, whereas grain yield per hectare, ear length, and number of rows per ear had moderate genotypic and phenotypic coefficients of variation and hence provide an average chance for selection. On the contrary, thousand-kernel weight (2.62 and 2.44), days to maturity (3.92 and 3.83), plant height (4.17 and 3.62), days to silking (6.18 and 5.43), days to anthesis (6.21 and 5.47), and ear height (7.47 and 4.36) had the lowest phenotypic and genotypic coefficients of variation, in that order, and hence these traits provide less opportunity for selection. In general, traits with high GCV indicate high potential for effective selection.

Another study evaluated the various parameters of genetic variability and the nature of associations among traits affecting grain yield in thirty-three inbred lines of maize (Zea mays L.) and found considerable variability among the genotypes for all 11 traits studied. The genotypic coefficient of variation (GCV) and the phenotypic coefficient of variation (PCV) were close for all traits, indicating that these characters were little influenced by the environment. High GCV, heritability, and comparatively high genetic advance were observed for the traits number of grains per cob, grain yield per plant, number of grains per row, plant height, and ear height, indicating that selection for these characters would prove quite effective, since they appear to be governed by additive gene action.

A study of genetic variability, heritability, and genetic advance in eighty-six newly developed maize genotypes estimated the various parameters of genetic variability, broad-sense heritability, and genetic advance; analysis of variance revealed that the mean sum of squares due to genotypes showed significant differences for all 12 characters studied. The traits yield per plant, plant height, ear height, number of kernels per row, and 100-kernel weight showed high heritability accompanied by high to moderate genotypic and phenotypic coefficients of variation and genetic advance, which indicates that the heritability is most likely due to additive gene effects and that selection may be effective in early generations for these traits. In contrast, high to moderate heritability along with low estimates of genetic advance was observed for days to 50 percent tasseling, days to 50 percent silking, shelling percentage, ear length, days to maturity, ear girth, and number of kernel rows per ear.

A further study estimated the extent of genetic variability and trait association in fifty-five maize genotypes available in India; analysis of variance revealed significant differences among the genotypes for the 18 characters studied. High genotypic and phenotypic coefficients of variation were recorded for grain yield/plant, biological yield/plant, and cob weight, together with high heritability and genetic advance.
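The GCV and PCV figures quoted throughout this section come from standard variance-component formulas. The sketch below is illustrative only; the mean squares, replication number, and trait mean are hypothetical values of my own, not taken from any of the studies cited, and the estimators are the usual textbook ones for a replicated trial's ANOVA.

```python
import math

def variance_components(ms_genotype: float, ms_error: float, replications: int):
    """Textbook estimators from the ANOVA mean squares of a replicated trial."""
    sigma2_g = (ms_genotype - ms_error) / replications   # genotypic variance
    sigma2_p = sigma2_g + ms_error                       # phenotypic variance
    return sigma2_g, sigma2_p

def cv_percent(variance: float, trait_mean: float) -> float:
    """GCV or PCV (%) = 100 * sqrt(variance) / mean."""
    return 100 * math.sqrt(variance) / trait_mean

# Hypothetical values for a yield trial with 3 replications:
s2g, s2p = variance_components(ms_genotype=24.0, ms_error=6.0, replications=3)
mean_yield = 5.2   # t/ha, assumed
print(f"GCV = {cv_percent(s2g, mean_yield):.1f}%  PCV = {cv_percent(s2p, mean_yield):.1f}%")
```

Because the phenotypic variance always includes the error term, PCV computed this way can never fall below GCV, which is why the reviewed studies consistently report PCV above GCV.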
Strong positive associations were displayed between grain yield per plant and plant height, ear height, leaf area index, cobs/plant, cob weight, cob length, cob girth, grains/row, and biological yield/plant at both the genotypic and phenotypic levels.

A study of the existing genetic variability of 20 maize genotypes in Bangladesh observed a high degree of variation among the genotypes used. Correlation coefficient analysis revealed that yield plant−1 (g) had a positive and significant association with ear girth (cm), 1000-kernel weight (g), yield plot−1 (g), and grain yield (t/ha) on a dry-weight basis. The genotypes differed significantly for most of the phenotypic traits. The phenotypic coefficient of variation (PCV) was above the genotypic coefficient of variation (GCV) in all traits studied, indicating that those traits interacted with the environment. The traits under study expressed a wide range of heritability estimates (26.81% to 99.95%). Among the characters, the highest heritability was recorded for 1000-kernel weight (g). High heritability along with high genetic advance was noticed for 1000-kernel weight (g), yield plot−1 (g), and grain yield (t/ha).

Another study examined the extent of genetic variability, heritability, and genetic advance for thirteen agronomic and fresh-yield traits among twelve shrunken-2 super-sweet corn populations over two years in Ibadan, Nigeria, and showed that all the traits exhibited significant genotypic differences. The genotypic variance was significant for number of marketable cobs, yield of cobs, number of cobs, number of kernel rows, husk cover, ear height, and days to anthesis, while the environmental variance was significant for all the traits. Phenotypic coefficients of variation were above the corresponding genotypic coefficients of variation for all traits.

An experiment assessing the magnitude of genetic variability, heritability, and genetic advance of 24 maize inbred lines for 16 quantitative traits at the Jimma Agricultural Research Center (JARC) found highly significant (P<0.01) differences among genotypes for all traits studied except tassel size. The genotypic coefficient of variation (GCV) for all traits studied was smaller than the phenotypic coefficient of variation (PCV), indicating the significant role of the environment in the expression of the traits studied. The estimates of PCV and GCV were high for grain yield, thousand-kernel weight, ear height, ear diameter, anthesis-silking interval, and plant aspect. Another study evaluated forty-three maize genotypes for eleven traits to study their genetic divergence and various genetic parameters; the analysis of variance showed significant (P<0.01) differences between genotypes for all the characters, revealing a wide range of variability and high heritability for all characters.

Heritability and Genetic Advance

Stanfield (1988) defined heritability as the proportion of the total phenotypic variance that occurs due to gene effects. Heritability estimates are of tremendous significance to the breeder, as their magnitude indicates the accuracy with which a genotype can be recognized from its phenotypic expression. High heritability does not always indicate high genetic gain; heritability should be used together with genetic advance in predicting the ultimate effect of selecting superior varieties.
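In computational terms, the two quantities are tied together as follows. This is a minimal sketch using the conventional formulas (broad-sense heritability as the ratio of genotypic to phenotypic variance, and expected genetic advance with the selection differential k = 2.06 for 5% selection intensity); the variance components and trait mean are hypothetical, not drawn from the studies above.

```python
import math

def broad_sense_heritability(sigma2_g: float, sigma2_p: float) -> float:
    """H^2 = genotypic variance / phenotypic variance."""
    return sigma2_g / sigma2_p

def genetic_advance(sigma2_g: float, sigma2_p: float, k: float = 2.06) -> float:
    """Expected genetic advance GA = k * sqrt(phenotypic variance) * H^2,
    where k = 2.06 corresponds to 5% selection intensity."""
    return k * math.sqrt(sigma2_p) * broad_sense_heritability(sigma2_g, sigma2_p)

# Hypothetical variance components and mean for a single trait:
s2g, s2p, trait_mean = 6.0, 12.0, 5.2
h2 = broad_sense_heritability(s2g, s2p)
ga = genetic_advance(s2g, s2p)
print(f"H^2 = {h2:.2f}  GA = {ga:.2f}  GA as % of mean = {100 * ga / trait_mean:.1f}%")
```

High H² combined with high GA as a percent of the mean is what the reviewed studies interpret as evidence of additive gene action, and therefore of traits that respond well to straightforward selection.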
They recorded higher genetic advance for plant height, number of kernels ear−1 and yield plot−1, which indicates the preponderance of additive gene action in the expression of these traits, a type of action that is fixable in subsequent generations. Broad-sense heritability (H2), an estimate of the total contribution of genetic variance to the total phenotypic variance, ranged from 24.44 for the anthesis-silking interval to 96.02 for 1000-kernel weight. Higher heritability estimates were scored for 1000-kernel weight (96.02), leaf length (74.79), plant height (69.47), days to 50% anthesis (69.46), days to 50% silking (68.75), leaf width (64.70), ear length (64.62) and leaf area (63.95). Moderate heritability estimates were observed for grain yield per hectare (58.42), ear height (52.99), days to maturity (50.07), number of kernels per row (47.38), plant aspect (33.38) and kernel rows per ear (32.26). In contrast, ear diameter (29.82) and the anthesis-silking interval (24.45) had low heritability estimates. Estimates of genetic advance as percent of mean at 5% selection intensity ranged from 2.76% for days to maturity to 50.69% for grain yield. The high heritability estimates suggest that selection for such characters might be fairly easy. Therefore, 1000-kernel weight, leaf length, plant height, days to 50% anthesis, days to 50% silking, leaf width and leaf area could easily be passed from one generation to the next, thus enhancing the efficiency of selection in a maize improvement program. This indicates that these traits are under genetic control and that environmental factors did not greatly affect their phenotypic variation. Genetic advance (GA) as a percentage of the mean was higher for traits such as grain yield per hectare, 1000-seed weight and ear height, showing that these traits are under the control of additive gene action. This is supported by earlier findings of high genetic advance for plant height, kernel rows per ear, 1000-kernel weight, ear height and grain yield per hectare. Traits such as days to maturity and days to 50% silking showed low values of genetic advance as percent of mean, which correspondingly indicates low genetic variation for these traits, as shown by their low GCV and PCV values. This underlines the importance of genetic variability in improvement through selection. The result is also confirmed by reports in which genetic advance as percent of mean was high for grain yield per plant (73.19%), ear height (51.05%), number of kernels per row (44.40%), plant height (43.46%), 100-grain weight (42.88%), ear length (30.79%) and number of kernel rows per ear (25.23%), which is broadly similar to this result. The result is likewise in line with the findings of Abe, in which broad-sense heritability ranged from 22.2% for the anthesis-silking interval to 85.1% for husk cover. Genetic advance was high (32.7%) for husk cover, medium (12.0%) for yield of cobs and low for the other traits. Studies on the correlation coefficients of various plant traits provide useful criteria for identifying desirable traits that contribute to enhancing grain yield. Knowledge of the correlations between seed yield and its attributing characters is vital for the simultaneous improvement of several characters in breeding programs. Correlation is due either to pleiotropic gene action or to linkage, or both.
The phenotypic correlation refers to the observable association between two characters, while the environmental correlation is due entirely to environmental effects. The correlation value denotes the nature and extent of the association existing between pairs of characters. Correlation is also a measure that indicates which traits should be considered in order to increase yield. One study evaluated the genetic variability of 20 maize genotypes and observed several positive and significant correlations: yield plant−1 (g) with ear girth (cm), 1000-kernel weight (g), yield plot−1 (g), and grain yield (t ha−1) with dry weight; plant height (cm) with ear length (cm), ear girth (cm) and no. of kernels ear−1; ear height (cm) with ear length (cm); ear length (cm) with ear girth (cm) and no. of kernels ear−1; ear girth (cm) with no. of kernels ear−1, yield plot−1 (g), and grain yield (t ha−1) with dry weight; 1000-kernel weight (g) with yield plot−1 (g) and grain yield (t ha−1) with dry weight; and yield plot−1 (g) with grain yield (t ha−1) with dry weight [41,42]. Another report indicated that grain yield (0.68), grains per row (0.74), grains per ear (0.80), ear height (0.46), ear-down leaves (0.40), total leaves (0.58), grain depth (0.81), grain dry matter weight (0.87) and 1000-grain weight (0.56) had significant and direct correlations. Such correlations can be used as the basis for character selection if similar research is conducted in the future using additional morphological traits [43-45].
Summary and Conclusion
The progress of a crop improvement program depends on the selection of the breeding material, the extent of variability, and knowledge of the relationships of quantitative traits with yield and yield-related traits. The success of any breeding program depends upon the genetic variation in the materials at hand. The greater the genetic variability, the higher the heritability and hence the greater the chances of success through selection. There was considerable variability present in the materials used. The existence of variability is important for resistance to biotic and abiotic factors as well as for wide adaptability among the genotypes. Selection is effective when there is genetic variability among the individuals in a population. Hence, insight into the magnitude of genetic variability present in a population is of paramount importance to a plant breeder for starting a judicious breeding program. Knowledge of the heritability and genetic advance of a character indicates the scope for improvement through selection. Heritability estimates together with genetic advance are normally more helpful in predicting the gain under selection than heritability estimates alone. Without genetic diversity, further varietal improvement cannot be expected. The presence of large genotypic differences and high heritability estimates for various traits among the populations indicates that the populations could be utilized in future maize breeding programs. Moreover, high genetic variability and heritability estimates for most of the traits point to a greater amount of additive gene action; thus the populations could be utilized in future maize breeding programs as base or source populations for deriving superior inbred lines through recurrent selection, S1 line selection, and similar methods. In general, information about the extent of variation and estimates of heritability and expected genetic advance in respect of maize grain yield and yield-contributing characters constitutes the essential requirement for a crop improvement program.
Broad-sense heritability is useful for measuring the relative importance of the genetic portion of variance, of which the additive component is what can be transmitted to the offspring. A preponderance of additive gene effects controlling a trait usually results in both high heritability and high genetic advance, while traits governed by non-additive gene action can show high heritability with low genetic advance.
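For completeness, a hedged sketch of the genetic-advance calculation referred to throughout this review is given below; the selection-differential constant of roughly 2.06 corresponds to a 5% selection intensity, and the numeric inputs continue the illustrative placeholder values used earlier rather than data from the cited studies.

# Hypothetical sketch: expected genetic advance (GA) under truncation selection
# (illustrative values only). GA = k * h2 * sigma_p, where k is the selection
# differential in standard units (k ~ 2.06 at 5% selection intensity), h2 is
# heritability expressed as a fraction, and sigma_p is the phenotypic standard
# deviation. GA as percent of mean = 100 * GA / trait_mean.
def genetic_advance(h2_fraction, phenotypic_var, trait_mean, k=2.06):
    sigma_p = phenotypic_var ** 0.5
    ga = k * h2_fraction * sigma_p            # expected gain in trait units
    ga_percent_of_mean = 100 * ga / trait_mean
    return ga, ga_percent_of_mean

ga, gam = genetic_advance(h2_fraction=0.86, phenotypic_var=1.45, trait_mean=6.7)
print(f"GA = {ga:.2f} trait units, GA as percent of mean = {gam:.1f}%")
# A trait can combine high heritability with low GA when its phenotypic standard
# deviation is small, the pattern the review associates with non-additive gene action.

With these placeholder inputs the sketch gives a GA of roughly 2.1 trait units, or about 32% of the mean.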
Purpose: Although we tend to think that authoritarian governments are dying out or are a concept of the past, this article shows that, almost overnight, the way of life can change for those living in unstable parts of the world.
- Read the article "The ISIS Files" from the New York Times, which is found at the end of the instructions.
- Analyze the article in terms of the features of Authoritarianism shown below. In other words, for each of the twelve features presented you have to show how each specific feature is evident in the ISIS article. Write a document where you list one feature at a time, followed by your analysis of how that feature is exemplified in the article. Be concrete and specific in your answer, and write the analysis for someone who has not read the article.
Important Features of Authoritarianism: A.R. Ball specifies the following features of Authoritarianism or the Authoritarian state:
- Limitations on Political Process: Important limitations are imposed on the open political process, political parties and elections.
- Use of an Ideology: Ideological principles like racialism, fundamentalism or nationalism often provide some basis for the exercise of state power over the people.
- Rulers determine all decisions: The rulers, and not the people, determine all decisions.
- Dependence on Coercion and Force: Authoritarian rulers mostly use force and coercion to command political uniformity and obedience.
- Less importance to Rights and Liberties: Civil liberties enjoy a low priority. Governmental control over the judiciary and mass media is direct and considered justified in the interest of the public good.
- Authoritarianism can involve Family rule or Military rule: The basis for rule is found either in a traditional family elite or in a new modernizing group, often the army, which seizes power by a coup.
- A Small Group uses all the powers: Under authoritarianism, one group monopolizes political power and control.
- Based on Power and Manipulation: Manipulation, suppression and coercion constitute the basis of the rulers' power.
- Bureaucracy as the main tool of the rulers: The rulers use the bureaucracy and police as instruments of their control over the people.
- Centralization of authority in a few hands: Centralization of authority is practiced, and very often an attempt is made to cover this centralism with the cloak of power-sharing among several political groups who are, however, totally loyal to the ruling group/leader.
- Use of Propaganda: Legitimacy for the rulers’ authority is secured through declarations, manipulation and propaganda, or by the use of an ideology of peace, development and security.
- Rulers Control Public Opinion: In an authoritarian system, public opinion is controlled. Only opinion deemed favorable to the authority of the ruling group or rulers is allowed to circulate in society.
In an authoritarian state, individual and social life is largely controlled by the state, i.e. by the government of the state, which is formed by one party or group. The above material was taken from http://www.preservearticles.com/2014071933479/12-important-features-of-authoritarianism-explained.html for educational purposes. In addition to the documentary in part A of this assignment, the PowerPoint presentation of the chapter on socioeconomic ideologies has links to more than 10 videos (some short, some a little longer – click on dark blue words spread throughout the presentation).
Please select any three videos and for each one write three bits of knowledge you have gained from it. Make sure you properly identify each video and distinctly communicate the three important concepts you learned from watching it. Here is the presentation for the chapter: Note: Please double-check your work before submission. Submission of empty files, corrupted files or wrong assignments (from other courses) is considered late until you submit the right file. The instructor usually takes three days to grade an assignment, but under no circumstances will it take more than one week.
Article: The ISIS Files
The ISIS Files. By Rukmini Callimachi. Photographs by Ivor Prickett. April 4, 2018.
On five trips to battle-scarred Iraq, journalists for The New York Times scoured old Islamic State offices, gathering thousands of files abandoned by the militants as their 'caliphate' crumbled. Weeks after the militants seized the city, as fighters roamed the streets and religious extremists rewrote the laws, an order rang out from the loudspeakers of local mosques. Public servants, the speakers blared, were to report to their former offices. To make sure every government worker got the message, the militants followed up with phone calls to supervisors. When one tried to beg off, citing a back injury, he was told: "If you don't show up, we'll come and break your back ourselves." The phone call reached Muhammad Nasser Hamoud, a 19-year veteran of the Iraqi Directorate of Agriculture, behind the locked gate of his home, where he was hiding with his family. Terrified but unsure what else to do, he and his colleagues trudged back to their six-story office complex decorated with posters of seed hybrids. They arrived to find chairs lined up in neat rows, as if for a lecture. The commander who strode in sat facing the room, his leg splayed out so that everyone could see the pistol holstered to his thigh. For a moment, the only sounds were the hurried prayers of the civil servants mumbling under their breath. Their fears proved unfounded. Though he spoke in a menacing tone, the commander had a surprisingly tame request: Resume your jobs immediately, he told them. A sign-in sheet would be placed at the entrance to each department. Those who failed to show up would be punished. Meetings like this one occurred throughout the territory controlled by the Islamic State in 2014. Soon municipal employees were back fixing potholes, painting crosswalks, repairing power lines and overseeing payroll. "We had no choice but to go back to work," said Mr. Hamoud. "We did the same job as before. Except we were now serving a terrorist group." The disheveled fighters who burst out of the desert more than three years ago founded a state that was acknowledged by no one except themselves. And yet for nearly three years, the Islamic State controlled a stretch of land that at one point was the size of Britain, with a population estimated at 12 million people. At its peak, it included a 100-mile coastline in Libya, a section of Nigeria's lawless forests and a city in the Philippines, as well as colonies in at least 13 other countries. By far the largest city under their rule was Mosul.
How Far ISIS Spread Across Iraq and Syria and Where It's Still Holding On: Since declaring a caliphate in 2014, the Islamic State has controlled large swaths of territory in Iraq and Syria. But after the group retreated from Mosul and Raqqa in 2017, it lost nearly all of its territory.
Nearly all of that territory has now been lost, but what the militants left behind helps answer the troubling question of their longevity: How did a group whose spectacles of violence galvanized the world against it hold onto so much land for so long? Part of the answer can be found in more than 15,000 pages of internal Islamic State documents I recovered during five trips to Iraq over more than a year. The documents were pulled from the drawers of the desks behind which the militants once sat, from the shelves of their police stations, from the floors of their courts, from the lockers of their training camps and from the homes of their emirs, including this record detailing the jailing of a 14-year-old boy for goofing around during prayer. The New York Times worked with outside experts to verify their authenticity, and a team of journalists spent 15 months translating and analyzing them page by page. Individually, each piece of paper documents a single, routine interaction: A land transfer between neighbors. The sale of a ton of wheat. A fine for improper dress. But taken together, the documents in the trove reveal the inner workings of a complex system of government. They show that the group, if only for a finite amount of time, realized its dream: to establish its own state, a theocracy they considered a caliphate, run according to their strict interpretation of Islam. The world knows the Islamic State for its brutality, but the militants did not rule by the sword alone. They wielded power through two complementary tools: brutality and bureaucracy. ISIS built a state of administrative efficiency that collected taxes and picked up the garbage. It ran a marriage office that oversaw medical examinations to ensure that couples could have children. It issued birth certificates, printed on Islamic State stationery, to babies born under the caliphate's black flag. It even ran its own D.M.V. The documents and interviews with dozens of people who lived under their rule show that the group at times offered better services and proved itself more capable than the government it had replaced. They also suggest that the militants learned from mistakes the United States made in 2003 after it invaded Iraq, including the decision to purge members of Saddam Hussein's ruling party from their positions and bar them from future employment. That decree succeeded in erasing the Baathist state, but it also gutted the country's civil institutions, creating the power vacuum that groups like ISIS rushed to fill. A little more than a decade later, after seizing huge tracts of Iraq and Syria, the militants tried a different tactic. They built their state on the back of the one that existed before, absorbing the administrative know-how of its hundreds of government cadres. An examination of how the group governed reveals a pattern of collaboration between the militants and the civilians under their yoke. One of the keys to their success was their diversified revenue stream. The group drew its income from so many strands of the economy that airstrikes alone were not enough to cripple it. Ledgers, receipt books and monthly budgets describe how the militants monetized every inch of territory they conquered, taxing every bushel of wheat, every liter of sheep's milk and every watermelon sold at markets they controlled. From agriculture alone, they reaped hundreds of millions of dollars.
Contrary to popular perception, the group was self-financed, not dependent on external donors. More surprisingly, the documents provide further evidence that the tax revenue the Islamic State earned far outstripped income from oil sales. It was daily commerce and agriculture, not petroleum, that powered the economy of the caliphate. The United States-led coalition, trying to eject the Islamic State from the region, tried in vain to strangle the group by bombing its oil installations. It's much harder to bomb a barley field. It was not until last summer that the militants abandoned Mosul, after a battle so intense that it was compared to the worst combat of World War II. While the militants' state eventually crumbled, its blueprint remains for others to use. "We dismiss the Islamic State as savage. It is savage. We dismiss it as barbaric. It is barbaric. But at the same time these people realized the need to maintain institutions," said Fawaz A. Gerges, author of "ISIS: A History." "The Islamic State's capacity to govern is really as dangerous as their combatants," he said.
Land for the Taking
The day after the meeting, Mr. Hamoud, a Sunni, returned to work and found that his department was now staffed 100 percent by Sunnis, the sect of Islam practiced by the militants. The Shia and Christian colleagues who previously shared his office had all fled. For a while, Mr. Hamoud and the employees he supervised at the agriculture department went on much as they had before. Even the stationery they used was the same, though they were instructed to use a marker to cover up the Iraqi government's logo. But the long-bearded men who now oversaw Mr. Hamoud's department had come with a plan, and they slowly began to enact it. For generations, jihadists had dreamed of establishing a caliphate. Osama bin Laden frequently spoke of it, and his affiliates experimented with governing in the dunes of Mali, in the badlands of Yemen and in pockets of Iraq. Their goal was to recreate the society that existed over a millennium ago during the time of the Prophet Muhammad. In Mosul, what had been called the Directorate of Agriculture was renamed Diwan al-Zera'a, which can be translated as the Ministry of Agriculture. The term "diwan" harks back to the seventh-century rule of one of the earliest caliphs. ISIS printed new letterhead that showed it had branded at least 14 administrative offices with "diwan," renaming familiar ones like education and health. Then it opened diwans for things that people had not heard of: something called the hisba, which they soon learned was the feared morality police; another diwan for the pillaging of antiquities; yet another dedicated to "war spoils." What began as a cosmetic change in Mr. Hamoud's office soon turned into a wholesale transformation. The militants sent female employees home for good and closed the day care center. They shuttered the office's legal department, saying disputes would now be handled according to God's law alone. And they did away with one of the department's daily duties: checking an apparatus, placed outside, to measure precipitation. Rain, they said, was a gift from Allah, and who were they to measure his gift?
Employees were also told they could no longer shave, and they had to make sure the leg of their trousers did not reach the ankle. Glossy pamphlets, like the one below, pinpointed the spot on the calf where the hem of the garb worn by the companions of the Prophet around 1,400 years ago was said to have reached. Eventually, the 57-year-old Hamoud, who wears his hair in a comb-over and prides himself on his professional appearance, stopped buying razors. He took out the slacks he wore to work and asked his wife to trim off 5 centimeters. But the biggest change came five months into the group's rule, and it turned the hundreds of employees who had reluctantly returned to work into direct accomplices of the Islamic State. The change involved the very department Mr. Hamoud headed, which was responsible for renting government-owned land to farmers. To increase revenue, the militants ordered the agriculture department to speed up the process for renting land, streamlining a weekslong application into something that could be accomplished in an afternoon. That was just the beginning. It was then that government workers got word that they should begin renting out property that had never belonged to the government. The instructions were laid out in a 27-page manual emblazoned with the phrase "The Caliphate on the Path of Prophecy." The handbook outlined the group's plans for seizing property from the religious groups it had expelled and using it as the seed capital of the caliphate. "Confiscation," the manual says, will be applied to the property of every single "Shia, apostate, Christian, Nusayri and Yazidi based on a lawful order issued directly by the Ministry of the Judiciary." Islamic State members are exclusively Sunni and see themselves as the only true believers. Mr. Hamoud's office was instructed to make a comprehensive list of the properties owned by non-Sunnis, and to seize them for redistribution. The confiscation didn't stop at the land and homes of the families they chased out. An entire ministry was set up to collect and reallocate beds, tables, bookshelves, even the forks the militants took from the houses they seized. They called it the Ministry of War Spoils. It was housed in a stone-faced building in western Mosul that was hit by an airstrike in the battle to retake the city. The ensuing fire consumed the structure and blackened its walls. But the charred shapes left behind still told a story. Each room served as a warehouse for ordinary household objects: kerosene heaters in one; cooking ranges in another; a jumble of air coolers and water tanks in yet another. The few papers that did not burn up showed how objects seized from the religious groups they had chased out were offered as rewards to ISIS fighters. "Please kindly approve the request of the family of the late Brother Durayd Salih Khalaf," says one letter written on the letterhead of the Islamic State's Prisoners and Martyrs Affairs Authority. The request was for a stove and a washing machine. A note scribbled at the bottom says: "To be provided with a plasma TV and stove only." Another application from the General Telecommunications Authority requested, among other things, clothes hangers. The Islamic State's promise of taking care of its own, including free housing for foreign recruits, was one of the draws of the caliphate.
"I'm in Mosul and it's really the top here," Kahina el-Hadra, a young Frenchwoman who joined the group in 2015, wrote in an email that year to her secondary school teacher, according to a transcript contained in a report by the Paris Criminal Brigade, which was obtained by The Times. "I have an apartment that is fully furnished," Ms. Hadra gushed. "I pay no rent nor even electricity or water lol. It's the good life!!! I didn't buy so much as a single fork." When her concerned teacher wrote back that the apartment had probably been stolen from another family, she shot back: "Serves them right, dirty Shia!!!" Ms. Hadra, according to police records, was the pregnant wife of one of the suicide bombers who blew himself up in the packed Bataclan concert hall during the Paris attacks of 2015.
The Paper Trail
I got into the habit of digging through the trash left behind by terrorists in 2013, when I was reporting on Al Qaeda in Mali. Locals pointed out buildings the group had occupied in the deserts of Timbuktu. Beneath overturned furniture and in abandoned filing cabinets, I found letters the militants had hand-carried across the dunes that spelled out their vision of jihad. Those documents revealed the inner workings of Al Qaeda, and years later I wanted to investigate the Islamic State in the same way. When the coalition forces moved to take Mosul back from the militants in late 2016, I rushed to Iraq. For three weeks, I tried, and failed, to find any documents. Day after day, my team negotiated access to buildings painted with the Islamic State logo, only to find desk drawers jutting out and hard drives ripped out. Then, the day before my return flight, we met a man who remembered seeing stacks of paper inside the provincial headquarters of the Islamic State's Ministry of Agriculture in a small village called Omar Khan, 25 miles southeast of the city. The next day we traveled to the town, no more than a speck on the map of the Nineveh Plains, and entered House No. 47. My heart sank as we pushed open the door and saw the closets flung open, a clear sign that the place had already been cleared. But on the way out, I stopped at what seemed to be an outhouse. When we opened the door, we saw piles of yellow folders cinched together with twine and stacked on the floor. We pulled one out, laid it open in the sun, and there was the unmistakable black banner of the Islamic State, the flag they claim was flown by the Prophet himself. Folder after folder, 273 in all, identified plots of land owned by farmers who belonged to one of the faiths banned by the group. Each yellow sleeve contained the handwritten request of a Sunni applying to confiscate the property. Doing so involved a step-by-step process, beginning with a report by a surveyor, who mapped the plot, noted important topographical features and researched the property's ownership. Once it was determined that the land was owned by one of the targeted groups, it was classified as property of the Islamic State. Then a contract was drawn up spelling out that the tenant could neither sublet the land nor modify it without the group's permission. The outhouse discovery taught me to stay off the beaten track. I learned to read the landscape for clues, starting with باقية, "baqiya," the first word of the Islamic State slogan.
It can be translated as "will remain," and it marked the buildings the group occupied, invoking its claim that the Islamic State will endure. Once we confirmed that a building had been occupied by the group, we lifted up the mattresses and pulled back the headboards of beds. We rifled through the closets, opened kitchen cupboards, followed the stairs to the roof and scanned the grounds. The danger of land mines and booby-traps hung over our team. In one villa, we found a collection of records, but could search only one set of rooms after security forces discovered an unexploded bomb. Because the buildings were near the front lines, Iraqi security forces nearly always accompanied our team. They led the way and gave permission to take the documents. In time, the troops escorting us became our sources and they, in turn, shared what they found, augmenting our cache by hundreds of records. The Times asked six analysts to examine portions of the trove, including Aymenn Jawad al-Tamimi, who maintains his own archive of Islamic State documents and has written a primer on how to identify fraudulent ones; Mara Revkin, a Yale scholar who has made repeated trips to Mosul to study the group's administration; and a team of analysts at West Point's Combating Terrorism Center who analyzed the records found in Bin Laden's hide-out in Pakistan. They deemed the records to be original, based on the markings, logos and stamps, as well as the names of government offices. The terminology and design were consistent with those found on documents issued by the group in other parts of the caliphate, including as far afield as Libya. As lease after lease was translated back in New York, the same signature inked at the bottom of numerous contracts kept reappearing: "Chief Technical Supervisor, Mahmoud Ismael Salim, Supervisor of Land." On my first trip back to Iraq, I showed the leases to a local police officer. He recognized the angular signature and offered to escort me to the home of the ISIS bureaucrat. The officer shrugged when asked why a man who had taken part in the group's organized land theft had not been arrested. His men were overwhelmed investigating those who had fought and killed on behalf of the terrorist group, he said. They didn't have time to also go after the hundreds of civil servants who had worked in the Islamic State's administration. Hours later, the man whose signature appeared on the lease for farmland seized from a Christian priest, on the contract for the orchards taken from a monastery, and on the deed for land stolen from a Shia family allowed us into his modest home. The only decoration in his living room was a broken clock whose hand trembled between 10:43 and 10:44. A stooped man with thick glasses, the 63-year-old Salim was visibly nervous. He explained that he had spent years overseeing the provincial office of the government of Iraq's Directorate of Agriculture, where he reported to Mr. Hamoud, whom we contacted for the first time a few days later. Mr. Salim acknowledged that it was his signature on the leases. But speaking haltingly, he claimed to have been forcibly conscripted into the bureaucracy of the terrorist state. "They took our files and started going through them, searching which of the properties belong to Shia, which of them belong to apostates, which of them are people who had left the caliphate," he said. He described informants phoning in the addresses of Shias and Christians.
Sunnis who were too poor to pay the rent upfront were offered a sharecropping agreement with the Islamic State, allowing them to take possession of the stolen land in return for one-third of the future harvest. On busy days, a line snaked around his office building, made up of Sunni farmers, many of them resentful of their treatment at the hands of a Shia-led Iraqi government. In the same compound where we found the stacks of yellow folders, Mr. Salim received men he knew, whose children had played with his. They came to steal the land of other men they all knew, men whose children had also grown up alongside theirs. With the stroke of his pen, farmers lost their ancestors' cropland, their sons were robbed of their inheritance and the wealth of entire families, built up over generations, was wiped out. "These are relationships we built over decades, from the time of my father, and my father's father," Mr. Salim said, pleading for understanding. "These were my brothers, but we were forced to do it."
A Clean Sweep
As 2014 blurred into 2015 and Mr. Hamoud and his colleagues helped keep the machinery of government running, Islamic State soldiers set out to remake every aspect of life in the city, starting with the role of women. Billboards went up showing an image of a woman fully veiled. The militants commandeered a textile factory, which began manufacturing bales of regulation-length female clothing. Soon thousands of niqab sets were delivered to the market, and women who didn't cover up began to be fined. Mr. Hamoud, who is known as "Abu Sara," or Father of Sara, gave in and bought a niqab for his daughter. As he walked to and from work, Mr. Hamoud began taking side streets to dodge the frequent executions that were being carried out in traffic circles and public squares. In one, a teenage girl accused of adultery was dragged out of a minivan and forced to her knees. Then a stone slab was dropped onto her head. On a bridge, the bodies of people accused of being spies swung from the railing. But on the same thoroughfares, Mr. Hamoud noticed something that filled him with shame: The streets were visibly cleaner than they had been when the Iraqi government was in charge. Omar Bilal Younes, a 42-year-old truck driver whose occupation allowed him to crisscross the caliphate, noticed the same improvement. "Garbage collection was No. 1 under ISIS," he said, flashing a thumbs-up sign. The street sweepers hadn't changed. What had changed was that the militants imposed a discipline that had been lacking, said a half-dozen sanitation employees who worked under ISIS and who were interviewed in three towns after the group was forced out. "The only thing I could do during the time of government rule is to give a worker a one-day suspension without pay," said Salim Ali Sultan, who oversaw garbage collection both for the Iraqi government and later for the Islamic State in the northern Iraqi town of Tel Kaif. "Under ISIS, they could be imprisoned." Residents also said that their taps were less likely to run dry, the sewers less likely to overflow and potholes more likely to be fixed quickly under the militants, even though there were now near-daily airstrikes. Then one day, residents of Mosul saw earthmovers heading toward a neighborhood called the Industrial Area in the eastern half of the city. Laborers were seen paving a new blacktop road that would eventually run for roughly one mile, connecting two areas of the city and reducing congestion.
Image: A Peruvian military helicopter flies over the valle de los Ríos Apurímac, Ene y Mantaro, also known as the VRAEM. Source: VOA. Crises of politics and governance in Peru have reinforced the pre-existing economic and fiscal pressures catalyzed by the COVID-19 pandemic, in addition to the increases in food and fuel prices caused by the ongoing war in Ukraine. These issues are now complicating the grave challenges from transnational organized crime and terrorism in the country. Investing in the modernization, adaptation, and strengthening of Peruvian security institutions, along with other parts of its whole-of-government response, requires resources that these stresses have now undermined. In terms of traditional measures of citizen security, the criminal challenge in Peru is far less severe than in other parts of Latin America, with only 3.3 murders per 100,000 people in 2021. However, increasing rates of other forms of crime led to the declaration of a state of emergency in the Lima metropolitan area in February 2022. The challenge in Peru is not simply a matter of individual criminal groups. The web of money and influence from such criminality has profoundly permeated and undermined the nation’s politics, economic institutions, and social structures—particularly at the provincial level in the interior of the country. Organized crime, from narcotrafficking to illegal mining and logging, together forms an interdependent, synergistic, if decentralized, criminal economy. Indeed, in 2022, Peru’s own government calculated state losses from corruption and malfeasance of at least USD $6 billion. As opposed to countries such as Mexico or Colombia, in which named groups struggle, often overtly, and generate high levels of public violence to impose their criminal dominion, the culture and geography of Peru foster a different dynamic. In Peru, the geographic separation of the mountainous and forested interior from the coast, the isolation of individual mountain valleys from each other, and the relative lack of land transportation within the Amazon jungle interior have all led to a highly fragmented criminal culture. The relative isolation of each geographical subregion from others gives individual family-based clans relative security from outsiders and unity in the area that they dominate, while simultaneously limiting their ability and interest in extending their domination to the national or international level. The result of this complex structure of incentives and limitations is a Peruvian criminal heartland, one that is very difficult for outsiders to penetrate in geographic and sociopolitical terms. Important synergies exist within each subregion between illicit activities—including coca growing, illegal mining and timber, and the cooptation of local politics to maintain the system—leveraging both the state and the broader, largely informal economy. The ability of small family groups to dominate their local economies in multidimensional ways facilitates the laundering of proceeds, which occurs through institutions including universities, casinos, restaurants, sports clubs, public works, and even media organizations. It also supports the logistics required to maintain the viability of that criminal economy, including importing precursor chemicals and items needed for mining and timber operations and smuggling illicit products out of the country. In the process, it makes those local criminal economies remarkably resilient and synergistic.
Ironically, these local dynamics were reinforced by the pandemic. Border closures and restrictions on internal movement created temporary problems for precursor chemical supply chains, the transport of drugs, and the ability of illegal miners to move between their home communities and mines. The pandemic also obliged security forces to distance themselves to some degree from regular contact with local populations, at the same time worsening the economic plight of those communities, giving criminal groups the opportunity to strengthen their positions within them. Moreover, COVID-19 lockdowns, by restricting internal and cross-border movements of persons, obliged criminal groups to find new modalities to move illicit products and precursor chemicals. With national borders closed, criminal organizations even used ambulances to smuggle cocaine and people across checkpoints. Authorities are now racing to catch up with these changes and understand the new dynamics between groups. There are important synergies between criminal activities in Peru, including but not limited to narcotrafficking, illegal mining, and illegal logging. However, these synergies vary within each part of the country. In the northeast, illegal mining, narcotrafficking, and illegal logging are all present in the same zone. There, narcotraffickers sometimes finance illegal mining activities to launder their illicit earnings, while also utilizing logging as a vehicle to smuggle their illicit products out. There are also overlaps in the routes and sometimes the personnel used to smuggle inputs into the region for each of the activities. Further south, in Ucayali, there are relationships between illegal timber operations and narcotrafficking, while mining is relatively less present than in other problematic regions of the country. In the southeast, in the Department of Madre de Dios, for example, illegal mining and narcotrafficking are both present with synergies between them. However, as opposed to Ucayali, the nature of vegetation in the southeast means that the land is often cleared by burning for planting coca and mining, prohibiting the rise of a timber industry that can be exploited by other illicit activities. The laundering of money from the aforementioned illicit activities is another part of the dynamic that is critical to their occurrence, exerting a corrosive effect on the Peruvian economy, institutions, and society. In the interior of the country, rural cooperatives, or “Cajas Rurales,” a type of community savings and loan, were believed by most persons consulted for this work to play a role in the laundering of money, although virtually all parts of Peruvian society are also permeated by illicitly earned money. Peru’s large informal sector—along with the many people and small businesses struggling to stay solvent in the wake of COVID-19 and the inflationary effects of the Russo-Ukrainian war—also facilitates opportunities for laundering illicit money throughout the Peruvian economy. In addition to such challenges, relationships between international criminal actors and those in Peru continue to deepen and diversify. These include ties to criminal actors based in neighboring countries such as Bolivia and Colombia, as well as the incorporation of outside groups such as Mexican and Colombian cartels and Brazilian gangs. The presence of such groups, however, is largely limited to major cities and key logistical nodes, which are needed to link the Peruvian criminal economy to international markets.
There are also worrisome indications that external ideological actors, including those from Cuba and Venezuela, have penetrated and are exploiting Peru’s criminal networks in ways similar to how they exploited networks of subversion and terrorism in previous eras. The expanding range of criminal challenges set forth in this section has led Peru’s security forces to focus not only on zones historically linked to terrorism and drug production such as the Apurimac-Ene-Mantaro River valley (VRAEM), but also on the Amazon. The rainforest comprises 60 percent of the national territory, into which illicit activity such as coca production is also diversifying. Within the Amazon, due to the transnational character of illicit activities, the nine regions of Peru which have a border with a foreign country—and the “frontier districts” within them where that occurs—have become increasingly strategic. Beyond illicit activity, Peru’s Amazon also has strategic importance for environmental reasons. The area, including the Peruvian Andes, which contain the headwaters for many of the rivers flowing into the Amazon basin, is the source of 20 percent of the freshwater for the entire continent. Although Brazil, in geographic terms, accounts for the largest portion of the Amazon, Peru claims 11 percent. Most importantly, the mountains bearing the Amazonian headwaters are in Peru, making their protection and the use of that water, and those areas, extraordinarily impactful for the entire continent, particularly in neighboring Brazil. Peru has long been a key producer of coca for cocaine, initially in the upper Huallaga River valley (UHV) and, more recently, in the remote Apurimac, Ene and Mantaro River valley (VRAEM). Before 2020, the government had made significant progress in reducing coca production in the UHV, and some progress in the VRAEM. After the outbreak of COVID-19, however, the demands on security forces for operations to combat the pandemic, and the associated limitations on contact with local populations, led the Peruvian government to stop coca eradication efforts. More than 500 Peruvian police died of COVID-19 during the first year of the pandemic alone. Without eradication measures being taken by the Peruvian government, the U.S. Office of National Drug Control Policy (ONDCP) estimates that coca production in the country grew by 22 percent, reaching 88,200 hectares under cultivation by the end of 2020 (versus 72,000 hectares in the prior year). In addition to the VRAEM, the growing of coca has spread to multiple other sites, particularly near the borders with Bolivia, Brazil, and Colombia. Although the terrain there is not as suitable for growing coca with high alkaloid content, the increased volume that can be produced in these non-traditional areas, combined with genetic improvements to the plants themselves, has led to an increase in total cocaine produced from Peru’s coca leaves from 409 metric tons per year in 2014 to an astounding 810 metric tons per year in 2021. As in other countries where coca is produced, the problem in Peru goes beyond illicit production itself. In the context of weak or poorly performing state institutions and the relative absence of the transportation and other infrastructure that would make licit agricultural production viable, coca production has become a way of life, perceived as necessary rather than bad, one that integrates the entire community, including children.
Coca plants yield usable product within months of first being planted and may generate an income of 140 Soles ($36) per day, with the local narco representatives picking up the product from the producer. By contrast, alternative products such as coffee or cacao yield an income of only 40 Soles ($10) per day, with new plants requiring 2-3 years to bear fruit, needing care much more frequently, and presenting challenges of how to get the product to market with often inadequate local infrastructure. With respect to the production and transportation of cocaine, the routes and modalities are different in each region. In the VRAEM, long the heartland of coca production, precursor chemicals are generally smuggled in from Lima, often concealed in vehicles. The coca that is produced is usually transformed into cocaine in the region, then moved out through a combination of methods: planes departing from clandestine airstrips, river routes, concealment in vehicles, or individuals (“mulas”) who carry the product over treacherous mountain passes. The most common route out of the VRAEM goes through the southeast, to Bolivia and on to southern Brazil, Argentina, Uruguay, and eventually Europe. Some cocaine from the region also proceeds along a northerly route along the Amazon River corridor leading into Brazil at the Peru-Colombia-Brazil triple frontier. Although the Peruvian government, operating under the special authorities designated for the zone, has continued to destroy clandestine airstrips in the VRAEM, locals are able to repair them rapidly. Moreover, the previously noted spread of cocaine production to other areas of the country has limited the effect of such operations on the export of the product. According to many interviewed for this work, the north of the country—particularly the area south of the Putumayo River defining Peru’s border with Colombia, along the border with Ecuador, and along the Napo River from the Ecuador border across the north of Peru—is becoming ever more significant as a narco hotspot. There, as noted previously, coca growing reinforces other criminal activities including illegal mining and logging in the zone. Despite the lower alkaloid levels of coca plants grown in the low-lying, more humid jungle environment, genetic engineering of the coca plants, in combination with more intensive farming techniques, has permitted significant production of coca in the region with acceptable yields. Precursor chemicals for coca production near the Putumayo and Napo rivers are reportedly smuggled in from Colombia or the coast through a combination of river and overland routes. The cocaine produced there is often moved via river into the Brazilian Amazon, passing through the triple frontier area at Tabatinga, as noted previously. The production of cocaine in the region is reportedly overseen and facilitated by the Carolina Ramirez and 48th fronts of FARC dissidents from Colombia, who were engaged in a struggle for control of illicit production in the territory at the time of this writing. Such facilitation reportedly included help with the logistics of precursor chemicals, purchase of product from locals in the zone, and associated “protection” of their activities, without the FARC fronts actually attempting to establish a political presence in the zone.
Peru’s eastern border with Brazil, including the Department of Ucayali, as well as the southeast of the country bordering Bolivia, has also begun to transform from a transit area for cocaine into a production region. As noted previously, in Ucayali the smuggling of cocaine is sometimes supported by the movement of timber, which is rarely inspected by authorities. In the south, as in the north, cocaine production is interconnected with the illicit infrastructures of the illegal mining sector, including prostitution and other activities, and sometimes helps to finance them. One notable confluence of such illicit activity is the Department of Puno, where a mountainous route through Bolivia, and ultimately to Chile, was used by an organization called “La Culebra” (the snake), so named because its convoys of vehicles follow the precipitous winding road that has long been a route for contraband. Notable along this route is La Rinconada, one of Peru’s highest cities, where the relative absence of the state, combined with contraband, narcotrafficking, illegal mining, and other activity, has reportedly made it a “no man’s land” of illicit activity. In addition to cocaine, since 2008 the Department of Cajamarca, home territory of President Pedro Castillo, has also become a source of the poppies used to produce heroin. Production is reportedly centered around the town of Jaen, traditionally a coffee-growing region, although processing into heroin latex is reportedly done in Ecuador, just to the north. As noted previously, major foreign narcotrafficking organizations—such as Mexico’s Sinaloa and Jalisco Nueva Generacion (CJNG) cartels, Colombia’s Gulf Clan, and Brazil’s First Capital Command (PCC) and Red Command (CV)—have representatives in Peru to facilitate the production and extraction of cocaine and other products. In general, their presence is limited to major cities and nodes in logistical routes, without integration into the communities in Peru’s interior where criminal operations take place. Different groups play distinct, often shifting roles in various parts of the country. The Sinaloa Cartel, for example, had a presence for some time in Trujillo, and may have operatives near the Bolivian border involved in the export route through the southeast of the country. As noted previously, the dissident Carolina Ramirez and 48th fronts of Colombia’s FARC operate in Peru’s northern border region near the Putumayo River. Colombian and Bolivian criminal groups are reportedly operating near Pichari, central to cocaine production operations in the VRAEM. Brazil’s PCC is reportedly present in the triple frontier region, and Red Command is also said to be present along other parts of the extensive Peru-Brazil border. The killing of 69,000 Peruvians in the long war against Shining Path in the 1980s and 1990s makes the continuing presence of the organization’s remnants a significant issue for security forces. According to Peruvian security experts consulted for this work, the group is now largely restricted to 200-300 adherents, principally in the mountainous areas around Vizcatán. There is, nonetheless, a much larger community integrated into the group’s network, in part through involvement in the coca industry, which is facilitated and protected by Shining Path and is key to local livelihoods. This broader network is key to providing intelligence and logistical support to the group in the area.
The principal Shining Path insurgency was largely defeated in the early 2000s, following the campaign begun under the government of Alberto Fujimori, leading to its split between a more ideologically oriented faction in the Upper Huallaga Valley (UHV), led by Artemio, who remained loyal to the teachings of the group’s founder Abimael Guzman, and a more militarily powerful group tied to cocaine production in the Apurimac, Ene and Mantaro River valley (VRAEM), under the leadership of the Quispe Palomino brothers. In 2012, the government captured Artemio and, in the years that followed, largely wiped out the presence of Shining Path in the UHV. It was also making progress toward combating the Quispe Palomino-led group in the VRAEM. A key advance in this regard was the death of “Raul” in January 2021, reportedly due to wounds suffered in combat with Peruvian government forces the prior October. In the years preceding the pandemic, Shining Path came to be militarily isolated in the VRAEM, conducting occasional terrorist attacks against military bases and patrols there. Beyond its military wing, however, the organization also managed to sustain itself politically, in marginal terms, through its political front MOVADEF, its connections with sympathetic NGOs, and its work mobilizing communities against mining projects. In May 2021, just prior to national elections, Shining Path was accused by the Joint Command of the Armed Forces of ambushing and killing 16 people in a bar in the VRAEM town of San Miguel de Ene. However, Peruvian officials have not officially declared a responsible party for this killing. Illegal mining is a phenomenon that occurs throughout Peru, due to the country’s widespread, abundant mineral deposits. In the country’s national parks and other environmentally protected areas alone, by one estimate, 28 percent of all gold produced in the country is mined illegally. As noted previously, illegal mining in Peru is supported in part by narcotics operations, with the latter helping to finance illegal mining to launder its proceeds. Both activities sometimes use the same routes and organizations to bring supplies into the region and move products out of it. Although illegal mining occurs in virtually all parts of Peru, it has historically been concentrated in Madre de Dios and surrounding departments such as Puno and Loreto, among others. The illicit gains from the industry have also contributed to the movement of persons to the region from other parts of Peru. The population of Madre de Dios, for example, increased 50 percent from 2007 to 2017, with 28,000 people moving to the capital, Puerto Maldonado, alone. In February 2019, in an attempt to crack down on illegal mining in the region, the government launched “Operation Mercury,” sending 1,200 police and 300 military personnel into La Pampa, part of the Tambopata National Reserve. Although illicit production was reduced in the area, Peruvian security experts consulted for this work believe that the illegal miners were simply displaced to other areas, including Ayapata, in the department of Puno. However, following Operation Mercury, due both to COVID-19 and the less aggressive policy of the current government, there has not been a major anti-mining sweep. Nonetheless, in June 2022, the Peruvian Army reinforced police in an operation against illegal mining in the province of Condorcanqui, in the Department of Amazonas.
With respect to the dynamics of the industry, the illegally mined gold and other minerals are generally purchased by consolidators, who use falsified paperwork to create the illusion that the material came from a legitimate Peruvian mine. Such gold is often moved to Lima for sale in the internal market or for export. In some cases, however, the gold is smuggled into Bolivia, where the process of falsifying its origin is perceived to be easier and to require lower bribes. Nor is illegal mining in Peru confined to gold and diamonds. In the Department of La Libertad, on the northern part of Peru’s Pacific coast, Peruvian authorities identified illicit coal mining operations, in which the perpetrators used the port of Salaverry to export their illicit production. The illegal timber industry in Peru is a significant, if often overlooked, illicit complement to narcotrafficking and illegal mining in the remote areas of the interior of the country. In 2020 alone, Peru lost an estimated 203,000 hectares of forest, an increase of 37 percent over the amount lost the prior year. Departments such as Ucayali and Loreto, where the quality of the wood for commercial use is good, have been particular focuses of the illegal timber industry. According to Peruvian security experts consulted for this report, wood cut there is traditionally taken to Pucallpa and shipped out by road, and sometimes by riverine routes through the Brazilian Amazon. The verification of the legitimate origin of timber is complex, and the Peruvian state has very limited resources to check shipments, allowing virtual impunity for the illicit shipment of wood through the zone. An estimated 70 percent of timber shipped out of Peru is on the international red list. As with other types of illicit trade, some government officials at the highest level have been corrupted by those participating in the trade. The governor of Madre de Dios, for example, was accused of accepting bribes from five companies belonging to Chinese national Ji Wu in exchange for a 42,000-hectare concession in a protected area to export wood out of the province. With the collapse of Venezuela’s economy, a substantial portion of the more than 7 million Venezuelans forced to flee their country have gone to Peru. The Peruvian government estimates that there are 1.4 million Venezuelan migrants currently living in the country. The United Nations estimates that there will be 1.45 million by the end of the year. Lima is, by one official estimate, the city with the most Venezuelan immigrants outside Venezuela. The expansion of the Venezuelan population is particularly notable in the exterior suburbs of Lima, including Rimac, Comas in the north, and Ate in the east, but also extends to other cities throughout Peru. The vast majority of Venezuelan migrants have been law-abiding and have been absorbed into Peru’s large informal sector, which is estimated to comprise as much as 70 percent of the Peruvian economy. Their economic integration was also facilitated insofar as their arrival coincided with the take-off of a number of service-based industries in Peru that could accommodate them, including home delivery of food and products, and beauty salons, among others.
Despite such factors facilitating the integration of Venezuelans into Peruvian society, the sheer number of arrivals, amid the challenges of the COVID-19 pandemic and the inflationary pressures on food and fuel prices caused by Russia’s invasion of Ukraine, has created problems. A portion of the arrivals has expanded the country’s illicit economy, including prostitution. The scale of the migration has also meant that, to some degree, Venezuela-based criminal networks have begun to reproduce local chapters in Peru rather than integrating into existing Peruvian ones. Whether or not it is supported by crime statistics, many Peruvians perceive elevated levels of insecurity and a higher prevalence of crimes, such as the use of motorcycles to commit robberies and assaults, and often associate these crimes with Venezuelans. To date, both crime directly tied to the Venezuelan immigrant community and ethnic violence by Peruvians against Venezuelan immigrants have been limited. However, the expanded immigration continues to create social and criminal pressures, particularly in the context of growing economic difficulties, political instability, and mobilization and unrest directed toward the Castillo government.

The Peruvian State Policy Response

The response of the Peruvian state and its institutions to the challenges of transnational organized crime and insecurity has been complicated by ongoing political crises in the country. Additionally, the high rate of turnover in the Peruvian institutions involved in the coordinated whole-of-government response, including multiple changes in the leadership of the Interior and Defense Ministries, has prevented progress in the effort against criminal organizations. In general terms, the posture of the current government has emphasized greater attention to the socioeconomic needs of long-neglected communities, especially in areas such as the VRAEM where such criminal activities are taking place. A number of policies and plans, however, have been continued across governments without significant overt change. Peru’s 2018 law for a National Policy of Frontier Development, for example, focuses on 10 critical frontier areas in which the presence or performance of state institutions is weak and organized crime is operating. Peruvian law has also given the military special jurisdiction and powers in select areas of the country, including the VRAEM. Public Law 1095 established such “emergency zones,” in which the military can operate with special authorities. Public Law 30796, passed in 2018, authorized select military actions in those zones, including direct interdiction of narcotraffickers. In practice, however, even in the VRAEM, the military has generally conducted operations in coordination with police, prosecutors, and other interagency representatives in order to avoid the legal problems that arise when targeted groups denounce them, for both legitimate and cynical reasons. The increasing prevalence of multiple types of criminal activity outside the VRAEM has reportedly led to some discussion of the utility of establishing new areas of emergency jurisdiction in other parts of the country, including Ucayali and Leticia (particularly near the triple frontier). At the time of this writing, however, no concrete action had been taken.
Beyond the question of special jurisdictions, in early June 2022 the Peruvian Congress passed legislation (Public Law 31494) extending a Fujimori-era law authorizing the operation of citizen “defense committees.” The earlier law had formalized legal authority for armed community watch organizations known as “rondas campesinas” to operate in designated conflict zones, and to receive recognition and compensation for their sacrifices in helping the state maintain local control in the fight against the terrorist group Shining Path. The new law, which extended the right of defense committees to operate nationwide, raised concerns among some observers because of ambiguities regarding the types of arms authorized for them, responsibility for their supervision and training, and other matters. Some of those interviewed for this article worried that, under a radical left government, such committees could be used in a manner similar to the armed “colectivos” in Venezuela, as a force loyal to the president that could counter the traditional Armed Forces in the context of a leadership dispute.

Despite the public perception in some quarters that the Peruvian military was the lead government actor in the VRAEM, most major operations have been whole-of-government efforts. One major example was Operation Harpy, conducted during 2018-2019, which focused on acting against areas identified as nodes supporting multiple types of criminal operations. The government intervention involved everything from intelligence, surveillance, and reconnaissance by aerial and other military assets to interventions by multiple government agencies—including DEVIDA, the Agriculture Ministry, and the Ministry for Women and Social Development—to address the economic and social needs of the population.

With respect to resources in the defense sector, pre-pandemic plans called for establishing a “basic defense nucleus,” to be funded by revenues dedicated from particular sources, including a percentage of those generated by the exploitation of Lot 88 and Lot 56 of the Camisea gas project. Such funds, in theory, would cover the modernization and transformation needs of the Armed Forces. Within the context of defense modernization, each of the services has a vision for the role, initiatives, and transformation of its own institution.

The Peruvian Army has an Institutional Transformation Plan for the period 2019-2034, adopted prior to the present government but reviewed, with some adaptations, by people in the current administration, according to those with whom I spoke for this work. It emphasizes “capabilities-based planning” and focuses on four lines of effort: changing the institutional culture, developing the force, modernizing institutional management, and sensitizing internal and external actors. In the short term, the Army has committed to using its engineering capabilities to build roads and bridges to support the development of vulnerable areas, including better connecting them to the rest of the country and making it more economically viable to sell legal products rather than coca. The goal of the initiative is to build 1,083 bridges; the Army has reportedly started building 12, although the future of the project is not yet clear. One key element of the Army’s plan for adapting to the new mission set is the concept of “Amazon Protection Brigades,” reflecting the previously noted emphasis on frontier regions.
Such brigades, in principle, would have the monitoring, mobility, and other capabilities appropriate to controlling the strategic border regions to which they were deployed. The brigades would also work, consistent with the Peruvian Constitution, to identify and act against groups involved in illicit activities within whole-of-government operations. The Amazon Protection Brigades would be outfitted to interact effectively with and support the people of the zone, facilitate the development of the region, and strengthen connections with the government. The concept would thus support a “system for monitoring of the Amazon” (SIVAM) and a “system for the protection of the Amazon” (SIPAM), analogous to the “system for the protection and control of frontier regions” (SISFRON) employed by neighboring Brazil. SIPAM would have the ability to share information with Brazil and other neighbors, as authorized by national leadership, to strengthen control of the frontier region. One driver shaping the development of the Army’s Amazon Protection Brigade concept is the set of lessons learned from the brigade deployed to Madre de Dios in 2019 under Operation Mercury. Another priority is ensuring that brigades operating in the Amazon region are adequately outfitted with the materiel they need for the jungle environments to which they are deployed. Although pursuit of the concept was suspended during the pandemic because of the considerable costs involved, its implementation would in principle involve a significant increase in state presence along Peru’s frontier. Indeed, the Peruvian Army currently has only four battalions to control the entirety of its 1,626-kilometer jungle border with Colombia.

Beyond the Army, the Peruvian Air Force has its own ideas for modernization: the “Quinones” plan. Elements of the Air Force plan include support to the SIVAM/SIPAM concept through an expansion of monitoring assets, including access to satellites beyond the current PeruSat1; airborne ISR assets, possibly including unmanned aerial vehicles; synthetic aperture radar (SAR) and LIDAR on airborne platforms; and the possible centralization of the collected data in a “National Amazon Vigilance” center. The Air Force concept also includes the completion of acquisitions postponed during the pandemic, such as its C-27J Spartan transport aircraft, among others.

For the Peruvian Navy, the response to the aforementioned challenges includes strengthening its system for maintaining control of the country’s internal waterways, including riverine interdiction units employing hovercraft, and counterterrorism bases. The Navy has also added new blue-water and brown-water assets built by its own SIMA shipyards, including two CB-90 interceptor boats. In the short term, just as the Army is building bridges, the Peruvian Navy is working to upgrade river docks so that river barges can access communities to load and offload cargo. Each branch of the Armed Forces is also developing or implementing a concept for using its assets to bring state presence to local populations. The most mature concept among the three services is a Navy program, Platforms for Itinerant Social Action (PIAS), which uses riverboats produced in the Navy’s own SIMA Iquitos shipyards to bring services—including registration for the national identification card through RENIEC, pension payouts, and other forms of banking—to remote communities accessible only by river.
The Navy program has been relatively successful, and a side benefit is that the ministries leveraging the system pay for part of the platforms’ operating costs. From April to May 2022, Navy PIAS boats conducted missions serving Loreto, Ucayali, and Puno. The Air Force has a complementary concept using Twin Otter pontoon aircraft to bring those services to even more remote towns that the Navy’s riverboats cannot reach. The Army, for its part, envisions caravans of trucks bringing such state services to remote towns.

While the Peruvian government and military have a range of innovative programs to help address the challenges described in this paper, it is not clear that money for military modernization and transformation programs, or for initiatives to combat crime and insecurity, will be forthcoming in the difficult post-COVID-19 fiscal environment, given government debt and demands to compensate communities for high food and fuel costs. Operational issues tied to the pandemic and the war in Ukraine have further complicated matters. For the Peruvian Air Force and the other services, the extra hours flying helicopters and fixed-wing aircraft, along with operating other vehicles, have created a bow wave of maintenance requirements at exactly the moment when the resources available to the services are constrained. In addition, Peruvian security officials consulted for this report note that Western sanctions against the Russian arms export organization Rosoboronexport make it difficult for the Peruvian military to pay Russia for necessary depot-level maintenance on its Russian-origin platforms, reducing their operational readiness.

While opinions vary on whether the effective suspension of drug eradication efforts by DEVIDA, or the state’s purchase of Peru’s entire illicit coca crop mentioned earlier, are good ideas, resource-backed alternatives to develop long-neglected communities and strengthen their bonds to the legitimate state have not yet been put on the table in more than a symbolic fashion. At the technical level, yet another daunting challenge is the effort to prevent precursor chemicals from entering drug-producing regions. Materials as ubiquitous as cement and gasoline are on the controlled list, and the items on the list are highly substitutable. Moreover, the set of areas within the vast national territory where such production occurs keeps expanding, and the state has only limited resources to control the flow of these substances. By one estimate, the Peruvian tax authority SUNAT has managed to intercept less than 1 percent of the controlled substances going into the VRAEM alone. Moreover, because SUNAT also collects tax revenue from the sale of goods such as gasoline and cement, the organization faces contradictory incentives: questioning the entry of such materials would cut into the very tax revenue it is responsible for collecting.

Peru’s people and government professionals have a long history of resilience and adaptation to adversity, and they are demonstrating it now in the face of enormous challenges. Nonetheless, it is critical to recognize that expanding criminality—in the form of narcotrafficking, illegal mining, logging, and other activities—continues to erode the effectiveness of Peruvian institutions, as well as the faith of the Peruvian people in democratic, market-oriented solutions to their challenges. Peru is an integral part of Latin America, a region weathering its own severe economic and social difficulties.
It is important to recognize that what happens in Peru will likely have profound repercussions for the trajectory of democracy and stability in the hemisphere.

Evan Ellis is a Latin America Research Professor with the U.S. Army War College Strategic Studies Institute. The views expressed here are strictly his own. The author thanks the Center for Strategic Studies of the Peruvian Army (CEEEP), Leonard Longa, Jose Robles, Martin Arbulu, Luis de la Flor Rivero, Eduardo Zarauz, Mario Caballero Ferrioli, Anibal Cueva Lopez, Josue Meneses, Juan Carlos Liendo, and Jorge Serrano Torres, among others, for their help with this work.
Lower-extremity injuries are the most common sports injuries during practice and competition (1). Sports injuries are often associated with inadequate planning and execution of training sessions, improper joint alignment and movement, and weakness in muscles, tendons, and ligaments. Acute sports injuries primarily affect athletes in power events involving jumping, sprinting, landing, and sharp changes in direction. Injuries prevent athletes from executing full training programs and delay their return to competition, and in extreme cases may have long-term or career-ending consequences. Biomechanical analysis has proven crucial for objective quantification and understanding of injury mechanisms and has been used to reduce injury risks in training and competition (2).

In sports biomechanics, motion analysis systems (MAS) are widely used in attempts to improve the performance and techniques of athletes and to evaluate injury mechanisms (2). Ideally, MAS could be used for individualized, quantitative rehabilitation applications targeting specific deficits in a given athlete. Evaluation of postoperative progress, gait and running patterns, and sports techniques, and detection of motor control and functional deficits, are the main targets of motion-capture-based sports biomechanics research. MAS have been used effectively in risk assessment and injury prevention for athletes (2). MAS also can be used as a biofeedback system to increase the efficiency of neuromuscular training and to lower injury risks (3). MAS can be classified into two general categories: (i) wearable and (ii) nonwearable sensors (4). The Table provides a brief description of the technology, applications, advantages, and disadvantages of wearable and nonwearable MAS.

Biomechanical modeling and computer simulation platforms allow for analysis of the musculoskeletal system during sports movements (5–7). Quantitative information on movement patterns, static and dynamic balance, posture, and motor and sensory control can be obtained using modeling and simulation. Interactions between athletes and the environment also can be simulated using these platforms. In addition to the temporospatial, kinematic (joint angle, angular velocity, and angular acceleration), and kinetic data (ground reaction force, joint moments, mechanical power, and work) that can be obtained from MAS, musculoskeletal simulation programs can provide information on joint contact forces, muscle forces and power, muscle-tendon unit (MTU) velocity and length changes, and activation levels of muscles (5,6) (Fig.).

In this review, we discuss selected injury mechanisms and risk factors of some of the most common lower-limb musculoskeletal injuries, including anterior cruciate ligament (ACL), patellofemoral, and hamstring injuries. The discussion focuses on approaches using kinematic and kinetic information for injury assessment. Furthermore, we evaluate the efficacy of musculoskeletal modeling and dynamic simulation tools in advancing our understanding of the mechanisms related to these injuries.

Lower-Extremity Sports Injuries

ACL injuries are among the most common knee injuries in athletes. About 2 million ACL injuries occur every year around the world (2). The surgical treatment of an ACL injury costs about US $17,000, excluding expenses associated with rehabilitation (8). The primary function of the ACL is to resist anterior translation and medial rotation of the tibia relative to the femur.
ACL loss is typically associated with knee instability, which may lead to further knee injuries and increased risk of knee joint degeneration (2). In vitro loading and in vivo studies indicate that anterior tibial translation causes large ACL strains at low flexion angles (30° and below) (9). ACL injury often has a disruptive effect on an athlete’s career and quality of life (2,8). ACL injury may lead to chronic knee instability, cartilage injury, meniscus tears, and osteoarthritis. Half of all ACL patients suffer from knee pain and dysfunction within 10 to 20 years, regardless of the mode of intervention, whether surgical repair or conservative treatment (10).

ACL injuries can occur due to direct contact with another athlete; however, two-thirds of all ACL tears occur in noncontact situations (11). Noncontact ACL injuries usually occur while executing sudden movements, such as landing from a jump, single-leg landing, cutting maneuvers, sudden decelerations, or combinations of these patterns (10,11). Female athletes landing with excessive hip and knee angles, knee valgus, an internally rotated tibia, and pronated feet are at increased risk of ACL injury (10). Poor trunk control and trunk motion with the body shifted over the weight-bearing leg also have been associated with increased risk of ACL tears (10). In alpine skiers, internal tibial rotation with the knee fully extended or flexed beyond 90° has been shown to cause noncontact ACL injury (10). Contact ACL injuries are usually related to forceful valgus stress and are often accompanied by medial meniscus and medial collateral ligament injury (10).

Three-dimensional (3D) kinematic analysis of landing from a drop vertical jump (DVJ) is one of the most common methods used in ACL injury risk assessment (12). Besides risk assessment, MAS can be used to provide targeted feedback training aimed at altering movement patterns (3). For example, Ford et al. (3) reported that they were able to reduce knee abduction load and improve posture from baseline to posttraining during a DVJ by using kinetic- and kinematic-based real-time biofeedback during repetitive double-leg squats. Tuck jumps also have been used as a screening movement for ACL injuries (12). DVJ and tuck jump assessments are thought to help identify quadriceps dominance, leg dominance, residual injury deficits, trunk dominance, and poor technique (13).

To prevent ACL injuries, understanding the biomechanical mechanisms of ligament overloading is essential. Noncontact ACL injuries are thought to be related to poor neuromuscular control leading to unfavorable biomechanical characteristics (2). For this reason, injury prevention interventions are often targeted at improving neuromuscular control (2). For best outcomes, ACL injury prevention programs should be multicomponent (10). Programs including strengthening, aerobic conditioning, plyometrics, neuromuscular training with feedback related to body mechanics, and landing pattern corrections are the most common rehabilitation and prevention methods. ACL prevention programs focusing on neuromuscular training include proprioception and balance training, symmetry among the lower limbs, and joint alignment feedback during squatting, lunging, cutting, jumping, and landing movements. Strength training prevention programs try to achieve lower-limb symmetry, proper muscle coordination, and accepted muscle strength ratios between the quadriceps and hamstring muscle groups.
Plyometric programs should target proper jumping and landing techniques, and cutting movements should be performed so as to decrease strains on joints and ligaments.

Patellofemoral injuries are mainly due to overuse rather than a single traumatic event. They typically occur in conjunction with anterior knee pain, especially in athletic populations performing repetitive jumping movements. Patellofemoral pain may account for 25% to 40% of all knee problems seen in a sports injury clinic (14). Patellofemoral maltracking is thought to cause patellofemoral pain, arthritis, instability, and focal chondral disease (15). Possible contributors to patellofemoral pain are kinematic abnormalities, abnormal patellar tracking, high patellofemoral joint compressive stresses, increased Q-angles, reduced quadriceps length, malalignment of the lower extremity, quadriceps weakness, muscle/soft tissue tightness, and overuse (16). High knee abduction moments have been implicated in patellofemoral pain and injuries (17). Increased external knee flexion moments have been shown to increase anterior tibial shear forces, patellofemoral joint reaction forces, patellofemoral pain, and patellar tendinopathy (18). Knee extensor and hip abductor strength insufficiencies have been proposed to lead to overuse running injuries, including patellofemoral pain (19). Female runners who have patellofemoral pain often present with hip abductor and extensor weaknesses and an increased range of hip internal rotation (19). In a study of 600 novice recreational runners, high eccentric hip abductor strength was shown to lower the risk of developing patellofemoral pain (20).

A more erect landing pattern also has been associated with an increased risk of acute and overuse patellofemoral injury (21). Landing with a reduced hip flexion angle has been associated with increased quadriceps activation and reduced hip extensor activation, thereby reducing ground reaction forces (21). Increased external knee flexion moment, and the related increase in quadriceps activity in an erect trunk position, lead to high anterior tibial shear forces, patellofemoral joint reaction forces, and patellar tendon forces, which are all associated with ACL injuries, patellofemoral pain, and patellar tendinopathy. Reducing hip flexion during DVJ landing has been shown to increase knee abduction moments, and high knee abduction moments are associated with increased risks of patellofemoral pain and ACL injury (17,21). Hip and trunk movements are thus related to knee joint injury risks (21).

Hamstring injuries are classified as acute or chronic. They are among the most common lower-extremity sports injuries. Hamstring injuries often lead to long-term dysfunction, difficulty returning to play, and recurrent future injuries (22). Hamstring injuries have a high incidence in soccer (23) and sprinting (24). During the 2016 Rio de Janeiro Olympic Games, hamstring injuries were the most common muscle injury across all sports (46.2%) and in sprinters (60%) (25). Previous hamstring injury is the best predictor of a future hamstring injury, followed by increasing age (26). Many athletes sustain a second, and then multiple, hamstring injuries. The deceleration of the swing leg just prior to foot strike during running and sprinting has been identified as the primary phase for Type I acute hamstring strain injuries.
During dancing, slide tackling, and high kicking, hip flexion combined with knee extension may lead to Type II hamstring strains because the hamstring muscles are stretched beyond their limit (27). Biomechanical risk factors associated with hamstring injuries are insufficient lumbopelvic motor control and stability, overuse, and weakness in the hamstring muscles (28). Neuromuscular coordination of the muscles in the lumbopelvic region may influence hamstring function. Lumbopelvic instability can lead to changes in the length-tension relationship of the hamstring muscles, which may increase the risk of hamstring injuries (29). The gluteus maximus and hamstrings act as synergistic muscles for hip extension. If the gluteus maximus is weak, the hamstrings often take over the role of primary hip extensor to compensate for this weakness, which contributes to hamstring overload (29). Horizontal ground reaction force and eccentric peak torque at the end of the swing phase of sprinting have been related to high-speed running hamstring injuries (30). Asymmetries in muscle force between limbs are thought to have a negative effect on sprint mechanics, but they cannot be precisely evaluated with today's biomechanical analysis techniques (31). Therefore, there is a need for approaches beyond MAS to calculate time-dependent contractile properties of muscles, such as force, length, and contraction velocity.

Analyzing Mechanisms of Lower-Limb Injuries through Musculoskeletal Modeling and Simulation

The simplest form of motion analysis is performed by the human eye, but this method is subjective and qualitative, although it is used much more frequently in coaching and rehabilitation assessment than any formal, quantitative method. Two-dimensional (2D) video analysis has been used frequently in the assessment of human movement, since it is simple and often provides much of the required information. However, sports injuries often have components outside a single plane, thus requiring 3D analysis tools. MAS enable the recording of the 3D position and orientation of body segments and ground reaction forces, as well as electromyographic (EMG) signals. Data from MAS can be used as input for modeling and simulation of movements using computational tools such as AnyBody (5) and OpenSim (6). Because in vivo measurement of muscle forces is complicated and requires invasive surgical approaches, it is not practical; it can also be ethically questionable. Similarly, bone-to-bone contact forces in an intact joint, muscle contraction velocity, and muscle length changes are not easily obtained using MAS. Therefore, computational musculoskeletal modeling and simulation environments have been developed in biomechanics research to calculate length changes of the contractile elements within muscles, contraction velocities, force, power, and work for individual muscles, and to estimate in vivo joint contact forces (24).

It has been suggested that lower-limb malalignment, weakness, and poor conditioning are risk factors for ACL injuries (32). Movements in all three anatomical planes of the knee affect ACL stresses and strains (2,11). Knee valgus and varus moments, internal tibial rotation moments, and anterior shear forces are common mechanisms of noncontact ACL injury. Understanding the force-sharing patterns among muscles crossing the knee allows for identification of each muscle’s contribution to ACL loading and injury. One of the first musculoskeletal knee models was developed by Pandy and Shelburne (33).
They designed a 2D, sagittal-plane knee model to predict the forces in the knee ligaments induced by isometric contractions of 11 muscles. They simulated quadriceps leg raises, maximum isometric knee extensions, and maximum isometric knee flexions. They found that hamstring muscle forces produce a posterior shear force on the tibia that reduces ACL strain. However, this ACL-protective effect only worked for knee flexion angles between 15° and 60° and was ineffective outside that range. The main limitations of that initial knee model were that it was 2D, it had a constant patellar ligament length, and the tibial plateau and patellar facet were assumed to be flat.

Noyes et al. (34) and Jonathan et al. (35) performed simulations to identify differences in knee kinematics and kinetics between males and females. They found that women had greater knee extension and valgus moments than men during the landing phase of a stop-jump task. Their models did not include any muscles, which did not allow them to predict muscle forces during the landing tasks. Ali et al. (36) performed simulations of single-leg landings from increasing vertical heights and reaching increasing horizontal distances. They found that increasing quadriceps forces increased the noncontact ACL injury risk, while increasing hamstring and gastrocnemius forces and increasing the ankle plantarflexion angle reduced the risk. The knee was modeled as a revolute joint without an ACL; therefore, the results of that study require further corroboration.

To determine the causes of ACL injuries in female athletes during noncontact impact activities, Kar and Quesada (37) developed a knee joint model that allowed mediolateral translation, adduction-abduction rotation, and internal-external rotation. The ACL was modeled as a passive tissue attaching to the femur and tibia. Knee flexion, valgus, and internal/external rotation moments; knee flexion, valgus, and internal/external rotation angles; ACL strains; and internal forces were calculated. They observed a lack of symmetry between the left and right knees for valgus angles, valgus moments, and muscle activations in female athletes, which are thought to be among the main risk factors for ACL injury. However, the absence of a patellofemoral joint model and the lack of complete EMG recordings (only the rectus femoris, vastus lateralis, biceps femoris, and gastrocnemius were measured) made it difficult to validate the model.

Roldan et al. (38) performed simulations of walking, running, cross-over cutting, sidestep cutting, jumping, and jumping on one leg for 12 young participants. They predicted ACL length, strain, and tensile force, and found that the ACL was subjected to multidirectional loading. Maniar et al. (39) used a 37-degree-of-freedom, full-body musculoskeletal model to investigate the role of the major lower-limb muscles in knee joint loading during unanticipated sidestep cutting maneuvers, a movement considered high risk for ACL injury. They showed that knee-spanning as well as non-knee-spanning muscles contribute considerably to the anteroposterior shear joint force, the frontal-plane knee varus/valgus moment, and the transverse-plane knee internal/external rotation moment during the weight-acceptance phase of the sidestep movement. Specifically, they found that the hamstrings (biceps femoris long head and medial hamstrings), soleus, and gluteal muscles can unload the ACL during the sidestep cutting task.
They concluded that optimizing the function of these muscles should be a high priority in ACL prevention programs. This example illustrates how musculoskeletal modeling can be used to investigate cause-effect relationships between muscle forces and joint loads, which in turn may help improve the effectiveness of preventative and rehabilitative interventions.

MAS have been used effectively in evaluating patellofemoral injury mechanisms. For example, Besier et al. (40) used a musculoskeletal model to predict knee muscle forces during walking and running in a group of patients with patellofemoral pain and in pain-free control subjects. They found that the patients with patellofemoral pain had higher normalized muscle forces (forces normalized to the maximum isometric force of each muscle) than pain-free controls. Muscle forces are the main contributors to joint contact forces; thus the increased forces may have resulted in increased joint contact stresses, which in turn may have caused the pain observed in the patients. Besier et al. (40) also found that females had greater normalized hamstring and gastrocnemius muscle forces during walking and running compared with males, which is in agreement with experimental findings in the literature (41,42). However, the patellofemoral joint contact force was not calculated in this study. Besier et al. (43) combined neuromusculoskeletal and finite element modeling to estimate patellar cartilage stress during stair climbing in patients with patellofemoral pain and compared the results with pain-free controls. They found no significant differences between patients and pain-free controls. However, females displayed greater peak patellar cartilage stress compared with males. This finding may help explain the greater prevalence of patellofemoral pain in females compared with males. Olbrantz et al. (44) determined patellofemoral stresses for drop landings in healthy females. Visual feedback on the ground reaction forces helped the participants reduce patellofemoral joint stresses; therefore, visual feedback may be useful in teaching landing mechanics. Kernozek et al. (45) compared inverse dynamics alone and inverse dynamics coupled with static optimization for determining the quadriceps force used to estimate patellofemoral joint stress. They found that patellofemoral joint stresses obtained from the combination of inverse dynamics and static optimization were higher than those obtained from inverse dynamics alone, indicating that the choice of approach for predicting muscle forces plays a major role in calculating the stresses in patellofemoral joint models. This uncertainty in the selection of the solution approach for the muscle force distribution problem is an important limitation of musculoskeletal simulation programs. Unfortunately, it is still impossible to measure all muscle forces in humans during movement, and therefore model validation is impossible except using approaches in which muscle forces can be measured directly. Such approaches have repeatedly shown that it is impossible at this time to predict individual muscle forces with any certainty across a wide range of movements (46).

Musculoskeletal simulation platforms estimate muscle lengths, velocities, and forces, and thus are well suited to the study of hamstring strain injuries. Experiments on isolated muscles and single fibers have shown that the amount of muscle/fiber strain is directly related to muscle damage (47).
Musculotendon strain is typically defined as the change in length relative to the musculotendon length measured during standing. Estimation of MTU strain therefore provides an opportunity to understand strain-type injuries. Another important parameter calculated using musculoskeletal simulations is muscle work. Positive and negative work occur during concentric and eccentric contractions, respectively; these phases are sometimes referred to as “producing” power and “absorbing” power. Based on musculoskeletal simulations, it is thought that most hamstring strain injuries happen during the early stance phase and the late swing phase of high-speed running (24,48,49). However, whether this is indeed the case is virtually impossible to demonstrate using experimental approaches.

Thelen et al. (48) and Chumanov et al. (50) used musculoskeletal models to understand the function of the human hamstring muscles during high-speed running. They investigated whether the hamstrings are susceptible to injury during the late swing phase of sprinting, when the hamstrings are active and lengthening, or during the stance phase, when contact loads occur. In both studies, the mechanics of the hamstring muscles were studied using a forward dynamics approach during high-speed treadmill running. Thelen et al. (48) found that the peak length of the hamstring MTU occurs during the terminal swing phase; however, the required kinematic data were obtained from only one subject. Chumanov et al. (50) concluded that the large inertial loads during high-speed running make the hamstrings susceptible to injury during the late swing phase. Because load patterns differ between treadmill and overground running, these data may not reflect real sprinting conditions. Schache et al. (24) performed simulations using a 3D musculoskeletal model to understand the mechanics of the hamstring muscles in overground sprinting. They calculated joint moments and MTU forces, strains, velocities, power, and work. They found that peak forces and strains for the hamstring muscles occur during the terminal swing phase, thus creating the highest risk of injury during that phase of sprint running.

Musculoskeletal modeling and simulation tools provide a practical and quantitative way to investigate the mechanics of musculoskeletal sports injuries by examining the relationships between muscle forces and joint loads during movements with a high risk of injury. Arguably, the biggest challenge facing scientists is the inability to measure individual muscle forces experimentally, and thus the inability to validate muscle force predictions obtained theoretically. The most common approach to calculating individual muscle forces has been to formulate an optimization problem (46). However, muscle forces obtained from optimization-based approaches should be evaluated cautiously because the predicted muscle functions are highly sensitive to changes in the mechanical and architectural properties of the MTU, especially the tendon slack length, whose accurate experimental determination is challenging (51). Therefore, solving the muscle force distribution problem, which has its origins in the beginnings of modern biomechanics, remains one of the major challenges facing the musculoskeletal modeling and simulation community.
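To make the strain and work definitions above concrete, the short sketch below shows how MTU strain and the split into positive (concentric) and negative (eccentric) work could be computed from length and force time series exported by a simulation tool. It is a minimal illustration under stated assumptions rather than the workflow used in any of the studies cited above: the data are synthetic and the function names are hypothetical.

# Minimal sketch (not the cited authors' code): computing MTU strain and
# positive/negative (concentric/eccentric) work from time series that a
# musculoskeletal simulation tool could export. All names and values here
# are hypothetical placeholders.
import numpy as np

def mtu_strain(length, standing_length):
    """Strain relative to the MTU length measured during quiet standing."""
    return (np.asarray(length) - standing_length) / standing_length

def mtu_work(time, force, length):
    """Split MTU work into positive (concentric) and negative (eccentric) parts.

    MTU power is force times shortening velocity, so lengthening of a
    loaded MTU (eccentric action) yields negative power, i.e., absorption.
    """
    time = np.asarray(time)
    force = np.asarray(force)
    length = np.asarray(length)
    velocity = np.gradient(length, time)        # lengthening rate (m/s)
    power = -force * velocity                   # positive while shortening
    positive_work = np.trapz(np.clip(power, 0, None), time)
    negative_work = np.trapz(np.clip(power, None, 0), time)
    return positive_work, negative_work

# Example with synthetic data standing in for exported simulation results
t = np.linspace(0.0, 0.2, 201)                  # 200 ms of terminal swing
L = 0.45 + 0.02 * np.sin(2 * np.pi * 2.5 * t)   # MTU length (m), hypothetical
F = 800.0 + 400.0 * np.sin(2 * np.pi * 2.5 * t) # MTU force (N), hypothetical
print("peak strain:", mtu_strain(L, standing_length=0.44).max())
print("work (+ / -):", mtu_work(t, F, L))

In practice, the force series feeding such calculations would itself come from optimization-based muscle force estimates, which carry the uncertainties discussed above.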
Nevertheless, integrating musculoskeletal simulation platforms with injury prevention programs is a useful approach, as it helps identify candidate mechanical events that may cause musculoskeletal injury, and allows for musculoskeletal injuries to be simulated without jeopardizing athletes. For such approaches to be useful, it is necessary to obtain subject-specific musculoskeletal models with accurate anatomical and physiological structures. Improvements in musculoskeletal simulation also may be achieved by performing real-time data analysis and providing real-time feedback to subjects.

The authors declare no conflicts of interest and do not have any financial disclosures.

1. Hootman JM, Dick R, Agel J. Epidemiology of collegiate injuries for 15 sports: summary and recommendations for injury prevention initiatives. J. Athl. Train. 2007; 42:311–9.
2. Bates NA, Myer GD, Shearn JT, Hewett TE. Anterior cruciate ligament biomechanics during robotic and mechanical simulations of physiologic and clinical motion tasks: a systematic review and meta-analysis. Clin. Biomech. 2015; 30:1–13.
3. Ford KR, DiCesare CA, Myer GD, Hewett TE. Real-time biofeedback to target risk of anterior cruciate ligament injury: a technical report for injury prevention and rehabilitation. J. Sport Rehabil. 2015; 24:1–6.
4. Muro-De-La-Herran A, Garcia-Zapirain B, Mendez-Zorrilla A. Gait analysis methods: an overview of wearable and non-wearable systems, highlighting clinical applications. Sensors. 2014; 14:3362–94.
5. Damsgaard M, Rasmussen J, Christensen ST, et al. Analysis of musculoskeletal systems in the AnyBody Modeling System. Simul. Model Pract. Th. 2006; 14:1100–11.
6. Delp SL, Anderson FC, Arnold AS, et al. OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Trans. Biomed. Eng. 2007; 54:1940–50.
7. van den Bogert AJ, Geijtenbeek T, Even-Zohar O, et al. A real-time system for biomechanical analysis of human movement and muscle function. Med. Biol. Eng. Comput. 2013; 51:1069–77.
8. Hewett TE, Lindenfeld TN, Riccobene JV, Noyes FR. The effect of neuromuscular training on the incidence of knee injury in female athletes. A prospective study. Am. J. Sports Med.
9. Marieswaran M, Sikidar A, Goel A, et al. An extended OpenSim knee model for analysis of strains of connective tissues. Biomed. Eng. Online. 2018; 17:42.
10. Acevedo RJ, Rivera-Vega A, Miranda G, Micheo W. Anterior cruciate ligament injury: identification of risk factors and prevention strategies. Curr. Sports Med. Rep. 2014; 13:186–91.
11. Mokhtarzadeh H, Ewing K, Janssen I, et al. The effect of leg dominance and landing height on ACL loading among female athletes. J. Biomech. 2017; 60:181–7.
12. Myer GD, Stroube BW, DiCesare CA, et al. Augmented feedback supports skill transfer and reduces high-risk injury landing mechanics: a double-blind, randomized controlled laboratory study. Am. J. Sports Med. 2013; 41:669–77.
13. Fox AS, Bonacci J, McLean SG, et al. A systematic evaluation of field-based screening methods for the assessment of anterior cruciate ligament (ACL) injury risk. Sports Med. 2016; 46:715–35.
14. Witvrouw E, Callaghan MJ, Stefanik JJ, et al. Patellofemoral pain: consensus statement from the 3rd International Patellofemoral Pain Research Retreat held in Vancouver. Br. J. Sports Med. 2014; 48:411–4.
15. Golant A, Quach T, Rosen J. Patellofemoral instability: diagnosis and management. In: Hamlin M, editor. Current Issues in Sports and Exercise Medicine. Rijeka: InTech; 2013. p. 87–117.
16. Lenhart RL, Thelen DG, Wille CM, et al. Increasing running step rate reduces patellofemoral joint forces. Med. Sci. Sports Exerc. 2014; 46:557–64.
17. Hewett TE, Myer GD, Ford KR, et al. Biomechanical measures of neuromuscular control and valgus loading of the knee predict anterior cruciate ligament injury risk in female athletes: a prospective study. Am. J. Sports Med. 2005; 33:492–501.
18. Malfait B, Dingenen B, Staes F, et al. Differences in neuromuscular activity of quadriceps and hamstrings with respect to different landing patterns in female athletes. Br. J. Sports Med. 2014; 48:631.
19. Luedke LE, Heiderscheit BC, Williams DS, Rauh MJ. Association of isometric strength of hip and knee muscles with injury risk in high school cross country runners. Int. J. Sports Phys. Ther. 2015; 10:868–76.
20. Ramskov D, Barton C, Nielsen RO, Rasmussen S. High eccentric hip abduction strength reduces the risk of developing patellofemoral pain among novice runners initiating a self-structured running program: a 1-year observational study. J. Orthop. Sports Phys. Ther. 2015; 45:153–61.
21. Dingenen B, Malfait B, Vanrenterghem J, et al. The reliability and validity of the measurement of lateral trunk motion in two-dimensional video analysis during unipodal functional screening tests in elite female athletes. Phys. Ther. Sport. 2014; 15:117–23.
22. Askling CM, Tengvar M, Saartok T, Thorstensson A. Acute first-time hamstring strains during high-speed running: a longitudinal study including clinical and magnetic resonance imaging findings. Am. J. Sports Med. 2007; 35:197–206.
23. Ekstrand J, Walden M, Hagglund M. Hamstring injuries have increased by 4% annually in men's professional football, since 2001: a 13-year longitudinal analysis of the UEFA Elite Club Injury Study. Br. J. Sports Med. 2016; 50:731–7.
24. Schache AG, Dorn TW, Blanch PD, et al. Mechanics of the human hamstring muscles during sprinting. Med. Sci. Sports Exerc. 2012; 44:647–58.
25. Crema MD, Jarraya M, Engebretsen L, et al. Imaging-detected acute muscle injuries in athletes participating in the Rio de Janeiro 2016 Summer Olympic Games. Br. J. Sports Med. 2017; 1–6.
26. Liu H, Garrett WE, Moorman CT, Yu B. Injury rate, mechanism, and risk factors of hamstring strain injuries in sports: a review of the literature. J. Sport Health Sci. 2012; 1:92–101.
27. Chu SK, Rho ME. Hamstring injuries in the athlete: diagnosis, treatment, and return to play. Curr. Sports Med. Rep. 2016; 15:184–90.
28. Fredericson M, Moore W, Guillet M, Beaulieu C. High hamstring tendinopathy in runners: meeting the challenges of diagnosis, treatment, and rehabilitation. Phys. Sportsmed. 2005; 33:32–43.
29. Goom TS, Malliaras P, Reiman MP, Purdam CR. Proximal hamstring tendinopathy: clinical aspects of assessment and management. J. Orthop. Sports Phys. Ther. 2016; 46:483–93.
30. Morin JB, Gimenez P, Edouard P, et al. Sprint acceleration mechanics: the major role of hamstrings in horizontal force production. Front. Physiol. 2015; 6:404.
31. Setuain I, Lecumberri P, Izquierdo M. Sprint mechanics return to competition follow-up after hamstring injury on a professional soccer player: a case study with an inertial sensor unit based methodological approach. J. Biomech. 2017; 63:186–91.
32. Shao Q, MacLeod TD, Manal K, Buchanan TS. Estimation of ligament loading and anterior tibial translation in healthy and ACL-deficient knees during gait and the influence of increasing tibial slope using EMG-driven approach. Ann. Biomed. Eng. 2011; 39:110–21.
33. Pandy MG, Shelburne KB. Dependence of cruciate-ligament loading on muscle forces and external load. J. Biomech. 1997; 30:1015–24.
34. Noyes FR, Barber-Westin SD, Fleckenstein C, et al. The drop-jump screening test: difference in lower limb control by gender and effect of neuromuscular training in female athletes. Am. J. Sports Med. 2005; 33:197–207.
35. Jonathan DC, Bing Y, Donald TK, William EG. A comparison of knee kinetics between male and female recreational athletes in stop-jump tasks. Am. J. Sports Med. 2002; 30:261–7.
36. Ali N, Andersen MS, Rasmussen J, et al. The application of musculoskeletal modeling to investigate gender bias in non-contact ACL injury rate during single-leg landings. Comput. Methods Biomech. Biomed. Engin. 2014; 17:1602–16.
37. Kar J, Quesada PM. A musculoskeletal modeling approach for estimating anterior cruciate ligament strains and knee anterior-posterior shear forces in stop-jumps performed by young recreational female athletes. Ann. Biomed. Eng. 2013; 41:338–48.
38. Roldan E, Reeves ND, Cooper G, Andrews K. In vivo mechanical behaviour of the anterior cruciate ligament: a study of six daily and high impact activities. Gait Posture. 2017; 58:201–7.
39. Maniar N, Schache AG, Sritharan P, Opar DA. Non-knee-spanning muscles contribute to tibiofemoral shear as well as valgus and rotational joint reaction moments during unanticipated sidestep cutting. Sci. Rep. 2018; 8:2501.
40. Besier TF, Fredericson M, Gold GE, et al. Knee muscle forces during walking and running in patellofemoral pain patients and pain-free controls. J. Biomech. 2009; 42:898–905.
41. Foss KD, Myer GD, Magnussen RA, Hewett TE. Diagnostic differences for anterior knee pain between sexes in adolescent basketball players. J. Athl. Enhanc. 2014; 3:1814.
42. Petersen W, Rembitzki I, Liebau C. Patellofemoral pain in athletes. Open Access J. Sports Med. 2017; 8:143–54.
43. Besier TF, Pal S, Draper CE, et al. The role of cartilage stress in patellofemoral pain. Med. Sci. Sports Exerc. 2015; 47:2416–22.
44. Olbrantz C, Bergelin J, Asmus J, et al. Effect of post-trial visual feedback and fatigue during drop landings on patellofemoral joint stress in healthy female adults. J. Appl. Biomech. 2017; 27:1–17.
45. Kernozek TW, Vannatta CN, van den Bogert AJ. Comparison of two methods of determining patellofemoral joint stress during dynamic activities. Gait Posture. 2015; 42:218–22.
46. Arslan YZ, Jinha A, Kaya M, Herzog W. Prediction of muscle forces using static optimization for different contractile conditions. J. Mech. Med. Biol. 2013; 13:1350022.
47. Lieber RL, Friden J. Muscle damage is not a function of muscle force but active muscle strain. J. Appl. Physiol. 1993; 74:520–6.
48. Thelen DG, Chumanov ES, Best TM, et al. Simulation of biceps femoris musculotendon mechanics during the swing phase of sprinting. Med. Sci. Sports Exerc. 2005; 37:1931–8.
49. Chumanov ES, Heiderscheit BC, Thelen DG. Hamstring musculotendon dynamics during stance and swing phases of high-speed running. Med. Sci. Sports Exerc. 2011; 43:525–32.
50. Chumanov ES, Heiderscheit BC, Thelen DG. The effect of speed and influence of individual muscles on hamstring mechanics during the swing phase of sprinting. J. Biomech. 2007; 40:3555–62.
51. Ackland DC, Lin YC, Pandy MG. Sensitivity of model predictions of muscle function to changes in moment arms and muscle–tendon properties: a Monte-Carlo analysis. J. Biomech. 2012; 45:1463–71.
Old Medical Equipment

We no longer die in such great numbers because of the marvellous invention of the indoor flushing toilet, adequate food for everyone (for the first time in history), heating, decent housing and contraception. Contraception has ensured that most women have only 2 or 3 children, as opposed to 15 or 20, and so a woman is better able to bear a healthy child if she can take care of herself and her child and not subject herself to numerous pregnancies.

Alcohol abuse is a causative factor in diphtheria, as is underlying disease. ‘Medical World’, 1931, p. 627, stated that the evidence ‘…shows in an interesting and conclusive fashion the definitive effect of school buildings, their construction and sanitation, on the spread of diphtheria. The highest incidence was observed in those schools where sanitation is most deficient and ventilation and lighting the least satisfactory. The brightest and airiest school showed the lowest incidence, and the incidence throughout all the schools placed them in exact order of sanitary virtue. Moreover, the incidence indicated the schools where malnutrition in the children is most conspicuous.’ As we can see from the above, over-crowding and malnutrition played a key role. By the time vaccinations were introduced, most of these killer infectious diseases had become more benign.

The vaccine is also known not to be effective in many cases, and may actually cause the spread of the disease. According to the minutes of the 15th Session (November 20-21, 1975) of the Panel of Review of Bacterial Vaccines and Toxoids with Standards and Potency (data presented by the US Bureau of Biologics and the FDA): ‘For several reasons, diphtheria toxoid, fluid or absorbed, is not as effective an immunizing agent as might be anticipated. Clinical (symptomatic) diphtheria may occur . . . in immunized individuals—even those whose immunization is reported as complete by recommended regimes . . . the permanence of immunity induced by the toxoid . . . is open to question.’

Medics have always known this vaccine doesn’t work and have been writing about it since it was invented. For instance, in the ‘Practitioner’, April 1896, it was written ‘that the serum did not, to any appreciable degree, prevent the extension of the disease to the larynx; all the severe cases died, and the good result in the lighter ones was attributable to the mild type of the epidemic.’ The doctor also states that, at the Hospital of Bligdam, Copenhagen, ‘the mortality from diphtheria remains the same after, as it was before.’ Dr. Joseph Winters published a book, ‘Clinical Observations upon the Use of Anti-Toxin in Diphtheria’, in which he stated that the ‘percentage of mortality is not only misleading, but is absolutely worthless unless accompanied by the actual number of cases reported and the actual number of deaths.’ He also declares that ‘the serum has an injurious effect, and will certainly be abandoned.’ Also, the famous Dr. Hadwen wrote in his booklet, ‘The Anti-Toxin Treatment of Diphtheria: In Theory and Practice’, that in 1895 in Berlin the mortality rate from diphtheria was 15.7% (before any vaccination). By 1900 (after vaccination) this figure had risen to 17.2%. According to Metropolitan Asylums Board Annual Reports, 1895-1910, the death rate from diphtheria in 1910 was 9.80% in those who had received anti-toxin and only 2.99% in those who had not received it. In more recent years there have also been numerous studies of the ‘failure’ of the DPT vaccine to ‘immunize’ against the diseases it was designed to prevent.
As an example, here are some studies:

Journal of Infectious Diseases, vol. 179, April 1999, 915-923, ‘Temporal trends in the population structure of Bordetella pertussis during 1949-1996 in a highly vaccinated population’: ‘Despite the introduction of large-scale pertussis vaccination in 1953 and high vaccination coverage, pertussis is still an endemic disease in The Netherlands, with epidemic outbreaks occurring every 3-5 years.’ One factor that might contribute to this is the ability of pertussis strains to adapt to vaccine-induced immunity, causing new strains of pertussis to re-emerge in this well-vaccinated population.

Vaccination against whooping-cough. Efficacy versus risks (The Lancet, vol. 1, January 29, 1977, pp. 234-7): ‘Calculations based on the mortality of whooping-cough before 1957 predict accurately the subsequent decline and the present low mortality… Incidence [is] unaffected either by small-scale vaccination beginning about 1948 or by nationwide vaccination beginning in 1957… No protection is demonstrable in infants.’

The Lancet, Volume 353, Number 9150, 30 January 1999, ‘Risk of diphtheria among schoolchildren in the Russian Federation in relation to time since last vaccination’: ‘In 1993, the Russian Federation reported 15,229 cases of diphtheria, a 25-fold increase over the 603 cases reported in 1989. The incidence rate among children 7-10 years of age (15·7 per 100,000) was twice that of adults aged 18 years or over (7·9 per 100,000). 81% of the affected children aged 7-10 years had been vaccinated with at least a primary series of diphtheria toxoid, and most had received the first booster recommended to be given 12 months after completion of the primary series.’

Shimoni, Zvi; Dobrousin, Anatoly; Cohen, Jonathan; et al. ‘Tetanus in an Immunised Patient’, British Medical Journal Online (10/16/99), Vol. 319, No. 7216, p. 1049: Israeli researchers present the case of a 34-year-old construction worker who was hospitalized after having a reported epileptic fit and experiencing flu-like symptoms. The patient had a low-grade fever, but was alert and coherent. Any attempts to speak or get up on the second day resulted in attacks of risus sardonicus, opisthotonus, and trismus. The patient was diagnosed with tetanus and given 2000 U of human tetanus immunoglobulin. Further treatment was provided, and after 15 days the patient had stopped taking diazepam and ventilatory support was withdrawn. The man had been fully immunized against tetanus and had received booster shots five and two years before being hospitalized.

Another reason for the fall in infectious disease rates is that diseases are classified according to vaccine status. For instance, tonsillitis and mild diphtheria have identical symptoms: severe sore throat, swollen glands in the neck, bright red tonsils and a green/yellowish or grey discharge at the back of the throat. With severe diphtheria, this discoloured film is impossible to remove and it may block off the airway and cause respiratory problems. Essentially, in milder cases there is no difference between tonsillitis and diphtheria, and vaccinated patients would simply be recorded as having tonsillitis. Also, doctors do not test for diphtheria anymore, so they would not know whether it was present or not, and most doctors do not know what symptoms to look for to diagnose it, so all of this would skew statistics.
Tuberculosis (TB), too, is a sanitation disease and can be caused by vaccination polluting the internal system. The vaccine doesn’t work and never has; the world’s only ever double-blind controlled trial of a vaccine (BCG), conducted in the early 1970s, showed that it did not work. Even so, it took almost 30 years of administering a useless vaccine to people before its use was stopped. The study stated: ‘The efficacy of the TB vaccine is 0%’ (Bulletin of the WHO, Tuberculosis Prevention Trial, 57 (5); 819-827, 1979).

Here are some other studies showing that the TB vaccine causes the disease:

Foster DR. Miliary tuberculosis following intravesical BCG treatment. Br J Radiol. 1997 Apr; 70(832):429. PMID: 9166085.
Foster DR. Miliary tuberculosis: a complication of intravesical BCG treatment. Australas Radiol. 1998 May; 42(2):167-8. PMID: 9599839.
Marrak H, et al. [A case of tuberculous lupus complicating BCG vaccination]. Tunis Med. 1991 Nov; 69(11):651-4. French. PMID: 1808776.
Magnon R, et al. Disseminated cutaneous granulomas from BCG therapy. Arch Dermatol. 1980 Mar; 116(3):355. PMID: 7369757.
Vittori F, et al. [Tuberculosis lupus after BCG vaccination. A rare complication of the vaccination]. Arch Pediatr. 1996 May; 3(5):457-9. French. PMID: 8763716.

According to Dr. Surinder Bakhshi, Consultant in Communicable Diseases: ‘BCG, the most used vaccine in the world since it was introduced more than 50 years ago, has made no difference to TB in countries which rely solely on it to halt its spread. It has never been claimed to prevent TB, but even the evidence of its protectiveness is patchy and historical. And there have been no studies of its effectiveness in the past three decades. It may leave an ugly scar and, indeed, do more harm than good. Further, as TB, with rare exceptions, is largely a disease of the elderly in the Western world, vaccinating children doesn’t make sense. TB in Britain is a legacy of its empire. As long as people from third world countries come and settle here, there cannot be a let-up in its spread. People who come from high prevalence countries will continue to harbour TB germs in their bodies until they die. The World Health Organisation has set its face against vaccination and routine screening. It advocates effective disease management — early diagnosis and supervised treatment — to contain it and avoid its spread to the host community. Vaccination wastes resources, gives false hope and distracts attention from what needs to be done.’ (Letter, the Sunday Times, 15 April 2001).

Isolation worked in the old days, and it is still one of the most effective means of preventing disease. Other diseases like scarlet fever and typhus declined to virtually zero without vaccination. Chickenpox, which is not vaccinated against in this country and in some other countries, is also declining in incidence. A report showed that there are now fewer cases in Wales, where there is no vaccine:

Objective: To examine the epidemiology of chickenpox in Wales from 1986 to 2001.
Design: Descriptive analysis of chickenpox consultations reported by the Welsh general practice sentinel surveillance scheme for infectious diseases, compared with annual shingles consultation rates from the same scheme to exclude reporting fatigue, and data from a general practice morbidity database to validate results.
Setting: A total of 226,884 patients registered with one of 30 volunteer general practices participating in the sentinel surveillance scheme. Main outcome measures: Age-standardised and age-specific incidence of chickenpox. Results: Crude and age-standardised consultation rates for chickenpox declined from 1986 to 2001, with loss of epidemic cycling. Rates remained stable in 0-4 year olds but declined in all older age groups, particularly those aged 5-14 years. Shingles consultation rates remained constant over the same period. Data from the morbidity database displayed similar trends. Conclusion: General practitioner consultation rates for chickenpox are declining in Wales except in pre-school children. These findings are unlikely to be a reporting artefact but may be explained either by an overall decline in transmission or by increased social mixing in those under 5 years old, through formal child care and earlier school entry, and associated increasing rates of mild or subclinical infection in this age group.

Source: Declining incidence of chickenpox in the absence of universal childhood immunisation, Arch Dis Child 2004;89:966-969, doi:10.1136/adc.2002.021618.

Measles is a disease which is mild in most cases. The figures the DOH use are from the third world, not from Western children. They also include children who have pre-existing conditions, those who are malnourished, and those whose measles was treated with anti-pyretics (which are known to cause measles side-effects).

In 1967, Christine Miller from the National Institute for Medical Research, London, published a paper on measles, stating: 'Measles is now the commonest infectious disease of childhood in the UK. It occurs in epidemics in which the total number of cases usually exceeds half a million… there is no doubt that most cases in England today are mild, only last for a short period, are not followed by complications and are rarely fatal.' Also, in the Practitioner, November 1967: 'some physicians consider that measles is so mild a complaint that a major effort at prevention is not justified.'

After the measles vaccine was introduced in 1968, followed by the MMR in 1988, the disease suddenly became more serious. According to the BMA Complete Family Medical Encyclopaedia, 1995: 'measles is a potentially dangerous viral illness… prevention of measles is important because it can have rare but serious complications… it is sometimes fatal in children with impaired immunity.' Clearly, you can see vaccine marketing techniques at play here.

According to the DOH, in their book 'Immunisation Against Infectious Diseases', 'Before 1988 (when the MMR was introduced) more than half the acute measles deaths occurred in previously healthy children who had not been immunised.' They quote the study C. L. Miller, Deaths from measles in England and Wales, 1970-83, British Medical Journal, Vol 290, 9 February 1985. But if you actually read this study (which they are relying on parents not doing), you will find it actually says: 'No attempt was made to establish further clinical details, vaccination history, or social class' — i.e. they didn't know the vaccine status of the individuals. And: '90% of deaths in those previously normal occurred in those over the age of 15 months, when the vaccines are usually given.' These children were probably vaccinated prior to dying of measles, as they were of vaccination age. Nearly half the children who died were 'grossly physically or mentally abnormal or both. The pre-existing conditions in the 126 previously abnormal individuals included cerebral palsy (24), mental retardation (20), Down's syndrome (19) and various congenital abnormalities (22). There were nine children with immune deficiency or immunosuppression, and 19 aged 2-8 with lymphatic leukaemia, a number of them in remission.'

In normal healthy children whose measles has not been treated with anti-pyretics, and who are well nourished, I would say measles is a good thing. Diseases of childhood are there for a reason: they release toxins from the body and they mature the child's developing immune system, which is why they occur in childhood. According to Jayne Donegan, a medical GP, "our immune system had matured and developed purely because of catching the diseases we are trying to eradicate. In my opinion, normal childhood diseases are basically good for us. They teach our immune system what is 'us' and what is foreign. All our childhood diseases were killers when they first came along. They wiped out thousands because we had no natural immunity against them. Diseases infect us and, in turn, strengthen our immune system. I vaccinated both my children with the MMR jab, but this was before I started my research into the problems associated with it."

Often, when a child has had a childhood disease such as chickenpox or measles, they will pass more developmental milestones, such as suddenly beginning to read or learning new words, and any existing problems seem to reverse after a bout of measles (for instance, asthmatics suddenly recover). My own daughter had measles as a toddler and was not ill again for more than a year afterwards, not even with a cold. I believe this was because measles was a strengthening milestone for her.

In the case of tetanus, unlike other childhood diseases, it isn't possible to gain natural immunity: if you've had it once, you can have it again. The body does not produce antibodies to Clostridium tetani. Vaccination is the act of injecting a viral or bacterial substance into the body to make it produce antibodies to that disease. However, since no natural antibodies can be made, there is no possible way that artificial antibodies could be made either. If the disease cannot give you protection, then how can a vaccine? It is likely that any raised antibody level seen after vaccination is the result of adjuvants (toxic heavy metals which are added to increase the body's antibody response). In the case of the tetanus vaccine, this substance is aluminium. Antibodies themselves are not an indication of immunity – this is just one function, which is vastly different from whole-body immunity. According to Vieira et al.: 'This minimal protective antibody level is an arbitrary one and is not a guarantee of security for the individual patient.' (Vieira, B.L.; Dunne, J.W.; Summers, Q. Cephalic tetanus in an immunized patient. Med J Aust. 1986; 145: 156-7).

Here are a number of other studies of disease occurring in the vaccinated:

Bentsi-Enchill AD, et al. Estimates of the effectiveness of a whole-cell pertussis vaccine from an outbreak in an immunized population. Vaccine. 1997 Feb;15(3):301-6. PMID: 9139490; UI: 97227584.

D. C. Christie, et al., "The 1993 Epidemic of Pertussis in Cincinnati: Resurgence of Disease in a Highly Immunized Population of Children," New England Journal of Medicine (July 7, 1994), pp. 16-20.

MMWR, November 05, 1993 / 42(43);840-841,847, Diphtheria Outbreak — Russian Federation, 1990-1993: "Despite high levels of vaccination coverage against diphtheria, an ongoing outbreak of diphtheria has affected parts of the Russian Federation since 1990 (1); as of August 31, 1993, 12,865 cases had been reported. This report summarizes epidemiologic information about this outbreak for January 1990 - August 1993, and is based on reports from public health officials in the Russian Federation."

Shimoni, Zvi; Dobrousin, Anatoly; Cohen, Jonathan; et al., "Tetanus in an Immunised Patient", British Medical Journal Online (10/16/99), Vol. 319, No. 7216, p. 1049.

Rev. Soc. Bras. Med. Trop., vol. 28, no. 4, Oct-Dec 1995, pp. 339-43, "Clinical and epidemiological findings during a measles outbreak occurring in a population with a high vaccination coverage": "The history of previous vaccination [in very early childhood] did not diminish the number of complications of the cases studied. The results of this work show changes in age distribution of measles leading to sizeable outbreaks among teenagers and young adults."

Clin. Invest. Med., vol. 11, no. 4, August 1988, pp. 304-9: "Measles serodiagnosis during an outbreak in a vaccinated community" (from a group of 30 measles sufferers displaying IgM antibodies during the acute phase of illness, 17 had been vaccinated for measles; all 17 experienced measles again, showing IgM antibodies indicating acute infection): "A history of prior vaccination is not always associated with immunity nor with the presence of specific antibodies."

Aaby P, et al. (1990) Measles incidence, vaccine efficacy, and mortality in two urban African areas with high vaccination coverage. J Infect Dis. 1990 Nov;162(5):1043-8. PMID: 2230232; UI: 91037153.

Boulianne N, et al. (1991) [Major measles epidemic in the region of Quebec despite a 99% vaccine coverage]. Can J Public Health. 1991 May-Jun;82(3):189-90. French. PMID: 1884314; UI: 91356447.

All vaccination does is alter the expression of diseases and weaken our immune systems, because we don't have as much opportunity to experience the wild disease. Whilst we have less infectious (self-limiting) illness, we have more chronic (long-term) illness. 1 in 3 people now have cancer. This figure is INSANE. Back in the 18th century, cancer was virtually unheard of. Meningitis was extremely rare; now many more children get it. So many people are puffing on Ventolin inhalers, with allergies to nuts and strawberries and everything else. Many people have strange skin conditions, and there are dozens more auto-immune diseases than there were before vaccination, like HIV, lupus and MS. According to Cambridge University, 1 in 58 children is autistic, and there are more with ADHD. These are poisoning and brain-damage conditions. This amounts to 2% of the population now brain damaged by this! Vaccination has turned us into a nation of weaklings that cannot cope with anything. That is why scientists are trying to invent a 'dirt' vaccine to strengthen children's immune systems.

With regard to the tribespeople dying of diseases: they were white men's diseases, and we went in, invaded their home and their way of life (which they had been living for hundreds of years quite happily) and exposed them to our diseases, which obviously they had not encountered before. With continued exposure, the diseases would become less severe and the tribespeople would not die in great numbers, as is the course of all disease if we are allowed to develop natural immunity.
Personally, I also feel that we in Western society had no right to interfere in the way of life of the tribespeople, and we ought to be ashamed of this aspect of our history. (This article was originally written for a blog on vaccination, in response to some comments from parents.)

Soldier Paralysed By Smallpox Vaccine Fights For His Life and Compensation

The VA won't pay for one Marine's injury. Lance Cpl. Josef Lopez deployed to Iraq in 2006 when he was 20 years old. He enlisted in the Marine Corps fresh out of high school and was enthusiastic about serving to protect the lives of others. He never thought that he would almost lose his own life from something as routine as a vaccination.

"I started having trouble walking," Lopez said. "There was a numbness that started in my feet and gradually worked its way up." After being overseas only nine days, Lopez had trouble with his legs tingling. Literally overnight he was paralyzed. The sensation worked its way up, and soon he couldn't use his arms. "When the morning came everyone woke up and found me laying on the floor, and I wasn't able to move my legs at all," he described.

Doctors in Balad, Iraq, scrambled to solve the medical mystery taking over his body. "The next day they sent him to Germany, and I got a call from the doctor in Germany who told me that they weren't sure if he was going to make it. And they wondered if I could come to Germany and try to get him to respond to me," Joe's mom, Barbara Lopez, said. He was on life support, and doctors had no choice but to put him in a medically induced coma. Barbara and her older son Steven flew to Germany to find out shocking news.

"Well, when I first woke up they said the vaccine caused your body to attack itself," Lopez said. The smallpox vaccine that he got from the Department of Defense just days before deployment was the reason for it all. Lopez had an adverse reaction causing incredible damage. The bottom line: his immune system was eating away at his nervous system, causing the nerves to deteriorate.

The family flew to Bethesda Hospital in Maryland, where Lopez remained in the ICU for three weeks. Doctors argued over what treatment to give him but eventually decided on the controversial IVIG treatment. It slowly worked, bringing him out of the coma. "They told me he might be a vegetable," Barbara said. "They wanted me to watch for brain damage and question him… see what he remembered… see if he was still him." Each day she would question him and have him blink once for yes and twice for no. Days later he started talking. The greatest news was that Lopez remembered who he was and everything about his life.

Despite this good news, he had another huge obstacle to overcome. "One of my doctors came and said 'you'll never be able to walk again.'" However, slowly Lopez started rebuilding his strength. He came back to his hometown of Springfield and endured intense physical therapy. He also spent more than a year in a wheelchair. "No one ever thinks they'll be in a wheelchair, and I've always had that 'it's not going to happen to me' mentality. Now it's the opposite," Lopez said.

Today he can walk, but not very far or for very long. He takes 10 to 15 pills each day and will need to for the rest of his life. The VA paid for his medical bills, but there is more to the story. The Lopez family had thousands of dollars in non-medical bills — and the VA refuses to pay.
Barbara had to leave her job for several months to care for her son, and they had to install a wheelchair lift in their home. There are also other expenses he will have for the rest of his life that Barbara worries about. After speaking with other Marines and their families, she heard about Traumatic Servicemembers Group Life Insurance, or TSGLI, compensation. TSGLI is a government program designed to compensate injured service members for injuries from traumatic events. To the dismay of his family, Lopez was denied coverage.

The VA Department of Insurance Chairman, Stephen Wurtz, said Lopez was denied because his injury didn't come from a traumatic combat event, but from a needle. He also said the government can't afford to cover injuries from vaccines. "Any additional claims under TSGLI are paid by the government, and the government would now be paying that many more claims during a period of conflict," Wurtz said. Lopez said what upset him even more is that the TSGLI bill was amended after he applied, specifically to disqualify vaccine injuries from compensation.

The Lopezes visited Missouri U.S. Sen. Claire McCaskill to explain their fight for fair compensation. McCaskill is now working on a bill that would extend coverage to service members injured by vaccines. "It would give him the same coverage, and frankly I really think we need to take care of this young man and his family," Sen. McCaskill said in a satellite interview with KOMU. "He was willing to take care of us."

Through all of this, Lopez is not just sitting around. He now races a specially made hand cycle in the Marine Corps Marathon each October to raise money for other Marine families. His mom, Barbara, participates in the 10K. Reflecting on his journey to recovery over the past three years, he said the hardest part is the unknown: "Just the not knowing. Not knowing if I would ever walk again." The love and support of his mother Barbara was constant through all that unknown. "She was the first person I saw when I woke up, and she was there every day," he said.

Source: KOMU HD, reported by Laura Nichols, 3 November 2009.

1948 Pediatrics Article Questioning 'Considerable Risk' from DPT Vaccine

"Inspection of the records of the Children's Hospital for the past ten years has disclosed 15 instances in which children developed acute cerebral symptoms within a period of hours after the administration of pertussis vaccine. The children varied between 5 and 18 months in age and, in so far as it is possible to judge children of this age range, were developing normally according to histories supplied by their parents. None had convulsions previously."

"Twelve of the children were boys and three were girls, a sex difference also encountered in relation to other substances, such as lead, causing gross injury to the developing nervous system. At inoculation time, the children varied in age between 5 and 18 months. Developmental data were obtained in detail on all but two of the children, whose mothers simply stated that they had developed normally. Reference to the case histories showed that such objective activities as sitting, walking, and talking had appeared in many of the children prior to the inoculations; and the regressions or failure of further development occurred after the encephalopathies [any disease or symptoms of disease referable to disorders of the brain] in several instances.
In so far as it was possible to judge, none of the children were defective prior to their acute illness."

"In common with many other biologic materials used parenterally [not by mouth], an important risk of encephalopathy attends the use of prophylactic pertussis vaccine. The mechanism whereby the encephalopathy is produced is not elucidated by the present study. The universal use of such vaccine is warranted only if it can be shown to be effective in preventing encephalopathy or death from pertussis itself in large groups of children. If avoidance of the inconvenience of the average attack of pertussis is all that is expected, the risk seems considerable. Efforts to diminish the hazard by modification of the vaccine or new methods of administration seem indicated."

Source: Randolph K. Byers, M.D. and Frederic C. Moll, M.D., Encephalopathies Following Prophylactic Pertussis Vaccine, Pediatrics, April 1948, Vol. 1, No. 4, pp. 437-456.

Prescription Drugs are Second Leading Cause of Accidental Death in the USA

In a study published in the May issue of the American Journal of Preventive Medicine, researchers came to a surprising conclusion: hospitalizations for poisoning by prescription medication have increased by 65 percent from 1999 to 2006. The rate of unintentional poisoning from prescription opioids, sedatives and tranquilizers in the U.S. has surpassed motor vehicle crashes as the leading cause of unintentional injury death. Simply put, this means that poisoning from prescription drugs is now the second leading cause of unintentional injury death in the U.S.

"Deaths and hospitalizations associated with prescription drug misuse have reached epidemic proportions," said the study's lead author, Jeffrey H. Coben, MD, of the West Virginia University School of Medicine. "It is essential that health care providers, pharmacists, insurance providers, state and federal agencies, and the general public all work together to address this crisis. Prescription medications are just as powerful and dangerous as other notorious street drugs, and we need to ensure people are aware of these dangers and that treatment services are available for those with substance abuse problems."

Dr. Coben states that while the data show a fast-growing problem, there is an urgent need for more in-depth research on these hospitalizations. The study was able to determine whether the poisonings were diagnosed as intentional, unintentional or undetermined. While the majority of hospitalized poisonings are classified as unintentional, notable increases were also shown for intentional overdoses associated with these drugs, most likely reflecting their widespread availability in community settings.

© URL: http://www.vaccineriskawareness.com/Did-Vaccines-Really-Halt-Killer-Diseases-
Buddhism | Christian Life and Worship | Catholicism | Islam

The Buddhism of Tibet
By His Holiness the Dalai Lama, trans. and edited by Jeffrey Hopkins

As Tibetan Buddhism matures in the West, the release of more substantive and esoteric literature becomes timely. With this intermediate audience in mind, and with the hope that "even a few people for a short period could have some internal peace," the Dalai Lama here offers two of his original writings alongside two ancient texts. His works "The Buddhism of Tibet" and "The Key to the Middle Way" comprise roughly half of the book. They reveal some of the secondary and more cerebral layers of Tibetan Buddhist study, going well beyond the primary embrace of the Four Noble Truths. Emptiness, "the final mode of being of all phenomena," is a recurring motif throughout the volume. The second half includes "Precious Garland of Advice for the King," 500 quatrains written by Nagarjuna, who lived 400 years after the Buddha. Written to advise the Indian king Satavahana, it has specific counsel on ruling, plus more general material on emptiness and compassion. Although theoretically softened by a caveat that it applies to both sexes, the prohibition against desiring women, who are described in part as "a source of excrement, urine and vomit," among other similarly vitriolic phrases, will be hard for many to stomach. The book concludes with an exposition of a relatively short poem, "Song of the Four Mindfulnesses," by Kaysang Gyatso, the Seventh Dalai Lama. No doubt a book of merit, this volume is most appropriate for serious students who are ready to wade through fairly heavy intellectual currents.

On Forgiveness: How Can We Forgive the Unforgivable?
By Richard Holloway

Former Bishop of Edinburgh and a divinity professor in the City of London, Holloway offers deceptively simple reflections on the always compelling, ever-relevant subject of forgiveness. Refreshingly free from the extremes of rant and piety, the cosmopolitan cleric instead summons an eclectic and humanistic range of provocative thinkers, from Derrida to Nietzsche, and a generous sampling of contemporary British poetry. The prolific author of "Godless Morality" and 23 other books is fond of attention-grabbing Derridean paradox: unforgivability is necessary in order to make forgiveness possible. We can practice what religion signifies without the form of religion, yielding "religion without religion," which can also be seen in the phenomenon of people who are "spiritual but not religious." Although the book originated as lectures at Glasgow University, Holloway's point is hardly academic. He always applies his reasoning to real and historical examples: the Middle East, Nazi-hunter Simon Wiesenthal, South Africa's Truth and Reconciliation Commission. Holloway offers subtle guidance, the kind that is easiest to accept and therefore most effective. He is not imperative: forgiveness is a choice so hard that there is room for the unforgiving, and magnanimity and generosity may work as substitutes for forgiveness in the political arena. This slender book is a reminder that if enormous error is all too human, so too must be the capacity to forgive it and thereby transcend it and, as the author puts it, "reclaim the future." This is an estimable contribution to the growing current literature on forgiveness.

The Mosaic of Christian Belief: Twenty Centuries of Unity and Diversity
By Roger E. Olson

In this ambitious book, Olson delineates from an evangelical perspective what is and is not authentic Christian belief. Chapters feature such topics as the Bible, God, Jesus and the Church, beginning with an overview of orthodox belief about the topic, citing Scripture, the Church Fathers and noted Christian writers throughout history. Olson then devotes a section to heretical beliefs, and follows this with an examination of diverse non-heretical beliefs among orthodox Christians (including Roman Catholics, Eastern Orthodox believers, and most Protestants). He ends each chapter envisioning greater unity among Christians, despite honest disagreements. While marred by some redundancy and excess verbiage, Olson's writing renders many complex theological concepts surprisingly accessible. And in his attempts to separate heresy from right belief, he acknowledges that those who adhere to beliefs he labels erroneous are usually sincere Christians (he cites wrong belief among fundamentalists, charismatics, liberal Christians and various sects). Attempting to mediate among the myriad dogmas, doctrines and opinions of orthodox Christians is no easy task, and Olson's descriptions of certain right beliefs and heresies (such as the psychological analogy for the Trinity and modalism) are sometimes barely distinguishable. Despite these and other small logical problems, Olson's book contributes greatly to contemporary evangelicalism, not only in its impressive survey of many theologies but also in its use of "The Great Tradition" of Christian belief as an essential guide to orthodoxy.

The Great Worship Awakening: Singing a New Song in the Postmodern Church
By Robb Redman

Robb Redman, the former director of the D.Min. program at Fuller Theological Seminary, argues that American Christianity is in the midst of a "worship awakening." In "The Great Worship Awakening: Singing a New Song in the Postmodern Church," Redman argues that the shift is best seen in the popularity of seeker services, the "praise and worship" movement, the growth of the Christian worship industry and the renaissance of liturgical traditions. Redman traces demographic changes to understand why worship has taken on new significance, pointing to increasing ethnic diversity and inter-generational dynamics. This is not the most sophisticated book on changes in American Christian worship practices; recent contributions by Robert Webber and Leonard Sweet have hit the mark more forcefully. However, it is a competent and resourceful overview.

In God's Time: The Bible and the Future
By Craig C. Hill

Eschatology is a hot subject. "Prophecy" is a regular feature in supermarket tabloids, and it recently made the cover of Time magazine. Interest in the subject fuels countless water-cooler conversations, myriad "end-times" Web sites and the whole Left Behind publishing juggernaut. But in many quarters of the Christian community, that same intrigue over "what happens at the end of the story" is balanced by bewilderment, even embarrassment, over what the Bible and its various interpreters say. For these Christians in particular (and less so for inerrantist end-time enthusiasts), this book is a welcome, comprehensive and accessible guide to exploring what the Bible says about the future.
Hill, a professor of New Testament at Wesley Theological Seminary in Washington, D.C., wants to show "that the idea of God's triumph is central to Christian faith and that a working knowledge of the concept is essential to an informed reading of the Bible, particularly the New Testament." He begins with a primer on biblical interpretation, then addresses prophecy throughout history, the biblical books of Daniel and Revelation, Jesus' expectations for the future and what those expectations were for the earliest Christians. The book closes with an appendix on the Rapture. It all reads like a good lecture, punctuated with summary lists, illustrative diagrams and funny asides (though some readers may find the latter off-putting). Like a well-prepared and practiced professor, Hill leads his readers through this difficult material with ease and expertise, sensitivity and a sense of humor.

The Vatican's Women: Female Influence in the Holy See
By Paul Hoffman

How do women influence the inner workings of the male-dominated Roman Catholic Church when the door to priesthood remains closed to them? To find out women's impact on the Vatican, Hoffman, a former Rome bureau chief for The New York Times, conducted interviews with more than 40 representatives of the church's distaff side and did historical research aided by two of the Vatican's women professionals. He learned that although they are barred from many official positions of authority, women have managed to exercise persuasive power at the Vatican into the present day. Indeed, some of Hoffman's strongest examples are of women who wielded great power while assuming traditional and even subservient roles. Chief among these was Mother Pascalina, a Bavarian nun who spent more than 40 years attending to the personal needs of Pope Pius XII, and who had so much influence that she was referred to by some as "the popess." This book is as much about the Vatican as it is about women and is full of interesting, gossipy tidbits drawn from the author's years of working and living in Rome. Although such details make for interesting reading and will certainly attract readers with a taste for scandal and rumor, their inclusion detracts from what otherwise might have been a more serious study of the role of women in the church.

By Roger Housden

Housden ("Ten Poems to Change Your Life") adds a mystical twist to a young man's search for love in this spare, allegorical tale of a Greek icon painter living in 1950s Italy who makes a pilgrimage to the tomb of 13th-century Sufi poet Jelaluddin Rumi. Aesthete Georgiou loves art and beauty but is frustrated by his inability to find a worthy love in his native Florence. Dazzled by a book of Rumi's poems, Georgiou hopes that a journey to the poet's tomb at Konya, Turkey, will teach him something about love. His meandering trip takes him to a monastery in Meteora, Greece; to the shrine of Delphi, where he has a vision of the Virgin Mary, who poses a riddle that holds the key to his quest; and to other sites in Greece and Turkey, where he meets Orthodox priests, mystics, sheikhs and dervishes who teach him that romance between a man and a woman is not the only kind of love there is, and that accumulating knowledge doesn't necessarily help one to experience or understand love. Housden is a graceful storyteller, and he offers an offbeat look at the relationship between divine love and earthly romantic love. Unfortunately, he tends to slip into treacly, bland affirmations ("All is already well. Listen to what your heart tells you, and you cannot stray far"), and the tidy, happily-ever-after ending belies some of the complicated questions about spirituality and self-knowledge that are raised through Georgiou's quest.

By Tahar Ben Jelloun

From the author of "Racism Explained to My Daughter" comes this slender but ambitious treatise designed to make sense of Islam to young Western readers in the wake of September 11. Jelloun organized his book in a simple question-and-answer format, imagining the questions to come from his own children. The format and largely simple language make it a quick read and easily digestible. Jelloun tells the tale of Muhammad and the origins of Islam, then dwells largely on Islam's Golden Age by emphasizing its openness to the knowledge of other cultures and by enumerating some of its own contributions to world science and philosophy. Jelloun tries not to whitewash Islamic history by mentioning the violent wars that characterized its expansion, but in doing so he raises more questions than he answers. He explains terrorists as "bad men" who are "not real Muslims." He also defines a range of terms from "humility" and "decadence" to "martyr" and "jihad," but often uses fairly sophisticated vocabulary in his explanations (which could be a translation issue from the original French: Jelloun is a Moroccan-born Muslim transplanted to France). For this reason, the book would work better for adult readers looking for simple ways to answer their children's questions. Although billed as being of interest to the general reader, it will certainly be frustrating to those who want more than a superficial overview of Islam. This book only whets the appetite.

Behind the Burqa: Our Life in Afghanistan and How We Escaped to Freedom
By "Sulima" and "Hala," as told to Batya Swift Yasgur

This memoir from two sisters who fled Afghanistan 20 years apart distinguishes itself from the spate of books about women in similar circumstances by the sheer breadth of its coverage. Through these first-hand accounts of oppression, abuse and downright misery, readers come to understand that the much-maligned Taliban only picked up where the Mujihaddin left off in curtailing women's rights. In fact, as "Sulima" and "Hala"'s mother points out, "[The Taliban] is better than the Mujihaddin. The laws are strict and harsh, but at least we know what to expect. They're not just randomly breaking into houses and killing people.... If we keep all the rules, then we will be safe." The sisters' tales of domestic abuse and other now-familiar yet hair-raising injustices may crystallize the turbulent historical timeline, but it seems that their individual voices have been muted in translation.

The New Encyclopedia of Judaism

In 1989, The Encyclopedia of Judaism set a high standard for Jewish reference works and was selected as an Outstanding Reference Book by the American Library Association. But in The New Encyclopedia of Judaism, a good work has been made even better; the original thousand entries have been updated and 250 new ones added. As with the first edition, the one-volume resource has hundreds of illustrations, contributions from scholars from all major branches of Judaism and a strong annotated bibliography.
The Secret: Unlocking the Source of Joy and Fulfillment
By Michael Berg

Popular kabbalist and author of "The Way," Berg is back with another spiritual how-to, a guidebook for applying the principles of Jewish mysticism to everyday life. The book opens with a powerful tale: Josef and Rebecca, a poor couple, sell their only cow to provide a feast for a famous rabbi, and they are eventually rewarded with unfathomable riches. The cow, says Berg, symbolizes the unfulfilled life many people are willing to accept, and the riches symbolize the joy we can find if we shape our lives around the titular "Secret." What is this secret? It is a saying that Berg's teacher, the late Rav Ashlag, learned from his own teacher, years ago in Jerusalem: "The only way to achieve true joy and fulfillment is by becoming a being of sharing." That idea is hardly innovative, of course, but Berg's meditations on the life of generosity are stirring, and the kabbalistic and midrashic tales he employs movingly illustrate the fruits of sharing. The book is a bit skimpy, though, and padded with self-help standards. There's a list of six tips to aid those trying to live out The Secret, including the unabashed suggestion to "Read this book often" and, since The Secret is about sharing, to share the book with others. Most readers will breeze through the text in an hour. One wishes that Berg had followed his own advice and shared even more with his audience.

Kabbalah Month by Month: A Year of Spiritual Practice and Personal Transformation
By Mindy Ribner

It was inevitable that amid the explosion of Kabbalah-related books in the last five years, some would be done devotional-style, aimed to bring the puzzles of Judaism's most mystical text to readers in digestible, bite-sized daily doses. But in "Kabbalah Month by Month: A Year of Spiritual Practice and Personal Transformation," Mindy Ribner gives readers a fairly thoughtful and perceptive interpretation. What sets this book apart from most others that explore Kabbalah for the hoi polloi is that it is firmly and stubbornly rooted in Jewish tradition. Some may not agree with Ribner's explanations of some Jewish traditions, or with her investigations of astrology, but they will appreciate the fact that she has not sought to divorce Kabbalah from its religious roots. The book is beautifully designed in a square paperback format.

The Witches' Craft: The Roots of Witchcraft & Magical Transformation
By Raven Grimassi

Grimassi ("Wiccan Mysteries"; "Encyclopedia of Wicca and Witchcraft"), a practicing Wiccan for nearly three decades, has trained in at least four schools of The Craft. Here he makes a powerful case for returning to the ancient traditions that he believes have fallen by the wayside in the last 20 years. He complains that "many modern books on Witchcraft will describe a technique or method of performing a spell or ritual, and then go on to inform the reader that almost everything described is optional, and that the prescribed items can easily be substituted with other things." His approach is different from that of the "modern" books he chastises: more traditional, more rooted. His substantive research in the first third of the book traces the written history of witches over the past 2,500 years. Having thus established his traditional credentials, Grimassi then turns to the tools, techniques and tried-and-true methods, such as instruments, states of consciousness, implements, and the like.
Much more than the standard gallop through the sabbats (seasonal observances), Grimassi delves deftly into more cerebral issues such as "right and left brain consciousness" and "myth and metaphor." He also manages to put into perspective more provocative avenues such as "sex magic" and "ritual flagellation." Grimassi offers a well-researched history of ancient magickal techniques, including some that have been preserved orally and appear here in print for the first time. Everyone who cares deeply about the witchcraft tradition will want this impressive work.

The New Revelations: A Conversation with God
By Neale Donald Walsch

Like Walsch's earlier bestsellers, this New Age volume purports to be a record of a conversation with, and revelation from, God. The overarching argument is simple, indeed a bit tautological: humanity has reached a turning point. As evidenced by September 11, something about our world isn't working. We do not, however, need to tinker with our economics or politics; rather, we need to retool our beliefs about those systems that govern society. This is key, Walsch insists, because "beliefs create behaviors." Fond of numbered lists, Walsch gives us "Five Steps to Peace," which include our admitting that there is something we don't understand about "God and... Life, the understanding of which could change everything." Walsch also offers Nine New Revelations, some of which don't seem all that new, including the idea that God has always communicated directly with people, or that God would never punish us with eternal damnation. The Steps to Peace and the New Revelations all point toward the peaceful, humane spirituality that Walsch wants readers to cultivate, a spirituality that focuses not on morals but on "functionality." Because Walsch is ecumenical, drawing on Robert Schuller, Harold Kushner, the Bhagavad Gita and Shakespeare, seekers from many spiritual backgrounds will find his book inviting, and the dialogue format makes for easy reading. For those who are interested in a spiritual approach to global upheaval, these "New Revelations" will prove inspiring and companionable.

History and Current Affairs

Joseph Smith: A Penguin Life
By Robert V. Remini

This accessible biography by Remini, a historian whose three-volume biography of Andrew Jackson won the National Book Award, makes a fine contribution to the field of Mormon studies. Remini has an engaging writing style, as when he suggests that Joseph Smith's future father-in-law "roared his refusal" to his daughter's marrying the young upstart, or that the prophet's friend Sidney Rigdon was a "fire-breathing Mormon." The book is strongest when it contextualizes the Mormon story in the larger fabric of U.S. history in the first half of the 19th century. Not surprisingly, Remini speaks eloquently about the sea changes that characterized the Jacksonian age, and explores how Smith and early Mormonism benefited from and were also hurt by the spiritual and economic cataclysms of the era. Remini helps readers understand how specific events in Mormon history were related to larger trends and affairs; for example, he situates the collapse of the Mormon-owned Kirtland Bank in the larger rubric of the financial panic of 1837. Remini states at the outset that this biography does not seek to pass judgment on the authenticity of Smith's prophetic calling, and with only a few exceptions, he successfully holds that neutral stance.
There are several scattered and minor errors: there was no subtitle on the first edition of the Book of Mormon, as Remini claims, and Brigham Young is believed to have had 27 wives for "time and all eternity," not 20. But these are insignificant problems in a book noteworthy for its balanced tone and thorough scholarship.

The Science of Harry Potter: How Magic Really Works
By Roger Highfield

British science writer Highfield ("The Private Lives of Albert Einstein") takes on J.K. Rowling's Harry Potter series "to show how many elements of her books can be found in and explained by modern science." The result is an intelligent though odd attempt to straddle the imaginative worlds of science and fiction. Using Harry's magical world to "help illuminate rather than undermine science," Highfield splits the book in two: the first half is a "secret scientific study" of everything that goes on at Potter's Hogwarts school, the second half an endeavor to show the origins of the "magical thinking" found in the books, whether expressed in "myth, legend, witchcraft or monsters." This division is an obvious attempt to duplicate the method and the popularity of his "Physics of Christmas." Here, however, as intriguing as the concept is, the author isn't quite able to engage or entertain as he explores the ways in which Harry's beloved game of Quidditch resembles the 16th-century Mesoamerican game Nahualtlachti, or how, by using Aztec psychotropic mushrooms, Mexican peyote cactus and other types of mind-altering fungi, even Muggles can experience their own magic. While interesting, the book reads more like an obsessive Ph.D. dissertation that fails to satisfy either of its target audiences: the children who read the books or the parents who buy them and often read them themselves.

Romancing the Ordinary: A Year of Simple Splendor
By Sarah Ban Breathnach

"Women were created to experience, interpret, revel in, and unravel the mysteries of Life through their senses," declares Breathnach ("Simple Abundance"), insisting that women have two extra senses: those of "knowing" and "wonder." Breathnach then works her way through the calendar year, offering tips to women to free their "essensual" selves. Much of the advice (e.g., make your own scented sachets and foot lotions) is rote. At times, Breathnach herself criticizes the commercialization of the sensual. For example, the bath is a "waterfall of delight" that's being "snuffed out by the banality of the self-enhancement poseurs." Homemade is the best way to go, says Breathnach, and even the hours spent preparing various potions are a gift in themselves. On the other hand, she heartily endorses purchasing gourmet fruits, "essensual sets of underwear," silk sheets and other luxuries, since these items also pleasure the senses. Fortunately, the object of all this pampering isn't just to attract a mate. Breathnach urges women to stop focusing on finding a partner and to "learn the sacred soulcraft of self-nurture." While exhortations to "become your own courtesan" may seem narcissistic, the message will strike a welcome chord among women who've learned that sacrificing for others isn't always worth it. At times Breathnach is unintentionally funny: she recommends taking Beckett plays to the laundromat to "try on for size being an intellectual." But her occasionally pretentious use of quotations and capitalized references to the Spirit and the Divinity shouldn't stop her fans from pampering their Inner Goddess.
Everyday Karma: A Renowned Psychic Shows You How to Change Your Life by Changing Your Karma
By Carmen Harra

Telling readers to "forget everything you think you know about psychics," self-described "metaphysical intuitive" Harra combines her gift for communicating with the "Invisible World" (honed after nearly drowning in a Romanian river at age five) with lessons she's gleaned as a licensed hypnotherapist, astrologist, numerologist and Kabbalah expert. Juxtaposing her casual, friendly writing style and positive tone against a highly structured framework, Harra, who's consulted with presidents, Hollywood celebrities and European royalty, provides thorough explanations of the levels of karma (past, present and accumulated), the types of karma (individual, family and group) and a 10-step Karmic Resolution Method meant to develop the self-awareness needed to "project your own happy future." The tips on communicating with one's spirit guide, rules for a happy marriage and exercises meant to clarify one's true purpose, attract a soul mate and eliminate addictive patterns from one's life do lend an interactive feeling. However, Harra's heavy reliance on anecdotal evidence and her predictions section (where she shows that, for example, she knew "Clinton would almost be ousted from office" and later "disappear into civilian life") come off as self-congratulatory and may leave skeptics wondering if she's only the latest to capitalize on the ever-growing American appetite for occultism.

Thou Shall Prosper: Ten Commandments for Making Money
By Daniel Lapin

Combining pop psychology, snippets of Jewish lore, homespun homilies and quotations from a daunting variety of sources, Lapin offers a manual on how to make money by succeeding in business. Lapin, a super-conservative Orthodox rabbi and talk show host, insists that everyone is in business "unless you are a Supreme Court judge [sic] or a tenured university professor." (Excluding professors fits with Lapin's devaluation of them, since he believes that higher education doesn't prepare for "real life.") The material is organized into 10 chapters of advice, beginning with the notion that "business is moral, noble and worthy," and ending with the admonition not to retire. Throughout, Lapin urges behavior that will produce more business and, thus, more money. For example, he unabashedly recommends attending synagogue or church services in order to make business contacts. Similarly, he encourages giving charity to an organization whose members "are in the best position to advance your business objectives." Lapin justifies these dubious actions by interpreting the fifth commandment ("Honor thy father and thy mother") as a mandate to form relationships for business purposes. His struggle to ground his financial advice in Jewish tradition is abandoned as he expounds an anti-environmentalist stance. He digresses still further from both Judaism and wealth-building when he gives tips for public speaking based on what his father taught him (talking without a manuscript or notes and not grasping the rostrum). Lapin's book may appeal to patient readers who share his conservative political and economic views.

Jewish Holidays All Year Round
By Ilene Cooper, illus. by Elivia Savadier

Written by Booklist's children's book editor and abundantly illustrated with Savadier's ("The Uninvited Guest and Other Jewish Holiday Tales") playful watercolors as well as color photographs of art and artifacts from New York City's Jewish Museum, this book strikes a tone both child-friendly and respectful. As the author thoughtfully explores the history and significance of the holidays and festivals of the Jewish year, she succinctly links these to traditions and rituals. For example, after explaining Sukkot and identifying it as an inspiration for the Pilgrims' first Thanksgiving, she writes, "Today, each sukkah, fragile... open to the sky and the rain, reminds us that we eternally owe our thanks to God. The sukkah symbolizes our need for God's shelter." Instructions for holiday activities (crafts, recipes, etc.) are also included. Almost every page features at least one illustration, from a view of an 18th-century Galician Torah crown to a contemporary photo of a Harlem congregation blowing long, twisty shofars to a 1910 Rosh Hashanah "card" carved on a walrus tusk in Nome, Alaska. Savadier's vignettes, mostly of busy, happy people, underscore the liveliness of Jewish faith.

The Legend of Saint Christopher
By Margaret Hodges, illus. by Richard Jesse Watson

Hodges ("Saint George and the Dragon") masterfully adapts William Caxton's 15th-century translation of The Golden Legend to serve up a saint's tale with strong folkloric elements. Offero, a strong man who works as a bearer (porter), wants to serve the greatest king in the world. When he discovers that the king fears the devil, Offero concludes the devil is mightier, and serves him until he learns that the devil fears Christ. Offero's search to serve Christ teaches him that his own inner grace is even stronger than his physical prowess. Watson's ("The High Rise Glorious Skittle Skat Roarious Sky Pie Angel Food Cake") artwork achieves a startling blend of the ancient and the timeless, the archetypal and the particular: he paints narrative elements in representational oils, reserving the backgrounds for abstract patterns that hint at the mythic roots of legend.

By Lesley Harker

Though adapted from the same source, Harker's ("Twinkle, Twinkle Little Star") journey on the ark is stylistically worlds away from Jerry Pinkney's. In this chipper version, young Annie scampers throughout the sailing vessel to tend to the animals, per the directions of "Grandaddy Noah" and other relatives. All the while, Annie hopes to find some peace and quiet amid the clatter and confusion on board. Annie's wish is closer to being granted as the rain finally ends and everyone dances for joy at the sight of the rainbow ("I knew it was a present... just for me!" Annie exults, not naming the donor of the "present"). The rainbow, like the raindrops on the jacket, is laminated; the shiny surfaces, along with the cheery watercolors of bright-eyed critters creating a rumpus and the sweet countenances of Annie and her family, are sure to prove inviting to very young readers.

By Jerry Pinkney

Pinkney ("The Ugly Duckling") unfurls some of the finest illustrations of his career in this lush, not-to-be-missed version of the perennially popular Bible story. In unfettered, graceful prose, Pinkney relates Noah's faithful work in building the ship and gathering the animals.
He enhances the smoothly rendered plot with simple, evocative detail ("The strong wooden beams embraced the clouds"; "[The animals] followed him into the ark, and God closed the door behind them"). The watercolor-and-pencil animal tableaux, delicately hued and vigorously executed, are stunning in their artistry. Realistically drawn creatures flap, leap, lumber and slither about under the watchful, hopeful eyes of a kind-faced, gray-bearded Noah and his family. These crowded but never chaotic scenes, as well as those depicting whales in implicit comparison with the ark, will help children grasp the magnitude of the story's message of faith, stewardship and obedience.

I Love You, Christopher Bear

A trio of titles in the hand-size, paper-over-board Tales of Christopher Bear series by Stephanie Jeffs, illustrated by Jacqui Thomas, links a boy's love for his teddy bear to divine love. In "I Love You, Christopher Bear," Joe loses Christopher Bear, but the joyful reunion prompts Mom to explain just how much God loves everyone; "A Bad Day for Christopher Bear!" occasions a similar lesson about seeking forgiveness from individuals and from God; and in "Christopher Bear's First Christmas," the bear follows along as Joe's preschool class enacts a simple Nativity pageant.

The Littlest Candlesticks
By Sylvia Rouss

On the heels of The Littlest Pair's winning the 2002 National Jewish Book Award in the picture-book category comes "The Littlest Candlesticks," another title in the Littlest series by Sylvia Rouss, illustrated by Holly Hannon. Couplets describe a girl's wish for her own Sabbath candlesticks like those of her mother and her older sisters ("'Abby, just wait 'til you're a little older./ You'll have candlesticks,' her mother gently told her"). Abby's patience pays off the next week in preschool (her class is girls only, with all of them in modest dresses), when each girl receives a pair of "see-through glass" candlesticks to paint. Hannon compensates for uneven draftsmanship with radiantly colored compositions that almost shine with the warmth of Abby's family.

On Morning Wings
By Reeve Lindbergh

Often breathtaking watercolor-and-collage illustrations by Holly Meade illuminate "On Morning Wings" by Reeve Lindbergh, a verse adaptation of Psalm 139 that previously appeared in Lindbergh's "In Every Tiny Grain of Sand: A Child's Book of Prayers and Praise" (2000). Meade's visual story line shows four children spending an idyllic summer day together outdoors. The striking use of light, reflected in water or filtered by a campfire, conveys the natural reverence of the text with seeming spontaneity.

I'm Gonna Like Me: Letting Off a Little Self-Esteem
By Jamie Lee Curtis, illus. by Laura Cornell

The dynamic duo behind "Today I Feel Silly" returns for another lively, emotionally reassuring picture book. This time out, Curtis looks to the source of what makes children (of all ages) feel comfortable in their own skin. Cornell pictures the perky rhymes being delivered by a pair of young protagonists confident enough to shake off embarrassment and to feel proud (though not overly so) of personal achievements. "I'm gonna like me when I'm called on to stand. I know all my letters like the back of my hand," announces a girl dressed in plaid, flowers and a cape. "I'm gonna like me when my answer is wrong, like thinking my ruler was ten inches long," says the boy as both youngsters stand before the school blackboard.
Ultimately, the author concludes, "I'm gonna like me 'cause I'm loved and I know it, and liking myself is the best way to show it." Though the message is both catchy and effective in its delivery, it's Cornell's humorous, detailed, ink-and-watercolor illustrations that give this volume true pizzazz. She hits just the right note of fear-tinged bravura with the characters' vividly imagined antics. Their portraits, embellished with all manner of costumes and fun accessories (a fire-extinguisher-like toothpaste tube, an Esther Williams lunchbox, a "Dalmatian Kit" for polka-dotting pets), will delight the audience. Ages 4-8.

The Festival of Bones/El Festival de las Calaveras: The Little-Bitty Book for the Day of the Dead
By Luis San Vicente, trans. by John William Byrd and Bobby Byrd

Originally published in Mexico, this bilingual primer on the Day of the Dead may be best suited to those already familiar with the festival. For the uninitiated, an afterword explains that Mexicans celebrate el día de los muertos from October 31 to November 2. Feasts, music and visits to gravesites help the living honor the dead, who are believed by many to return for the festivities. San Vicente, a respected Mexico City artist, creates charming skeletal characters; their playfulness accentuates the holiday's merriment. Rendered in a style reminiscent of scratchboard illustrations, his bony subjects dance in top hats and ride bicycles amid a fetchingly surreal world. For "Pascual's skeleton sings a song/ Without any pain or dread/ Although half a leg is really gone/ Still a flower sits upon his head," he pictures the skeletal fellow balanced on one leg atop a crescent moon, with a wide-eyed owl as his audience. But for norteños, the macabre content may not translate well. The text abruptly begins with a deceased guitarist crooning, "The skeletons are going along the road to the graveyard.... These are the dead. How happy they are." Readers may be further confused by a shifting narrative voice and a non sequitur conclusion. But for those immersed in Mexican culture, this neatly designed square volume offers a fresh look at a familiar subject. Ideas on how to honor the dead and recipes for the holiday feast are included. Ages 4-10.
Of all school-level factors related to student learning and achievement, the quality of the student’s teacher is the most important. Yet the teacher evaluation systems in use in American school districts historically have been unable to differentiate teachers who improve student learning from lower-performing educators. Many have failed to differentiate teachers at all. A 2009 study by The New Teacher Project found that “satisfactory” or “unsatisfactory” were the only ratings available to school administrators in many districts, and that more than 99 percent of teachers in those districts were deemed satisfactory. Improving methods for evaluating teacher performance and using the resulting information to change teaching practice has been a focus of recent reform efforts. According to the National Council on Teacher Quality, 32 states and the District of Columbia altered their teacher-evaluation policies in recent years to incorporate multiple methods of assessing and evaluating teachers, spurred in part by the federal Race to the Top competition. And each of the 43 states to which the Obama administration has granted a waiver from No Child Left Behind is now in the process of implementing evaluation systems that employ multiple measures of classroom performance, including student achievement data. These systems differentiate among three or more performance levels and are used to inform personnel decisions. While much of the debate over these new evaluation systems centers on their use of student test-score data to measure a teacher’s “value added” to student learning, classroom observations remain critically important. Most teachers work in grades or subjects in which standardized tests are not administered and therefore will not have a value-added score. Even when students’ test scores are available, classroom observations may capture dimensions of teachers’ performance that are important but not reflected in those scores. Finally, value-added scores on their own do not tell teachers how they might improve their practice and thereby raise student achievement. We examine a unique intervention in Chicago Public Schools (CPS) to uncover the causal impact on school performance of an evaluation system based on highly structured classroom observations of teacher practice. An iterative process of observation and conferencing focused on improving lesson planning and preparation, the classroom environment, and instructional techniques should drive positive changes in teacher practice. As teachers refine their skills and learn how best to respond to their students’ learning needs, student performance should improve. Recent evidence from Cincinnati Public Schools confirms that providing midcareer teachers with evaluative feedback based on the Danielson Framework for Teaching observation system can promote student-achievement growth in math, both during the school year in which the teacher is evaluated and in the years after evaluation (see “Can Teacher Evaluation Improve Teaching?” research, Fall 2012). The Excellence in Teaching Project (EITP), a teacher evaluation system also based on the Danielson framework, was piloted in Chicago Public Schools beginning in the fall of 2008. Leveraging the random assignment of schools to the EITP intervention, we find large effects of the intervention on school reading performance. The program had the largest impact in low-poverty and high-achieving schools but little or no impact in less-advantaged schools. 
These effects seem to be a consequence not only of the design and focus of the EITP pilot but also of the extent to which CPS supported the implementation of the new evaluation process. Similar benefits were not observed in schools implementing the same program the following year with less support from the central office, suggesting the importance of sustained support for teacher evaluation reform to translate into improved student performance.

Teacher Evaluation in Chicago Public Schools

For nearly four decades prior to the introduction of the EITP, CPS teachers were observed and evaluated based on a checklist of 19 classroom practices. During a classroom observation of a teacher's lesson, the observer (usually the principal, but sometimes an assistant principal) would check one of three boxes (Strength, Weakness, Does Not Apply) next to each of the practices.

The checklist approach was unpopular among both teachers and principals. High-performing teachers believed that the system did not provide meaningful feedback on their instruction, and only 39 percent of veteran principals agreed that the checklist allowed them to adequately address teacher underperformance. The system provided no formal guidance or rubric to either party on what constituted strong or weak performance on any of the checklist practices. Moreover, there was no direct correspondence between a teacher's ratings on the checklist and the overall evaluation rating, which determined teacher tenure.

Overall evaluations also showed little differentiation among teachers. Nearly all teachers (93 percent) received ratings of "Superior" or "Excellent" (the top two categories in a four-tier rating system). Meanwhile, two-thirds of CPS schools failed to meet state proficiency standards under Illinois's accountability system, and Chicago remained among the nation's lowest-performing urban districts on the National Assessment of Educational Progress.

Dissatisfaction with the evaluation system led CPS leadership under then CEO Arne Duncan to develop the EITP in partnership with the Chicago Teachers Union (CTU), beginning in 2006. A joint CPS-CTU committee met over two years to negotiate the details of the evaluation pilot. In the summer of 2008, just prior to implementation, the district and union disagreed on whether the ratings teachers received under the EITP would be used for teacher accountability purposes, such as tenure decisions. The district nonetheless moved forward with the pilot to implement formative, ongoing assessments for teachers that would provide them with structured feedback on their instructional practices.

The classroom observation process had occurred formally (if superficially) twice a year for all teachers, irrespective of tenure status, as part of the district-union teacher contract. While maintaining this schedule, the EITP changed the process significantly. First, principals and teachers engaged in a brief (15- to 20-minute) pre-observation conference during which they reviewed the rubric. The conference also gave the teacher an opportunity to share any information about the classroom with the principal, such as issues with individual students or specific areas of practice about which the teacher wanted feedback. During the 30- to 60-minute lesson that followed, the principal was to take detailed notes about what the teacher and students were doing.
After the observation, the principal was expected to match classroom observation notes to the Danielson framework rubric in order to rate teacher performance in 10 areas of instructional practice. The Danielson framework delineates four levels of performance (Unsatisfactory, Basic, Proficient, and Distinguished) across four domains, of which the EITP focused on two: Classroom Environment and Instruction. Within a week of the observation, the principal and teacher conducted a postobservation conference. During the conference, the principal shared evidence from the classroom observation, as well as the Danielson ratings, with the teacher. Principals and teachers were expected to discuss any areas of disagreement in the ratings, with a specific focus on ways to improve the teacher’s instructional practice and, ultimately, student achievement. The EITP represented a dramatic shift in the way teacher evaluation had occurred in CPS, and central-office staff sought to develop principals’ capacity to conduct these classroom observations and conferences. In 2008–09, the first year of implementation, 44 participating principals received approximately 50 hours of training and support, with three days of initial training during the summer and follow-up sessions throughout the school year. The initial training covered the use of the Danielson framework to rate teaching practice, methods for collecting evidence, and best practices for conducting classroom observations. The follow-up sessions consisted of seven monthly meetings in which principals brought materials from classroom observations that they had conducted and engaged in small-group discussion with their colleagues. Four additional half-day trainings during the school year provided an opportunity for principals to update their understanding and use of the rubric for evaluating teachers. Principals also received additional one-on-one support from the CPS central office. During this first year of implementation, central-office administrators responsible for EITP engaged with principals through weekly e-mails, providing consistent reminders to principals about observation deadlines and other EITP requirements. Principals could request time with EITP central-office staff to review their teacher ratings as a means of calibrating their observation sessions to EITP central office expectations. Finally, principals received individualized ratings reports from the University of Chicago Consortium on Chicago School Research (CCSR). The CCSR reports provided principals with a comparison of their own teacher ratings to ratings generated by trained external observers of the same teachers. These reports supported principals in making adjustments to their own ratings of teacher performance. Forty-four schools participated in EITP in the first year. These 44 Cohort 1 schools continued to take part in the second year, and an additional 48 schools (Cohort 2) implemented EITP for the first time. The extent of principal training and support for the 48 new schools differed dramatically from Cohort 1, however. In their first year, Cohort 2 principals received just two days of initial training on how to collect evidence on teaching practices during classroom observations and how to rate these practices using the Danielson framework. Cohort 2 principals also received significantly less district-level support throughout the school year than Cohort 1 principals had in their first year of implementation. 
Although Cohort 2 principals could request technical assistance from EITP central-office staff, these principals did not have access to the ongoing technical support and oversight that Cohort 1 principals received. Indeed, Cohort 1 principals received the same level of support and ongoing training in their second year of implementation as did the Cohort 2 principals in their first year. Data for this study consist of CPS administrative, personnel, and test-score information from the 2005–06 school year to the 2010–11 school year. As the intervention occurred at the school level, we used school-level averages of all student-level and teacher-level data records. Administrative data collected on students include basic demographic information, such as gender and race/ethnicity as well as information on poverty level and students with special education needs. We also use school-level characteristics such as student enrollment levels and the distribution of race/ethnicity, gender, students qualifying for free or reduced-price lunch, and special education students, which were generated from student-level CPS data files. Teacher personnel data include teacher-level data about tenure status, years of experience in the district, demographic information, level of education attained, and certification status. Our primary outcome variable is student achievement as measured by performance on standardized tests. Students in Illinois take the Illinois Standards Achievement Test (ISAT) in reading and mathematics in grades 3 through 8, usually in March of each school year. We use a school-level measure that has been standardized across the sample of schools included in our analysis, taking into account the various grade configurations in different schools. We take advantage of a unique randomized control trial design. CPS, in partnership with CCSR, selected four elementary-school instructional areas (of the 17 elementary zones in the city at the time) that would implement the EITP. These areas are located in different parts of the city, and they serve different populations of CPS students with varying needs. Within each of the four instructional areas, elementary schools were randomly selected to participate in the first year of EITP (Cohort 1). Schools with first-year principals and those slated for closure in the spring of 2009 were excluded from the sample prior to randomization. Schools that were not selected to participate in the first year implemented the program the following school year (Cohort 2). The randomization process resulted in 44 Cohort 1 schools and 49 Cohort 2 schools (the latter number fell to 48 due to the unexpected closure of one school). Our data indicate that the randomization procedure worked as desired. On average, the Cohort 1 and Cohort 2 schools were very similar in terms of both student and teacher characteristics as well as school working conditions. We measure the initial impact of the EITP on a school’s math and reading achievement by comparing student achievement between the Cohort 1 and Cohort 2 schools at the end of the 2008–09 school year, during which Cohort 1 schools implemented the EITP but Cohort 2 schools did not. To increase the precision of our results, we control for student enrollment, the proportion of female students, the proportion of students by race/ethnicity, the proportion of special education students, the proportion of students receiving free or reduced-price lunch, and average prior achievement. 
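In code, that comparison reduces to regressing a standardized school-level outcome on a treatment indicator and the listed controls. The short sketch below is purely illustrative and is not the authors' analysis: the data file, column names, and the simple robust-standard-error choice are assumptions made for the example, and the published study's specification is more involved.

```python
# Hypothetical sketch of the school-level comparison described above.
# The file and column names are invented for illustration; the actual study's
# specification and standard-error treatment are more elaborate.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("schools_2008_09.csv")  # one row per school (hypothetical file)

# Standardize the school-level reading score across the analysis sample.
schools["reading_z"] = (
    (schools["reading_score"] - schools["reading_score"].mean())
    / schools["reading_score"].std()
)

# cohort1 = 1 for schools randomly assigned to implement EITP in 2008-09,
# 0 for schools assigned to begin the following year.
model = smf.ols(
    "reading_z ~ cohort1 + enrollment + pct_female + pct_black + pct_hispanic"
    " + pct_sped + pct_frl + prior_reading_z",
    data=schools,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["cohort1"])  # treatment-control difference, in standard deviations
```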
In its first year, the EITP increased student achievement in the Cohort 1 schools by 5.4 percent of a standard deviation in math and 9.9 percent of a standard deviation in reading, relative to the Cohort 2 schools. The effect on reading scores is statistically significant, but the effect on math scores is not. The reading effect is significant not just statistically but also in size. A 10 percent of a standard deviation effect size is equivalent to closing one-quarter to one-half of the performance gap between weak schools (those at the 10th percentile of the achievement distribution) and average schools (those at the 50th percentile) in large urban districts like Chicago.

In the second year, as Cohort 2 schools implemented EITP, we might have expected the difference between the two groups of schools to shrink or even disappear as the Cohort 2 schools benefited from the same program that had a positive impact on Cohort 1 schools the prior year. We find, however, that the difference in student achievement between the two groups of schools persisted over time. Figure 1 shows that the math effect of 5.4 percent increased to 8 percent in year two and was 6.6 percent in year three. For reading, the first-year effect of 9.9 percent grew to 11.5 and 12 percent in the second and third years.

More-advantaged schools—those with fewer students eligible for free or reduced-price lunch and those with higher initial student achievement—benefited the most from the program. On average across all schools, 83 percent of students received free or reduced-price lunch. The effect of EITP at lower-poverty schools—those with just 60 percent of students receiving free or reduced-price lunch—was double the effect for the full sample, at more than 20 percent of a standard deviation (see Figure 2). On the other end of the distribution, there was no detectable EITP effect at higher-poverty schools. This differential effect persisted into the second and third years of the intervention, after Cohort 2 schools implemented the program. We find similar differential effects on math by school poverty level, with a statistically significant positive effect for lower-poverty schools, even though the average effect across all schools was not distinguishable from zero. We also find evidence that schools with higher student achievement before the start of the EITP benefited the most from the program. We do not, however, find any consistent evidence that the effect of the program was related to the racial composition or share of special education students in the school.

Explaining Differential Impacts

Why did the EITP only improve achievement in certain schools and only in the first year? The EITP represented a dramatic departure from the existing teacher-evaluation system in Chicago and relied on the human capital that already existed in the schools to generate improvements in school performance. Its efficacy depended on principals' capacity to provide targeted instructional guidance, teachers' ability to respond to the instructional feedback in a manner that generated improvements in student achievement, and the extent of district-level support and training for principals who were primarily responsible for implementing the new system.

The pilot forced principals to make significant changes to how they conducted classroom observations and conferences with teachers. The intervention itself was time-intensive for the principals, who were required to participate in extensive training pre-intervention.
Principals also had to rate teachers on the new evaluation framework, and work with them in pre- and postobservation conferences to develop strategies to improve their instructional practice. On average, CPS principals reported that they spend about six hours per teacher during each formal observation cycle. The principals’ role evolved from pure evaluation to a dual role in which, by incorporating instructional coaching, the principal served as both evaluator and formative assessor of a teacher’s instructional practice. It seems reasonable to expect that more-able principals could make this transition more effectively than less-able principals. A very similar argument can be made for the demands that the new evaluation process placed on teachers. More-capable teachers are likely more able to incorporate principal feedback and assessment into their instructional practice. Our results indicate that while the pilot evaluation system led to large short-term, positive effects on school reading performance, these effects were concentrated in schools that, on average, served higher-achieving and less-disadvantaged students. For high-poverty schools, the effect of the pilot is basically zero. We suspect that this finding is the result of the unequal allocation of principals and teachers across schools as well as additional demands placed on teachers and principals in more disadvantaged schools, which may impede their abilities to implement these types of reforms. For example, if higher-quality principals and teachers are concentrated in higher-achieving, lower-poverty schools, it should not be surprising that a program that relies on high-quality principals and teachers has larger effects in these schools. In addition, less-advantaged schools with, on average, harder-to-serve student populations, may require additional supports for these kinds of interventions to generate improvements in student learning similar to those of more-advantaged schools. School-level implementation is critically important for the success of any new educational intervention. As discussed above, the extent of principal training and district-level support varied dramatically for Cohort 1 and Cohort 2 schools. We speculate that district support also played an important role in explaining the large positive effect for Cohort 1 and the null effect for Cohort 2. Leadership turnover in CPS led to a decline in institutional and district support for EITP between the first and second years of the pilot program. When the pilot started in Chicago in 2008, few people were paying attention to teacher evaluation issues. Through its two years of planning work with the teachers union, the district leadership demonstrated its commitment to the program and to evaluating teachers in a way that was systematic and fair. When introducing the pilot program for the first time to principals, the chief education officer, Barbara Eason-Watkins, herself a former principal, personally delivered the message that the EITP pilot would be the district’s cornerstone in improving the quality of teaching and instruction and increasing student learning. Not long into the pilot’s first year of implementation, however, CEO Arne Duncan left CPS to serve as U.S. secretary of education. While Duncan’s arrival in Washington in early 2009 was followed by a national emphasis on refining teacher evaluation systems, his departure from Chicago marked a move away from the rigorous year one implementation of the EITP pilot. 
The incoming administration deemphasized the teacher evaluation pilot and instead focused on performance monitoring, data usage, and accountability. When the EITP expanded to include the Cohort 2 schools in 2009, doubling the number of schools implementing the pilot, the budget for district support of the program did not increase. This limited the amount of support the central office could provide to principals, which we suspect reduced the fidelity with which the pilot was implemented and in turn weakened the intervention. CPS central-office staff responsible for EITP oversight and school-level implementation indicated that there was a significant decrease in both CPS staff and budgetary resources dedicated to Cohort 2 principals in comparison to the level of support Cohort 1 principals received during their first year of program participation. As a result, Cohort 2 principals received fewer hours of training as well as different types of training than Cohort 1 principals did in their first year of system implementation. Finally, in the summer of 2010, prior to the third year of implementation, CPS ended EITP. Just before this announcement, half of the principals in the district were set to receive Danielson framework training, but the district canceled it. As a result, there is little evidence that the Danielson framework was used in any systematic way in year three. Our results are consistent with strong implementation in year one and weak or no implementation in subsequent years. The implementation of the EITP pilot in Chicago occurred prior to the nationwide shift toward more rigorous teacher-evaluation systems. These new teacher-evaluation systems incorporate multiple measures of teacher performance, including value-added metrics based on standardized tests or teacher-designed assessments and, in some cases, student feedback on teacher performance and peer evaluations. Unlike these systems, the EITP was focused solely on classroom observation. What is notable about the version of teacher evaluation systems currently evolving in districts throughout the nation, however, is the continued emphasis on classroom observations, with many systems employing the same observation tool used in CPS under the EITP initiative. A number of important issues remain unexamined. Specifically, what are the mechanisms through which the evaluation pilot produced improvements in school performance? For example, did the teacher evaluation pilot produce changes in instructional climate or alter the nature of within-school teacher collaboration? To what extent does a performance evaluation system alter teacher mobility and turnover patterns? Answers to these and other questions will shed light on how teacher evaluation systems might improve instructional practice as well as their implications for the teacher labor market. Chicago’s decision to abandon the EITP pilot, after supporting it fully for just one year, illustrates the difficulty urban school districts have in sustaining large-scale policy changes that require ongoing support from the central office and significant investment on the part of educators in specific schools. In this case, the program had considerable promise. In the fall of 2012, CPS launched a new teacher-evaluation program in order to comply with the Illinois Performance Evaluation Reform Act, which requires that indicators of student growth be a “significant factor” in teacher evaluation. 
Called REACH (Recognizing Educators Advancing Chicago Students), the new program also uses the Danielson framework for the classroom observation component.

Matthew Steinberg is assistant professor of education at the University of Pennsylvania Graduate School of Education. Lauren Sartain is research analyst at the Consortium on Chicago School Research at the University of Chicago. This article is based on a forthcoming study in Education Finance and Policy.

This article appeared in the Winter 2015 issue of Education Next. Suggested citation format: Steinberg, M.P., and Sartain, L. (2015). Does Better Observation Make Better Teachers? New evidence from a teacher evaluation pilot in Chicago. Education Next, 15(1), 70-76.
There is one task that is so fundamental and so obvious that it may escape your attention as a skill that you need to develop at all: picking something to photograph. You can't just point your telescope to a random place in the sky and expect to capture a beautiful image. You will need to pick a suitable target object (and find it).

An ideal astrophotography subject has a balance of three characteristics:
- It is located at a position in the sky that you can see from your observing site at a time of night that works for you, and at an altitude that minimizes distortion;
- It is a suitable size for your gear and your skill level;
- It is a suitable brightness for your gear, available time, and skill level.

Let's go through those criteria in a little more detail now.

Position in the sky

Your selected target object should be located at a good position in the sky on the night, and at the time, that you will be doing your imaging. "Good position in the sky" means that it is a good balance of three things:
- Freedom from obstructions;
- High altitude;
- Position relative to the Meridian.

Freedom from obstructions

This, of course, is the most important factor for choosing a target object, even though it is not very technically sophisticated. The object has to be visible from your location at the time of night you are available to observe. It can't be below the tree line, behind a building, or in the glare of streetlights. There is no point in considering all the various more technical aspects of target selection, described below, if your target isn't in your line of sight.

You need to consider more than just physical obstructions, too. Also consider whether anything in your line of sight may produce bad atmospheric disturbances that should be avoided. For example, an object located directly over a chimney is in a bad location. Even though it may not be obstructed by being behind the chimney, warm air currents rising from the chimney may cause turbulence and ruin the stability of the air between you and the target.

A planetarium program in which you can customize the displayed horizon to represent your actual location is very helpful for making such plans. In some planetarium programs you can "block out" various altitudes around your location, helping you understand what is obscured by your local horizon. On some programs, such as TheSkyX, you can actually insert a 360° panoramic photo taken from your observing location, so the computer display represents the actual surroundings you will see. This is extremely valuable for planning what objects will be visible, and when.

If you frequently image from the same location, and especially if you have a permanent observing site, it is well worth taking the time to do an accurate survey of the horizon all around your site, and to enter this information in your planetarium program of choice so you can plan your observing against the horizon and obstacles that you will face.

The next most important factor in an ideal object location is its altitude above the horizon. The higher an object's altitude, the less atmosphere you have to look through to see it. And, the less atmosphere you are looking through, the clearer your view. Aside from allowing you to breathe and protecting you from radiation, the atmosphere is not your friend. It is filled with dust and moisture, and is a roiling mass of turbulent air currents, which is what causes stars to "twinkle".
That's why professional observatories are on mountain tops, and the Hubble telescope is in outer space; it's to minimize the amount of atmosphere they look through. Let's look at some simple diagrams to understand this. So, all else being the same, objects that are higher in altitude will involve looking through less atmosphere, and you'll get a clearer view of them. And, for a given object, observing it at the time of year and the time of night when it is the highest above the horizon will give you the best view.

Position relative to the Meridian

All else being the same, choose an object that lets you stay on one side of the Meridian during your imaging session. What on earth does that mean? What is the Meridian? And why should it matter which side I'm on, or whether I'm on both sides?

The Meridian is an imaginary vertical line in the sky running from the North Celestial Pole (approximately the star Polaris), up past the Zenith (the spot directly overhead), and down to the horizon at due South. It divides the sky into "the East part" and "the West part".

This is important because an equatorial mount must be positioned quite differently depending on whether the mounted telescope is pointed to the east part or the west part of the sky. To point toward the west, the telescope must be on the east side of the mount with the counterweight on the west side of the mount. To point toward the east, the telescope must be on the west side of the mount, with the counterweight on the east side of the mount. If you are following a target across the sky, sometime around when the target crosses the Meridian, the mount must be flipped to the other side. This is called a "meridian flip". Try to do all of your imaging of a given target on one side or the other of the Meridian, to avoid having to do a meridian flip.

There are several reasons why you should avoid doing a meridian flip during your imaging session. First, if you are stacking multiple images together, images taken before and after a meridian flip will be upside down relative to one another. This complicates your stacking operation, although most stacking software can handle it. Second, after the meridian flip, you will have to find your target again, consuming precious time you could be using for imaging. Since the object will now be upside-down, reproducing the exact framing will be an additional challenge.

More important, some mounts and some telescopes exhibit undesirable mechanical motion when you do a meridian flip. On an equatorial mount, a meridian flip causes the mount to change from lifting the counterweight with its right ascension motor to lowering the counterweight with its right ascension motor. The other side of the teeth in your drive gears will now be pressing on each other. Backlash in the right ascension drive gears will almost certainly result in the mount's periodic error changing when this occurs. Worse, some optical designs have play in parts of the optical train, and after a meridian flip these parts will shift to "the other end" of their range of movement. Some examples of things that might shift after a meridian flip include:

- Mirror flop: telescopes with movable main mirrors, especially SCTs that focus by moving the main mirror, will have the mirror shift slightly after a meridian flip. This will throw the image out of focus.
- Camera flop: the mechanical linkages between your telescope and your camera, including the focuser, extension tubes, and the camera itself, may shift slightly as gravity pulls down on the camera from the other side after a meridian flip. This may throw the image out of focus, or out of collimation.
- Guide scope flop: if you are using a separate guide scope, the guide scope or the guide camera may shift slightly after a meridian flip, throwing your guiding out of calibration.

You may hear the term "differential flexure" used by serious imagers. This refers to attachments – usually guide scopes – changing their relationship with the main scope as components flex and flop.

Ideally, then, you would like to begin imaging an object when it is either several hours East of the Meridian, or just after it has passed to the West side of it, so that you can image for several hours without having to deal with a meridian flip. At the same time, you would like your target to be at as high an altitude as possible during your imaging session. And, most importantly, of course, your line of sight needs to be free of obstructions and local air disturbances.

The next critical factor in selecting an object for imaging is its apparent size – that is, the height and width that it appears to occupy in the sky. (The actual size is not important – a large object that is far away and a smaller object that is relatively close could both appear to be the same size.)

Example beginner problems

Let's illustrate the importance of matching the size of your imaging target to your gear with two unfortunate stories of disappointing experiences in astrophotography. (The confused beginner in both of these true stories is me, although some equipment brands have been changed to protect the other innocents.)

Let's first make what is probably the most common error. We have purchased a mid-sized (130 mm / 5-inch) reflector telescope on an equatorial mount, and, since we are already experienced at normal photography, we already own a Nikon D800, a modern DSLR camera. This camera has 36 megapixels, a large number by today's standards, and this must be a good thing, since camera manufacturers always post their high number of megapixels as a measure of quality. With a suitable connector, the Nikon is mounted on the reflector and pointed at Saturn with the expectation of capturing a beautiful image, similar to what we've seen in magazines.

That's not what happens. At first, we think Saturn isn't in the image at all, and that we have simply made some kind of an error in pointing the telescope. However, on much closer inspection, we realize Saturn is there – it's just ridiculously small in the image. That's frustrating. What's the point of having 36 megapixels when only the handful of pixels in the very centre of the image are actually covering our target? 90% of my 36 million pixels are giving me a high-resolution picture of the background that I don't care about.

Now let's switch equipment and make the opposite error. We have purchased a fairly large SCT – a 235 mm (9-1/4 inch) Celestron – on an equatorial mount. Since we were buying Celestron gear, we thought it would make sense to also buy Celestron's astrophotography camera, a Nightscape CCD. We have seen many beautiful astrophotographs of the famous Andromeda Galaxy, M31, so we point at that and take a 5-minute exposure. The result looks like there is something wrong. The field is just awash with light, as though it was badly out of focus.
The famous spiral arms and dust lanes aren’t visible at all. What’s going on? What’s going on is that the galaxy is far larger than the field covered by this camera with this telescope. All we have done is image a small rectangle inside the bright core. The galaxy is too big to be imaged with this setup. What size is the right size? When considering the size of an object to photograph, we should start by considering how big we would like that object to appear in our final image. Do we want the target to dominate the frame, or to be relatively small against a large background field of stars or other objects? Both are valid compositions, and it depends what we are trying to accomplish with our image. Also, that’s not a black-and-white distinction; and we can always take an image where the object is slightly smaller than the whole frame, then crop the frame down so the object fills the resulting rectangle. For the rest of this discussion, let’s assume that we would like our target to dominate the frame. Warning, Math!: I’m going to use some very basic math, below, to evaluate the suitability of certain target objects. Although I like to “work the math” this way, it is by no means necessary. Just take a quick test image of a potential target object and, if you find it to be too large or too small, try another, smaller or larger as appropriate. Hint: Messier objects tend to be large and bright (that’s why Messier was able to see them with his primitive telescope), while NGC objects tend to be smaller and dimmer (since that catalog was made 100 years after the Messier list, and optics had improved.) We should also consider how big a final image, in pixels, we actually need. Do we want an image to display on a computer monitor, or something that will be printed? Computer monitors usually require about 100 pixels per inch. Did you just say “Wrong! 72 dots per inch!”? That’s not quite right. 72 points per inch is a standard unit of measure in the printing and graphics industries, but there is no requirement that monitors have 72 pixels per inch. Originally they did because that was near the limit of what was technologically achievable, and it was convenient that it corresponded to the print industry standard. During that time a generation of computer users (my generation) formed the impression that points and pixels were the same thing. Not so. Most monitors have exceeded 72 pixels per inch for many years now. The monitor I’m using as I type this, for example, is 110 dots per inch. Printers generally require about 300 pixels per inch (although, in my experience, with astrophotographs you can usually get away with less resolution, say 200 dots per inch). So, think about how large an image you hope to produce, and on what medium, to get an idea of what pixel dimensions you require. Then, you can use your calculated image scale to determine the range of apparent sizes that suitable target objects would have. Let’s work through some examples. Suppose I want to produce an image that will look nice in a web browser window on a computer screen. I want the object to dominate the frame with no more than, say, 10% extra space around each of the edges. Computer monitors come in many sizes, with 1024 x 768 pixels being a common resolution. So, I might try to have an object that covers about 800×600 pixels to display in a window on that sort of screen, with a bit of margin on the sides. 
My camera has 3348 x 2574 pixels on its chip, so if I can fill one quarter of my camera frame or more I should be able to crop to a size that will work. My camera's image scale when used with my AT8RC telescope is 0.7 arc seconds per pixel. (In fact I will probably use 2 x 2 binning for my image, since the atmosphere would never produce 0.7 arc seconds per pixel accuracy. However, we can use 1 x 1 binning and 0.7 arc seconds per pixel to do this calculation, because if we increase the binning we effectively reduce the number of pixels by the same factor.)

The largest object I can image would be one that completely fills my camera chip. At 0.7 arc seconds per pixel, the maximum width would be 3348 × 0.7 = 2343 arc seconds, and the maximum height would be 2574 × 0.7 = 1801 arc seconds. Or, approximately 39 arc minutes by 30 arc minutes. I said that the smallest object I would want would fill roughly 1/4 of my camera chip. That would be 585 x 450 arc seconds, or 10 x 7 arc minutes. I can look up object sizes in a variety of databases or online sources to help me select objects that fall within that range of sizes.

Here are some examples of good choices for this chip and telescope. In the following images, the dark rectangle exactly reproduces the total image that would be produced by the camera and telescope in this example. The above are examples of objects that are a good fit to the size of my camera chip with my chosen telescope.

Now, here are some bad choices. First, some famous objects that I might like to photograph are too small for this chip and telescope. There are also objects that are too large for this setup. The images below are much larger than the image produced by my example equipment, and the image area of the equipment is shown as a yellow rectangle. To image these large objects, I would need to either use a shorter focal-length telescope, or a focal reducer, or both. Or, I could take a mosaic of adjacent rectangles of sky, building up a larger image in small pieces. (This, however, is considerably harder and beyond what I would recommend for a beginner.)

Finally, the brightness of an object can determine whether it is a relatively easy or relatively difficult target. What we especially care about is not the simple magnitude of the object, but the surface brightness, which is the brightness divided by the surface area. Objects with a high surface brightness (such as Messier 51 and Messier 42) can be imaged with relatively short exposures. For example, Messier 42 requires only a handful of exposures of a few minutes duration each, and Messier 51 can be nicely captured with a dozen or two dozen five-minute exposures. On the other hand, objects with a low surface brightness, such as Messier 101, may need longer exposures than your gear can tolerate, or may require stacking more exposures than you have time to collect. For example, you might need to take 10-minute exposures to capture the subtle detail in the arms of Messier 101. If your mount and autoguiding cannot produce good 10-minute exposures, you would have difficulty imaging this object unless you could stack several dozen five-minute exposures together; and you may have trouble collecting several dozen five-minute exposures in your available observing time.

Plan in advance

As you can see, you can put quite a bit of time, and even some mathematical analysis, into the selection of your imaging targets.
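If you like to automate that arithmetic, a minimal sketch follows. The chip dimensions and the 0.7-arc-second-per-pixel scale are the example values quoted above; the 5.4 µm pixel and 1600 mm focal length used to show how an image scale is derived are made-up numbers rather than a recommendation for any particular camera or telescope, and the "quarter of the field" test simply restates the cropping rule of thumb discussed earlier.

```python
# A minimal sketch of the field-of-view arithmetic worked through above.
# Substitute your own chip size, pixel size, and focal length.

SCALE_CONSTANT = 206.265  # 206,265 arcsec per radian x 0.001 (micrometres to millimetres)

def image_scale_arcsec_per_px(pixel_size_um, focal_length_mm):
    """Arc seconds of sky covered by one (unbinned) pixel."""
    return SCALE_CONSTANT * pixel_size_um / focal_length_mm

def field_of_view_arcmin(pixels_wide, pixels_high, scale_arcsec_per_px):
    """Total field of view of the chip, in arc minutes."""
    return (pixels_wide * scale_arcsec_per_px / 60.0,
            pixels_high * scale_arcsec_per_px / 60.0)

# e.g. a 5.4 um pixel behind about 1600 mm of focal length gives roughly 0.7"/px
print(round(image_scale_arcsec_per_px(5.4, 1600), 2))

fov_w, fov_h = field_of_view_arcmin(3348, 2574, 0.7)
print(f"Full field: {fov_w:.0f} x {fov_h:.0f} arc minutes")  # about 39 x 30

def dominates_frame(target_w_arcmin, target_h_arcmin):
    """Rough test: between about a quarter of the field and the full field."""
    return (fov_w / 4 <= target_w_arcmin <= fov_w and
            fov_h / 4 <= target_h_arcmin <= fov_h)

print(dominates_frame(25, 20))    # a ~25 x 20 arc-minute object: True
print(dominates_frame(0.7, 0.3))  # a Saturn-sized target: False, far too small
```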
Whatever amount of pre-planning you decide to do, do it in the daytime; don’t waste your precious time outdoors, under the dark sky, trying to figure out what to image. Make at least a rough list of suitable targets in advance. Software can help you with this. Some integrated control programs, such as TheSkyX contain features to make observing lists. Or, simple dedicated planning programs such as AstroPlanner are available for this function. With such software, you can say “show me all objects of such-and-such type, visible on this date and time, in this part of the sky, and which have the following range of sizes and brightnesses”. There’s your target list for the evening. If you are the type that likes to make lists, you can make a long-term plan for imaging targets. Make a list of all the objects you might like to image that are a good match for your scope and camera (primarily by their size) and that are visible from your location. Determine the range of dates during the year when each is favourably positioned in the sky, and sort your list by these dates. Now you have a target list for the entire year. In fact, probably for multiple years, since you will not likely be able to produce images of satisfying quality of all the targets on your list during a single season. Update your list periodically: when your equipment changes, when your surroundings change, or when your skills or interests change. Finally, you may end up with more than one telescope, or more than one camera. Organize different lists for all combinations of your gear. You may even find that certain seasons are best-suited to one combination of ‘scope and camera, while a different combination is best for the objects available at another time of year. For example, for me, there are more targets in winter that require a large field of view and more that require a narrow higher-magnification field in summer. So I tend to mount a short-focal-length scope as winter approaches, then change to a long-focal-length scope in spring. Now that we’ve selected a good target, our next step is to find it. Simulated sky images on this page were produced with TheSkyX Professional Edition.
Like some of those tests your doctor is always after you to get, boiler chemical cleaning is something that most of us would rather not think about but that we all agree is necessary. Adding to our general discomfort with the process are new Environmental Protection Agency regulations, which make the disposal of chemical cleaning wastes more expensive. Here is a review of what to do, when, and some things to watch out for.

Everyone knows (or should know) that boiler tubes containing deposits create long-term reliability problems for the boiler. Deposits insulate the water in the tubes from the fire, causing the metal temperature to increase dramatically between the deposit and the tube metal. Long-term overheating of the metal will result from prolonged operation with heavy tube deposits. The tubes will first bulge and then fail. Because the deposits tend to be widespread, this generally means that large sections of boiler tubing will be damaged and require replacement. The deposits also concentrate any boiler water chemistry and contamination that collects under them. The increased metal temperature caused by the deposit increases the rate of corrosion caused by any phosphate, caustic, or chloride underneath the deposit. With the exception of corrosion fatigue, all the water-side boiler tube failure mechanisms occur under deposits. Get rid of the deposits and you also stop the water-side tube failures.

Determining When to Clean

The standard method for determining when to chemically clean a boiler is to take a boiler tube sample and have the deposit amount measured—a deposit weight density (DWD)—and the composition of the deposit analyzed. But there are other conditions besides the DWD that require that the boiler be cleaned. These include:

■ One or more failures due to an under-deposit corrosion mechanism, particularly hydrogen damage. The first priority must be to prevent further damage by removing the deposits via a complete chemical cleaning.
■ Major contamination event or multiple small events, particularly condenser tube leaks. Contamination events increase the amount of deposit in the boiler and its corrosiveness. Chemical cleaning removes the deposits and the contamination underneath the deposits before they corrode to failure.
■ Replacement of boiler tubing. The rule of thumb is to chemically clean if you are replacing more than 10% of the surface area of the boiler. This helps to create a uniform layer of oxide on all the tubes.
■ A major change in the boiler fuel or burner design. Changing fuels, such as from coal to gas, or modification of the burners can result in changes to the area of high heat flux in the boiler. When implementing such a major change, it is best to start with a clean boiler.
■ A change in the chemical treatment regime. Such changes would include moving from one chemical treatment to another, say from all-volatile treatment to oxygenated treatment (OT).

Using Deposit Weight Density to Determine When to Clean

The standard DWD test should not only provide a deposit loading but also an analysis of the chemical composition of the deposit on the tube. This chemical analysis of the deposit can be done quantitatively, using an inductively coupled plasma emission spectrometer (ICP-ES), but it is more commonly determined semi-quantitatively using energy dispersive spectroscopy (EDS). Occasionally, X-ray diffraction data is also provided to indicate the chemical compounds that are present.
Optimally, the tube sample for DWD should be about 18 inches long and from the highest heat flux area of the boiler. This is typically above the burners or on the underside of the nose arch. The idea is to find the tube in the boiler with the most deposit in a high-heat area. You cannot use a tube that has failed, because some or all of the deposit will have been removed by the failure.

DWD is determined by removing the deposit in a carefully measured area of the tube. The tube is split, and the deposit on the fire-facing (hot) side is analyzed separately from the insulation-facing (cold) side. As far as chemical cleaning is concerned, the side that counts is the hot side. The change in the weight of the tube divided by the water-touched area where the deposit was removed produces the DWD result. This can be expressed in grams per square foot (g/ft2) or, in SI units, grams per square meter (g/m2); the conversion is 1 g/ft2 = 10.76 g/m2.

Currently, the most common method of deposit removal for the DWD test is bead-blasting with glass beads (NACE TM0199-99). The other method that is occasionally used is to dissolve the deposits in a solvent, typically inhibited hydrochloric acid, HCl (ASTM D3483-83 Test Method B). In general, the solvent method produces slightly higher DWD results on the same tube, as some small amount of metal is removed with the deposit. When bead-blasting the tube, a layer of deposits often will become visible, such as a layer of copper. A good DWD report will describe and show any anomalies found as the deposit was removed.

Best practice is to grab a tube sample from the boiler during each major outage, or at least one every two years. Each sample should be from a similar elevation or area in the boiler. Comparing the DWD results from year to year shows how the deposits are accumulating in the boiler and can be used to anticipate the need to chemically clean, though the deposit formation is rarely linear. Figure 1 presents a chart with chemical cleaning recommendations by boiler operating pressure.

1. To clean or not to clean? For cleaning with the bead-blast method, this chart shows the recommended parameters for immediate and near-term cleaning. Source: EPRI

There are three general recommendations on this figure. If the DWD result is in the top area, a chemical cleaning should be performed as soon as it can be scheduled. The lowest area represents a relatively clean tube. The middle area between the green and red lines indicates that deposits are beginning to accumulate to the point where cleaning should be considered and probably budgeted for the next major outage or within the next two years. If that is the case, grab another tube sample close to the next outage and see if the DWD has increased and is close to or in the "Cleaning recommended" area. If not, you may be able to get by for another year or two.

Although the heat flux in a heat-recovery steam generator (HRSG) is far lower, the circulation issues can be far greater due to the multiple assemblies and configuration with the drum. So the industry has applied close to the same DWD criteria for cleaning for an HRSG as for a conventional fossil-fired unit. As a general rule, the DWD criteria for HRSG tubes are about 20% higher than for a conventional boiler.

Occasionally, a utility will want to take multiple tube samples and have them analyzed. In these cases, the DWD result that should be used to determine the need to chemically clean is the tube that is the most heavily deposited.
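As a back-of-the-envelope illustration of how these numbers get used, here is a small sketch. The sample weights, areas, and the cleaning threshold in it are placeholders, not published EPRI criteria; the real limit depends on operating pressure and comes from curves like the one in Figure 1.

```python
# Illustrative only: sample values and the threshold are placeholders.
# Real cleaning criteria vary with boiler operating pressure (see Figure 1)
# and run roughly 20% higher for HRSG tubing.

G_PER_FT2_TO_G_PER_M2 = 10.76  # 1 g/ft2 = 10.76 g/m2

def dwd_g_per_ft2(deposit_removed_g, cleaned_area_ft2):
    """Deposit weight density: deposit removed divided by the water-touched area cleaned."""
    return deposit_removed_g / cleaned_area_ft2

# Hot-side results from several hypothetical tube samples (grams, square feet).
samples_g_per_ft2 = [
    dwd_g_per_ft2(3.1, 0.25),
    dwd_g_per_ft2(4.8, 0.25),
    dwd_g_per_ft2(2.2, 0.25),
]

# The number that drives the decision is the dirtiest tube, not the average.
worst = max(samples_g_per_ft2)
print(f"Worst hot-side DWD: {worst:.1f} g/ft2 ({worst * G_PER_FT2_TO_G_PER_M2:.0f} g/m2)")

THRESHOLD_G_PER_FT2 = 15.0   # placeholder; read the real value off the Figure 1 curve
is_hrsg = False
limit = THRESHOLD_G_PER_FT2 * (1.2 if is_hrsg else 1.0)

print("Cleaning recommended" if worst >= limit else "Re-sample at the next outage")
```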
Remember, what you are trying to determine is if there is sufficient deposit anywhere in the boiler to cause under-deposit corrosion or overheating. You are not trying to determine the average amount of deposit in the whole boiler or even the average in the high-heat area. One isolated area of hydrogen damage or overheated tubes is enough to cause a number of forced outages or extend a planned outage, and certainly reason enough to chemically clean.

If the DWD results indicate the need to chemically clean, now is the time to do it. Procrastination with cleaning is detrimental on a number of levels. First and foremost is the damage done to the tubes. Under-deposit corrosion rates and long-term overheat damage are exponential, not linear. A delay of one or two years on a dirty boiler can result in major tube damage. Second, you don't save as much as you think. Cleaning a very dirty boiler is significantly more expensive than cleaning a boiler that has just crossed into that "Cleaning recommended" region. The additional costs in solvent (see sidebar), time to get the tubes clean, multiple cleaning steps, startup delays, and dealing with excessive amounts of cleaning wastes are all consequences of postponing a needed cleaning.

Sidebar: Gun-shy About Chemical Cleaning?

Some plants have had bad experiences with chemical cleanings and so are reluctant to go through that process again. They fear that the cleaning process will turn their boiler into a sprinkler system.

The solvents used in chemical cleaning are designed to minimize the corrosion of base tube metal anywhere in the boiler. When properly applied by experienced cleaning vendors, the amount of base metal removed is very small. The vast majority of the iron that is dissolved by the cleaning solvent originally came in from the condenser, feedwater heaters, deaerator and associated piping—not from the boiler tubes themselves.

In some cases, the cleaning process removes the deposit that was preventing a noticeable tube leak. If the cleaning had not occurred, the leak would have still happened. Depending on the solvent and the size of the leak, a cleaning may be able to proceed anyway. Efforts will have to be made to contain and collect the leaking solvent. In other cases, the leak is so bad that the cleaning has to be terminated and the solvent drained to effect a tube repair. Dealing with leaks during a cleaning can be difficult, but the alternative—leaving the deposit on the boiler tubes for another year or two—guarantees that the under-deposit corrosion will continue. So, not only that tube, but also many of its neighbors, will eventually leak during operation, leading to repeated forced outages.

So you need to clean; now what?

Planning for a Cleaning

The first decision is what solvent to clean with. There are five commonly used cleaning solvents. Each has advantages and disadvantages. If you have been operating a plant for a while, you may simply go with the solvent and procedure you used last time. But it is worth a look to see if this time—due to the chemical composition of the deposit, or for a variety of reasons, including waste-handling costs—another solvent might be better suited. Cleaning boilers that have been using oxygenated treatment can be a particular challenge, as the oxide is very tenacious and slow to dissolve.
If you have been using OT, and this is the first time you will be cleaning the boiler after starting OT, you will want to discuss the process with other units that have already cleaned their OT units to get the benefit of their experience. The solvents discussed below are primarily for the iron-removal stage. Another set of chemicals is used specifically to remove copper in separate copper stages.

Inhibited Hydrochloric Acid. Inhibited HCl is still used, particularly in boilers where it is difficult to ensure complete circulation of the solvent. It is very effective at removing silica deposits from the tubes if ammonium bifluoride is added. It is definitely not recommended on boilers with a history of corrosion fatigue failures, as it has been shown to increase the failure rate following a cleaning. If there is any copper in the boiler deposit, provision must be made for removing the copper, which will otherwise plate out on the bare steel tube. In the past, thiourea was commonly added to complex the copper, and it is still used occasionally. There have been times on some boilers where the thiourea has been inadequate to remove copper in very localized areas of the boiler, and this causes problems. There may also be environmental ramifications for using this chemical when attempting to dispose of the cleaning wastes. For these reasons, a separate copper-removal stage is often recommended before or after (or both) the acid stage, using a variety of copper-removal solvents. The use of HCl can create problems on heavily deposited boilers, as the acid often undercuts the deposits and causes sloughing of larger pieces of material that can plug drains. Removing the acid when time is up is critical.

Hydroxyacetic Acid. Hydroxyacetic acid is used in boilers with stainless steel components that will be in the cleaning path, where any chloride in the solvent could create a problem. This solvent is often used in supercritical and once-through boilers. It does not remove copper, but this is usually not an issue in these boilers.

EDTA. Ethylenediaminetetraacetic acid (EDTA) is probably the most commonly used operational cleaning chemical. For forced-circulation boilers, the use of diammonium EDTA is practically a standard practice. The low temperature requirement (180F) of the diammonium EDTA plus the general safety and ease of handling the solvent during the cleaning process are all substantial advantages. The disadvantages may come when trying to dispose of the cleaning wastes. For natural circulation boilers, tetraammonium EDTA (pH 9) is still used. The boiler has to be heated to 275F to 300F and repeatedly heated and cooled during the cleaning process. EDTA has some capacity to dissolve and retain copper in solution when the chemistry of the EDTA is changed and the iron in solution is oxidized. This is generally done at the end of the iron stage of the EDTA cleaning process by the addition of oxygen gas. If there is excessive copper in the deposits, a separate copper-removal step may be required. EDTA is the most tolerant solvent and particularly good on tubes that are heavily deposited with iron oxides, or where the iron deposits are particularly tenacious, as the cleaning can be extended for a long time without risking any damage to the tubes (unlike HCl).

Ammoniated Citric Acid. Ammoniated citric acid is an excellent solvent and is often the solvent of choice for pre-commissioning cleanings, where deposits are anticipated to be light and composed exclusively of iron.
It can also be used in a high-temperature (higher pH) and low-temperature (lower pH) scenario, like EDTA. Chemical cleaning wastes containing citric acid are often easier to dispose of than EDTA-containing wastes. Inhibited Hydrofluoric Acid. Inhibited hydrofluoric acid (HF) is commonly used in Europe and elsewhere around the world, but rarely in North America. The stigma attached to its use in the U.S. stems from the very serious personnel risks associated with concentrated HF. However, when diluted to the concentrations typically used in the cleaning process, HF is considered no more hazardous than an HCl solvent. HF is fast—probably the fastest cleaning solvent—and very effective in removing iron and any silica in the deposits. The potential for exposure to the concentrated acid is limited to the time when the HF is diluted in preparation for adding it to the boiler. This is handled by the chemical cleaning vendor, who is aware of the risks and whose personnel are properly protected with personal protective equipment while transferring the concentrated acid. Neutralization of the waste is typically done with lime slurry, which neutralizes the acid and precipitates calcium fluoride and iron hydroxide. Selecting a Chemical Cleaning Service Company There are a number of excellent chemical cleaning companies with diligent and experienced personnel. Be sure that the vendor you select has experience dealing with boilers of your size and configuration. Also confirm that the firm has experience using the solvent you have selected. Ask for and check references. Occasionally, service companies that clean small package boilers bid on a bigger utility boiler. There is as much difference between cleaning an industrial fire-tube boiler and a large utility boiler as there is between building a house and building a skyscraper. Just because you are a good house builder doesn’t mean you are qualified to build a multi-story office tower. Cleaning vendors also have expertise in helping you select the solvent or solvents (including a copper-removal stage where copper deposits are present) that will clean the boiler. They should ask for a tube sample to test in their small cleaning rig (in their lab) and prove that the cleaning program you have agreed on really does the job. Solvent costs are a significant portion of the cost of the cleaning job. There is no way to accurately predict the amount of solvent that will be required to clean a boiler. There are some general rules of thumb, but remember that these estimates are often based on a single tube sample. The deposit in any boiler is not uniform from top to bottom or even from tube to tube, so the estimate of the amount of deposit (and amount of solvent required) is really more of a guess. Past cleaning history and years since the last cleaning often provide a better guide than the current tube sample (or at least should be a factor in the decision of how much solvent to bring). When comparing prices from multiple vendors, select an amount and base all the bids on the same amount of solvent. This is particularly the case with EDTA. After awarding the bid, be sure that the vendor has extra solvent (50% extra is common) either on-site or very close to the plant, so that it can be used if needed. Many a cleaning has been delayed hours, if not days, while waiting for more chemical to arrive from the supplier. 
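The sizing guidance above reduces to a simple arithmetic exercise once the assumptions are written down, and writing them down also makes vendor bids easier to compare on equal terms. The short sketch below is illustrative only: the deposit loading, cleaned surface area, and pounds-of-chemical-per-pound-of-deposit factor are hypothetical placeholders rather than figures from this article or any vendor, and the function name is ours.

```python
# Rough solvent-sizing sketch (illustrative assumptions only; real numbers come
# from tube samples, cleaning history, and the vendor's own bench testing).

def estimate_solvent_lb(dwd_g_per_ft2, cleaned_area_ft2,
                        lb_chemical_per_lb_deposit, contingency=0.50):
    """Return (base_lb, total_lb) of cleaning chemical to plan for."""
    deposit_lb = dwd_g_per_ft2 * cleaned_area_ft2 / 453.6   # grams -> pounds
    base_lb = deposit_lb * lb_chemical_per_lb_deposit       # chemical demand
    return base_lb, base_lb * (1.0 + contingency)           # add spare on-site

# Example: 30 g/ft2 average loading over 20,000 ft2 of waterwall surface and an
# assumed 5 lb of chemical per lb of deposit, with the 50% spare noted above.
base, total = estimate_solvent_lb(30.0, 20_000, 5.0)
print(f"Base the bids on ~{base:,.0f} lb; keep ~{total:,.0f} lb on or near site")
```

Fixing one such figure for every bidder, as suggested above, keeps the price comparison on a common basis; the contingency then becomes a separate line item rather than something buried in each vendor's own estimate.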
Get Some Help Particularly as the time between chemical cleanings increases (because you are taking better care of the boiler chemistry—right?), and with normal turnover, your plant staff may have little or no experience with a chemical cleaning. There are a number of consulting engineers who either specialize in, or offer as one of their services, support of the chemical cleaning process, acting as the owner’s engineer and project manager. Their help can be invaluable in keeping the communication channel open between the cleaning vendor and your operations, maintenance, and management staff and in helping at critical decision points in the process. A Good Procedure There are few things more important to a smooth chemical cleaning than a well-thought-out and well-documented cleaning procedure. This will require the time and effort of the plant’s operation and engineering staff to customize and prepare a cleaning procedure for each unit. Cleaning vendors can provide a general outline of the cleaning process, and a consulting engineer can help also, but your operators know where the valves are and which leak-by and which don’t. There will be three classes of valves: those that stay closed the entire cleaning process, those that must stay open the entire process, and those that need to be opened and closed, depending on where you are in the cleaning process. There are also valves that the cleaning vendor will be responsible for opening and closing, such as those going to the waste disposal (frac) tanks and chemical injection points. Each valve that comes in contact with the solvent (or potentially could come in contact with the solvent) needs to fall into one of these groups and be tagged accordingly. Be on particular lookout for possible contamination routes where the solvent can get to a place where it was not designed to go. If a contamination route would be very serious, can a blank flange be installed? If not, can a telltale be set up to provide early detection of contamination? One more aspect of deciding when to clean is the actual scheduling—determining whether a plant should clean at the beginning, during, or at the end of an outage. Chemically cleaning at any point except the very end of an outage leaves the tubes vulnerable to some general corrosion. The passivation step at the end of the chemical cleaning is generally neither long enough nor at a high enough temperature to create a robust protective layer. It is often difficult to ensure that the boiler gets really dry after the cleaning or can be laid-up properly in a wet condition. The superheater is always back-filled during cleaning to minimize the risk of contamination. So, unless it can be drained and dried, this area will remain wet until the unit is fired sufficiently to dry it out. Therefore, the typical recommendation is to perform the chemical cleaning at the very end of an outage. Heating the boiler for chemical cleaning using an auxiliary boiler or steam from another unit adds complexity and cost to the cleaning. So, as a rule, it is best to wait until the unit can be warmed using its own burners or igniters and when the fans and instrumentation associated with the fuel system (such as flame scanners) are working properly and have been fully tested. Many a utility has waited for days with the chemical cleaning vendor on site and ready to go while its staff tried to get a fire in the boiler. 
For this reason, some utilities have decided not to schedule chemical cleanings during an outage at all; instead, they take a weekend outage separate from the overhaul for this purpose. Similarly, pre-commissioning cleanings are optimally performed as close to the steam blow as possible, to minimize the opportunity for corrosion to undo what was just done with the cleaning. The Actual Cleaning Process Once the boiler is at the proper temperature, the actual cleaning process can take between 16 and 72 hours to fully remove all the iron. The shortest times are for the mineral acids (HF and HCl); complex-forming solvents such as EDTA require longer times. The important thing is to make sure all deposits have been removed during the cleaning and that the chemistry has stabilized. With HCl, the inhibitor that prevents the acid from aggressively attacking the metal surface has a finite time that it can protect the metal once it is in the boiler. Therefore, with this solvent, the cleaning solution must be drained before the inhibitor breaks down, whether all the deposit is removed or not. This has been a problem in some boilers that were heavily deposited, where drains have plugged, not allowing the boiler to drain and resulting in general acidic attack of the tubes. In forced circulation boilers, one critical piece is the flow of purge water through the boiler circulation pump motors. This purge water protects components in the motor from the solvents, by providing a constant outward flow past the motor cavity and into the boiler. Purge water should be on all boiler circulation pumps (in service or not) as long as there is chemical in the boiler. Since the purge water is designed to provide water to the pump during normal operation, the system is set up to overcome normal boiler pressure. These high-pressure pumps can produce high purge water flow rates during the cleaning and can be difficult to control. If the purge water flow is excessive, the level in the boiler is constantly rising to the point where solvent has to be drained before it goes out of sight in the temporary sight glass. Every gallon of purge water dilutes the solvent and generates more waste that will need to be disposed of at the end of the cleaning. Some utilities set up a separate purge water system with pressure controllers and flow meters just for the cleaning that can constantly provide a slight positive purge water flow during the cleaning process. This system can pay for itself many times over in a single cleaning. Once the chemistry in the boiler indicates that the boiler is clean, next is a passivation step. In the case of HCl, passivation is preceded by rinses and neutralization of any remaining acid in the boiler. In the case of EDTA, passivation occurs after cooling the boiler to 160F and after the pH of the solvent is raised with ammonia (for diammonium EDTA) when oxygen is added. This also complexes any remaining copper on the boiler tubes. The degree to which the boiler tube surfaces are truly passivated is a function of the solvent, the procedure, and the time and temperature at which the passivation is performed. What to Expect After Cleaning When the cleaning is finished, the chemical cleaning solvent is rapidly and completely drained from the unit. Usually, two full boiler volume rinses follow, with partial rinses in addition to these in some cases. Conductivity is used as an indicator to see how well the solvent has been removed from the boiler. 
In some cases, the last rinse is treated with chemical to raise the pH to a normal boiler pH range, and the boiler is fired to 180F to 200F so that it can be drained hot and dried out. As noted above, optimally, the boiler will be started as soon as rinses are complete and the normal boiler piping can be restored. Depending on the design of the boiler, it may be difficult to flush out all the iron oxides that were released by the cleaning but were not fully dissolved. This can contribute to “black water” samples and high iron levels in the boiler following the cleaning. Some utilities have used boiler dispersants for a time to promote the suspension and removal of any remaining deposits in the boiler through the normal drum blowdown. Dealing with the BCCW There will typically be between three and four boiler volumes of chemical cleaning wastes (BCCW), which includes the solvent and all rinses following a chemical cleaning. The waste and rinses are temporarily stored in frac tanks located at the site prior to the start of the cleaning (Figure 2). Before this waste can be dealt with, it must be characterized to determine if it is considered hazardous or nonhazardous under the Resource Conservation and Recovery Act (RCRA). |2. Frac tanks lined up for chemical cleaning. Courtesy: M&M Engineering| The strongly acidic cleaning wastes are generally neutralized as they leave the boiler, and then that waste is combined with neutralization and passivation steps and rinses to produce a combined waste that is not characteristically hazardous for pH. The other chemical cleaning solvents are not characteristically hazardous by pH to begin with. The other way that a BCCW can be classified as a hazardous waste is if it contains a concentration of one of the RCRA 8 toxic metals. The primary metal that is of concern is chromium. Chromium comes from stainless steel feedwater heater and condenser tubes. This accumulates in deposits in the boiler. The regulation is specifically aimed at hexavalent chromium. Normally, utilities measure total chromium first, and only address the hexavalent chromium issue if the total chromium is greater than the RCRA limit of 5 ppm. EDTA solubilizes chromium in a reduced trivalent chrome (Cr III) state, and it is not sufficiently oxidized by oxygen in the passivation stage of the cleaning to create a significant amount of the hexavalent chromium. Some utilities have gone to their state environmental agencies with analytical data showing that there is very little hexavalent chromium in their BCCW, even if the total chrome is greater than 5 ppm, and have sought and received an exemption for this waste so that it can be classified as nonhazardous. For many years utilities have utilized an exemption provided by the Bevill Amendment and a subsequent letter from the Environmental Protection Agency (EPA) to group BCCW with other wastes that were uniquely associated with coal-fired utilities (such as fly ash and bottom ash) and treat them as exempt from hazardous waste regulations. This allowed the comingling of BCCW with fly ash or bottom ash and disposal in the ash pile without first having to determine if they were characteristically hazardous. In May 2000, the EPA made a regulatory determination that moved BCCW from the “uniquely associated” to a “not uniquely associated” list, meaning that it would lose its Bevill exemption. 
This determination was challenged by user groups such as the Utility Solid Waste Activities Group and Edison Electric Institute, and comments were sent to the EPA. The EPA has not responded to these comments in any formal way. However, the agency’s general counsel has produced documents that clearly assume that this change in the BCCW determination has been implemented. This is still an open issue, and utilities should be aware of these regulations when blending BCCW with a Bevill-exempted waste. New regulations on coal combustion residuals are in the offing and may affect the way utilities manage fly and bottom ash and how ash landfills are managed, and this may change the discussion on comingling BCCW with ash yet again. It is important to note that a nonhazardous BCCW could still be blended with ash, at least under current regulations. For many years, coal-fired utilities have evaporated characteristically nonhazardous BCCW (<5 ppm Cr) by spraying it directly into the fireball of an operating coal-fired boiler, typically at a rate of 30 to 50 gpm, depending on the amount of coal going to the boiler. This was the practice with EDTA wastes in particular. The water is evaporated, the organic EDTA is consumed in the fireball, and metals are combined with the fly and bottom ash leaving the boiler. The small rate at which the BCCW was being added relative to the coal feed made no measurable difference to any of the fly ash or bottom ash characteristics or stack gases. Some even found a slight benefit to NOx emissions during the time the BCCW was being evaporated. This practice had been successfully used for many years by coal-fired power plants. However, changes to the definition of a nonhazardous secondary material by the EPA, together with changes to definitions of what constitutes a commercial industrial solid waste incinerator, essentially will prohibit a conventional utility boiler from evaporating BCCW in the future. These changes go into effect at the latest in 2015, but they may already be in place in some states. If both evaporation and comingling with fly ash go away, other options for properly disposing of BCCW will need to be considered. The amount of BCCW generated with the combined solvent drains and rinses can be very large, and off-site treatment or disposal costs could double the cost of the cleaning. On-site treatment, particularly of EDTA wastes, can be time-consuming and expensive. Other innovative options for recovery and beneficial reuse of these wastes have been considered in the past but never commercialized. However, in the present regulatory environment, these options may be in more demand as utilities strive to get their boilers clean while controlling costs and liability. ■ — David Daniels is a POWER contributing editor and senior principal scientist at M&M Engineering Associates Inc.
What is Family Mediation? This page describes the process of family mediation, when mediation is needed, and what is expected of a family mediator. What is mediation? Mediation is a process in which families can negotiate future arrangements for children with the assistance of a neutral third party. The mediator does not tell the parties what to do, but can help them reach their own agreements amicably while trying to improve communication between them. What are the benefits of mediation? Mediation is recommended when parents find it hard to agree on suitable arrangements for children after a family breakdown. There are a number of benefits to attending mediation, such as: - giving you more control over the decisions made in relation to your children, rather than applying to the courts; - providing a less stressful way of dealing with sensitive issues; - improving communication and helping you to sort out future arrangements; - allowing arrangements to be reviewed and changed more easily, as long as they are mutually agreed by both parties; and - offering a quicker and less costly way of resolving disputes. Are any agreements made through mediation legally binding? Agreements made during mediation are not legally binding in the sense of being enforceable in a court. Some people do choose to have a solicitor look over the agreement, and the agreement can be used in court at a later stage in order to create a Consent Order. See our page on Consent Orders for more details. What is a Mediation Information and Assessment Meeting (MIAM)? A Mediation Information and Assessment Meeting is the initial meeting that will help establish whether mediation is suitable in your circumstances, and whether it will help you to reach an agreement. What will happen at mediation? The mediator will look for common ground between you. If you are not comfortable being in the same room as your ex-partner, the mediator can arrange ‘shuttle’ mediation. This is where the mediator talks with you alone and then discusses your proposals with your ex-partner separately. It may take more than one session to reach an agreement. Once an agreement has been reached between you and your ex-partner, a “memorandum of understanding” will be drawn up by the mediator so everyone understands what has been agreed. Do I need to go to mediation? From April 2014, anyone applying to the courts for assistance in resolving disputes about children or finances is required to attend a Mediation Information and Assessment Meeting. This includes any applications for a: - Child Arrangements Order - Specific Issue Order - Prohibited Steps Order - Parental Responsibility Order - An order appointing a Child’s Guardian - Removal from Jurisdiction Order - Special Guardianship Order. You will not need to attend mediation for the above applications if you are applying for a Consent Order, or if there are ongoing emergency proceedings, care proceedings, or supervision proceedings for a child, or if there is an Emergency Protection Order, Care Order, or Supervision Order in place. You can also be exempt from having to attend a MIAM if you meet one of the exceptions listed in paragraph 3 of the C100 application form, which can be downloaded from www.justice.gov.uk. 
Some of the main exemptions include: - where there has been any form of domestic violence between you and your ex-partner and it has been reported to the police, courts, health professionals, or a specialist agency; - where the child is the subject of a Child Protection Plan or a section 47 enquiry; - where the situation is urgent, i.e. there is a risk of harm to the child’s safety; - where mediation has already been attempted within the last four months; or - where the person seeking to make the application does not have sufficient contact details for the other person to whom the application relates. What can I expect from my mediator? A family mediator must act impartially and avoid any conflict of interest. A mediator must remain neutral on the outcome of the mediation. You should also expect the mediator to keep confidential all information obtained during the course of mediation. The mediator cannot even disclose information to the court without the permission of both participants. Mediators may only disclose information where there are serious allegations of harm to a child or adult. Mediation is a voluntary process, and any mediation session can be suspended or ended if it is felt that the parties are unwilling to participate fully in the process. Mediators should also encourage the participants to consider the wishes and feelings of the children. How long can mediation take? Mediation can continue for as long as it meets the needs of the individual parties involved. The first meeting lasts approximately 45 minutes. Full mediation sessions will usually last between one and two hours, depending on the complexity of the situation. What is the cost of mediation? You may be able to get Legal Aid to help with the costs if you are on a low income or in receipt of certain benefits. Legal Aid can cover the first MIAM session for both of you if just one party is eligible for legal aid. The mediator should be able to assess whether you qualify for legal aid, or you can contact Civil Legal Advice via https://ammediators.co.uk/contact/. For exact prices, contact your mediation provider. What happens if we cannot reach an agreement through mediation? If you cannot reach an agreement with the other participant, or mediation fails for any other reason – for example, the other party will not attend, or the mediator feels that mediation is unworkable – you may take your dispute to the courts. You must ensure that the mediator signs and certifies your application form. Mediation can help you and your partner decide financial issues on separation and future arrangements for the children without the need to go to court. Using mediation to help you separate Mediation is a way of sorting out any differences between you and your ex-partner with the help of a third person who won’t take sides. The third person is called a mediator. They can help you reach an agreement about issues with money, property, or children. You can try mediation before going to a solicitor. If you go to a solicitor first, they’ll probably talk to you about whether trying mediation first could help. 
You do not have to go to mediation, but if you end up having to go to court to sort out your differences, you normally need to show that you have been to a mediation information and assessment meeting (MIAM). This is an initial meeting to explain what mediation is and how it might help you. There are some exemptions where you do not have to attend the MIAM before going to court – for example, if you have experienced domestic abuse. If you need to go to court and your ex-partner does not want to see a mediator, you should contact the mediator and explain the situation. You cannot force your ex-partner to go to mediation. If you can, it is better to try to reach an agreement through mediation. You can save money in legal fees, and it can be easier to resolve any differences. You can read more about how mediation works in the family mediation leaflet on GOV.UK. Find your local family mediator on the Family Mediation Council website. How much mediation costs Mediation isn’t free, but it is quicker and cheaper than going to court. You may be able to get a free voucher worth up to £500 towards mediation if the differences between you and your ex-partner are about a child. Check if you qualify for the Family Mediation Voucher Scheme on GOV.UK. If you are on a low income you may also be able to get legal aid to pay for: - the introductory meeting – this covers both of you, even if only one of you qualifies for legal aid - one mediation session – this covers both of you - more mediation sessions – only the person who qualifies for legal aid will be covered - help from a solicitor after mediation, for example to make your agreement legally binding Legally binding means you have to stick to the terms of the agreement by law. Check if you are eligible for legal aid on GOV.UK. If you don’t qualify for legal aid, the cost of mediation varies depending on where you live. Phone around to find the best price, but remember the cheapest may not be the best. Some mediators base their fees on how much you earn – so you may pay less if you are on a lower income. If you want to keep the costs of mediation down, try to agree as much as you can with your ex-partner before you start. You might have already agreed arrangements for your children but need help agreeing how to divide your money. You could also agree a fixed number of sessions with your mediator – this can help you and your ex-partner focus on reaching a quicker resolution. Before you go to mediation Think about what you want to get out of mediation before you start. Mediation is more likely to succeed if you can spend the sessions focusing on the things you really disagree on. If you are trying to reach an agreement about money or property, you will need to fill in a financial disclosure form when you go to mediation. You will have to include all your financial information: - your income – for example, from work or benefits - what you spend on living expenses – such as transport, utilities, and food - how much money you have in bank accounts - debts you owe - property you own Start gathering bills and bank statements together to take to the first mediation meeting. 
Some mediators will send you a form like this to fill in before your first appointment. When you talk about your finances, it is crucial that you and your ex-partner are honest. If your ex-partner later finds out you tried to hide something from them, any agreement you make may not be valid. Your ex-partner might also take you to court for a bigger share of your money. What happens in mediation In the first meeting, you and your ex-partner will usually meet separately with a trained mediator. After this, you’ll have mediation sessions where you, your ex-partner, and the mediator will sit together to discuss your differences. If you feel unable to sit together, you and your ex-partner can sit in separate rooms and ask the mediator to go back and forth between you. This type of mediation takes longer, so it is usually more expensive. The mediator can’t give legal advice, but they will: - listen to both your points of view – they won’t take sides - help to create a calm environment where you can reach an agreement you’re both happy with - suggest practical steps to help you agree on things Whatever you say in mediation is confidential. If you have children, your mediator will generally focus on what is best for them and their needs. The mediator may also talk to your children if they think it is appropriate and you agree to it. At the end of your mediation Your mediator will write up a ‘memorandum of understanding’ – this is a document that shows what you have agreed. You’ll both get a copy. If your agreement is about money or property, it is a good idea to take your memorandum of understanding to a solicitor and ask them to turn it into a ‘consent order’. This means you can take your ex-partner to court if they do not stick to something you agreed. You can apply for a consent order after you have started the process of getting divorced or ending your civil partnership. It has to be approved by a judge in court – this will cost £50. You will also need to pay your solicitor’s fees. Check if you can get legal aid to cover your costs on GOV.UK. If you cannot reach an agreement through mediation If you can’t reach an agreement with your ex-partner through mediation, you should speak to a solicitor. They’ll advise you what to do next. Find your nearest solicitor on the Law Society website. If you disagree about what should happen with your children, a solicitor might suggest that you keep trying to reach an agreement between yourselves. Courts normally will not decide who a child lives or spends time with if they think the parents can sort things out themselves. This is known as the ‘no order principle’. You could try to make a parenting plan. This is a written or online record of how you and your ex-partner intend to care for your children. Learn more about making a parenting plan on the Children and Family Court Advisory and Support Service website. If you disagree about money or property and you have already tried mediation, a solicitor will probably recommend sorting things out in court. 
If you would prefer to avoid court, you could try: - going to a ‘collaborative law’ session – you and your ex-partner will both have lawyers in the room working together to reach an agreement - going to family arbitration – an arbitrator is a bit like a judge – they will consider the things you and your ex-partner disagree on and make their own decision Both of these options can be expensive, but they could still be cheaper than going to court. It is best to get advice from a solicitor before trying either. Going to collaborative law You and your ex-partner each have your own solicitors who are specially trained in collaborative law. The four of you meet in the same room and work together to reach an agreement. You will each need to pay your solicitors’ fees, which can be expensive. How much you pay at the end depends on how long it takes for you and your ex-partner to reach an agreement. Before you begin your collaborative law sessions, you each need to sign an agreement saying you will try to reach an agreement. If you still cannot reach an agreement, you will need to go to court to sort out the issues. You cannot use the same solicitor, so you will need to find a different one – this can be costly. When you reach an agreement through collaborative law, your solicitors will usually draft a ‘consent order’ – this is a legally binding agreement about your finances. If you are not yet ready to apply for a divorce or end your civil partnership, they can record your arrangements as a ‘separation agreement’ instead. A separation agreement is not legally binding. However, you will normally be able to use it in court if: - it has been drawn up properly, for example by a solicitor - your and your ex-partner’s financial circumstances are the same as when you made the agreement Going to family arbitration Family arbitration is another option if you want to stay out of court. It is a bit like going to court, but in family arbitration an arbitrator makes a decision based on your circumstances – not a judge. You and your ex-partner choose the arbitrator you want to use. You can also choose where the hearing takes place and which issues you focus on. An arbitrator’s decision is legally binding. This means you have to stick to the terms of the agreement by law. Arbitration can be cheaper than going to court, but it can still be expensive. You cannot get legal aid for it. The exact amount you will pay depends on where you live and how long it takes you and your ex-partner to reach an agreement. Family arbitration may be a good option if you and your ex-partner: - want a quick decision – waiting for a court hearing can sometimes take more than a year, whereas an arbitrator would usually be able to start much sooner - cannot reach an agreement through mediation or by using solicitors – but you would still like to avoid going to court - would prefer someone else to make the decision for you, rather than having to negotiate it yourselves Arbitration is not cheap and you cannot get legal aid for it, but it may still be cheaper than going to court. Court can cost several thousand pounds. A simple arbitration case might cost £1,000, but you could end up paying a lot more – the exact amount depends on where you live and how long it takes to reach an agreement. 
It is a good idea to talk to a lawyer before choosing arbitration – they can tell you if it is right for you, and may be able to suggest a good local family arbitrator. Are you in urgent need of a MIAM? Look no further! We’ve helped 1000s of satisfied clients. Accredited, friendly mediators. We’ll help you move forward faster. Family Mediation FAQ What is a MIAM? A Mediation Information and Assessment Meeting is a meeting with a specially qualified family mediator, who will explain to you the alternatives to the court process. Most divorcing and separating couples in England and Wales who want to use the court process to resolve any questions about children or money need to show that they have attended a MIAM before they can obtain a court order. The purpose of the meeting is to give you a chance to find out whether going to court would be the best way of resolving the issues surrounding your relationship or marriage breakdown (e.g. children, property, and financial issues), and in particular whether mediation could be an effective alternative. At a MIAM you will meet with a qualified family mediator and discuss your individual situation on a confidential basis. The other person is expected to attend when invited to do so, and the court has the power to direct a person who has refused to attend a MIAM to do so. The mediator will give information about the options available to you to deal with the issues around your separation, and will discuss the advantages and disadvantages of each option. The mediator will also ask questions and make an assessment to decide whether mediation is a suitable way forward for you in your own particular circumstances. What is family mediation? Family mediation is a way of helping families to reach agreements about what should happen on separation or divorce. It is an increasingly popular alternative to asking the court to make decisions about family issues. In family mediation, you usually negotiate face to face with your partner about the arrangements that need to be made for the future, with the support of one or two neutral third parties – the mediator or mediators. How is family mediation different from the other options? Unlike going to court or arbitration, family mediation recognises that you are the experts on your own family and leaves the decision-making to you. Unlike negotiating through your lawyers, family mediation allows you to speak directly to each other, so that you can both explain what you are feeling and what is important to you. It also lets you focus on the things that really matter to you as a family. How could a family mediator help my family? Family mediators have a great deal of experience of the issues surrounding separation and divorce and are able to give you general information about all the options available to your family. Family mediators are specially trained to focus on the needs of the children in the family, and will help you, as parents, to do that together. During the mediation your mediator will give you information about how to deal with financial issues, how to deal with children issues, relevant legal principles, the court process, court orders, and how to contact other agencies and professionals who may be able to help. 
You will also be expected to keep to some ground rules during the sessions. These will include talking and listening to each other with respect, and working with the mediator to make sure that conflict and any strong emotions that arise during the mediation do not overwhelm the process. Most family mediators work in a fairly informal setting, and all qualified family mediators provide clients with a relaxed and safe environment. During the session, the mediator will record key pieces of information, ideas, or particular options in a way that allows both of you to see what has been written and to discuss it. Usually the mediator will use a flip-chart to do this, but many also use more modern technology. You will be encouraged to ask questions and discuss what is being written down. If you do not understand something that is being said by anyone in the room, or do not understand something that has been written on the flip-chart by the mediator, say so. It is the mediator’s job to help. Your mediator will be keeping an eye on how you are feeling, but if you feel uncomfortable or concerned about anything, it is very important to say so. If the two of you are able to identify some proposals that you think might work, the mediator will record those proposals on a confidential basis, for you to turn into a legally binding agreement after taking legal advice. How will I be kept safe during a family mediation? Family mediators are specially trained to look out for any domestic abuse issues that might affect your family, and also for other issues that could make agreement between family members particularly difficult. Family mediators will not allow you to mediate if they do not believe you will be safe. How can I be sure that the mediation process will be fair? Either of you can stop the mediation process at any time; mediation will only continue if both of you want it to. Mediators are neutral. The mediator does not take sides and is always there for both of you. Mediators do not give advice, although they do give information about legal principles and explain some of the points you should be considering. The mediator never makes any decisions for you; you work out between you what proposals you would like to take to lawyers, so that you can get advice and support before deciding to turn your proposals into a legally binding agreement. What happens if I say something in mediation but then change my mind? Nothing you do or say during a mediation will create a legally binding agreement. At the end of the mediation process your mediator will explain how to turn your proposals into a legally binding agreement and/or a court order, which usually involves taking legal advice. How private is the process, and can what I say in mediation be used against me later? The information clients share with the mediator is kept confidential, with some very limited exceptions (similar to the exceptions that apply to therapists, lawyers, and counsellors). Proposals put forward during mediation cannot be referred to in court proceedings. If you try to mediate but it does not work, the court will never be told why the mediation was unsuccessful. 
What sort of things will I be expected to do during the mediation process? After signing the agreement to mediate, both of you will work with the mediator to: – Describe your family situation. – Set the mediation agenda. The mediation sessions are tailored around what you want and need to discuss. – Agree the issues that you need to discuss. – Decide the priority of the issues. Some issues are more pressing than others and need to be dealt with first, e.g., temporary financial support, holidays, contact. – Set time scales to deal with specific matters, e.g., for separation or divorce. – Clarify the issues: sometimes it is not certain which matters are actually in dispute, and clarifying these avoids future misunderstanding. – Consider whether any other professionals might be able to help you. – Find the common ground. – Provide and obtain information, e.g., complete a financial questionnaire or have a form explained to you. If you have financial issues to discuss, it is especially important to make sure everyone has a very clear picture of the family’s financial situation. This includes each of you giving details about any property you own, as well as your income and expenditure, much as you would have to if you went to court. – Look at the different options and reality-test those options. Where there are financial issues you will need to give consideration to what everyone in the family needs, especially the children. – Reach the option that best suits both of you and work out the details of your proposals. Will I have to pay for mediation, and if so, how much will it cost? If you are paying privately, you should check the rates your local mediators charge for a mediation session. Some charge an hourly rate; some charge on a sessional or case basis. Most mediators also charge for the assessment meetings that take place before the mediation begins. The rates that family mediators charge are generally much lower than the rates that family lawyers charge, but it is always very important to be clear from the outset how much you are going to be charged, and what services the mediator will be charging you for. You may be entitled to legal aid if you have a low income and relatively low capital. As you probably know, legal aid is no longer available for many family issues that go to court, but it is still available for family mediation. If you think you might be eligible for legal aid, you should look for a mediator who is specially qualified to provide legal aid mediation. The mediator will help you to work out whether you are entitled to legal aid, and if you are, your mediator will then ask the Legal Aid Agency to fund your mediation. If later on you want to turn your mediation proposals into an agreement, your family mediator can sometimes arrange for legal aid to pay for some help from a solicitor. Not all FMA members are qualified to offer legal aid mediation. Our website shows who does legal aid work and who does not; if you think you might be eligible for legal aid but aren’t sure, it is usually best to find a mediator who is qualified to provide legal aid. All FMA members will do their best to recommend a local mediator who can help you. Is there a way of involving my children in the process? 
Family mediators are specially trained to focus on the needs of the children in the family, and will always work with you, as the parents, to do that together. Many FMA members are specially qualified to involve children directly in family mediations. There are several things to consider when deciding whether it is appropriate for an individual child to be involved directly, which will need to be talked through by both parents, and with the mediator, but involving children can be very valuable if the right preparation is done. The government has said that it believes all children over the age of 10 should have the opportunity to see a mediator if their parents are using mediation to make decisions about child arrangements. If you are interested in involving a child in the mediation, you can look for a mediator who is trained to work directly with children. The mediator who works with the parents does not have to be the same mediator who meets with the child, so you can choose a mediator who is not qualified to see children directly and ask your mediator to find you another mediator who is qualified to meet with the child. FMA mediators all understand how direct consultation with children works, even if they do not do this work themselves, and will be able to talk through the options with you.
Optometrists are often the first point of contact between patients and ophthalmic surgeons, so it is essential they are well informed about the latest surgical techniques. The introduction of minimally invasive glaucoma surgery (MIGS) offers patients with mild to moderate glaucoma a safe and effective way to reduce their intraocular pressure (IOP) with potentially fewer medications, especially where cataract surgery is required. With multiple options to choose from, it is important to understand the differences between available devices and the questions to ask to help select the best procedure for an individual patient. WHY WERE MIGS DEVELOPED? Patients with glaucoma want to avoid vision loss and maintain their independence while having treatments which are convenient, with minimal negative impact on their quality of life. Because glaucoma is largely an asymptomatic disease until advanced, the convenience and side effects of treatment are important considerations. Historically, mild to moderate glaucoma has been treated with topical glaucoma medications. These treatments are effective at lowering IOP and have been shown to slow the progression of glaucoma,1 but more than one medication is often required. These medications frequently cause or exacerbate ocular surface disease, adherence can be a challenge for patients, and medication alone may not always be sufficient to control the disease. Ocular Surface Disease In a survey of over 2,000 glaucoma patients treated with topical glaucoma medications, 47 per cent needed more than one class of medication and 62 per cent of patients reported side effects such as redness, burning, grittiness, tearing, and dry eye.2 These side effects were more common in patients taking multiple medications.2 Other clinical studies have found that patients receiving a greater number of glaucoma medications for longer periods of time have more severe signs and symptoms of ocular surface disease.3 Topical medications can have a deleterious effect on the ocular surface, not only through the effect of the active ingredient but also due to the presence of preservatives or excipients. Common preservatives such as benzalkonium chloride (BAK) and polyquaternium-1 (PQ) have been shown to have cytotoxic and proinflammatory effects on the ocular surface, causing squamous metaplasia of the conjunctival epithelium and a reduction in the number of goblet cells.4-6 This is significant, not only for the patient’s comfort, but also because damage to the ocular surface and conjunctiva may compromise the success of future glaucoma filtration surgery. Long-term exposure to glaucoma medication has been found to have a deleterious effect on the conjunctiva and is a risk factor for failure of glaucoma filtration surgery.7-9 These side effects and symptoms are not isolated to preserved medications. In patients taking preservative-free medications, up to one in five report pain or discomfort during instillation, foreign body sensation, stinging or burning, and dry eye sensation.10 Dry Eye and Quality of Life Co-existing eye conditions, such as dry eye, are common in patients with glaucoma. 
The use of glaucoma eye drops has been shown to exacerbate the signs and symptoms of dry eye and negatively affect quality of life.11 In an observational cross-sectional study, patients taking glaucoma medications were found to have more signs and symptoms of dry eye syndrome and lower quality of life scores than those not taking glaucoma medications.11 The negative impact on quality of life should not be underestimated. In one study, the effect of dry eyes secondary to glaucoma medications was similar to 10dB of visual field loss.12 Adhering to Medical Treatment Adhering with daily medication use for a chronic and asymptomatic condition like glaucoma is difficult. Proper adherence requires obtaining the medication, successfully instilling the drop into the eye, using the medication at the appropriate time, and doing so consistently each day.13 Studies show that between 30 per cent and 80 per cent of patients are non-adherent with their glaucoma medications.14,15 In one study, only 24 per cent of patients with newly diagnosed glaucoma persisted with treatment for two years.16 In another study, 50 per cent of patients had stopped taking their medications by six months and only 37 per cent were still filling their prescriptions after three years.17 Patients have difficulty maintaining recommended medication regimens for many reasons, including difficulties with the medication schedule, forgetfulness, difficulty with eye drop administration, cost, and life stress among others.17 Nonadherence with medication may result in periods of uncontrolled intraocular pressure and patients with poor adherence have been found to be more likely to progress,18 have severe glaucomatous damage,19 and become blind from glaucoma.20,21 Limitations of Medical Treatment The only proven treatment for glaucoma is to lower intraocular pressure.22 In the Ocular Hypertension Treatment Study (OHTS), lowering intraocular pressure by 20 per cent with topical glaucoma medications reduced the risk of developing glaucoma from 9.5 per cent to 4.4 per cent.23 However, this means a significant number of individuals still developed glaucoma despite treatment. Similarly, in the Early Manifest Glaucoma Trial (EMGT), 59 per cent of patients progressed on visual fields over a four year period, despite treatment with glaucoma medications and achieving an average IOP reduction of 25 per cent.24 By comparison, in the Collaborative Initial Glaucoma Treatment Study (CIGTS) IOP reductions of 35 per cent to 48 per cent resulted in no net glaucoma progression25 and in the Advanced Glaucoma Intervention Study (AGIS), patients who consistently had an IOP of < 18mmHg at every visit and a mean IOP of 12.3mmHg had no visual field progression on average.22 It is therefore important not only to lower IOP, but to do so effectively. For patients with advanced glaucoma, this is best performed with trabeculectomy or tube surgery because of their potent ability to lower IOP. In these cases, the risk of adverse events is outweighed by the importance of preventing vision loss. However, for patients with mild to- moderate glaucoma and borderline IOP control despite glaucoma eye drops, MIGS procedures provide effective additional IOP lowering while avoiding some of the short and long-term risks associated with traditional glaucoma surgery. MINIMALLY INVASIVE GLAUCOMA SURGERY OPTIONS Patients in Australia and New Zealand are fortunate to have access to a range of MIGS options to lower their IOP and reduce their dependence on glaucoma medications. 
These devices bypass the blocked or damaged trabecular meshwork to augment aqueous outflow. The procedures vary in efficacy and potency and are most logically classified according to where they drain aqueous. One of the first MIGS devices, the iStent Inject consists of two very small titanium stents (less than 0.4mm in size) that are surgically implanted into the trabecular meshwork to assist aqueous to drain from the anterior chamber into Schlemm canal. Typically implanted at the time of cataract surgery, the stents are inserted using a preloaded injector using the same incisions as cataract surgery. Once inserted, the device’s stents open a section of the trabecular meshwork to increase the facility of outflow. The procedure can be performed under topical or local anaesthesia. During the procedure the patient must rotate their head 45 degrees to facilitate visualisation of the angle. Because the iStent accesses a portion of Schlemm canal, it is important to target areas of the canal with the highest concentrations of the collector channels. Reflux into Schlemm canal and areas of pigmentation of the trabecular meshwork are clues to the presence of collector channels. Failure to correctly place the stent in Schlemm canal and near collector channels may result in disappointing outcomes. Post-operatively, the management is similar to cataract surgery alone. Patients are prescribed prednisolone acetate 1 per cent four times a day for one month and many surgeons also prescribe a topical non-steroidal to prevent pain and cystoid macular oedema. Glaucoma medications are stopped after several weeks, depending on the fall in IOP. When implanted correctly, the iStent Inject has been shown to produce modest reductions in IOP and reduce the need for glaucoma medications. The interim results of a pivotal trial comparing the iStent Inject in combination with cataract surgery to cataract surgery alone were announced at the American Society of Cataract and Refractive Surgery (ASCRS) Annual Meeting this year. At 24 months, the mean IOP reduced 31 per cent to a mean of 17.1mmHg from an unmedicated mean baseline IOP of 24.8mmHg. The full results, including safety data, are awaited with interest. With regards to safety, blood reflux from Schlemm canal into the anterior chamber is common and can cause an early post-operative hyphaema. This can be minimised by leaving the eye pressurised at the end of the case. Other complications include stent malposition, over or under implantation, stent obstruction, IOP spikes, and failure to lower IOP requiring further surgery. Serious complications are rare and include a trabecular meshwork tear, cyclodialysis cleft, suprachoroidal implantation, and iridodialysis.26 The Hydrus Microstent is a crescentshaped implant made of a highly flexible, biocompatible alloy of nickel and titanium (Nitinol). The 8mm stent both bypasses the trabecular meshwork and scaffolds open Schlemm canal. Unlike the iStent Inject, which only accesses a small portion of Schlemm canal, the size of the Hydrus allows it to dilate three clock hours of Schlemm canal, thus providing access to a greater number of collector channels. The posterior part of the stent is open, which means that it will not block collector channel openings. The Hydrus has been shown to increase outflow facility and reduce outflow resistance in perfusion models of human cadaver eyes.27 The Hydrus has been found to be safe and effective in the Hydrus II study. 
In this randomised controlled trial, 100 patients with primary open-angle glaucoma were randomised (1:1) to cataract surgery alone or cataract surgery combined with Hydrus Microstent insertion and were observed for 24 months.28 In this study, the proportion of patients achieving a 20 per cent reduction in unmedicated IOP was significantly greater in those undergoing cataract surgery with Hydrus insertion compared to cataract surgery alone (80 per cent vs. 46 per cent; P = 0.0008).28 Also, the proportion of patients who were free of medication was significantly higher in the combined cataract surgery and Hydrus group (73 per cent vs. 38 per cent; P = 0.0008).28 The safety profile was similar between both groups, except for a higher rate of peripheral anterior synechiae (PAS) in patients receiving the Hydrus. A larger randomised trial, Hydrus IV, is underway in which 556 patients have been randomised (2:1) to cataract surgery with Hydrus or cataract surgery alone. This pivotal trial will be used to seek FDA approval in the United States and the results are eagerly awaited. The Hydrus is most commonly performed with cataract surgery. Insertion, postoperative care, and potential complications are similar to those of the iStent Inject. The manufacturer of Hydrus, Ivantis, is preparing for commercial release of Hydrus this year. One of the latest generation of MIGS procedures is the CyPass MicroStent. This device is the first and only device currently available that takes advantage of uveoscleral outflow, the same route used by the most effective glaucoma eye drops – prostaglandin analogues. Leveraging the uveoscleral outflow pathway offers a number of advantages over other MIGS procedures. One of the limitations of stentbased trabecular bypass procedures is that they only treat a small area of the outflow tract and must be placed near collector channels to work effectively. At present there is no diagnostic tool for identifying collector channels and there is therefore a chance these will be missed, resulting in suboptimal treatment. Additionally, it is unknown whether collector channels remain open as glaucoma progresses. The CyPass MicroStent is a 6.35mm long fenestrated tube that is made of a biocompatible material called polyimide. The device is implanted just below the scleral spur into the supraciliary space to allow aqueous to drain via the uveoscleral pathway. The CyPass MicroStent can be inserted at the time of cataract surgery without the need for additional incisions, or it can be inserted without cataract surgery via a single 1.5mm clear corneal incision. As an internal blebless procedure, it eliminates the risk of blebrelated complications. The CyPass MicroStent has some of the strongest and most-compelling evidence supporting its use for the treatment of mild-to-moderate open-angle glaucoma. The COMPASS trial was a multicentre randomised clinical trial of 505 patients, making it one of the largest MIGS trials completed to date.29 Eyes were randomised (3:1) to cataract surgery with the CyPass Microstent or cataract surgery alone. A significantly greater proportion of patients in the CyPass group had an unmedicated IOP reduction of at least 20 per cent or more compared to the control group (77 per cent vs. 60 per cent; P = 0.001).29 Furthermore, the CyPass group showed a sustained reduction in IOP with 61 per cent having an unmedicated diurnal IOP of between 6–18mmHg through 24 months compared to only 43.5 per cent of the control group. 
Additionally, 85 per cent of CyPass patients were free of medication compared to 59 per cent of control subjects.29 Avoiding many of the major risks associated with traditional glaucoma surgery, the safety profile of the CyPass MicroStent is comparable to that of cataract surgery. In the COMPASS pivotal trial, transient numerical hypotony occurred in 2.9 per cent of patients, all of which resolved in the first month after implantation.29 Similarly, hyphaema occurred in 2.7 per cent of patients but resolved in all cases within the first two weeks.29 Myopic shift in the early post-operative period can occur but this is rare and typically transient. Post-operative care is similar to cataract surgery alone. Patients are prescribed topical prednisolone acetate 1 per cent four times a day for four to six weeks after surgery and some surgeons also prescribe a non-steroidal anti-inflammatory, as is routine after cataract surgery, to reduce the risk of cystoid macular oedema. Topical glaucoma medications should be discontinued at the time of surgery or in the month prior to surgery to minimise the chance of low IOPs in the early post-operative period because of the combined effect of the stent and medication. Should IOP begin to rise post-operatively, it is recommended to commence a prostaglandin analogue to maintain the aqueous lake in the supraciliary space.

The XEN is a minimally invasive glaucoma procedure that drains aqueous from the anterior chamber to the subconjunctival space, the same route as trabeculectomy. It is a 6mm flexible tube made of porcine gelatine that has been cross-linked with glutaraldehyde to prevent biodegradation. It is very biocompatible and does not cause a foreign body reaction. The dimensions of the tube have been carefully selected based on the Hagen-Poiseuille equation, which predicts resistance to flow based on the length, inner diameter, and viscosity of the fluid. Given a length of 6mm and an inner diameter of 45μm, the XEN provides approximately 6–8mmHg of resistance, thereby minimising the risk of hypotony. The XEN can be performed as a standalone procedure or in conjunction with cataract surgery. It is generally performed under a local anaesthetic block but can be performed under topical anaesthesia with intracameral lignocaine. To prevent fibrosis, an off-label injection of mitomycin is given under the conjunctiva prior to implantation. The XEN is inserted via a corneal incision and exits the sclera, remaining under the conjunctiva, 3mm from the limbus in the superonasal quadrant of the eye. Once implanted, the XEN becomes hydrated and malleable, which is important to prevent migration or erosion. The XEN is designed to provide effective lowering of IOP, using the same route as trabeculectomy, while avoiding the need to incise the conjunctiva, create a scleral flap, or perform an iridectomy. The aim is to provide a safer procedure with faster recovery of vision by reducing the risk of complications such as bleb leak, overfiltration, or hypotony. Post-operatively, glaucoma medications are stopped in the operated eye and topical steroids are commenced. The frequency of steroids is tapered over several months, according to the degree of conjunctival inflammation. In cases where there is significant conjunctival inflammation pre-operatively, steroids may be commenced the week prior to surgery. The patient will then be seen the following day and then at one week. After this, the post-operative reviews are spaced further apart and are much less frequent than for trabeculectomy.
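As a side note for readers who want the physics behind the resistance figure quoted above for the XEN, the Hagen-Poiseuille law referred to earlier can be written as follows. This is only the general form of the law; the quoted 6–8mmHg additionally assumes a physiological aqueous flow rate and viscosity, which are not specified in this article.

```latex
\Delta P = \frac{8 \mu L Q}{\pi r^{4}}
```

Here ΔP is the pressure drop along the tube, μ is the viscosity of aqueous, L is the tube length (6mm for the XEN), Q is the flow rate, and r is the internal radius (22.5μm for a 45μm lumen). Because the resistance scales with the inverse fourth power of the radius, even small changes in lumen diameter alter the resistance substantially.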
If there is conjunctival scarring, a procedure called ‘needling’ may be performed at the slit lamp and if scarring is extensive, bleb revision can be performed in the operating theatre. The XEN is indicated for glaucoma not controlled with topical medications and it is designed to offer an enhanced safety profile compared to traditional filtration surgery. While not a replacement for trabeculectomy in cases with very advanced glaucoma, it offers patients with less advanced disease a minimally invasive option where medical therapy has failed. The XEN has been studied in the APEX trial, where 215 patients with mild-to-moderate primary open-angle glaucoma without a prior history of intraocular pressure surgery underwent either a standalone XEN procedure or cataract surgery combined with the XEN. The preliminary results showed a mean IOP reduction of 7.6 ± 4.8mmHg at 12 months and all patients had a reduction in IOP-lowering medications. The safety profile was very good and serious complications were rare. An IOP of < 6mmHg in the first month occurred in 14.9 per cent of patients and required no intervention. Needling was required in approximately one in four patients. The 24-month data is anticipated to be published shortly. In another study, there was an average IOP reduction of 36.4 per cent following XEN insertion and a 57 per cent reduction in glaucoma medication use at 12 months.30 In this study nine out of 10 patients achieved an IOP of 18mmHg or less and 40 per cent were entirely medication free.30

SELECTING THE APPROPRIATE MIGS PROCEDURE

Appropriate patient selection is the key to success in glaucoma surgery. With many MIGS options to choose from, it is important to tailor the choice of procedure to each patient. This complex decision should take into account the patient’s specific needs, preferences, and values while taking into consideration their lifestyle, type of glaucoma, disease stage, and tolerance to prior treatment. It is essential that this discussion is centred around the individual patient and not the procedures a surgeon performs, as there is no ‘one size fits all’ MIGS procedure. The first step in creating an individualised treatment plan is to carefully listen to the patient to hear their symptoms, what they hope to achieve from treatment, and any concerns they may have about glaucoma or surgery. Having examined the patient and confirmed a diagnosis of ocular hypertension or open-angle glaucoma, it is then essential to answer four important questions:
- What is the stage of glaucoma?
- Is there cataract?
- Is IOP at or above target?
- Are glaucoma medications tolerated or causing side effects?

Glaucoma can be staged as mild, moderate, severe, or advanced, based on visual fields and optic nerve appearance.31 A commonly used system is the Glaucoma Staging System (GSS), which is a modification of the earlier Hodapp-Parrish-Anderson (HPA) criteria.32 The GSS uses the visual field mean deviation (MD) to classify patients as mild (MD better than -6.00dB), moderate (MD -6.01 to -12.00dB), severe (MD -12.01 to -20.00dB), or advanced (-20.01dB or worse), in addition to taking into account point clusters and hemifield comparisons.32 Staging glaucomatous damage enhances management and enables therapy to be tailored to each patient. In addition, careful documentation of the degree of damage is essential for monitoring the stability of glaucoma.
For patients with ocular hypertension or mild open-angle glaucoma who are well-controlled with glaucoma eye drops and tolerating treatment, continuing with existing treatment is recommended. Should visually significant cataract be present, cataract surgery can be discussed to improve vision, and this may also provide a slight reduction in IOP. After cataract surgery some patients may be able to stop their glaucoma eye drops.

Uncontrolled or Intolerance to Treatment

Where IOP is not controlled, or glaucoma eye drops are not tolerated due to symptoms such as red eyes, itching, burning, or stinging, the management depends on whether cataract is present. For patients without cataract, treatments such as selective laser trabeculoplasty (SLT) or additional/alternative glaucoma eye drops could be considered, in particular preservative-free medications, to provide better control of IOP and/or relieve symptoms associated with glaucoma eye drops. Patients with cataract should be referred for consideration of cataract surgery combined with a MIGS procedure such as the iStent Inject, Hydrus, or CyPass. All of these procedures have a good safety profile and lower IOP. The degree of IOP lowering varies between devices, and the CyPass may provide a greater IOP reduction and a higher chance of being medication-free. This may be important for patients with higher IOP, those needing a greater number of medications, or those with a strong need or desire to be free of medication because of intolerance. All of these devices can be combined with premium intraocular lenses (IOLs) such as toric lenses to treat any refractive error and provide the best possible unaided visual acuity. However, the use of multifocal IOLs is not recommended in patients with glaucoma or macular pathology.

Patients who already have moderate visual field loss from glaucoma are at a greater risk of progression, and therefore effective lowering of IOP is an important goal to prevent further visual loss. For patients with cataract and poorly controlled IOP or intolerance to glaucoma eye drops, I recommend cataract surgery combined with a supraciliary device like the CyPass MicroStent because of its proven ability to lower IOP. For patients without cataract, or for those who have already had cataract surgery, I discuss SLT or the XEN as a standalone procedure to provide better IOP control and/or reduce their need for glaucoma medications. At present the XEN is the only device both approved by the TGA and covered under Medicare for standalone use. Other devices like the iStent Inject, Hydrus, and CyPass MicroStent are approved for standalone use; however, out-of-pocket expenses will be higher. The XEN can also be performed with cataract surgery for patients with cataract and glaucoma refractory to medications.

Severe or Advanced Disease

Patients with severe or advanced disease require the most aggressive IOP lowering in order to avoid vision loss from glaucoma. For patients with advanced but controlled disease who require cataract surgery, there is a risk of an IOP spike causing ‘snuff out’ with cataract surgery alone. Combining cataract surgery with the XEN may reduce this risk. Where IOP is not controlled, procedures like trabeculectomy or a glaucoma drainage device (such as the Baerveldt tube) should be offered because of their proven ability to lower IOP.
For patients who are unwilling to accept the risks of traditional surgery, the XEN provides a less invasive option for refractory glaucoma; however, patients should be aware that the evidence base for its use in this setting is still growing. Minimally invasive glaucoma surgery provides a new option for the treatment of glaucoma, offering safer surgery for patients with less severe disease. With many options to choose from, optometrists will play an important role in helping educate their patients about the options available to them. There is no ‘one size fits all’ approach and options should be tailored to each patient. As many eye surgeons are in the initial phases of incorporating the latest technologies into their practice, optometrists need to be familiar with the pre-operative and post-operative expectations to best help their patients. Optometrists interested in perioperative co-management may wish to spend time in the operating theatre with a MIGS surgeon to better appreciate what is involved and what to expect post-operatively. Today, patients want to be well-educated about their treatment choices and engaged in the decision-making process. To assist optometrists and patients there are now resources like MIGS.org, which provide unbiased and factual information about MIGS surgery. With the recent advances in glaucoma surgery there are now many more options designed to control IOP, reduce the need for glaucoma eye drops, and help preserve patients’ vision and quality of life.

Dr. Nathan Kerr is a fellowship-trained cataract and glaucoma surgeon in Melbourne, Australia. Dr. Kerr completed a prestigious Minimally Invasive Glaucoma Surgery (MIGS) Fellowship at Moorfields Eye Hospital in London and is one of Australia’s most experienced MIGS surgeons. He was the first Australian surgeon to be accredited to perform the XEN procedure and he performed the first commercial CyPass MicroStent operation in Australia. Dr. Kerr serves as a Glaucoma Section Editor for Clinical and Experimental Ophthalmology and is the Clinical Lead for Glaucoma Surgical Trials at the Centre for Eye Research Australia. Dr. Kerr is a Consultant Ophthalmologist at the Royal Victorian Eye and Ear Hospital and consults privately at Eye Surgery Associates in East Melbourne, Doncaster, and Vermont South. doctorkerr.com.au

To earn your CPD points from this article, answer the assessment available at mivision.com.au/migs-tailored-treatments-for-glaucoma

1. Garway-Heath DF, Crabb DP, Bunce C, et al. Latanoprost for open-angle glaucoma (UKGTS): a randomised, multicentre, placebo-controlled trial. The Lancet 2015;385:1295-304.
2. Kerr NM, Patel HY, Chew SS, Ali NQ, Eady EK, Danesh-Meyer HV. Patient satisfaction with topical ocular hypotensives. Clin Exp Ophthalmol 2013;41:27-35.
3. Fechtner RD, Godfrey DG, Budenz D, Stewart JA, Stewart WC, Jasek MC. Prevalence of ocular surface complaints in patients with glaucoma using topical intraocular pressure-lowering medications. Cornea 2010;29:618-21.
4. Paimela T, Ryhänen T, Kauppinen A, Marttila L, Salminen A, Kaarniranta K. The preservative polyquaternium-1 increases cytotoxicity and NF-kappaB linked inflammation in human corneal epithelial cells. Molecular Vision 2012;18:1189.
5. Turaçli E, Budak K, Kaur A, Mizrak B, Ekinci C. The effects of long-term topical glaucoma medication on conjunctival impression cytology. International Ophthalmology 1997;21:27-33.
6. Herreras JM, Pastor JC, Calonge M, Asensio VM. Ocular surface alteration after long-term treatment with an antiglaucomatous drug. Ophthalmology 1992;99:1082-8.
7. Allan Clark DCB. The Norwich Trabeculectomy Study: Long-term Outcomes of Modern Trabeculectomy with Respect to Risk Factors for Filtration Failure. Journal of Clinical & Experimental Ophthalmology 2014;05.
8. Broadway D, Grierson I, Hitchings R. Adverse effects of topical antiglaucomatous medications on the conjunctiva. The British Journal of Ophthalmology 1993;77:590.
9. Broadway DC, Grierson I, O’Brien C, Hitchings RA. Adverse effects of topical antiglaucoma medication: II. The outcome of filtration surgery. Archives of Ophthalmology 1994;112:1446-54.
10. Jaenen N, Baudouin C, Pouliquen P, Manni G, Figueiredo A, Zeyen T. Ocular symptoms and signs with preserved and preservative-free glaucoma medications. European Journal of Ophthalmology 2007;17:341-9.
11. Rossi GCM, Tinelli C, Pasinetti GM, Milano G, Bianchi PE. Dry eye syndrome-related quality of life in glaucoma patients. European Journal of Ophthalmology 2009;19:572-9.
12. van Gestel A, Webers CA, Beckers HJ, et al. The relationship between visual field loss in glaucoma and health-related quality-of-life. Eye (Lond) 2010;24:1759-69.
13. Kumar JB, Bosworth HB, Sleath B, et al. Quantifying glaucoma medication adherence: the relationship between self-report, electronic monitoring, and pharmacy refill. Journal of Ocular Pharmacology and Therapeutics 2016;32:346-54.
14. Olthoff CM, Schouten JS, van de Borne BW, Webers CA. Noncompliance with ocular hypotensive treatment in patients with glaucoma or ocular hypertension: an evidence-based review. Ophthalmology 2005;112:953-61.e7.
15. Schwartz GF, Quigley HA. Adherence and persistence with glaucoma therapy. Surv Ophthalmol 2008;53 Suppl 1:S57-68.
16. Hwang D-K, Liu CJ-L, Pu C-Y, Chou Y-J, Chou P. Persistence of topical glaucoma medication: a nationwide population-based cohort study in Taiwan. JAMA Ophthalmology 2014;132:1446-52.
17. Bansal RK, Tsai JC. Compliance/adherence to glaucoma medications—a challenge. Journal of Current Glaucoma Practice 2007;1:22-5.
18. Alany RG. Adherence, persistence and cost–consequence comparison of bimatoprost topical ocular formulations. Current Medical Research and Opinion 2013;29:1187-9.
19. Sleath B, Blalock S, Covert D, et al. The relationship between glaucoma medication adherence, eye drop technique, and visual field defect severity. Ophthalmology 2011;118:2398-402.
20. Paula JS, Furtado JM, Santos AS, Coelho RdM, Rocha EM, Rodrigues MdLV. Risk factors for blindness in patients with open-angle glaucoma followed-up for at least 15 years. Arquivos Brasileiros de Oftalmologia 2012;75:243-6.
21. Kooner KS, AlBdoor M, Cho BJ, Adams-Huet B. Risk factors for progression to blindness in high tension primary open angle glaucoma: comparison of blind and nonblind subjects. Clinical Ophthalmology (Auckland, NZ) 2008;2:757.
22. The Advanced Glaucoma Intervention Study (AGIS): 7. The relationship between control of intraocular pressure and visual field deterioration. American Journal of Ophthalmology 2000;130:429-40.
23. Kass MA, Heuer DK, Higginbotham EJ, et al. The Ocular Hypertension Treatment Study: a randomized trial determines that topical ocular hypotensive medication delays or prevents the onset of primary open-angle glaucoma. Archives of Ophthalmology 2002;120:701-13.
24. Leske MC, Heijl A, Hussein M, Bengtsson B, Hyman L, Komaroff E. Factors for glaucoma progression and the effect of treatment: the Early Manifest Glaucoma Trial. Archives of Ophthalmology 2003;121:48-56.
25. Musch DC, Gillespie BW, Niziol LM, Lichter PR, Varma R, CIGTS Study Group. Intraocular pressure control and long-term visual field loss in the Collaborative Initial Glaucoma Treatment Study. Ophthalmology 2011;118:1766-73.
26. Carbonaro F, Lim KS. Managing Complications in Glaucoma Surgery. Springer; 2017.
27. Gulati V, Fan S, Hays CL, Samuelson TW, Ahmed II, Toris CB. A novel 8-mm Schlemm’s canal scaffold reduces outflow resistance in a human anterior segment perfusion model. Invest Ophthalmol Vis Sci 2013;54:1698-704.
28. Pfeiffer N, Garcia-Feijoo J, Martinez-de-la-Casa JM, et al. A randomized trial of a Schlemm’s canal microstent with phacoemulsification for reducing intraocular pressure in open-angle glaucoma. Ophthalmology 2015;122:1283-93.
29. Vold S, Ahmed IIK, Craven ER, et al. Two-year COMPASS trial results: supraciliary microstenting with phacoemulsification in patients with open-angle glaucoma and cataracts. Ophthalmology 2016;123:2103-12.
30. Sheybani A, Dick HB, Ahmed II. Early clinical results of a novel ab interno gel stent for the surgical treatment of open-angle glaucoma. Journal of Glaucoma 2016;25:e691-e6.
31. Susanna Jr R, Vessani RM. Staging glaucoma patient: why and how? The Open Ophthalmology Journal 2009;3:59.
32. Mills RP, Budenz DL, Lee PP, et al. Categorizing the stage of glaucoma from pre-diagnosis to end-stage disease. American Journal of Ophthalmology 2006;141:24-30.
For those who are new to deblending, we begin with an explanation of what it is and a general description of how it works. This is then followed by a discussion of AGDeblend in particular, covering the concepts that you should understand to use it.

Blending and Deblending

Seismic shot records should ideally only contain recordings of waves produced by that shot’s source, as this is typically an assumption of methods used for processing the data. In reality they also contain various types of noise that we must attempt to attenuate. One type of noise is interference from other sources. These may be sources from another nearby survey (known as signal interference or SI), or they may be sources from the same survey that were intentionally triggered while another shot was still being recorded (a technique, known as blended acquisition, that speeds up the survey and causes blending noise). Deblending attempts to separate these overlapping arrivals from different sources. It is most commonly used with blended acquisition, but can also be applied to signal interference (sometimes requiring knowledge of the shot times of the interfering sources, depending on the deblending method). As there will often be orders of magnitude of difference between the amplitudes of overlapping arrivals from different shots (with the direct arrival of one shot overlapping arrivals from several seconds after the previous shot was fired), deblending is a very difficult task and perfect separation is usually not possible.

Surveys with blended acquisition typically use continuous recording. The receivers record continuously, and then the record for each shot with unattenuated blending noise (also known as the "pseudo-deblended" or sometimes "combed" record) is extracted from this by copying a specified number of samples after the shot firing time from the continuous record. Deblending attempts to remove the blending noise from these records so that they look like blended acquisition was not used.

Many methods have been proposed to perform deblending. These usually either use traditional denoising techniques to attenuate the blending noise, or try to separate the shots using an inversion approach. The latter, which is generally more successful, is used by AGDeblend and so will be our focus. The inversion approach uses the fact that if we had the true recordings (no blending noise) and we blended them (shifted them in time to the time that the shot was fired and summed across shots), the result should match the recorded data. This can be expressed mathematically as \(d_o = \Gamma d_t\), where \(d_o\) is the observed continuous recording (with overlapping), \(\Gamma\) is the blending operator (shifting in time and summing), and \(d_t\) is the true recording of each shot (no blending noise). Since we know \(d_o\) and \(\Gamma\) (and it is linear), and we want to obtain \(d_t\), this is a linear inverse problem. We might thus think that we could simply apply an inverse problem solver. Unfortunately, that will not work well as this is an underdetermined problem (if you express \(\Gamma\) as a matrix, it has fewer rows than columns), which means that there are an infinite number of solutions that will satisfy it: the overlapping energy could, for example, be assigned entirely to one of the shots. To resolve this, we must add an additional constraint. One that is popular in deblending methods is to minimise the norm in Fourier-transformed windows of the estimated true data.
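To make the blending operator \(\Gamma\) and the pseudo-deblending (combing) step concrete, here is a minimal single-channel sketch in Python/NumPy. It is purely illustrative: the function names, array layout, and the assumption of a single receiver channel are ours, not AGDeblend's API.

```python
import numpy as np

def blend(shots, shot_times, nt_continuous):
    """The blending operator: shift each shot record to its firing time
    (in samples) and sum everything into one continuous record."""
    continuous = np.zeros(nt_continuous)
    nt = shots.shape[1]
    for shot, t0 in zip(shots, shot_times):
        continuous[t0:t0 + nt] += shot
    return continuous

def pseudo_deblend(continuous, shot_times, nt):
    """Extract nt samples after each firing time ('combing'); records of
    shots that overlap in time retain each other's energy as blending noise."""
    return np.stack([continuous[t0:t0 + nt] for t0 in shot_times])

# Two shots recorded on one channel, the second fired before the first ended
nt = 100
true_shots = np.random.randn(2, nt)           # stand-in for the true records d_t
shot_times = np.array([0, 60])
d_o = blend(true_shots, shot_times, nt_continuous=200)
combed = pseudo_deblend(d_o, shot_times, nt)  # pseudo-deblended records
```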
This approach relies on the hope that the true recordings can be represented more efficiently in the Fourier domain than the other possible solutions can, so minimising the norm should select the true solution from the infinite number of possible solutions. There are two requirements for this to be reasonable. One is that the true arrivals are approximately locally planar across shots (and so can be compactly represented in the windowed Fourier transform domain). The second is that the timing of the shots has an element of randomness, so that the blending noise is incoherent across shots (and so is transformed into random noise in the Fourier domain). These requirements are often fairly well met by typical seismic surveys, especially deep marine towed streamer datasets with good source sampling, but can be problematic when there are variations near the sources or receivers that cause substantial changes in the recorded arrivals between shots (such as in shallow water or land surveys with a variable near surface).

Data provided for inversion-based deblending should be as close to raw data as possible because the inversion process will try to find a solution that matches the recorded data when blended, but any processing performed before deblending may interfere with this. Denoising methods that act across shots are particularly damaging as they will likely severely affect the blending noise that is needed for inversion. Noise that is not caused by blending can also reduce the ability to deblend successfully, however. Such noise may transform into random noise across all wavenumbers in the Fourier domain, potentially making a solution that includes blending noise have a lower L1 norm than the true signal, due to the triangle inequality (the components of the two types of noise may have opposite sign and so lead to cancellation). In such cases, careful processing to attenuate such noise without substantially affecting the signal or blending noise might be beneficial before deblending. Frequency filters and common shot gather denoising are generally the safest options.

Deblending will usually reduce the blending noise, but it is a difficult task and so even in ideal cases there will often still be a visible residual and some signal loss. The likelihood of success is mostly determined by how large the regions of traces are over which arrivals are approximately planar, the blending factor (the number of shots that arrivals are simultaneously being recorded from, also known as the blending fold), and by the level of non-blending noise in the data.

Although AGDeblend aims to be easy to use, there are some concepts that are important to understand before using it. AGDeblend consists of a blend function and a deblend function. The concepts needed for the blend function are a subset of those needed for deblending.

Basis Pursuit in the Fourier Domain

Deblending is often implemented by minimising the norm in a transformed domain in which seismic arrivals are expected to be efficiently represented. AGDeblend uses the multidimensional Fourier transform and minimises the L1 norm (also known as basis pursuit). It is possible that better results might be obtainable with other transforms and norms, but this combination was chosen as the best match for AGDeblend's goals. The Fourier transform is efficient and implementations are widely available.
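The first of the two requirements discussed above can be demonstrated with a small NumPy experiment: the same amount of energy has a much smaller L1 norm in the Fourier domain when it arrives coherently (shifted by a constant amount from trace to trace) than when it arrives at random times, which is what allows the minimisation to tell signal and blending noise apart. This is only an illustration of the idea and is not part of AGDeblend.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, nt = 32, 64
wavelet = np.exp(-np.arange(nt) / 4.0)

# Coherent (locally planar) event: the arrival time shifts by a constant
# amount from trace to trace.
planar = np.stack([np.roll(wavelet, 2 * i) for i in range(n_traces)])

# Incoherent "blending noise": the same wavelet, but at random times.
incoherent = np.stack([np.roll(wavelet, rng.integers(nt)) for _ in range(n_traces)])

l1_planar = np.abs(np.fft.fft2(planar)).sum()
l1_incoherent = np.abs(np.fft.fft2(incoherent)).sum()
print(l1_planar < l1_incoherent)  # True: the coherent event is far sparser
```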
The L1 norm is convex (unlike some other possible options, such as the L0 norm), resulting in reliable solver methods such as ISTA. Windows of data (discussed below) are Fourier transformed in all dimensions. The most important consideration when choosing the setup for deblending is how to cause the maximum separation in the Fourier domain between the signal and the blending noise, which usually means choosing whatever allows the signal to be represented as compactly as possible (since the blending noise across shots will transform into uniform random noise). To enhance the ability to compactly represent the signal, windows are zero-padded in each dimension (with one quarter of the length of that dimension, by default) before being transformed. Hann window tapers are also used in every dimension. Despite these efforts, it is still likely that the solution with the minimum L1 norm will not be exactly the true solution. Variations from shot to shot may not be more coherent (especially at high frequencies) than blending noise, and so assigning these amplitudes to the correct shot will not necessarily reduce the L1 norm. These variations are thus likely to instead be shared between overlapping shots, resulting in signal loss and an equal amount of residual blending noise.

AGDeblend uses the ISTA (iterative shrinkage/thresholding algorithm) method with a decaying threshold to perform basis pursuit. The model is stored as windows in the Fourier domain. At each iteration the windows are forward transformed and blended, and the residual compared to the recorded data is backpropagated to update the model. Components of the Fourier domain model are then soft thresholded, decreasing their amplitude so that any below the current threshold are zeroed. This will initially result in all but the highest amplitude components in the Fourier domain being zeroed. These high amplitude components are likely to correspond to the most coherent strong arrivals in the data. These components will then be used to predict and remove blending noise in overlapping shots, so that in the next iteration the amplitude of components in the Fourier domain corresponding to blending noise should decrease, allowing the ISTA threshold to decrease without including them. In this way the threshold can be decayed to zero over the iterations of the deblending method, gradually predicting and removing more of the blending noise. Ideally, the threshold should decay slowly enough so that no blending noise ever passes it. Increasing the number of iterations (and thus decreasing the rate at which the threshold decays) should therefore improve the result. If the strongest components of the blending noise in the Fourier domain are substantially weaker than the strongest signal components, setting the initial threshold factor to a value lower than one can reduce the number of iterations required to obtain a good result. The maximum amplitude in the Fourier domain is multiplied by this factor to set the initial threshold, so the decay starts from a smaller value rather than from the maximum amplitude.
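A toy version of the ISTA loop just described, for a single window, a single channel, and with no tapering or zero-padding, might look like the following. It reuses the `blend` and `pseudo_deblend` helpers from the earlier sketch; the step length and linear decay schedule are arbitrary choices for the illustration, not AGDeblend's actual implementation.

```python
import numpy as np

def soft_threshold(coeffs, thresh):
    # Shrink the magnitude of every Fourier coefficient by thresh,
    # zeroing any coefficient whose magnitude is below it.
    mag = np.abs(coeffs)
    scale = np.where(mag > thresh, (mag - thresh) / np.maximum(mag, 1e-30), 0.0)
    return coeffs * scale

def ista_deblend(d_o, shot_times, nt, n_iters=200, step=0.5):
    model_f = np.zeros((len(shot_times), nt), dtype=complex)  # Fourier-domain model
    thresh0 = None
    for it in range(n_iters):
        shots = np.fft.ifft2(model_f).real                 # model -> data domain
        residual = blend(shots, shot_times, len(d_o)) - d_o
        update = np.fft.fft2(pseudo_deblend(residual, shot_times, nt))
        model_f = model_f - step * update                  # backpropagate the residual
        if thresh0 is None:
            thresh0 = np.abs(model_f).max()                # starting threshold
        thresh = thresh0 * (1.0 - it / n_iters)            # decay towards zero
        model_f = soft_threshold(model_f, thresh)
    return np.fft.ifft2(model_f).real                      # estimated deblended shots
```

In a real use of AGDeblend the model is held as many overlapping, tapered, zero-padded windows rather than one Fourier transform of a whole gather, but the structure of the iteration is the same.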
Volumes, Patches, and Windows

AGDeblend uses three levels of division of the data when deblending. The largest are volumes, which are divided into patches, which are in turn divided into windows. The user provides the data already divided into volumes and patches, but only has to specify the shape of windows that the patches should be divided into.

Volumes correspond to blocks of data that are separate in space. In many cases, only one volume will be used, but the ability to use multiple volumes is useful for situations such as when there is interference from a nearby survey. We wish to include this other survey in our deblending (so that it can be separated from our survey’s data), but we do not expect that arrivals in that survey’s traces will be coherent with those in our survey, and so do not want them to be Fourier transformed together. Arranging them in different volumes achieves this. Different volumes may have different dimensions, such as (CMP, Offset) for one (a volume from a 2D survey) and (ShotX, ShotY, ChannelX, ChannelY) for another (a nearby 3D survey causing interference). Although there is freedom to choose the most appropriate dimensions for your survey, incorporating as many source dimensions as possible is usually helpful as that tends to be where the difference between the signal and blending noise is most obvious. It is not possible to distinguish signal and blending noise using channel dimensions alone, so at least one source-related dimension should be used, but channel dimensions can also be helpful as the blending noise may have a different dip compared to the signal, improving separability. Adding extra dimensions substantially increases the memory and computational cost, however, so they should be chosen carefully. The only constraint on dimensions is that the time dimension must always be included in every volume and must be the "fast" dimension (contiguous in memory).

In simple cases you might also only use one patch. There are two main reasons for using multiple patches: to split the survey into blocks that can be handled by different MPI processes, and for irregular survey layouts. MPI is the recommended method of achieving parallel processing with AGDeblend, and is the only way of splitting the dataset across multiple nodes on a distributed memory system. Each MPI process must be assigned different patches, with a typical approach being to have one patch for each process. Neighbouring patches should overlap with each other by half a window length, which is one of the more complicated parts of using AGDeblend. See Examples 5 and 6 for simple demonstrations. Patches need to be hyperrectangles. The patches are on a grid, with a location specified by their coordinates, but not all cells of the grid need to have a patch assigned to them, allowing you to create irregularly shaped volumes. There is an example of this in Example 9. If we label the coordinates of a patch that covers the range \(([0:16), [0:32))\) of a volume with two spatial dimensions as \((0, 0)\), and if the window shape in this volume is \((16, 16)\), then the coordinates \((1, 1)\) would refer to a patch that starts at the point \((8, 24)\) (as this is \((16, 32)\) minus half a window length of overlap in each dimension). If every patch in the volume has shape \(16 \times 32\), then the patch with coordinates \((2, 1)\) would cover the range \(([16:32), [24:56))\). Patches do not need to all be the same shape, however, but those with the same coordinate in a particular dimension do need to have the same length in that dimension. If a patch with coordinate 1 in the first dimension has length 24 in the first dimension, then all other patches with coordinate 1 in the first dimension also need to have length 24 in that dimension. It is up to the user to ensure that patches have the correct shapes and are overlapped correctly. AGDeblend does not check this.
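The overlap arithmetic in the example above can be checked with a couple of lines (illustrative only; this helper is not part of AGDeblend, which leaves the bookkeeping to the user and does not verify it):

```python
def patch_start(coords, patch_shape, window_shape):
    """Start index of the patch with integer coordinates `coords`, assuming
    every patch has shape `patch_shape` and neighbouring patches overlap by
    half a window length in each spatial dimension."""
    return tuple(c * (p - w // 2) for c, p, w in zip(coords, patch_shape, window_shape))

print(patch_start((1, 1), (16, 32), (16, 16)))  # (8, 24)
print(patch_start((2, 1), (16, 32), (16, 16)))  # (16, 24), i.e. the ranges [16:32), [24:56)
```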
Seismic arrivals are often well approximated by planes over a small number of neighbouring traces. The purpose of windows is to further decompose patches into overlapping windows that contain a small enough number of traces for this to be true. Each window is separately transformed into the Fourier domain, so the window shape should be chosen to be the largest number of traces in each dimension over which the arrivals look approximately planar. The recordings are also divided into windows in time. The recommended window length in the time dimension is twice the maximum wavelength (in units of time samples). Each volume may have a different window shape, but it is advisable to have approximately the same number of samples in every window. The window shapes do not need to evenly divide into the patch shape as some will be automatically made larger to cover the patch if necessary. Window lengths must be even unless the window covers the whole patch in that dimension and the patch has no neighbours on either side in that dimension. A common example where an odd window length is used is in multi-source marine surveys, where a source vessel may have an odd number of sources, such as three. You may choose to use the gun index as one of your dimensions, so that dimension will be of length three and you can thus use a window length of three in that dimension.

Disjoint Continuous Recording

In a typical blended survey, the receivers record continuously, so that the number of samples in the survey is the number of channels times the duration of the survey in time samples (before pseudo-deblending, which can make the number of samples substantially larger due to duplication). Blending and deblending both use the blending operator, which shifts traces in time and sums them to form a continuous record. Sometimes, however, the continuous record is not actually fully continuous. Acquisition might stop during the night, for example, so there is a separate continuous record for each day with a gap of several hours between them. Blending occurs within each of these records, but not between them. It may still be advantageous to deblend multiple separate continuous records simultaneously, however, as they may be contiguous in space (a source line might be acquired next to another source line that was acquired the day before) and so help to identify signal and noise in each other. AGDeblend supports such disjoint continuous records. Memory is only allocated to store the time samples for which there are recordings, so gaps, even of weeks, will not use additional memory. Each recording channel is treated separately, so the gaps may occur at different times for different channels.

AGDeblend needs to know which channel each trace is from so that when the recordings are blended they can be added to the correct channel’s continuous record. This information is supplied as an array, with one entry for each trace, through the input arguments. This provides flexibility in the arrangement of the provided data. It is possible, for example, for the input to all come from a single channel, in which case the channel array argument would be filled with the same number. Other possibilities include providing the input arranged with shot and channel dimensions, or with CMP dimensions. The same channel can occur in different patches and volumes, and even on different nodes when using MPI, with AGDeblend ensuring that the samples are still added to the correct place in the continuous records.
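The role of the channel array can be sketched as follows. This is a simplification: it allocates a full continuous record per channel, whereas AGDeblend only stores the time spans that were actually recorded, and the names are ours rather than AGDeblend's.

```python
import numpy as np

def blend_per_channel(traces, shot_times, channels, nt_continuous):
    """Sum each trace into the continuous record of the channel it was
    recorded on; channels[i] is the channel index of trace i."""
    records = {c: np.zeros(nt_continuous) for c in set(channels)}
    nt = traces.shape[1]
    for trace, t0, chan in zip(traces, shot_times, channels):
        records[chan][t0:t0 + nt] += trace
    return records

# Three traces on two channels: traces 0 and 2 were recorded on channel 7
traces = np.random.randn(3, 100)
records = blend_per_channel(traces, shot_times=[0, 40, 80],
                            channels=[7, 3, 7], nt_continuous=250)
```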
In real surveys, shots misfire, and so we do not expect the arrivals in their record to be coherent with neighbouring shots. Similarly, there are bad receivers that only record noise, or bursts of noise in particular traces. It is also often not possible to arrange survey recordings in perfect hyperrectangles. The sources and receivers in 3D surveys are frequently not located on a rectangular grid, but there are instead groups of lines that are longer than others, or large holes around obstacles. Even when the recordings form a hyperrectangle when arranged with shot, channel, and time dimensions, you may wish to arrange them with CMP and offset dimensions when performing deblending, in which case the CMPs at the edges will probably have fewer offset traces than CMPs in the middle, causing it to no longer be a hyperrectangle.

The approach used by AGDeblend for all of these situations is for the user to specify a trace type for each trace. The options are live, bad, and missing. Only the sample values from live traces are used. The difference between bad and missing traces is small. Missing traces are ignored completely when blending (in both the blending and deblending functions). The shot time and channel specified for them is thus not used. The samples covered by a bad trace in the continuous record are, however, muted. Samples from live traces that overlap with the bad traces will thus also be muted. This is to avoid corrupting the live traces with bad values. As a result, the shot time and channel must be specified for bad traces. One common situation where missing traces are useful is in multi-source marine surveys. A source vessel may tow several airgun sources that are fired in round-robin order. With these, it can be beneficial to use the gun index as one of the dimensions of the data, as this usually helps the arrivals to be more planar within windows. If there are three airguns, labelled 1, 2, and 3, from one side to the other, the first to be fired may not be 1. In that case we will not be able to form a hyperrectangle with planar arrivals (as such a hyperrectangle would require an entry for the missing shot). We can resolve this by creating traces in the place of the missing initial (and potentially final) shots and labelling them as missing. The output of deblending will replace bad and missing traces with values predicted by the model. These may be useful, especially for bad or missing traces surrounded by live traces, but the deblending implementation, and the L1 norm minimisation that it relies on, are not designed for interpolation and so it will probably be possible to get more accurate estimates of these missing trace values by using a dedicated interpolation method.

For sources that have a long source wavelet, such as Vibroseis sources, it can be advantageous to convolve the model with the source wavelet prior to applying the blending operator when performing deblending. This can improve the deblending results and, if the same wavelet is shared by multiple traces, it also reduces the memory requirement, as the length of the stored model in the time domain will be \(nt_o - nt_w + 1\), where \(nt_o\) is the recorded trace length and \(nt_w\) is the length of the source wavelet. You can provide as many source wavelets as you like, and, with the wavelet_idxs parameter, specify which of the wavelets to use for each trace. All of the traces within a volume must have wavelets of the same length, but different volumes may use wavelets of different lengths.
In AGDeblend only the phase information of the source wavelets is used.
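The length bookkeeping implied by the \(nt_o - nt_w + 1\) formula above is easy to verify with a one-line convolution (the variable names are illustrative, not AGDeblend's):

```python
import numpy as np

nt_o, nt_w = 1000, 201                          # recorded trace length, wavelet length
model_trace = np.random.randn(nt_o - nt_w + 1)  # stored model trace
wavelet = np.random.randn(nt_w)
modelled = np.convolve(model_trace, wavelet)    # 'full' convolution with the wavelet
print(len(modelled) == nt_o)                    # True: matches the recorded trace length
```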
There are over 400 species of birds that can be found in Maryland. Of those, about 60 are considered to be common. This blog post will discuss the most common birds in Maryland, as well as their habits and habitats. If you’re interested in birding, or just want to learn more about the birds that live near you, this is the blog post for you!

Most Common Backyard Birds of Maryland:

Cardinals are one of the most easily identifiable birds in North America. They are medium-sized songbirds with crested heads, red bodies, and black masks around their eyes. Cardinals are found in woods and forests across the eastern United States and parts of Canada. They prefer habitats with dense vegetation where they can find plenty of insects to eat. Cardinals are also known for their loud, cheerful songs. Male cardinals will sing to defend their territory and attract mates. Females also sing, but their songs are shorter and less melodious than the males’. Cardinals are monogamous birds and pairs will stay together year-round. Both parents help care for the young, which fledge (leave the nest) after about two weeks. Cardinals are one of the few bird species in which the males and females look different from each other. The males are brightly colored, while the females are a duller brownish-red color. This difference is known as sexual dimorphism. Cardinals will eat a variety of foods, including seeds, insects, berries, and fruits. In winter, when food is scarce, cardinals will often visit bird feeders to supplement their diet. Cardinals are relatively large songbirds and measure about nine inches long from beak to tail.

The Red-winged Blackbird (Agelaius phoeniceus) is a species of true blackbird in the genus Agelaius. They are about 16.0-20.0 cm (6.3-7.9 inches) long and have a wingspan of 31 cm (12¼ inches). The adult male has black feathers with a red shoulder and yellow wing bar. Adult females look quite different, having dark brown feathers all over. Both sexes have a short, thin, black beak; long legs; and yellow eyes. Red-winged Blackbirds are found in open marshes and wet meadows throughout most of North America. They are also common in agricultural areas such as fields and pastures. These birds are mostly found in the eastern half of the continent, but they range as far north as Alaska and as far west as California. The Red-winged Blackbird is a very social bird. They often form large flocks of hundreds or even thousands of birds. During breeding season, however, they can be quite aggressive. Males will often fight with each other for the chance to mate with a female. The Red-winged Blackbird feeds on insects, black oil sunflower seeds, and berries. During the summer months, their diet is mostly made up of insects. In the winter, they switch to eating more seeds and berries.

Barn swallows are the most widespread swallow in the world. They have a dark blue back, rusty forehead, and long tail streamers. The female usually has shorter tail streamers than the male. Barn Swallows are found near human habitation and open country. They eat insects which they catch on the wing. Barn Swallows build their mud nests on buildings or other structures. A pair will have one to six broods per year. The young leave the nest about 20-25 days after hatching. The barn swallow is a migratory bird, spending winters in Central and South America. In North America, they can be found anywhere south of Canada during the breeding season. They are one of the latest nesting swallows, often not starting to build their nests until July.
The barn swallow is a social bird and can often be seen in large flocks during migration and on stopovers during the breeding season. The barn swallow is an important bird for farmers as it eats large numbers of insects that would otherwise damage crops. It has declined in numbers in recent years due to a loss of suitable habitat, but it is still one of the most common birds in North America.

Song Sparrows are one of the most common birds in North America. They can be found in nearly every habitat, from forests to deserts. The Song Sparrow is a medium-sized sparrow with streaked brown upperparts and buff underparts. Their breast is heavily streaked with dark brown and they have a white belly. Their face is grayish with a brown-streaked crown and a streaked buff throat. Their bill is pink with a dark tip. Juveniles are similar to adults but have lighter brown upperparts and streaks on their breasts and face. Song Sparrows are seed eaters but will also eat insects, especially in the summer. They forage on the ground or in low vegetation. In the winter, they often form flocks and can be found in open fields or along roadsides. Song Sparrows are monogamous and breed from late April to early July. The female builds a cup-shaped nest out of grass, bark, and other plant material. She lays three to five eggs which are pale blue with brown spots. Both parents help to incubate the eggs and feed the young.

The Dark-eyed Junco is a small sparrow with a slate-colored back and white belly. The Junco has a pink bill and legs, and brownish wings with white bars. They are about five to six inches in length. The diet of the Dark-eyed Junco consists of insects, black oil sunflower seeds, and berries. Dark-eyed Juncos can be found in forests and woodlands, but are also common in suburban and urban areas. They build nests in trees, bushes, or on the ground. Dark-eyed Juncos are social birds and often travel in flocks. During the breeding season, however, they become territorial and will chase other birds away from their territory.

The Carolina chickadee is a small, sprightly bird with a black cap and bib, white cheeks, and a gray back and wings. Chickadees are acrobatic little birds that are fun to watch as they flit about in search of food. These busybodies often form large flocks in winter. Chickadees are not shy about coming to backyard bird feeders, where they will eat sunflower seeds, suet, and other foods. Chickadees also eat insects and spiders. Carolina chickadees are found in woodlands and forests throughout the eastern United States. They nest in tree cavities or nest boxes. Chickadees are non-migratory, meaning they will stay in their territories year-round. Chickadees are social birds and often travel in small flocks. They are constantly on the move as they search for food. The Carolina chickadee has a loud, distinct call that sounds like “chick-a-dee-dee-dee.” This call is how they got their name.

The Downy Woodpecker is the smallest woodpecker in North America, measuring just six to seven inches in length. The adult male has a black back and white wings with small black spots, while the female’s wings are mostly white with larger black spots. Both sexes have a black head with a white stripe running down the back, a white belly, and a black tail. Downy Woodpeckers are found in woodlands across North America and prefer to nest in trees with soft bark, such as poplars and willows. These little birds eat mostly insects, which they find by pecking at tree bark or poking their long tongues into crevices.
They will also eat berries and nuts in the winter. Downy Woodpeckers are acrobatic fliers, often seen swinging upside-down from tree branches as they hunt for food. These birds are also known for their loud drumming, which they use to communicate with other woodpeckers and to attract mates.

White-throated Sparrows are a common bird in Maryland. They have a white throat and breast with yellow spots on their wings. They are about six inches long and weigh one ounce. Their diet consists of seeds, insects, and berries. White-throated Sparrows live in forests, fields, and gardens. They are active during the day and sing a beautiful song. If you see a White-throated Sparrow, be sure to watch it for a while and enjoy its song.

The American Goldfinch is a small songbird with a short conical bill. The adult male has a yellow body, black wings, and white tail feathers. The adult female is duller in coloration. Goldfinches are often found in open woodlands and fields feeding on seeds. They build their nests in trees and shrubs. Goldfinches are social birds and often travel in flocks. They are one of the last bird species to migrate in the fall.

The Common Grackle is a blackbird that is found in North America. The adult male has iridescent black feathers and yellow eyes. It can range in size from 11 to 13 inches. The diet of the Common Grackle consists of insects, earthworms, and other small invertebrates. This bird is found in open habitats such as fields, marshes, and parks. During the breeding season, the male Common Grackle will perform a courtship display which includes singing and flying with his wings spread. The female will build the nest, which is usually made of grasses and twigs. Both parents will help to raise the young.

The gray catbird is a small songbird with a long tail and gray plumage. It has black wings with white wingbars, and a black cap. The underparts are pale gray. It measures 18 cm in length and weighs 23-35 grams. The diet of the gray catbird consists of insects, berries, and fruits. It forages on the ground or in bushes. The gray catbird is found in woods and forests in eastern North America. It breeds in Maryland from May to August. The nest is built in a tree or shrub, usually near the ground.

European Starlings are small to medium-sized birds that are about 20 cm in length. They have black feathers with iridescent green and purple plumage. The males and females look alike. Their diet consists of insects, fruits, and berries. Starlings live in woodlands, farmland, and urban areas. They build their nests in tree holes or crevices. Starlings are gregarious birds that roost and forage in flocks. They are also known to cause damage to crops and buildings.

Mourning doves are among the most common birds in North America. Juveniles have not yet developed the telltale mourning dove features of black spots on the wings and tail. Mourning doves are medium-sized birds. They have long, pointed tails and small heads with black bills. Their upperparts are grayish brown and their underparts are pale gray. Mourning doves are found in a variety of habitats including open woodlands, farmland, and urban areas. They feed on seeds and insects. Mourning doves are generally shy birds but can be aggressive when defending their nests. They will sometimes attack much larger birds such as hawks and crows. Mourning doves mate for life and build their nests in trees or on ledges. The female lays two eggs which hatch in about two weeks.
Both parents feed the young birds until they are able to fly and fend for themselves.

Tufted Titmice are small birds with big personalities. They are easily identified by their tufted head feathers and gray bodies. These birds are acrobatic fliers and love to eat insects. You can find them in woodlands near trees and bushes where they build their nests. Watch for their quick movements and listen for their high-pitched “peter peter” call. These social birds often travel in flocks and are a joy to watch.

The Red-bellied Woodpecker is a common bird found in Maryland. It is a medium-sized woodpecker with distinctive red markings on its belly and head. The male has a red cap, while the female has a dark brown cap. Both sexes have black and white stripes running down their backs. Red-bellied Woodpeckers are found in forests and woodlands. They feed on insects, fruits, and nuts. They nest in holes drilled into trees. Red-bellied Woodpeckers are active birds that are often seen climbing up tree trunks in search of food. They can also be heard drumming on trees with their beaks. This behavior is used to attract mates and to establish territory.

The American Robin is a medium-sized songbird weighing about 80 grams. It has black upperparts, a rusty-red breast, a white throat and belly, and grayish legs and feet. The adult male has darker black feathers on its head, back, and tail. This bird can be found in woodlands, gardens, and parks across North America. It is a very adaptable bird and will nest in a wide variety of locations, including in trees, on the ground, or even on man-made structures. The American Robin feeds mainly on insects but will also eat berries and fruits. This bird is most active during the day and can often be seen perching on tree branches or foraging for food on the ground. During the breeding season, American Robins will form pairs and build nests made of twigs, grass, and leaves. These nests are usually located in trees but can also be found on the ground or on man-made structures. The female Robin will lay between three and five eggs, which are incubated for about two weeks. The young Robins will fledge (leave the nest) after about three weeks but will continue to be fed by their parents for several weeks after that. American Robins are relatively long-lived birds with a lifespan of up to 14 years in the wild.

Carolina Wrens are small songbirds with brown upperparts and a buff-orange belly. Their tail is usually cocked at an angle, giving them a rakish appearance. Carolina Wrens have a white stripe above each eye and a black band across the top of their heads. They are one of the few North American birds that actually sing louder in the winter than in the summer. Carolina Wrens are insectivores and will eat just about any type of invertebrate they can find. They forage actively on tree branches, picking insects off of leaves and twigs. Occasionally, they will also eat berries and other small fruits. Carolina Wrens are small birds, measuring only about five inches in length. They have a wingspan of around eight inches. Carolina Wrens are found in the southeastern United States, from Virginia south to Florida and west to central Texas. They prefer habitats with dense underbrush, such as forests, swamps, and gardens.

Blue Jays are one of the most recognizable birds in North America. They are known for their bright blue plumage and bold white markings. Blue Jays are also known for their loud calls and antics. But there is more to these beautiful creatures than meets the eye.
Blue Jays are actually quite shy birds and are not often seen in large flocks like some other bird species. They are also very territorial and will defend their homes from intruders. Blue Jays have a varied diet that includes acorns, berries, insects, and small mammals. They will also eat other birds’ eggs and nestlings if given the chance. Blue Jays are medium-sized birds with a wingspan of about 13 to 17 inches. They are found in wooded areas across North America.

American Crowders are a species of bird that can be found in Maryland. They are a small bird with brown and white feathers. Their diet consists of insects and berries. American Crowders typically live in forests or woodlands. They are a shy bird but will sometimes come to feeders. American Crowders mate for life and usually have two to three chicks per clutch.

The House Finch is a common bird found in Maryland. It is easily identified by its red head and breast. The House Finch has a diet that consists mostly of seeds and insects. It is a small bird, measuring about six inches in length. The House Finch’s habitat includes open woodlands, scrublands, and gardens. It is a social bird that often forms flocks with other finch species. The House Finch is not a migratory bird, so it can be seen in Maryland all year round.

White-breasted Nuthatch – Sitta carolinensis

The White-breasted Nuthatch is a small songbird with blue-gray upperparts, white underparts, and a black cap. It has a long, stout bill and short legs. The nape, face, and throat are white; the forehead is black. This bird gets its name from its habit of wedging nuts into crevices in trees and then hammering them open with its strong bill. The White-breasted Nuthatch is a common bird of woodlands and forests in the eastern United States. It is a year-round resident in most of its range, but birds in the northern part of the range may migrate south for the winter. This nuthatch forages for food on tree trunks and branches, moving up, down, and around the tree in an acrobatic fashion. It often hangs upside down while feeding. The diet of the White-breasted Nuthatch consists mainly of insects and seeds. During the summer months, insects make up a large part of the diet. In the winter, when insects are scarce, seeds become the main food source. The White-breasted Nuthatch is known to cache (or hoard) large quantities of seeds in crevices in trees to help it survive during periods of food scarcity.

The Eastern Bluebird is a small thrush with bright blue upperparts and a red-orange breast. The head has a white throat and black eyestripe. This bird can be found in open woodlands, orchards, and farmlands. They eat insects, berries, and fruits. The nesting season for this bird is from April to May. The female will build the nest, which is a cup made of grass, twigs, and leaves. She will lay anywhere from three to seven eggs. The incubation period is about two weeks and then another two weeks until the young fledge. The male will help feed the young. The Eastern Bluebird has a wingspan of about nine to eleven inches. They are six to seven and a half inches long. The adult weight is about one ounce to one and a half ounces. This bird can be found in the eastern part of North America, from southern Canada to northern Florida. During the winter months, they will migrate southward. You can find them in woodlands, fields, and backyards.

The White-breasted Nuthatch is a small songbird with a stubby tail. It has blue-gray upperparts and a white breast with black streaks on the sides.
The belly and flanks are rusty orange. The head has a black cap and nape, and there is a white stripe over the eye. This bird nests in cavities in trees, often using old woodpecker holes. It is a common bird in forests and woodlands of the eastern United States. The White-breasted Nuthatch is a small songbird, measuring about five inches in length. It has blue-gray upperparts, with a white breast and belly, and black streaks on the sides. The head has a black cap and nape, with a white stripe over the eye. This bird nests in cavities in trees, often using old woodpecker holes. It is a common bird in forests and woodlands of the eastern United States. The White-breasted Nuthatch is an agile climber, able to move up, down, and around tree trunks and branches. It often hangs upside down while foraging for food. Chickadees are small songbirds with black and white feathers. They are about five to six inches long with a wingspan of about eight inches. Chickadees have a round body and a short tail. Their bill is black and their legs are gray. Chickadees eat insects, seeds, berries, and nuts. They live in woods and forests in North America. Chickadees are active during the day and are often seen in pairs or small groups. They are known for their cheerful “chick-a-dee-dee-dee” call. Chickadees are not migratory birds but they may move to lower elevations in winter. Louisiana Waterthrush – Parkesia motacilla The Louisiana waterthrush is a small songbird with drab brown upperparts and a whitish belly. It has a brown streaked breast, a long neck, and a short tail. Its bill is slightly upturned and it has yellow eyes. This bird measures about five to six inches in length and weighs about one ounce. The Louisiana waterthrush feeds on insects, spiders, and other small invertebrates. It forages for food by walking along the ground or wading in shallow water. It breeds in wooded areas near streams, ponds, or lakes. The female builds a cup-shaped nest out of leaves, twigs, and grasses. She lays four to six eggs, which are incubated for about two weeks. Both parents help care for the young birds. Yellow-billed Cuckoo – Coccyzus americanus The Yellow-billed Cuckoo is a medium-sized gray and white bird with a black bill and yellow eyes. It has a wingspan of 15 inches and is about 12 inches long. The cuckoo’s diet consists mostly of caterpillars, but it will also eat other insects, berries, and fruits. The Yellow-billed Cuckoo is a shy bird that is most often heard rather than seen. It is found in woods and forests across North America, including Maryland. The cuckoo is active during the day and can often be seen perched atop trees or bushes. It nests in tree cavities or on platforms made of sticks. The Yellow-billed Cuckoo is not currently considered endangered, but its numbers have been declining in recent years. This decline is likely due to habitat loss and the use of pesticides. You can help this bird by planting native trees and shrubs, avoiding the use of pesticides, and providing nesting boxes for them. What are common birds in Maryland? One common bird in Maryland is the American Goldfinch. This small yellow bird is often found near open fields and woods. Another common bird is the Blue Jay. This blue and white bird is often found in backyards and gardens. The last common bird is the Northern Cardinal. How do I identify a bird in my backyard? First, you’ll want to identify the type of bird you’re seeing. Is it a waterbird? A gamebird? A songbird? 
Knowing the classification can help you rule out certain types of birds and narrow down your search. Once you’ve classified the bird, take note of its size, coloration, and markings. What does the bird look like? Does it have a long neck, short legs, or a brightly colored beak? All of these characteristics can help you identify the bird. If you’re still having trouble, try looking up pictures of common birds in your area and see if any of them match the bird you’re seeing. With a little patience and some careful observation, you should be able to identify the bird in your backyard! What finches are in Maryland? The most common finch in Maryland is the American Goldfinch. Other finches include the House Finch, Purple Finch, and Pine Siskin. Finches are often found in flocks near feeders or in trees and shrubs. What red birds are in Maryland? There are several red birds in Maryland, including the cardinal, the scarlet tanager, and the northern oriole. Each of these birds is unique and has its own habits and preferences. How many species of birds are there in Maryland? No one knows for sure, but according to the Maryland Department of Natural Resources, there are more than 450 species of birds that have been recorded in the state. That number is always changing, though, as new bird species are discovered and old ones disappear. If you’re looking for a fun and easy way to bring some extra wildlife into your backyard, then look no further than bird feeders! Not only are they a great way to attract feathered friends, but they can also be a great addition to your landscaping. Here are a few tips on how to choose the right bird feeder for your yard. When it comes to choosing a bird feeder, there are a few things you’ll want to keep in mind. First, consider the type of birds you’d like to attract. Different types of birds prefer different types of food, so be sure to choose a feeder that’s designed for the kind of birds you’re hoping to attract. Next, think about the size of your yard and the number of birds you’re hoping to attract. If you have a large yard and would like to attract a lot of birds, then you’ll need a bigger feeder. On the other hand, if you have a smaller yard or are only looking to attract a few birds, then a smaller feeder will suffice. Finally, think about the placement of your bird feeder. You’ll want to place it in an area where the birds will feel safe and comfortable feeding. A good rule of thumb is to place the feeder at least four feet off the ground and away from any bushes or trees where predators may be hiding. Hummingbird feeders are a great way to attract these beautiful creatures to your yard.
|File:مقبره سلمان فارسی.jpg|
|Teknonym||Abu 'Abd Allah|
|Place of Birth||Ray (Isfahan) or Ramhurmuz|
|Place(s) of Residence||Isfahan, Syria, Medina, Al-Madain|
|Burial Place||Al-Madain, Iraq|
|Conversion to Islam||Jumada I, 1/November-December, 622|
|Presence at Ghazwas||All the Ghazwas after the Battle of Khandaq|
|Other Activities||Defensive plan of digging a trench around Medina in the Battle of Khandaq; opposing the Event of Saqifa; Governor of al-Madain in the time of the Second Caliph|

Salmān al-Fārsī (Arabic: سلمان الفارسي) (b. ? - d. 36/656-7) was a companion of Prophet Muhammad (s) and of Imam Ali (a). The Prophet (s) liked him and said about him, "Salman is one of us, the Ahl al-Bayt". His idea of digging a trench in the Battle of Khandaq brought victory to the Muslims. He supported the successorship of Imam 'Ali (a) after the demise of the Prophet (s) and opposed the incident of Saqifa. He was appointed governor of al-Madain during the caliphate of Umar b. al-Khattab. He gave his salary to charity and knitted baskets for a living. After a long life, Salman passed away in 36/656-7 in al-Madain, where he is buried in a shrine known as "Salman-i Pak".

Based on some reports, Salman was a Zoroastrian Iranian whose original name was Ruzbih. He converted to Christianity in his youth. After hearing Christians foretell the emergence of a prophet in the land of the Arabs, he set off toward Hijaz. He was enslaved along the way and sold to a man from Banu Qurayza in Medina. He entered Medina when Prophet Muhammad (s) had recently emigrated to the city. Salman met the Prophet (s) and, after confirming the signs of prophethood, converted to Islam. The Prophet (s) bought and freed him and named him "Salman".

Before Conversion to Islam

Salman's original name was Ruzbih (Farsi: روزبه), and his father's name has been mentioned as Khushfudan (Farsi: خشفودان) and, based on one report, as Budhakhshan (Farsi: بوذخشان). According to traditions, after his conversion to Islam he was given the name Salman by the Holy Prophet (s). His teknonym was Abu 'Abd Allah. He was born either in the Jay district of Isfahan or, based on some reports, in Ramhurmuz. His father was an Iranian elite landholder (Dehqan). Reports about his pre-Islamic life are mixed with tale-telling. What these traditions emphasize is his inquisitive mindset, which inspired him to embark on a long journey in search of a better religion. According to these reports, Salman was a Zoroastrian in childhood until he became familiar with, and converted to, Christianity. He moved to Syria to study under leading Christian scholars. Based on reports, Salman's father loved him so much that he would confine him in the house; therefore, his journey to Syria was regarded as a kind of escape. In Syria, he served in the churches and traveled to Mosul, Nusaybin, and Amuriyya. From Amuriyya, Salman headed toward Hijaz. This trip was inspired by the news of a prophet emerging in that land, about which Salman had been informed by his Christian masters. He accompanied a caravan of the Banu Kalb tribe, during which he was captured, sold as a slave to a Jew from Banu Qurayza, and taken to Medina.

Conversion to Islam

Salman converted to Islam in Jumada I, 1/November-December, 622. Salman had heard of a prophet who would not accept food given as charity (sadaqa) but would accept gifts, and who had the seal of prophethood between his shoulders.
Thus, when he met Muhammad (s) in Quba, he gave some food he had collected as charity to the Prophet Muhammad (s), the Prophet (s) gave all of it to his companions, and he did not eat any from it. Another time, Salman gave some food to the Prophet (s) as a gift, and then he noticed prophet Muhammad (s) ate some of it. And at the third time, he saw the Prophet Muhammad (s) in a funeral of his friend, where he finally saw the seal between the Prophet's shoulders. After that Salman went down on his knees in front of the Prophet (s) and praised him, and then he converted to Islam. The Prophet Muhammad (s) bought Salman (who was a slave) for planting three hundred date trees and six hundred silver coins, and then he freed him from slavery. As Salman said, the Prophet Muhammad (s) had bought him and then named him Salman. The document of freedom of Salman was dictated by the Prophet and it was written by Ali b. Abi Talib (a): - The Prophet Muhammad (s) has paid three hundred date trees and six hundred silver coins to Uthman b. al-Ashhal al-Yahudi; therefore, Salman's ransom is paid and he belongs to Prophet Muhammad (s) and his family, whereas no one else has authority over him. Bond of Brotherhood According to some sources, the bond of brotherhood was made between Salman and Abu l-Darda'. While some other reports mentioned Hudhayfa b. al-Yaman, and some others mentioned Miqdad b. 'Amr. However, Shi'a narrations mostly have accepted the bond was made between Salman and Abu Dhar. In addition, some sources stated the condition that Abu Dhar was supposed to follow Salman. In the Words of the Prophet (s) and Imams (a) According to most of Shi'i sources, the first day that Salman entered the mosque, people respected and praised him, while some other people disapproved of it, because he was an 'Ajam (non-Arab). After this event, Prophet Muhammad (s) gave a speech to people: - Men are not superior to another based on their race (being Arab or non-Arab) or the color of their skin, but only piety differentiates them. Salman is a vast sea and an everlasting treasure. Salman is a member of my family (Ahl al-Bayt). He is gifted with knowledge and wisdom. The same statement of the Prophet has also been narrated in another report. Based on this report, during the days when people of Medina were busy digging a trench to confront their enemy, the army of al-Ahzab, Salman al-Farsi who was a strong man played a prominent role in the fulfillment of the task, therefore, Migrants and Helpers each considered him a member of their respective group then the Prophet (s) said that Salman is one of us, the Ahl al-Bayt. Other hadiths have been narrated from the Prophet (s) praising Salman including a statement to the fact that the heaven is eager to have Ali, Ammar, and Salman or a hadith based on which God has obliged the Prophet (s) to like Ali, Salman, Miqdad, and Abu Dhar. In Shiite sources, there are hadiths narrated from Imams (a) praising Slaman. In these hadiths, he is generally considered among the first Shiites who are steadfast in faith. Among these hadiths, there is a statement by Imam Ali (a) in which Salman and some other companions such as Abu Dhar, Ammar, and Miqdad have been considered among those for whose blessings, God grants sustenance to people. Imam Ali (a) has also considered Salman as having the knowledge of the first and the last. 
In a hadith narrated from Imam al-Baqir (a) and Imam al-Sadiq (a), it is stated that once in a meeting with Imam (a), Salman al-Farsi's name was mentioned and Imam (a) said not to mention his name as al-Farsi but mention him as Salman al-Muhammadi since he is one of us, Ahl al-Bayt. Before the Battle of Khandaq, Salman proposed the idea of digging a trench around the city, Medina which led to the victory of Muslims. Umar b. al-Khattab assigned Salman and Hudhayfa as the commanders of the Muslim army in the Conquest of Iran. In the conquest of al-Madain, he was the negotiator of the Muslims with the commanders of the Iranian forces. Disagreement with the Event of Saqifa - See also: Event of Saqifa Salman disagreed with the Event of Saqifa. Miqdad, Salman, Abu Dhar, Ubada b. Samit, Abu l-Haytham b. al-Tayyihan, Hudhayfa, and Ammar b. Yasir gathered around at the night after the Event of Saqifa to decide on Caliphate in the community of Muhajirun. Salman and Ubayy b. Ka'b had numerous reasons to disagree with the Event of Saqifa. The famous sentence of Salman on those sahaba of Prophet (s) who took an oath of allegiance to Abu Bakr was: "You did and you did not" Which means you chose a Caliph, but refused to accept the Prophet Muhammad's order. He said in the day, you chose an elder man, but you left the family of Prophet Muhammad (s) alone; if you had chosen a member of Ahl al-Bayt, there would not be any conflict. And also you would have enjoyed its blessings. Governor of al-Madain Salman al-Farsi became the governor of al-Madain in the time of Umar b. al-Khattab. Salman had asked for permission of Ali b. Abi Talib (a) and then he accepted it. He was the governor of al-Madain until he passed away. Salman dedicated the money he received as the governor to charity. He covered his expenses by means of knitting baskets. Salman had two unsuccessful attempts to get married. The first one was asking the daughter of Umar, the sister of Hafsa (Prophet Muhammad's wife). At first Umar disagreed but after Prophet Muhammad (s) mentioned the status and position of Salman among Muslims, he accepted his request. However, Salman retracted his request afterward. In the second attempt, Salman sent Abu l-Darda' to ask the hand of a girl for marriage, whose family did not accept the request of Salman; however, they claimed they would accept Abu l-Darda' as their son-in-law. Accordingly, Abu l-Darda' married her later. Salman eventually married Buqayra from the tribe of Banu Kinda. Abd Allah and Muhammad were the names of their sons. Abd Allah had narrated the hadith of the Heavenly Gift for the Lady Fatima (s). Salman also had a daughter in Isfahan and two other daughters in Egypt. According to Muhaddith Nuri, the descendants of Salman were living in Rey for about five hundred years. Badr al-Din al-Hasan b. Ali b. Salman was a prominent figure in the narration of hadith and his lineage goes back to Salman al-Farsi through nine generations. Dia' al-Din al-Farsi (d. 622/1225-6), a descendant of Salman, was a grand scholar and a poet in Khujand. He was a religious leader in Bukhara. He also penned a commentary on al-Mahsul by al-Razi. Muhaddith Nuri also mentioned Shams al-Din Suzani (d. 562/1166-7 or 569/1173-4) as a descendant of Salman, he was titled as Taj al-Shu'ara (the Crown of Poets). The other mentioned descendants of Salman are Abd al-Fattah, custodian of the mausoleum of Salman for some time; Abu Kathir b. 
Abd al-Rahman, grandchild of Salman who narrated the letter of Prophet Muhammad (s) to Abd al-Ashhal, a Jewish member of Banu Qurayza, on freedom of Salman; Ibrahim b. Shahriyar (d. 624.1226-7), known as Abu Ishaq Kaziruni, who was a religious figure in the fifth/eleventh century and al-Hasan b. al-Hasan whose lineage goes back to Muhammad b. Salman. Salman had written this poem on his enshrouding cotton: - I am heading toward the Munificent, lacking a sound heart and an appropriate provision - While taking a provision (with you) is the most dreadful deed, if you are going to the Munificent After Salman had passed away, Ali b. Abi Talib (a) travelled to al-Madain to perform ghusl on his body and enshroud it, and then he performed funeral prayer on his body, before burying him in a grave. Imam 'Ali (a) returned to Medina that night. - Ṭabarī, Tārīkh al-umam wa l-mulūk, vol. 3, p. 171. - Ibn Saʿd, al-Ṭabaqāt al-kubrā, vol. 4, p. 56; Balādhurī, Ansāb al-ashrāf, vol. 1, p. 485. - Ṭabarī, Tārīkh al-umam wa l-mulūk, vol. 3, p. 171; Ibn Saʿd, al-Ṭabaqāt al-kubrā, vol. 4, p. 56. - Ibn Ḥishām, al-Sīra al-nabawīyya, vol. 1, p. 214-218; Ibn Saʿd, al-Ṭabaqāt al-kubrā, vol. 4, p. 57-58. - Ibn Ḥishām, al-Sīra al-nabawīyya, vol. 1, p. 218; Ibn Saʿd, al-Ṭabaqāt al-kubrā, vol. 4, p. 58-59. - Ibn Ḥishām, al-Sīra al-nabawīyya, vol. 1, p. 219. - Ibn Ḥishām, al-Sīra al-nabawīyya, vol. 1, p. 189. - ʿĀmilī, Salmān Fārsī, p. 40. - See: Nūrī, Nafas al-raḥmān fī faḍāʾil Salmān, p. 6. - Abū l-shaykh, Ṭabaqāt al-muḥaddithīn b-Iṣbahān, vol. 1, p. 226. - To view the references see ʿĀmilī, Salmān Fārsī, p. 86-87. - See: Kulaynī, al-Kāfī, vol. 2, p. 84. - See: Majlisī, Biḥār al-anwār, vol. 22, p. 345. - ʿĀmilī, Salmān Fārsī, p. 32. - Balāthurī, Ansāb al-ashrāf, vol. 1, p. 343. - Ḥalabī, al-Sīra al-ḥalabīyya, vol. 3, p. 167. - Ṭabarī, Tārīkh al-umam wa l-mulūk, vol. 4, p. 41. - See: Ibn Abī l-Ḥadīd, Sharḥ nahj al-balāgha, vol. 1, p. 219-220. - ʿĀmilī, Salmān Fārsī, p. 35. - See: Nūrī, Nafas al-raḥmān fī faḍāʾil Salmān, p. 148. - ʿAskarī, ʿAbd Allāh b. Sabaʾ, vol. 1, p. 145. - Madanī, al-Darajāt al-rafīʿa fī ṭabaqāt al-Shīʿa, p. 215. - Ibn Abī l-Ḥadīd, Sharḥ Nahj al-balāgha, vol. 1, p. 219-220. - About his wife and children see: Ṣādiqī Ardistānī, Salmān Farsī ustāndār-i Madāʾin, p. 377-390. - Ibn ʿAsākir, Tārīkh Madīnat Dimashq, vol. 21, p. 458-459. - Khaṭīb Baghdādī, Tārīkh Baghdād, vol. 1, p. 176. - Nūrī, Nafas al-raḥmān fī faḍāʾil Salmān, p. 139. - See: Majlisī, Biḥār al-anwār, vol. 22, p. 380. - Abū l-shaykh, ʿAbd Allāh b. Muḥammad. Ṭabaqāt al-muḥaddithīn b-Iṣbahān. Edited by ʿAbd al-Ghafūr Balūshī. Beirut: Muʾassisat al-Risāla, n.d. - ʿĀmilī, Jaʿfar Murtaḍā. Salmān Fārsī. Translated by Muḥammad Sipihrī. n.p.: Markaz-i Chāp wa Nashr-i Sāzmān-i Tablīghāt-i Islāmī, 1375 Sh. - ʿAskarī, Sayyid Murtaḍā. ʿAbd Allāh b. Sabaʾ wa dīgar afsānihā-yi tārīkh. n.p.: Majmaʿ-i ʿIlmī-yi Islāmī, 1375 Sh. - Balāthurī, Aḥmad b. Yaḥyā al-. Ansāb al-ashrāf. Edited by Muḥammad Bāqir Maḥmūdī & et al. Beirut: Muʾassisat al-Aʿlamī l-l-Maṭbūʿāt, n.d. - Ḥalabī, ʿAlī b. Ibrāhīm al-. Al-Sīra al-ḥalabīyya. 2ned edition. Beirut: Dār al-Kutub al-ʿIlmīyya, 1427 AH. - Ibn Abī l-Ḥadīd, ʿAbd al-Ḥamīd b. Hibat Allāh. Sharḥ Nahj al-balāgha. Edited by Abū l-Faḍl Ibrāhīm. Qom: Kitābkhāni-yi Marʿashī, n.d. - Ibn ʿAsākir, ʿAlī b. al-Ḥasan. Tārīkh Madīnat Dimashq. Edited by ʿAmr b. Gharāmat al-ʿAmrī. Damascus: Dār al-Fikr l-l-Ṭabāʿa wa al-Nashr wa al-Tawzīʿ, 1415 AH. - Ibn Ḥishām, ʿAbd al-Malik. Al-Sīra al-nabawīyya. Edited by Muṣṭafā Saqāʾ & et al. 
Beirut: Dār al-Maʿrifa, n.d. - Ibn Saʿd, Muḥammad. Al-Ṭabaqāt al-kubrā. Edited by Muḥammad ʿAbd al-Qādir ʿAṭā. Beirut: Dār al-Kutub al-ʿIlmīyya, n.d. - Khaṭīb al-Baghdādī, Aḥmad b. ʿAlī al-. Tārīkh Baghdād. Edited by Muṣṭafā ʿAbd al-Qādir ʿAṭā. Beirut: Dār al-Kutub al-ʿIlmīyya, 1417 AH. - Kulaynī, Muḥammad b. Yaʿqūb al-. Uṣūl-i Kāfī. Translated by Saʿīd Rāshidī. Qom: Ajwad, 1388 Sh. - Madanī, ʿAlī-khān b. Aḥmad al-. Al-Darajāt al-rafīʿa fī ṭabaqāt al-Shīʿa. By introduction Muḥammad Ṣādiq Baḥr al-ʿUlūm. Beirut: Dār al-Wafāʾ, n.d. - Majlisī, Muḥammad Bāqir al-. Biḥār al-anwār. Edited by Muḥammad Bāqir Maḥmūdī. Beirut: Dār Iḥyāʾ al-Turāth al-ʿArabī, n.d. - Mufīd, Mūḥammad b. Muḥammad al-. Al-Ikhtiṣāṣ. Edited by ʿAlī Akbar Ghaffārī. Qom: al-Muʾtamir al-ʿĀlamī l-Alfīyat al-Shaykh al-Mufīd, 1413 AH. - Nūrī al-Ṭabrisī, al-Ḥusayn al-. Nafas al-raḥmān fī faḍāʾil Salmān. Qom: al-Rasūl al-Muṣṭafā, n.d. - Ṣādiqī Ardistānī, Aḥmad. Salmān Farsī ustāndār-i Madāʾin. Qom: Daftar-i Tablīghāt Islāmī Qom, 1376 Sh. - Ṭabarī, Muḥammad b. Jarīr al-. Tārīkh al-umam wa l-mulūk. Beirut: Rawāyiʿ al-turāth al-ʿarabī, n.d.
by Michael E. Salla, Ph.D. January 16, 2012 Andy Basiago claims to have been recruited into DARPA Project Pegasus as a child. Andy Basiago first emerged into public life four years ago with sensational claims of discovering wrote a White Paper in 2008 with his analysis of Mars Rover images which he claimed were conclusive proof of life of Mars and a NASA controlled cover up. Later Basiago publicly declared his participation as a child in Project Pegasus, a DARPA funded project which tested advanced technologies using children. Among these technologies was what Basiago described as "jump room" technology. This allowed the instantaneous transport through time and space. Among the places Basiago claimed to have visited is Mars. There he saw a dinosaur roving the Martian surface hungry for food - human residents lost on the surface being a delicacy for such beasts according to Basiago. Basiago's most recent claims are even more sensational. President Obama back in 1980 also part of the Mars program and even got to travel to Mars. Is Basiago genuinely blowing the whistle on Mars or a crackpot seeking attention? Basiago's White Paper on Mars was released in late 2008 and titled, "The Discovery of Life on Mars." In it Basiago begins by declaring: There is life on Mars. Evidence that the Red Planet harbors life and has for eons was discovered by the author be examining NASA photograph PIA 10214, a westward view of the West Valley of the Columbia Basin in the Gusev Crater that was taken by the Mars Exploration Rover Spirit in November 2007 and beamed back to the Earth. NASA photograph PIA 10214 - a figure of a human female? Some of the figures in the NASA photograph are intriguing. refers to one that, "appeared to be the figure of a human female… jutting from the edge of a plateau." Critics dismissed the figure as nothing more than a rock formation. Basiago and his supporters claimed otherwise. The figure, along with other figures from NASA photo PIA 10214, was conclusive evidence of life on Mars. To the dispassionate observer, the female figure in the NASA photo was certainly curious. It was not however the kind of conclusive evidence necessary to declare that there was life on Mars. Basiago's analysis of NASA photograph PIA 10214 was interesting, but he really stretched credibility when he claimed that in the same photo was evidence of a living Plesiosaur roving the Martian surface. According to Basiago, in photograph PIA 10214 there was evidence of a dinosaur on Mars: The life forms contained in PIA10214 include, humanoids with bulbous heads and elongated bodies, like those beings described in the UFO animals still found on Earth, including lizards, frogs, snakes, alligators, and mantises animals that once existed on Earth but are now extinct, including the reptile species plesiosaur, which has been advanced as a solution in the NASA photograph PIA 10214 - a plesiosaur? Yes, Basiago was claiming that an extinct water dwelling dinosaur lived on the Martian surface - a distant cousin of the Loch Ness To most observers, the so called Plesiosaur was at best a blurry image of something interesting on the Martian surface - probably nothing more than a rock formation. Not so according to Basiago's most important supporter, exopolitics author and Webre quickly became Basiago's de facto chief publicist and ran a series of articles on his website and later his Examiner newsblog, enthusiastly supporting Basiago's claims. 
privately approached by Webre and Basiago in early 2009 to lend my support to Basiago's findings and comment publicly on his Mars White Not being a NASA image expert, I told them that they should get at least three independent experts to analyze the NASA images that Basiago had focused on. I mentioned Richard Hoagland as one, but Basiago declined and offered to find others to satisfy my request. Almost two years later, in an email exchange with Webre in January 2011, I wrote the following about the alleged image experts that had been found to support Basiago's In an earlier email in this exchange, you mentioned Andrew R. Stec, and Lewis B. Rhinehart. All I could find about Stec is a webpage where he sells mars image posters for exorbitant prices. I have not been able to find anything about Rhinehart other than a mention on him as author/contributor on two articles on So far, neither of the two names you have given qualifies as an image expert despite their interest in Basiago's Mars research. After all this time, it appears that Basiago & you have still not found credible image analysis experts to verify his work, even though there are many in the field who I'm sure would be willing to give an opinion, e.g., Mike Bara, Jim Dilettoso, etc. So far, the only credible image analyst to comment on your work with Basiago has been Hoagland, who has dismissed it as nonsense. My experience with Basiago's image analyses was that he was prepared to make sensational claims of discovering life on Mars without backing his analyses up with independent image experts. My conclusion, was that Basiago was seeking attention with sensational interpretations of NASA images. Webre had lost all objectivity in his uncritical support of Basiago's claims. He began to have conflictual relationships with myself and other colleagues wanting to distance themselves from the sensational claims made by Basiago. Was Basiago simply a crackpot seeking attention, or was there a deeper agenda unfolding? We were soon to In February 2010 Basiago came forward to declare that he was a child participant in Project Pegasus, a DARPA program. Pegasus website, Basiago explains: Project Pegasus is a quest begun in 1968 by Andrew D. Basiago when he was serving as a child participant in the US time-space exploration program, Project Project Pegasus was the classified, defense-related research and development program under the Defense Advanced Research Projects Agency (DARPA) in which the US defense-technical community achieved time travel on behalf of the US government - the real Project Pegasus was launched by the US government to perform "remote sensing in time" so that reliable information about past and future events could be provided to the US President, intelligence community, and It was expected that the 140 American schoolchildren secretly enrolled in Project Pegasus would continue to be involved in time travel when they grew up and went on to serve as America's first generation of "chrononauts." Basiago was not the first to have claimed that he had been recruited as a child participant in a classified program using advanced technologies. Similar claims had been made by, There was much whistleblower evidence that some elements of the U.S. military industrial complex were indeed using children in highly classified programs. The children were put through trauma based mind control so their minds could compartmentalized in ways that could be easily exploited in these programs. Was Basiago one of these experimental children? 
Quite possibly. Given my experience with whistleblowers making such claims, there was much to be lost in making such claims, and little to be gained other than ruined reputations and careers. As a practicing attorney in the State of Washington, Basiago had much to lose if he perjured himself publicly. Was there any proof of Basiago's new sensational claims of being a "chrononaut" with Project Pegasus? Photo taken shortly after Lincoln's Gettysburg Address Basiago found an archived photo of a scene at Lincoln's Gettysburg address, that showed a blurry image of a boy surrounded by men. According to Basiago, he had traveled back in time as part of Project Pegasus, he declared that he was the figure in the photo. The boy's image in the photo was blurry, however, and not conclusive by any means. This did not deter Basiago and supporters. They claimed photographic evidence existed to support Basiago's claims of attending the Gettysburg address. As with the NASA Rover images, was Basiago simply a crackpot seeking attention, or was there a deeper agenda unfolding? We need to dig deeper into Basiago's claims. In an email exchange in January 2011, where I was challenging Webre's support of Basiago's NASA photo interpretation that Plesiosaurs existed on Mars, Webre With regard to the Mars Plesiosaur, please note that there is independent whistleblower testimony on record by persons who have been on Mars and who have personally seen and confronted Plesiosaurs on the surface of Mars. When I asked Webre for clarification on who was the "independent whistleblower testimony" this is what he Andrew D. Basiago has publicly stated he has been on Mars twice in 1981, once in the company of Courtney Hunt of the U.S. Central During the visit to Mars with Hunt, Andy and Hunt were confronted by a Martian Plesiosaur a short distance from the entrance to the U.S. underground base on Mars where both had landed via teleporter from El Segundo, Both Andy and Courtney Hunt made a dash for the entrance to the underground base and reached it safely. Reference: below video So Webre finally revealed that the "independent whistleblower testimony" supporting Basiago's interpretation of Mars Rover images of living plesiosaurs on Mars was none other than Basiago himself. Basiago was now claiming that as part of Project Pegasus, he had actually traveled to Mars using "teleportation" or "jump room" technology. Incredibly, he claims to have actually seen a Martian Plesiosaur. This was a bizarre way of substantiating Basiago's earlier analysis of Mars Rover images and did not help avert the crackpot image. Aside from dinosaurs roving the Martian surface and eating stray humans, there was even more sensational claims to be made about participants in the Mars Program. This time it would involve President Barack Obama himself. According to Webre, Two former participants in the CIA's Mars visitation program of the early 1980's have confirmed that U.S. President Barack H. Obama was enrolled in their Mars training class in 1980 and was among the young Americans from the program who they later encountered on the Martian surface after reaching Mars via "jump room… According to Mr. Basiago and Mr. Stillings, in Summer 1980 they attended a three-week factual seminar about Mars to prepare them for trips that were then later taken to Mars via teleportation. The course was taught by remote viewing pioneer Major Ed Dames. 
In an interview on Coast to Coast Radio in October 2011, Basiago was confronted by Major Ed Dames who disputed his alleged involvement in such a program, and told Basiago not to involve him in "your fantasies" (below video, Part 4, at 17 minutes about): Coast To Coast AM with Mars Visitor Andrew November 19, 2011 There is certainly truth in the existence of a secret Mars program. A number of whistleblowers have attested to such, ...and others have come forward. The great granddaughter of President Laura Magdalene Eisenhower even came forward to reveal attempts to recruit her into the secret Mars program, and head as Dr So if there is a secret Mars program, is Basiago correct in his detailed claims regarding it? Before reaching a final conclusion about Basiago's sensational claims, I will first examine Alfred Webre's relationship with Basiago and Project In his former Examiner newsblog, Webre declared that he had actually come under DARPA Project Pegasus time surveillance as a "person of interest," back in 1971-72. If in fact, DARPA's Project Pegasus had used its secret time travel technology to go forward in time from the early 1970s (or earlier) to 2005 (or later) and bring back my book Exopolitics: Politics, Government and Law in the - A Decade of Contact), then Project Pegasus also had the technological means and motive to identify me - Alfred Lambremont Webre - as a person of interest for time travel surveillance… DARPA Project Pegasus had engaged in time travel surveillance of my life and timeline 1971 forward and wanted to know what whistleblower role I would play is assisting whistleblowers such as Andrew D. Basiago around the cover-up of U.S. government secret time travel and life on Mars deep state secrets. So Webre claims that he was identified by DARPA Project Pegasus as someone that would in time assist Basiago in disclosing the truth about, "U.S. government secret time travel and life on Mars deep state secrets." So are Webre and Basiago telling the truth about Mars or again is there a deeper agenda at play? Alfred Webre attended Yale University from 1960-64, four years ahead of fellow Yale student George Bush. Bush as is well known, was a member of Skull and Bones, one of Yale's secret societies. Researchers have found that is regularly used at Skull and Bones, and is part of the training of members for similar secret societies. What is not generally well known is that Webre was a member of another of Yale's secret societies, Scroll and Key as privately confided to this author by Webre himself some years ago. Scroll and Key is described as follows: The Scroll and Key Society is a secret society, founded in 1842 at Yale University, in New Haven, Connecticut. It is the wealthiest and second oldest Yale secret society. Each year, the society admits fifteen rising seniors to participate in its activities and carry on its As a former and/or current member of Scroll and Key, Webre was no stranger to secret societies and oaths, and the rituals performed therein. Some of these rituals are a form of mind control and/or conditioning designed to make the recipient loyal over a lifetime, this would help explain Webre's erratic behavior over the years of his professional career and writing in the field Certainly in my experience and those of former colleagues of Webre during his five year tenure at the Exopolitics Institute, Webre's behavior and claims have been perplexing, controversial and divisive. 
For example, he was "Secretary of Justice" of an online organization, Galactic Government actively selling land on the moon, and representing lunar owners. So what is the ultimate agenda of Basiago and Webre? In my conclusion, Basiago's and Webre's tasks are to disclose some of the truths about a secret Mars project but to do so in such a sensationalist way that it discredits any wanting to seriously study such claims. This is a classic psychological warfare tool whereby the truth can be hidden in plain sight, and deter any serious investigation of what is happening. Basiago's involvement as a child participant in Project Pegasus involved heavy mind control. Webre's membership in Scroll and Key, involved a degree of mental conditioning if not outright mind The result is that both Basiago and Webre are ideal candidates for a limited disclosure hangout concerning life on Mars. Being part of an officially sanctioned psychological operation, helps explain why Basiago can still practice law in Washington State while making sensationalist claims, when other whistleblowers have lost their careers for doing far My final conclusion is that Basiago is both a genuine whistleblower and a crackpot - by design. January 16, 2012 Alfred Webre has clarified his Yale "I was never in Scroll & Key at Yale... I was in Torch & Talon, an 'underground' secret society that met twice weekly at a beach house we rented at Branford, CT and had 'encounter group' style meetings in the 1964 style, where we told our life stories and supported each other. Torch & Talon is not longer active." Andrew D. Basiago ...Responds to Michael Salla 15 January 2012 [Bracketed materials are excerpts Michael Salla's letter to Andrew D. Basiago] SALLA: <I have never called you a BASIAGO: True, but you have repeatedly attacked my public statements as untrue, despite the fact that from the beginning they have all been true and you have no evidence that they have not been true. SALLA: <I think you are, without knowing it, still in a program where you are downloaded information to disseminate to others for an undisclosed agenda.> BASIAGO: This is a mere supposition on your part. You have no factual basis to allege this theory of my case. I was a participant in two historic programs, DARPA's Project Pegasus, about which I have retrieved and communicated hundreds of facts, and the CIA's Mars visitation program, for which I have already proffered one participant and for which I will soon be proffering a second participant. I am simply sharing what happened to me. SALLA: <You have ingratiated yourself with Alfred who has lost all reason and backed you to the point of discrediting himself in the wider exopolitics community.> BASIAGO: You have no basis to make a judgment about the nature of my friendship with Alfred. I have never ingratiated myself with anybody. You don't know me. Alfred and I are profound friends, strategic allies, and creative collaborators. Your assertion that you can characterize a friendship is highly When you talk about Alfred discrediting himself in the exopolitics community, I ask: Is this the Michael Salla who presented at [James] Gilliland's in 2006 about creating nature parks for human-ET interaction and then went on Coast to Coast AM and admitted that ET behavior has been characterized, for the most part, by stealth? Whose wife presented a lecture about swimming with the dolphins as a way to "channel ET"? 
comparison, Alfred has had the intelligence to recognize the significance of my experiences and has had the integrity to support my Truth Campaign. SALLA: <Anyone who, like you, has been through trauma based mind control deserves support and sympathy.> BASIAGO: Well, that's great, Michael. Problem is, I have never been through "trauma based mind control." Why do you insinuate that I have? I told you that I was subjected to efforts to make us comply with the secrecy regime and to not talk about what Your allegation of TBMC is unfounded and is an academic conceit that you have superimposed over the facts of my case. In fact, you have been too academically unprincipled to explore the facts of my case. You have made a snap judgment. SALLA: <Uncritically accepting what such individuals have to say carries many risks as I have pointed out for others such as Charles Hall, etc.> BASIAGO: In point of fact, you, who purports to be an historian of UFO history, have totally missed the boat by not examining my claims and realizing that my experiences reveal a whole new chapter in the US government's covert response to the ET In Project Pegasus, I was even trained in the ASTART alphabet for ET-human communication and you haven't even interviewed me. Along the way, you have made it your business to defame a respected colleague (Alfred) and a key whistle blower So, I have to ask: SALLA: <It's only when Alfred began uncritically supporting your claims that he lost all support and BASIAGO: Alfred has supported many people with different allegations and experiences and unlike you hasn't superimposed a misinformed judgment over them. I will leave it to him to make his judgments as to whose experiences he reports. I can say that he has sought not so much to be understood as to understand. his work has been groundbreaking and revolutionary and yours has been pedestrian and sterile. It's almost like you're a gatekeeper for what the CIA wants to release about the ET situation. I think that at this point the balance of the equities indicates that you are an operative. If you are not, then I would urge you to do some soul-searching and acknowledge that your egotism has caused you to fail to comprehend that I am a key whistleblower from within the postwar US defense community that was dealing with how to respond to perceived threats posed by the ETs and the Soviets. Having said all that, I forgive you, just as I forgive everybody who fails to comprehend what I experienced.
A spatial database management system (SDBMS) is an extension, some might say a specialization, of a conventional database management system (DBMS). Every DBMS (hence every SDBMS) uses a data model specification as a formalism for software design and for establishing rigor in data management. Three components compose a data model: 1) constructs developed from data types, which form the data structures that describe data; 2) operations that process data structures to manipulate data; and 3) rules that establish the veracity of the structures and/or operations, validating the data. Basic data types such as integers and real numbers are extended into spatial data types such as points, polylines, and polygons in spatial data structures. Operations are capabilities that manipulate the data structures; when sequenced into operational workflows, they generate information from data. One might say that the new relationships they reveal constitute the information derived from data. Different data model designs result in different combinations of structures, operations, and rules, which combine into various SDBMS products. The products differ based upon the underlying data model, and these data models enable and constrain the ability to store and manipulate data. Different SDBMS implementations support configurations for different user environments, including single-user and multi-user environments.

- Introduction to Spatial Database Management Systems
- Example DBMS and Spatial DBMS Software
- Examples of Enterprise Spatial DBMS

Spatial database management systems, both software and hardware sub-components, organize data for inventorying and querying databases, conducting spatial analysis, and creating map visualizations in an integrated manner for managing large data stores (Yeung and Hall 2007). Database management is a subset of a larger category of technology called data management technology. Data are managed using two types of computer-based files, physical files and logical files. A physical file is a collection of records managed by the operating system software as stored on disk; a data file is different from a database file. A logical file is a collection of records managed by application software, most fundamentally database management system software. Many logical files can be combined into a physical file. One advantage of using logical files is the increase in access speed to individual data elements, as opening a physical file takes considerable time in contrast to accessing individual elements within a logical file. When data are organized into physical files to be managed, we call this "data file management" (or simply file management). When we use logical data files organized within physical files, we call this database management. When a logical file is the same as a physical file, the file is called a "data file". When multiple logical files are included in a physical file, we refer to the file as a database file (Rigaux, Scholl, and Voisard 2002). Spatial database management adds the spatial aspect (dimensions of space) to database management (Shekhar and Chawla 2003). Spatial database management software is designed specifically with this spatial aspect in mind, as three dimensions of physical space are core to existence. These three dimensions are managed (stored and retrieved) in a special manner, making spatial database management software an enhanced type of data management software based on the data model design.
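Before turning to data models in detail, a small example may help fix the idea of "adding the spatial aspect" to a DBMS. The sketch below uses PostGIS-flavored SQL; the table, columns, and coordinate values are invented for illustration only, and other SDBMS products expose equivalent types and functions, often following the OGC simple feature specification.

```sql
-- Illustrative sketch only (PostGIS syntax); names and values are invented.
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE hydrants (
    hydrant_id   integer PRIMARY KEY,           -- basic type: integer
    pressure_psi real,                          -- basic type: real number
    street_name  text,                          -- basic type: character string
    location     geometry(Point, 4326)          -- spatial type: point geometry
);

INSERT INTO hydrants (hydrant_id, pressure_psi, street_name, location)
VALUES (1, 62.5, 'Main St',
        ST_SetSRID(ST_MakePoint(-77.036, 38.895), 4326));

-- Mix an ordinary attribute predicate with a spatial predicate:
-- hydrants with adequate pressure within 500 m of a point of interest.
SELECT hydrant_id, street_name
FROM   hydrants
WHERE  pressure_psi > 50
  AND  ST_DWithin(location::geography,
                  ST_SETSRID(ST_MakePoint(-77.030, 38.890), 4326)::geography,
                  500);
```

The query at the end is the "retrieval" half of ordinary database work; what makes the system spatial is that the geometry column and the distance predicate are first-class parts of the data model rather than application-side additions.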
A data model is essentially a design framework for a data management system. One of the most comprehensive definitions of a data model was provided by Edgar Codd (1980) ten years after he developed the design of relational data models (Codd 1970). Codd's interest stemmed from clarifying the logical character of a data model, as opposed to its physical implementation; as such, the general concept of data model is not restricted to any particular approach to data management. From a database design perspective, a more common and popular understanding of a data model is that it defines the structure and intended meaning of data (West 2011, p. 5). However, Codd's (1980, p. 112) more comprehensive view characterizes a data model as consisting of three components:

1) a collection of data structure types (the building blocks of any database that conforms to the model) for describing data;
2) a collection of operators or inferencing rules for manipulating data, which can be applied to any valid instances of the data types listed in (1), to retrieve or derive data from any parts of those structures in any combinations desired;
3) a collection of general integrity rules for validating data, which implicitly or explicitly define the set of consistent database states, or changes of state, or both; these rules may sometimes be expressed as insert-update-delete rules.

Herein, the Codd (1980) framework is used to describe SDBMS due to its completeness, whereas the West (2011) interpretation, and all others like it, provides just the first third of the framework. Logical data structures such as tables, objects, attribute fields, and relationships that describe data are implemented as physical storage structures with data-access mechanisms such as primary and foreign keys. Basic data types such as integers (e.g., 1, 2, 3), real numbers (e.g., 1.1, 1.2, and 1.3), and character strings (e.g., 'text string') are extended into spatial data types (e.g., points, polylines, and polygons), which are used to form spatial data structures for data storage. Basic DBMS operations for manipulating data include data creation (C), retrieval (R), update (U), and deletion (D), referred to as the CRUD suite of operations. Logical operations and their physical implementations are used to derive logical structures and store them as storage structures. Rules constitute the third component of a data model, and hence of the DBMS. DBMS rules protect against corruption of the data by validating data (hence data structures) during CRUD operations. Validity rules are critically important for establishing the veracity of databases, protecting against unintended changes by users. Atomicity, consistency, isolation, and durability (ACID) are properties of database transactions intended to guarantee data validity despite errors such as power failures. A sequence of operations that preserves these properties is called a "valid transaction." In summary, the three levels of data abstraction combined with the three components of a data model summarize the aspects of a DBMS (see Table 1). Thus, all DBMS software implementations should contain explicit capabilities for the three components, 1) constructs, 2) operations, and 3) rules, at all three levels of data abstraction: conceptual (meaning), logical (structure), and physical (data formatting).

Table 1. The three components of a data model (constructs, operations, and rules) at the three levels of data abstraction.
|Levels of Abstraction||Constructs for describing||Operations for manipulating||Rules for validating|
|Conceptual||... worldly features||... worldly processes||... features and processes|
|Logical||... data primitives of the database||... data primitives of the database||... data and operations on the database|
|Physical||... disk storage formats||... data stored as bytes and bits||... reads and writes to disk|
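Table 1 summarizes the three components abstractly. As a concrete, hypothetical illustration at the logical and physical levels, the PostGIS-flavored sketch below pairs a construct (a typed table), a validation rule (a geometry integrity constraint), and CRUD operations grouped into an ACID transaction. All table, column, and value names are invented for this sketch.

```sql
-- Construct: a typed data structure describing parcels (names are invented).
CREATE TABLE parcels (
    parcel_id integer PRIMARY KEY,
    land_use  text NOT NULL,
    boundary  geometry(Polygon, 4326),
    -- Rule: an integrity constraint that rejects invalid polygon geometry.
    CONSTRAINT boundary_is_valid CHECK (ST_IsValid(boundary))
);

-- Operations: create, update, and delete grouped into one ACID transaction,
-- so the database moves from one consistent state to another, or not at all.
BEGIN;
INSERT INTO parcels (parcel_id, land_use, boundary)
VALUES (101, 'residential',
        ST_GeomFromText('POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))', 4326));
UPDATE parcels SET land_use = 'mixed use' WHERE parcel_id = 101;
DELETE FROM parcels WHERE parcel_id = 999;  -- affects zero rows; still valid
COMMIT;  -- on error, ROLLBACK would restore the previous consistent state
```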
In further clarification, many people understand a data model as a collection of data categories and relationships (West 2011). As such, that interpretation is simply the first component offered by Codd (1980). However, it should be clear that operations on data structures, plus rules for qualifying data structures and/or operations, are essential in operational database management systems. Without the operations there is no "change" in the data being managed. Without the rules, the veracity of the data and operations can easily be called into question. With rules constituting the third component of a data model, DBMS rules protect against corruption of the data by validating data (hence data structures) during CRUD operations. Consequently, it is important to embrace the Codd (1980) interpretation for complete implementation and use of an SDBMS. Logical data models underpin the designs of DBMS software; consequently, they also underpin implementations of spatial DBMS packages.

2.1 Logical Data Models as DBMS Types

A variety of logical data model types for implementing DBMS exist, each type being a different implementation of a logical data language within a physical context. (See the GIS&T BoK entry on logical data models for a description.) To provide an idea of the most popular DBMS software systems across the world based on logical data models, DB-Engines (2020) maintains a website documenting general popularity (using six criteria to form the ranks) among 300+ DBMS. The top-ranked DBMS and their associated data models show that the relational model is the most popular (see Table 2).

Table 2. Top-ranked DBMS and the data models they support.
|DB-Engine Rank||DBMS Name||Data Models Supported* (but not all are SDBMS capable)|
|1||Oracle||Relational, Document Store, Graph, and RDF Store|
|2||MySQL||Relational and Document Store|
|3||Microsoft SQL Server||Relational, Document Store, and Graph|
|4||PostgreSQL||Relational and Document Store|
|5||MongoDB||Document Store and Search Engine|
|6||IBM Db2||Relational, Document Store, and RDF Store|
|7||Redis||Key-value Store, Document Store, Graph, Search Engine, and Time Series|
|8||Elasticsearch||Search Engine and Document Store|
* Definitions of the data models are provided in the text below.

The majority of available DBMS have been implemented based on the relational data model, or a derivation thereof, due to its long history of success, as one can observe from the table above. This success, and thus popularity, is due to the simplicity of its data storage, which helps maintain the validity of database elements. However, many other DBMS implementations are based on other logical data models because those models offer richer data storage structures. The simpler the data storage structure, the more manipulation is needed to achieve an end result; as computers have grown faster over the decades, the richer (non-relational) data structure approaches have been gaining in popularity. The data model types described below appear in alphabetical order. There is no implied recommendation in the listing.
- Graph uses a data storage approach having nodes, links, and properties. Nodes are units of data, commonly representing phenomena; links are the relationships between the nodes; properties are the characteristics of the nodes and relationships. The underlying logic is graph-theoretic, which offers a rigorous approach to construction and retrieval. Operations can be performed on the node constructs to establish links as stored relationships, and rules guide the operations and structures to enhance the validity of the nodes and links.
- Document Store uses a data storage approach wherein the primary unit is a document, with direct access from document to document. It is often thought of as a graph approach, but the constructs can be ad hoc in character and do not necessarily involve the rigor of a graph-theoretic approach. This approach is often labeled "NoSQL," indicating that it is non-relational.
- Key-value Store uses a storage and access approach wherein individual data elements are the units of access, giving fine granularity. A key-value store is also generally called a NoSQL approach, indicating that it is non-relational.
- Object-oriented uses a data storage and access approach organized around individual units (objects) that represent things in the world. The objects have behaviors stored as methods containing the operations, and rules are used to constrain the behaviors of the objects.
- Open Standards use a data storage approach based on constructs promulgated by the Open Geospatial Consortium (OGC), wherein everyone has access to information about storage and operations, making it easier to integrate among data stores. The constructs tend to be simpler than those of other approaches, enhancing readability among software systems. Both vector and raster data types are included in the data structures. The vector geometry involves point, polyline, and polygon geometry that stores features as "geometry only," i.e., no relationships among data elements are stored as part of the feature; this is referred to as "simple feature geometry" in the OGC data model documentation.
A December 2020 ranking of popularity as scored by DB-Engines website appears in parentheses, wherein NR is not ranked because DB-Engines ranks general DBMS only as opposed to more specific SDBMS. As such, the rank does not imply popularity of the SDBMS, only the DBMS used to host the SDBMS. - Neo4j, a graph database that can build 1D and 2D indexes as B-tree, Quadtree and Hilbert curve directly in the graph (ranking = 19) - AllegroGraph is a graph database that provides a novel mechanism for efficient storage and retrieval of two-dimensional geospatial coordinates for Resource Description Framework data; it includes an extension syntax for SPARQL queries (ranking = 166) - CouchDB a document-based database system that can be spatially enabled by a plugin called Geocouch (ranking = 36) - Elasticsearch is a document-based database system that supports two types of geo data: geo-Point fields (lat/lon pairs) and geo-shape fields (points, lines, circles, poloygons, multi-polygons and others) (ranking = 8) - GeoMesa is a cloud-based spatio-temporal database built on top of Apache Accumulo and Apache Hadoop; GeoMesa supports full OGC Simple Geometry Features and a GeoServer plugin - MarkLogic, MongoDB, and RethinkDB support geospatial indexes in 2D (NR) - RavenDB supports geospatial indexes in 2D (ranking = 85) - Redis with the Geo API (ranking = 7) - Tarantool supports geospatial queries with RTREE index (ranking = 132) - Smallworld VMDS, the native GE Smallworld GIS database (NR) - SpatialDB by MineRP, an open-standards spatial database with spatial type extensions used mostly within the mining industry (NR) - Caliper extends the Raima Data Manager with spatial datatypes, functions, and utilities (ranking = 227). - CartoDB, a cloud-based geospatial database on top of PostgreSQL with PostGIS (NR). - Esri File geodatabase, plus support of single-user and multiuser relational geodatabases (NR). - H2 supports geometry types and spatial indices as of version 1.3.173 (2013-07-28); an extension called H2GIS available on Maven Central gives full OGC Simple Features support (ranking = 49). - IBM Db2 Spatial Extender can spatially-enable any edition of DB2, including the free DB2 Express-C, with support for spatial types (ranking = 6). - IBM Informix Geodetic and Spatial DataBlade extensions auto-install on the use and expand Informix’s datatypes to include multiple stand coordinate systems and support for Rtree indexes. Informix datatypes can also be incorporated with time series data support for tracking objects in motion (ranking = 30). - Linter SQL Server supports spatial types and spatial functions according to the OpenGIS specifications (ranking = 309). - Microsoft SQL Server has support for spatial types since version 2008 (ranking = 3). - MySQL DBMS implements the datatype geometry, plus some spatial functions implemented according to the OpenGIS specifications, but different version offer different levels of support for spatial data types (ranking = 2). - OpenLink Virtuoso supports SQL/MM, with significant enhancements including GeoSPARQL (ranking = 111). - Oracle Spatial and Graph aid users in managing geographic and location-data in a native type within an Oracle database, potentially supporting a wide range of applications for spatial data (ranking = 112). - PostgreSQL DBMS uses the spatial extension PostGIS to implement the standardized datatype geometry and corresponding functions (ranking = 4). - SpatiaLite extends Sqlite with spatial datatypes, functions, and utilities (ranking = 9). 
- Spatial Query Server from Boeing spatially enables Sybase ASE (NR)
- Teradata Vantage is a data intelligence platform that deploys on-premises, in the cloud, or as a hybrid model. Vantage consists of various analytics engines on a core relational database, including its MPP engine, the Aster graph database, and a machine learning engine (ranking = 14)

Relational Column Store
- MonetDB/GIS extension for MonetDB adds OGC Simple Features to the relational column-store database (ranking = 123)
- SAP HANA is a multi-model in-memory data environment (ranking = 20)
- Vertica Place, the geo-spatial extension for HP Vertica, adds OGC-compliant spatial features to the relational column-store database (ranking = 32)

Several situations exist for user environments, including single-user, workgroup, enterprise, and consortium activities. Single-user SDBMS involves a single person at a time making use of a database environment. Workgroup database management activity involves multiple people performing database management on the same project records within a single unit (division) of an organization, that is, an intra-organizational, same-unit context. Enterprise database management activities involve multiple people performing database management on the same project records within multiple units across an organization, that is, an organization-wide but different-unit context. Consortium database management activities involve multiple people performing data management on the same project records across organizations, that is, an inter-organizational context. In all multiple-user contexts, conflicts with record access can occur when multiple users try to update the same database record at the same time. Those circumstances require record-locking capabilities, wherein record locking protects users from 'stepping on' one another's changes, which could otherwise result in database corruption.

Enterprise SDBMS are among the most common types of data management implementations across the GIS industry. By combining the list of DBMS supported by Esri with the ranked list from the DB-Engines website, we gain a sense of the popularity of the DBMS used to host a GIS enterprise approach within the Esri geodatabase environment (see Table 3). Tables 2 and 3 present worldwide lists. Only two of the DBMS solutions fell in rank from December 2019 to December 2020. This might indicate that SDBMS use is on the rise worldwide.

| Esri DBMS Compatibility* | DB-Engines Rank, December 2020 (363 engines ranked) | DB-Engines Rank, December 2019 | Data Model(s) Listed on DB-Engines Website |
| --- | --- | --- | --- |
| Oracle | 1 | 1 | Relational, Document Store, Graph, and RDF Store |
| Microsoft SQL Server | 3 | 3 | Relational, Document Store, and Graph |
| PostgreSQL | 4 | 4 | Relational and Document Store |
| IBM Db2 | 6 | 6 | Relational, Document Store, and RDF Store |
| Teradata Data Warehouse Appliance | 14 | 15 | Relational, Document Store, Graph, and Time Series |
| Microsoft Azure SQL Database | 16 | 25 | Relational, Document Store, and Graph |
| SAP HANA | 20 | 20 | Relational, Document Store, and Graph |
| IBM Informix | 30 | 26 | Relational, Document Store, and Time Series |
| Netezza Data Warehouse Appliance | 34 | 33 | Relational |
| Dameng | Not ranked | Not ranked | Relational |

* The relational data model is supported by Esri DBMS software. Other data models might be supported through customized software. Dameng is not compatible with the geodatabase data model.
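To make the row-store relational approach with a spatial extension more concrete, the following is a minimal sketch of a spatially enabled table in PostgreSQL with PostGIS, accessed from Python. The connection string, table name, and coordinates are illustrative assumptions, not details taken from any of the sources cited below.

```python
# Minimal sketch: a spatially enabled relational table in PostgreSQL/PostGIS,
# accessed from Python with psycopg2. All names and the connection string are
# illustrative assumptions.
import psycopg2

conn = psycopg2.connect("dbname=gisdemo user=gis_user password=secret host=localhost")
cur = conn.cursor()

# Enable the spatial extension, create a table whose geometry column uses the
# standardized geometry type (WGS 84 points), and add a spatial (GiST) index.
cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS places (
        id   serial PRIMARY KEY,
        name text,
        geom geometry(Point, 4326)
    );
""")
cur.execute("CREATE INDEX IF NOT EXISTS places_geom_idx ON places USING GIST (geom);")

# Insert a feature using an OGC-style constructor function.
cur.execute(
    "INSERT INTO places (name, geom) VALUES (%s, ST_SetSRID(ST_MakePoint(%s, %s), 4326));",
    ("City Hall", -105.08, 40.59),
)
conn.commit()

# A typical spatial query: find places within 1,000 meters of a point.
cur.execute(
    """
    SELECT name
    FROM places
    WHERE ST_DWithin(geom::geography,
                     ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                     %s);
    """,
    (-105.081, 40.591, 1000),
)
print(cur.fetchall())

cur.close()
conn.close()
```

Other row-store relational products in the list above expose broadly similar OGC-style geometry types and functions, although the exact syntax varies by vendor.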
References

Codd, E. F. (1970). A relational model of data for large shared data banks. Communications of the ACM, 13(6), 377-387.
Codd, E. F. (1980). Data models in database management. ACM SIGMOD Record: Proceedings of the Workshop on Data Abstraction, Databases and Conceptual Modelling, 11(2), February 1981, 112-114. DOI: 10.1145/960128.806891.
DB-Engines (2020). DB-Engines Ranking. http://db-engines.com/en/ranking.
Rigaux, P., Scholl, M., and Voisard, A. (2002). Spatial Databases: With Application to GIS. San Francisco: Morgan Kaufmann.
Shekhar, S., and Chawla, S. (2003). Spatial Databases: A Tour. New York: Pearson Higher Education.
West, M. (2011). Developing High Quality Data Models. San Francisco, CA: Morgan Kaufmann Publishers Inc.
Wikipedia (2020). Spatial database. https://en.wikipedia.org/wiki/Spatial_database.
Yeung, A., and Hall, B. (2007). Spatial Database Systems: Design, Implementation and Project Management. Dordrecht, Netherlands: Springer.

Learning objectives:
- Define a data model.
- Describe the purpose of a data model in terms of a spatial data management system.
- Describe how conceptual, logical, and physical data models differ with regard to software implementation.
- Compare and contrast logical data models as related to spatial database management system implementations.
- Prepare an overview of Esri enterprise spatial database management approaches for data management environments.
- Compare and contrast single-user versus multi-user SDBMS approaches.

Review questions:
- Why is it useful to understand the three different levels of data abstraction (conceptual, logical, and physical data models) in terms of database management implementation?
- Characterize the difference between logical and physical data models with regard to spatial database management software implementations.
- Why is the relational data model for a DBMS the most popular logical data model?
- How might a variety of approaches to the development of spatial database management systems be advantageous to the GIS industry?
- Why is it advantageous for a DBMS to be compliant with Open Geospatial Consortium simple feature geometry?
- How do many of the top-ranked DBMS support geospatial data types to form various SDBMS? Why might some DBMS not support geospatial data types?
- What are the four user environments of spatial database management systems? Why do they all have a place in supporting GIS users?
Chapter 16 Resources

In Pennsylvania, the provision of services to identified gifted students is governed by Chapter 16 of the Pennsylvania school code. Students must meet eligibility requirements in order to receive Chapter 16 services; specific requirements include being identified with mental giftedness as defined by the Pennsylvania Department of Education and being in need of specially designed instruction. Quaker Valley teachers and staff strive to meet the needs of all learners by providing a challenging, stimulating environment that encourages children to grow and develop their academic, intellectual, and creative skills. At Quaker Valley, ALL academic resources are available to all students, whether identified as gifted or not, who demonstrate a need for differentiated instruction to reach their potential.

Chapter 16 Code

A Critical Attribute of the Quaker Valley Model

There is a fundamental difference between Quaker Valley's model and more typical programs. Most schools FIRST identify and label the students who are eligible for the program, THEN attempt to do the right things for them. Instead, we FIRST do the right things:
- Deliver a solid, rigorous curriculum with high expectations for all students
- Collect and use a variety of data in determining instructional needs
- Create quality enrichment opportunities
- Promote acceleration as a tool for meeting exceptional need
- Train and support teachers to recognize and accommodate high-end learners via differentiated instruction
- Endorse the use of instructional grouping for efficient and effective instruction
- Permit flexibility in decision-making, tailored to individual circumstance
- Recognize the role of motivation, maturity, and interest in diagnosing and addressing student need
- Encourage creativity and responsible risk-taking
- Creatively use technology for instruction and opportunity
- Promote equity and excellence

THEN, students access services according to the needs they exhibit through classroom performance, test results, teacher observation, parent information, interest, and/or motivation. To address these needs, a variety of group and individualized services are offered and organized using the Levels of Service Model.

What does Quaker Valley offer to meet students' individual interests or needs?
- STEM Design Challenge
- Junior Great Books
- Differentiation by need through small group instruction and instructional practices
- Differentiation by qualification in Academic Competitions, such as:
- Differentiation by choice through extended program offerings such as:
- Independent and classroom projects
- Acceleration by subject or grade
- Open door policy to advanced courses, including 19 in-house Advanced Placement courses
- Career counseling and college planning (see Office of Collegiate Affairs)
- Allegheny Intermediate Unit Apprenticeship Program
- Innovative elective courses and arena scheduling (see Program of Studies)
- Out-of-level testing to determine instructional needs - all students in grades 9, 10, and 11 take the PSAT to both prepare for the SAT and provide us with valuable achievement information
- College Board Academic Competitions (varies with student interest)
- Mock Trial (via CHS Argument, Communication and Rhetoric course)
- Junior Academy of Science (via Honors Research course or SDL)
- Odyssey of the Mind
- Co-Curricular clubs and activities

The MDE (Multi-Disciplinary Evaluation) Process

Instructional decision-making is guided by student achievement data.
Multiple criteria across domains are analyzed to determine how best to meet the learning needs of all students. Standardized tests, curriculum-based assessments, and teacher observations and evaluations are considered. At the beginning of the year, each teaching team analyzes student data. Based on each student's profile, teachers make programming decisions to best meet their needs. Research and best practice indicate that this can be accomplished through grouping practices and differentiated instruction supported by the academic specialists. Data collection and analysis are ongoing throughout the year. It is our goal to provide every student an educational experience based on demonstrated need. As student needs change throughout the course of the school year, we adjust accordingly. We have a myriad of support staff, including reading specialists, speech and language pathologists, librarians, learning support teachers, technology teachers, and counselors, in addition to the academic specialist in each building. These individuals partner with classroom teachers to monitor student progress, plan and facilitate experiences for all students, collaborate and co-teach with classroom teachers, meet with groups of students inside and outside of the classroom, and attend to individual academic, social, and emotional needs. Additionally, at any time, parents may choose to access their special education rights by requesting a multidisciplinary evaluation of their child. A written request for this service should be directed to the building Principal or to the Director of Student Services, Mike Lewis.

Frequently Asked Questions about Services for High-End Learners

My child says she's bored. What should I do?

It is helpful to first probe a little beyond the face value of this statement. Ask your child to tell you what she doesn't like about the class or assignment at issue. Listen carefully. Sometimes the work is challenging, and the child is unnerved by the unexpected difficulty. Sometimes there is an issue with the topic being studied. Sometimes she's out of sorts with her group or teacher. But if the work seems like review or your child is frustrated or unhappy, your first call should be to her teacher. Be prepared to offer specific examples of the work or behaviors you see that make you think your child may require something different. Share information about your child that the teacher may not know – experiences or interests she has, things that she does at home that may offer insight into what opportunities or choices you think may help make school a better fit. Ask for the teacher's assessment of your child and his classroom observations. Problem-solve together for what steps to take and plan to meet again to discuss how things are working. Parents may request additional testing at any time by contacting the building principal or the Director of Student Services.

How can I find out what enrichment activities my child can participate in?

Many enrichment opportunities are embedded in the classrooms. The website lists examples of activities offered to all students (Level 1), as well as those available to many students (Level 2) based on specific interest or ability. Often opportunities are announced in the weekly Monday Memo, on Schoology, and through other building communications. Ask your child's teacher or the Academic Specialist about how to access them.
If the opportunity you seek doesn't exist or can't be offered at school, the Academic Specialist may be able to suggest outside resources or alternatives you can explore as a family to support your child's special interest.

My child finishes his homework in minutes. I don't know if he's being challenged. What should I do?

The answer to this question will vary depending on the age of the child. Minor amounts of homework may be intentional, and the purpose for homework differs among units of instruction and at different grade levels. Sometimes homework is a review of the day's lesson and meant to simply reinforce or practice the concept. Other times, it surveys prior knowledge or interest. Some students manage their time well and get started in spare minutes during the day or on the bus. In general, students should not struggle, and homework should not dominate the time outside of school. First look over his work and check for completeness and accuracy. Monitor assessments and other work that comes home for the level of rigor and your child's degree of success with it. Contact his teacher with homework concerns, but enjoy extra time at home for enrichment and play, as appropriate.

What's the difference between enrichment and differentiation?

Differentiation is a broad term encompassing a wide variety of instructional methods that customize learning for small groups or individual students whose instructional needs differ from their classmates'. Differentiation can occur in the complexity of content or materials students are exposed to, in the different processes they use to learn, in the kinds of products they generate to demonstrate their learning, in the homework they are assigned, in the environment in which they learn (such as bright or dimmed lights, music or quiet), and in the learning styles they prefer (such as group or solo tasks, or visual, auditory, or hands-on lessons). Enrichment is one differentiation strategy. Enrichment consists of lessons, activities, assignments, or materials that extend or enhance the curriculum in a way that deepens or challenges understanding, supports personal interest, and/or further engages students in new learning about the topic under study.

How do I know my children are growing?

Your children's academic growth is assessed often and in a variety of ways throughout the school year. Reviewing standardized data sent home to you in mailings, achievement noted on report cards, and graded assignments should give you a good picture of your child's progress through the curriculum. If your children come home happy, engaged, tired, and can tell you what they learned each day, chances are they are growing. Growth data, such as PVAAS, exist, but are statistically of limited value in districts that are very small and among students who are significantly above or below average/grade level. Any questions about growth can be directed to your child's teacher, the counselor, the academic specialist, or the principal.

What might differentiation look like for my child?

Sometimes differentiation at the classroom level is designed not to be noticed; for example, a teacher may give various groups questions or tasks at different levels of complexity, but you see only the materials designed for your own child's level. Sometimes student choice is the method of differentiation and you learn what choice your child made. Sometimes homework is different for different groups and you see only what has been assigned to your child.
In these instances, you may be unaware that teachers are engaging in sophisticated and prescriptive planning for your child. Teachers should communicate with you, however, if your child requires differentiation that indicates your child is working significantly above or below grade-level expectations.

How can my child work on things she's interested in?

Classroom instruction often involves activities, projects, games, and other strategies that students find very engaging. Teachers provide students with a variety of choices in how and where and with whom they complete their various assignments throughout the day. Their schedules are often so packed that there is little "extra" time to indulge personal interests; however, when students finish their work earlier than others or pre-test out of some instruction, minutes can be gained and spent in a variety of ways that vary by classroom, age, maturity level, and need. From personalized book selections, learning centers, and computer applications to long-term independent projects, classrooms are full of opportunities for students to explore their interests. Contact your child's teacher or the academic specialist for more information about these options.

Why are so many people around the table for meetings about my child?

Quaker Valley is proud of the professional teaming between classroom teachers and the myriad of support staff in each building. To make meetings about students more efficient, to have more minds around the table contributing to the thinking and decision-making, to be more comprehensive in the delivery of whatever is needed, and for the convenience and respect of our parents who may need to leave a workplace of their own during the day, we endeavor to have "all hands on deck" whenever possible. We are dedicated to serving the whole child; thus academic, counseling, and learning specialists and administrators are present with parents to consider all aspects of the child's needs.

What can I do if I disagree with my child's placement?

Teachers and administrators use many data points, classroom observations, and other factors to create class rosters that will optimize the experiences of all students, while working within the parameters of schedules, class sizes, and special needs. The elementary schools use a cluster grouping system where students with similar needs are deliberately placed together and benefit from working together in an otherwise heterogeneous classroom that has been structured to lessen the extremes of need in any single room. Middle and high school placements in leveled classes are a function of assessment data, grades, and teacher recommendations. Careful consideration is given to each placement decision; however, extenuating circumstances may occasionally necessitate changes. Contact the building principal or counselor if you'd like to discuss your concerns or questions.

How does the Levels of Service Model work during the day?

Level 1 services, which are offered to all students, are embedded in the classrooms where all students participate and benefit from the educational value of the opportunity. Curriculum extensions and enrichment are available in each classroom for use as needed. Building or grade-level trips, assemblies, projects, and other Level 1 experiences occur throughout the year. Level 2 services are available to students by academic need, interest, or ability. Teachers differentiate instructional strategies for small groups to accommodate diverse learning needs.
For some activities, interested students participate in qualifying rounds in the classroom or are pulled to complete the preliminaries for the chance to compete against other schools. Some activities, such as Odyssey of the Mind, are outside of the school day. Level 3 services are available to individual or small groups of students through assessed need and include learning contracts, curriculum compacting, and other more targeted interventions. Level 4 services are usually highly individualized to accommodate more extreme need and generally include subject or grade acceleration. Services are delivered by a variety of teachers (classroom, specialists, special area teachers) and in a variety of places, including classrooms, the large group instruction room, the library, the playground, etc., depending upon the activity and its duration.

Why are gifted services available to students without formal identification?

Gifted children are labeled in most schools as a prerequisite to services, meaning that services such as acceleration, participation in competitions, enrichment programs, and other specially designed instruction are available only to students labeled as gifted. Children in the Quaker Valley Schools, however, do not require the gifted label to receive these services. Instead, their gifted needs are identified and accommodated without the need for qualifying scores that do not measure strong interest, previous experience, maturation, or motivation. All services are open and available to any child who has the need for them. Needs are identified based on achievement data, classroom performance, and behavioral data, but services can be created and implemented without the intensive testing, lengthy timelines, and delayed implementation required by formal identification. See the Critical Attribute explanation above. There may be instances, however, when you and your child's team believe additional testing would be helpful for planning, at which time that process can be used. Please feel free to request an evaluation and the team will be happy to meet with you.

What are examples of specific gifted services that might be offered in other districts with a more typical gifted program?

More typical programs may offer a time, generally up to a few hours per week, where labeled students are pulled from their regular classrooms and provided with enrichment materials, projects, competitions, and field trips that are generally outside the boundaries of the regular curriculum. Classes are limited to 20 students at a time, and gifted support teachers have a maximum of 65 students on their caseload rosters. Additionally, students attending these programs are often viewed as having similar needs when in fact they could differ substantially in interest, ability, and achievement. Individualizing services to meet the academic and creative needs across all content areas in a limited space and for a limited time is problematic in pull-out models. When students return to their classrooms, where they spend the majority of their time, they often must make up what they've missed because the classroom teacher cannot be certain that the scheduled time for their gifted class is the best time for them to miss the instruction taking place in their absence. Because a single gifted support teacher usually serves multiple grades and sometimes multiple buildings, the students are pulled according to the support teacher's availability rather than the students' needs.
Very little research supports gifted models of this nature, since assessment of the enrichment provided is not a part of any testing program. Schools vary widely in the quality and quantity of services available to gifted (and non-gifted) students. In more traditional programs, some students thrive, but some are troubled by the disruption to their schedule and the burden of "extra" work, and some are unfairly excluded from experiences that would be beneficial to them. Quaker Valley makes all the services we offer available to students based on need or interest, without the requirement of a qualifying label. Students do not have to wait for a designated time each week for services. Instead, services are embedded in the regular program more consistently and comprehensively. We have a dedicated Academic Specialist in each building who aids in the planning, programming, and ultimately the transition of students between buildings. (See the Levels of Service model for additional details.)

Isn't being labeled gifted an advantage for college?

No. No college application contains a check box or space for the inclusion of a student's gifted status. College admissions offices focus solely on transcripts, recommendations, test scores, activities, experiences, and other more standard markers of excellence and achievement. Because in the United States gifted education exists only in some states, but not all, and in some districts and at some grade levels, colleges do not seek this information. Local gifted programs vary widely in eligibility criteria, quality, and substance; thus a student's inclusion in or exclusion from them has no meaning or consideration in the college selection process.

Why doesn't QV use IQ screening tests for all children?

The primary purpose of such tests is to identify and serve only those children who score above a certain cutoff point. In very large schools where such screening is needed because of the sheer number of students who may require specialized instruction, the test is efficient, if not optimal, in narrowing the pool of students eligible for further testing. It is also used in schools where seats in gifted programs are limited. At QV, neither situation is the case. In grades K-6, students are screened individually three times each year with achievement tools that clearly identify outliers (students with scores significantly above or below grade-level expectations), who may then be further tested should the need arise for additional information to inform programming decisions. In addition to mandatory yearly state testing, we test all students in grades 8, 9, 10, and 11 using the College Board's PSAT tests, which can alert us to "late bloomers" or students working beyond grade-level curriculum. These tests include helpful benchmark scores to measure student readiness for college or career. Group IQ tests are regarded as less accurate than individually administered tests, and would require additional time away from instruction, which is already at a premium. We find that other sources of data, along with parent or teacher information, are often sufficient for identifying needs and programming to meet them. Parents may request additional testing at any time, however, by contacting the building principal or the Director of Student Services.

What is the difference between ability and achievement testing?
Ability or cognitive testing normally involves an IQ (Intelligence Quotient) test of some kind and measures the functioning efficiency, speed, and accuracy of the thinking brain compared to others of the same age. These tests are independent of any particular curriculum or academic training and involve cognitive functions such as problem solving, comprehension, working memory, and reasoning. IQ is commonly viewed as the capacity to learn quickly and efficiently. Standardized achievement testing measures academic skills relative to grade-level expectations. Most achievement tests are aligned to the Common Core Standards and measure students' mastery of them. They are commonly viewed as measuring what has been learned and can be applied. While each kind of test offers different information about students, for the purposes of skill development and schooling (i.e., accurate placement in and the pacing of curriculum), achievement data have more immediate impact and utility. When students present with unusual or discrepant behaviors (e.g., strong vocabulary but poor word attack skills), inconsistent data (e.g., strong test scores but poor classwork), or organizational, work completion, or attention concerns, and we suspect that something is interfering with achievement, we pursue ability measures to give us a more complete picture of a student's capabilities and learning needs. Parents may request additional testing at any time by contacting the building principal or the Director of Student Services.

What is benchmark assessment?

Benchmark assessments are short assessments or writing prompts that are given to all students at a grade level to determine each student's performance against the grade-level expectations. These "snapshots" give information about how students are progressing toward state goals for their grade level. The results are most often used to guide instruction and determine curriculum effectiveness.

What are National and Local Norms?

Statistical "norms" are data that compare your child to others of the same age, such as the PSAT scores in middle and high school. They are designed to give parents a snapshot of their child relative to others of the same age across the nation. They are normally best understood as percentiles, not percents. A child who scores in the 81st percentile scored the same as or better than 81% of the students of the same age who took the same test. National norms compare our students to all US students who took the same test at the same time. In contrast, local norms compare our students only to each other. In general, our students' scores are much higher when compared to a national population and are more discriminating when compared to each other. Local norms are most useful in helping educators place students in the most appropriate instructional groupings.

When is acceleration considered for a child?

Acceleration is an intervention strategy that is highly supported by a wealth of research. When a student consistently performs well beyond grade-level expectations or beyond any grade-level cohort of students, or by achievement out-performs the differentiated curriculum, acceleration is warranted. Acceleration can be implemented in a single subject or a full grade level, or, in extreme cases, multiple grade levels.
Typical forms of acceleration include, but are not limited to: early entrance to Kindergarten, subject acceleration, grade acceleration, completing middle school in two years, dual enrollment in high school and college, College in HS courses, Advanced Placement (AP) course enrollment, and early graduation.

What is the policy on accessing Advanced Placement classes?

Quaker Valley HS currently offers more than a dozen AP courses taught by our teachers in-house, with additional courses available online. We practice open enrollment, and all enrolled students are required to take the associated exam, at the district's expense. With approval, advanced students may choose to take exams in courses they've studied independently.

What is the complaint process if my child's needs are not being met?

All concerns about your child should first be directed toward a member of your child's team: begin with the classroom teacher or the school counselor, followed by the principal, who will involve any other support staff as appropriate. Complaints directed toward District Office administration will often be redirected to the building level if the building staff has not been involved first. See Board policy 902.

How does Quaker Valley recognize and support students with dual-exceptionality (twice-exceptional or 2E)?

Dual exceptionalities (for example, students with strong ability and Autism Spectrum disorder or a specific learning disability) require sophisticated diagnostic and prescriptive instruction that is highly individualized to work through strength areas while developing compensatory strategies for learning challenges. These students often require that we collect more extensive and individualized test data. We use a team approach in analyzing the data and determining what adjustments and modifications will work best with this very unique set of learning parameters. Some students with dual exceptionalities, or those who under- or selectively achieve, are placed in classes at their instructional level, even when their work completion or grades are at odds with their abilities and potential. Our open access to and nimble delivery of enrichment and other high-end learner services makes accommodating twice-exceptional students a routine occurrence.

I read that gifted students have unique emotional characteristics. How are these needs met?

Each building is staffed with highly skilled counselors, in addition to the academic specialists, who are familiar with these needs and are well equipped to address them in the context of the school day. Careful placement ensures a cohort of similar students for friendships and support. For more in-depth needs, our school psychologist can be consulted and can conduct further assessment of the concerns and make recommendations for services.

Glossary of Terms

alternative course - any pre-approved course taken outside of Quaker Valley for the purpose of enrichment, acceleration, progress toward graduation, or concurrent enrollment. Students are awarded credit, but no grade is calculated in the GPA.

apprenticeship - a short-term experience between an expert and a student during which the student has the opportunity to learn and have work evaluated by a practicing professional in an area of interest. The AIU sponsors numerous apprenticeships for high school students in a variety of fields.

cluster grouping - placing students with similar interests or needs together with a teacher who differentiates instruction for this small group within a larger instructional group.
This grouping can be temporary or long-term depending on need and agreement among teachers and administration.

concurrent or dual enrollment - a student attends classes in two grade levels, two buildings, or in high school and college during an academic term.

creative scheduling - any deviation from the norm in building a student's day. Examples include scheduling two courses in the same time period (alternating attendance and earning a grade and full credit for both) and taking more courses than the number of periods in the day, usually via technology. These circumstances are highly individualized and require input from counselors, teachers, and parents, with approval from administration.

curriculum-based assessments - school-developed tests or activities to determine a student's specific instructional level within our curriculum, to identify the place where new instruction should begin, and/or to measure mastery of the taught curriculum. Results are item-analyzed and instruction or placement is adjusted accordingly.

curriculum compacting - the process of pretesting for prior mastery, prescribing what remaining curriculum is yet to be mastered, and providing the student with new material or enrichment to be completed, usually by contract, in the time gained. At the secondary level, compacting involves "packaging" the major readings, materials, writings, projects, and other course requirements in such a way that a motivated and capable student can complete them without direct instruction, usually to "buy time" for additional pursuits in areas of personal interest.

customized options (in class attendance, homework, etc.) - highly individualized situations negotiated as needs dictate. For example, a student may opt to attend class only on days deemed necessary (tests, labs, or the introduction of new or particularly difficult concepts). Students negotiate an agreement outlining the expectations and are held accountable for contracted assignments and all concepts. Another example is excusing exceptional math students from "showing all their work" with the understanding that no partial credit, then, is available for wrong answers. Such customized arrangements offer students additional choices and independence along with increased responsibility and appropriate accountability.

demonstration of proficiency (testing out) - a student proves mastery of a course or grade-level subject by passing an assessment, most often a final exam and/or culminating project. The student is awarded credit for the class and may enroll in another course or begin study at the next level.

enrichment - enhancements to the curriculum that present new ideas, extensions, or concepts in greater depth to further challenge learners and/or to satisfy an intense interest.

guided study - students are provided with a teacher-designed roadmap for a course not currently offered in the master schedule.

instructional grouping - flexible groups (Elem), level 3000-4000 (MS), and honors/AP (HS) classes are administrative groupings to accommodate learning readiness and are usually determined by previous performance in the subject and by curriculum-based assessments and other appropriate data. Instructional grouping fosters efficient and effective instruction by placing students with similar needs together. For very ready learners, curriculum is presented in greater depth and/or at a more rapid pace.
While placement can be for the duration of the course or academic year, performance is closely monitored and constantly reassessed so that students may move as needed.

learning contracts - negotiated agreements between a teacher and student for the completion of work, usually at a faster pace than the norm.

out-of-level testing - any assessment given to students younger than those for whom it was originally designed. The purpose is to assess student needs beyond the current grade level and to make appropriate adjustments to the curriculum, the student's placement, or course selection.

self-pacing - facilitating the coverage of curriculum at a student's rate of acquisition through the use of compacting, tiered assignments, guided study, learning contracts, etc. This is a key component in accommodating high-end learners who may not possess prior mastery of the curriculum but can learn and retain new material faster than grade-level peers. It requires a high level of motivation and some degree of independence to work ahead of peers.

waiver - an agreement signed by high school parents and students to permit enrollment in a course for which the high school student lacks the prerequisite grades, teacher recommendation, and/or prerequisite course. The waiver process involves in-depth discussion and clearly identifies the potential consequences of inadequate preparation, but acknowledges the role strong motivation can play and encourages students to "rise to the occasion" if they so desire.

- National Association for Gifted Children
- Council for Exceptional Children
- Allegheny Intermediate Unit Apprenticeship Program
- National Research Center for Gifted and Talented Education
- Hoagies Gifted Education Page
- Institute for Research and Policy on Acceleration
The official definition of executive function is: a set of processes that all have to do with managing oneself and one's resources in order to achieve a goal. It is an umbrella term for the neurologically based skills involving mental control and self-regulation. Think of executive function as the "conductor" of all cognitive skills, enabling us to manage our lives, responsibilities, and projects. These skills include:
- Inhibition – The ability to stop one's own behavior at the appropriate time.
- Shift – The ability to move freely from one situation to another and to think flexibly in order to respond appropriately to the situation.
- Emotional Control – The ability to modulate emotional responses by bringing rational thought to bear on feelings.
- Initiation – The ability to begin a task or activity and to independently generate ideas, responses, or problem-solving strategies.
- Working Memory – The capacity to hold information in mind for the purpose of completing a task.
- Planning/Organization – The ability to manage current and future-oriented task demands.
- Organization of Materials – The ability to impose order on work, play, and storage spaces.
- Self-Monitoring – The ability to monitor one's own performance and to measure it against some standard of what is needed or expected.

Looking at this list, it's obvious that self-regulation is a critical competency of executive function in two major ways: social-emotional (appropriate behavior in a social context) and cognitive (focus, academic learning, problem-solving). When children are self-regulating, they can both stop and start doing something, even if they don't want to. They can delay gratification; they can think ahead; they can control impulses and consider options. It is crucial that children learn basic self-regulation in early childhood, because research indicates that "children who cannot control their emotions at age four are unlikely to be able to follow the teachers' directions at age six, and will not become reflective learners in middle and high school" (http://toolsofthemind.org/learn/resources/research-by-tools/).

Breathing Techniques for Executive Function

Breathing techniques offer easy-to-practice activities for building basic self-regulation in the bodies of youngsters and in your classroom. With something specific to do to support themselves when confronted with transitions, sharing, waiting, and re-directing impulses, children are better able to navigate those challenges. As they experience how specific ways of breathing enable them to tolerate feelings and manage impulses, they start to embody greater control. This process strengthens executive function, which builds self-esteem and self-trust. Help kids learn how to count on their inner wisdom and intelligence. Make time for self-reflection and self-care throughout the day. Then introduce and practice breathing exercises regularly as a way to de-stress, recharge, and reset to an optimal mind-body state. Below are two options that offer simple, effective tools for healthy self-regulation.

The first is a fun technique that is sure to make kids laugh and not take things too seriously. Because it requires making a silly blooping sound on the exhale, like a fish, it disperses tension, releases frustration, and busts the stress of over-efforting. Humor and playfulness are keys to accessing executive function and creative thinking. Physiologically, when you inhale deeply, you pull in lots of oxygen needed by our brain and body to stay relaxed and alert.
When you exhale completely, you make room for more, which helps us release toxins and recharge.

Take a deep breath through your nose,
Fill up your cheeks with that breath and …
Push it all out through your mouth while saying…
Bloop, bloop, bloop, bloop, blooooooop.
And again, deep breath in your nose…
Fill up your cheeks with it and …
Exhale it out your mouth …
Bloop, bloop, bloop, bloop, blooooooop.

The second technique, Ocean Breath, activates the midline of the body, connects both hemispheres of the brain, and relieves tension in the eye muscles. As they inhale, direct children to place one hand on their belly button and the other on their sternum, like giving themselves a hug. Then, as they exhale, have them move just their eyes (head remains still) slowly from right to left and back again 2-4 times. This movement facilitates improved eye-teaming skills and cross-motor coordination. Overall, Ocean Breath slows, calms, and centers both mind and body, which will enable children to access executive function.

Place one hand on your belly button, and place the other in the middle of your chest.
Press your thumb and forefinger into the soft tissue points beneath your collar bones on either side of your sternum.
Inhale fully through your nose and then, as you exhale slowly, move just your eyes from right to left.

What is Cross Crawl?

Cross crawl refers to movements in which we use opposition, such as crawling, walking, running, and swimming. Opposition means that opposite sides of the body work together to coordinate the right arm and left leg, then the left arm and right leg. Therapeutically, cross crawl refers to any intentional cross-lateral activity in which you cross the midline of the body, such as touching opposite hand and knee or foot. Performing this movement builds the bridge between the right and left hemispheres of the brain, allowing electrical impulses and information to pass freely between the two, which is essential for physical coordination as well as cerebral activities such as learning language, reading, and hand-to-eye coordination.

Why is Cross Crawl Beneficial?

As soon as we start to crawl, this cross-lateral pattern of movement stimulates more complex brain and nervous system development and integration. In addition to firing neural pathways in the right and left brain hemispheres simultaneously, a cross crawl movement stabilizes the pelvis while mobilizing the shoulders, reinforcing the walking-gait reflexes (1). In short, any time you do cross crawl, you are re-integrating your brain and nervous system and re-organizing your mind-body connections (2). Because we are daily, hourly, being bombarded and impacted by multiple stimuli and tasks, practicing cross crawl throughout the day is one of the best self-care activities you can do for yourself. This Valentine's Day, love yourself and your family by building cross crawl into your daily schedule. Think of it as a basic part of wellness, like drinking plenty of water. You will not only feel clearer, you will behave and perform better. Try it before homework, testing, or an important meeting, after anything stressful, and between different kinds of activities. If you've been reading and it's time to go play soccer – cross crawl. If you're frustrated with a project – cross crawl. You need to clear some cobwebs or recharge – cross crawl! Through mind-body science, we now understand that physical coordination precedes cognitive coordination.
The ability to do cross-lateral movements with the body literally lays the foundation for other cognitive abilities, such as readiness for fine-motor academic work. Though it seems to be a fun, simple exercise, here's what cross crawl is doing for you physically and mentally:
- Stabilizes your walking-gait coordination – builds core strength
- Energizes your body and calms your mind – releases tension and stress
- Improves your eye-teaming skills – essential for focus, reading, and writing
- Enhances whole-brain thinking – your left and right hemispheres work together
- Develops proprioception – your spatial and kinesthetic awareness

Cross crawl also offers an effective way to reboot your nervous system and re-integrate mind and body. You can use it regularly to both discharge and recharge your attention and energy. It's a great break from over-focusing, and it works just as well to bring body and mind online. As a stress buster or a warm-up for doing your best, cross crawl has significant social-emotional benefits:

Cross-lateral Balancing Cat!
- Increased self-awareness
- Situational insight
- Clarity of thought
- Impulse control
- Physical coordination in general

How Do You Cross Crawl?

Stand with your feet apart and your arms open parallel to the ground. Shift your weight to your right foot, lift your left knee, and touch it with your right hand. Step back to both feet and immediately shift your weight onto your left foot as you lift your right knee and touch it with your left hand. Repeat this several times in a comfortable, upbeat, rhythmic way. Breathe fully and enjoy.

When Do You Do Cross Crawl?

Most adults can do cross crawl. However, like anything, the more you practice, the easier and more fluid and embodied the coordination pattern becomes. The age when children can intentionally cross crawl varies because they develop at different speeds. Some can easily balance and cross the midline of the body by the age of 4, and some find it challenging up to the age of 6 or 7. It is age-appropriate for children ages 5 and under to automatically bring their hand to the same knee, demonstrating a same-sided crawl (homo-lateral crawl).

(1) Walking gait reflexes
(2) Neurological disorganisation

Recently, my teacher sent me this article by Gabor Maté: How to Build a Culture of Good Health. Read it! It beautifully explains the holistic, relational, developmental nature of health that I think we've all experienced at some level but never had words for:

Ultimately, healing flows from within. The word itself originates from "wholeness." To be whole is much more than to experience the absence of disease. It is the full and optimal functioning of the human organism, according to its nature-gifted possibilities. By such standards, we live in a culture that leaves us far short of health.

I've been studying biodynamic craniosacral therapy and meditating 30-60 minutes a day for over a year now. In the process, I've come to embody a new level of self-trust, presence, and health. It has strengthened my ability to be neutral and allowed the deeper forces that created and sustain me to build potency. In Dr. Maté's words, I've been doing this:

Give yourself, as best you can, what your parents would have loved to grant you but probably could not: full-hearted attention, full-minded awareness, and compassion. Make gifting yourself with these qualities your daily practice.

Now, instead of gripping to protective identifications, I am being moved toward greater fluidity, resilience, awareness, and metabolism.
It’s not always pleasant. I’m resolving long held imprints. I cry almost every time. But my tears are cleansing; they do not reinforce any victimhood. Instead, they dissolve old fears that no longer make sense. My personality is less rigid. My window of tolerance is widening. I can see others more clearly. I am able to sustain my own coherence more powerfully. And I can resource myself more effectively. As educators and parents, we are often at a loss as to how to help our children. More and more, we see how trauma and dysregulation impact them negatively. We try to soothe, cajole, convince, manipulate, force, explain, etc. We want them to feel alright and know that everything will be okay. But resolving trauma and truly embodying self-regulation is an inside job. To teach children how to meet their fears and feelings in a healthy way, we must be regulated and model metabolizing our own experiences. To connect them to their inner health forces, we must meet them, as we meet ourselves, with authentic presence and love. Adults need to know, even if their physicians often do not, that their health issues are rarely isolated manifestations. Any symptom, any illness is also an opportunity to consider where our lives may be out of balance, where our childhood coping patterns have become maladaptive, exacting costs on our physical well-being. When we take on too much stress, whether at work or in our personal lives, when we are not able to say no, inevitably our bodies will say it for us. We need to be very honest with ourselves, very compassionate, but very thorough in considering how our childhood programming still runs our lives, to our detriment. To take advantage of the metabolic forces of our own health system, we need to grant ourselves the time and the space to process our own mental-emotional-energetic experiences and make conscious choices that serve our higher intentions. To prevent chronic stress from making us sick, we must stop valuing accomplishment over well-being. And yes, I know that’s challenging inside of … A materialistic culture (that) teaches its members that their value depends on what they produce, achieve, or consume rather than on their human beingness. Many of us believe that we must continually prove and justify our worthiness, that we must keep having and doing to justify our existence. Choose to re-prioritize. Put your health first and your do-list second. Spend time being, processing, loving yourself. Give yourself the gift of meditation this holiday and open the door to expanding your consciousness, embodying self-regulation, and accessing the intelligence of your own system. Your children will thank you! In a recent Movement and Mindfulness™ Curriculum Certification, our trainer, Leah Kalish, MA, taught us about “being in the Vertical versus Horizontal.” She was speaking to the idea of self-care. That it behooves every teacher or parent or caregiver to make taking care of oneself a priority, even before attending to our children. Just like those oxygen masks in airplanes! This concept was a revelation for me. I realized that in my own parenting I was constantly in horizontal mode; trying valiantly to make things happen for my kids. “Here, let me teach you about how this works” or “Let me help with you that.” Which left me feeling frazzled, overwhelmed, and exhausted. There was always so much to do. In horizontal mode we are thinking outside ourselves, multi-tasking, and anywhere but centered in our own spot. 
We appear to be getting a lot accomplished, but the energy we use to do everything is unsustainable and we are left feeling depleted, scattered. Then, I consciously switched to vertical mode. I hung back and let my kids tell me what they knew about any given subject, giving answers only when questions were asked. I gave them autonomy to dress, bathe, get food for themselves (at age 6 and 10 they were both developmentally capable, but I had stayed in the habit from when they were toddlers). I stopped trying “to do”, and let others do for me. Most remarkably, I had more time and space for myself: to write, do yoga, daydream (if I dared) and any other things that fed my soul. In vertical mode we are aligned with our intentions and rooted in the motivations that drive all we do. Vertically, we are constantly being replenished and re-energized simply by not overdoing, but by being receptive, letting things come to us as opposed to always trying to make things happen. We are present and centered, in the vertical we are balanced. In the vertical state one can revisit and reflect: what is my overall intention (in raising my family/or teaching students/or being a member of this human race)? Who do I want to be and how do I want to feel? When you make time to name it, you can see it, and when you see it, you can be it. The ingenious thing is that when others see it, they can be it, too. In taking care of yourself, you have full access to your coping mechanisms, you’re not running on fumes or giving from an empty place. You become a model for those around you on how to do the same. The Movement and Mindfulness™ certification course was a great experience. I left feeling my whole mind, body, and spirit nourished. I’m excited about sharing this transformative information with my students, other teachers, and especially families. Leah really walks the talk and is such an inspiration to me. I wish every parent and educator could take her course! In exploring how to have family fun playing with being vertical, I adapted old and new material into what I call: Family Freeze Dance. Turn on your favorite tunes, just before dinner or after. Take turns pausing the song, and instead of freeze-ing (which often makes bodies stiff and breathing tight) try dropping into Mountain Pose (standing tall, rooted into the earth yet receptive & soft around the eyes and shoulders). Mountain is such a great pose to practice experiencing being in the vertical with strength AND ease. At the end of the game, use a Humming Breath to calm bodies and bring energy down for the next activity: reading time, dinner time, bed time. Take in 3 more breaths here, while enjoying the view from standing strong and easeful in who you are. April Cantor has been teaching yoga fulltime since 1999; first to adults in studios and corporate centers, and now currently with children. Her former life as a theater arts educator with Stages of Learning in NYC public schools set her on course to working with children in ways that get them out of their desks and feeling at home in their bodies. She founded SoulShine Life Yoga for Kids and Families to bring yoga programs into Brooklyn & NY preschools, and to help families integrate yoga into their busy lives. April finds much inspiration from her two boys, and occasionally facilitates Partner Yoga workshops with her husband, dance educator/choreographer, Barry Blumenfeld. 
To me, the mindfulness movement has wonderfully enhanced our learning how to self-care, self-regulate, and be responsible for our own well-being and mental health. Because it encourages us to rest back, widen out, and notice without judgment, it also invites us to move out of a pathology paradigm and participate in a health paradigm. It doesn’t focus on what’s wrong. It strengthens our ability to be with what is and motivates us with science validated reminders that enjoying its benefits takes practice. Regular practice slows us down, expands our consciousness, and reconnects us to our greatest asset – our health system. When I say your health system, I am talking about the bigger forces that literally created you and are continually monitoring, metabolizing, eliminating, maintaining, integrating, and renewing you. You are a metabolic miracle, truly. Your pursuit of mindfulness is a doorway to greater access to your innate health and healing power. Every time you meditate, move into greater awareness, or relax deeply, you allow and support this system to process, balance, and re-calibrate you. The more you practice, the more space and fluidity in your system, and the greater ease and well-being you experience. My point is that each of our systems is infinitely intelligent and always moving us toward greater health based on the present circumstances and consciousness. Just as after eating a big meal, you don’t go running because you know your body needs time to digest; in our busy, demanding lives, we can’t just go-go-go. We need to give ourselves time to metabolize the stressors and reset our nervous systems to maintain health. Like a hot bath, mindfulness supports our greater health intelligence to work with and metabolize for us. As you make lifestyle changes and explore how to bring more mindfulness and wellness into your homes and classrooms, where are you coming from? Are you focused on what’s wrong and how to fix it or stop it, which often creates more constriction and diminishes flow and health? Or are you making time and space for your and your students’ systems to function optimally? Can you stop seeing something wrong with you or them, and instead allow, feel, and attend to what is expressed? When seen through the lens of health, everything that arises is for greater health. Can you embrace and be responsive such that what arises can be seen, heard, and processed in the service of greater health? For me, movement and mindfulness go together. They speak to the essence of being human, which is how we come into relationship with and respond to what arises in the dynamic, metabolic flow of energy that is our inner and outer lives. How we dance with our own life-force defines who we are. It is the template for how we show up and what we teach. I’ve spent the last 20 years bringing the same movement and mindfulness practices to education that I use in my own life. I teach them because I believe they are essential tools for self-realization and lay the foundation for embodied wisdom. They are the structures that slowed me down and shifted me into a new paradigm of possibility. They plugged me into a much greater field of awareness and intelligence in which kindness, compassion, and differentiation replaced force, judgment, and projection. I stopped believing all the thoughts in my head and started to see how I could focus my thinking and process my feelings instead. 
I came to understand the power of meditation, yoga, and self-inquiry to relieve my own suffering and to empower me to take ever greater responsibility for my behavior and happiness. This is an ongoing process. My life keeps moving. I continue to practice because I know that I teach from who I am being. The same is true for you, for everyone. Whatever subject matter you teach, what matters most is who you are being. You hold the space. Your mind-body state sets the tone. The younger the students, the more they entrain to you. Your values and principles are the invisible operating system influencing how everyone feels and learns in your classroom. Yes, it's a big responsibility, and one that calls for working on oneself.

I meditate every day. It resets my nervous system and cultivates presence. It keeps me honest and in touch with deeper feelings, which translates to being more clear, sensitive, and responsive with others. I can see and hear what is really going on out there, because I've practiced sitting with what goes on in here. I encourage you to read the article Seven Ways Mindfulness Can Help Teachers, by teacher Patricia Jennings, who also wrote Mindfulness for Teachers. Or learn about Loving Kindness meditation as taught by Sharon Salzberg, who wrote: "Mindfulness helps relieve anxiety and can give us a real sense of connection and fulfillment, as well as insight and understanding. The idea is, by developing a different relationship with our experience, we get to see it differently. If an emotion comes up, and we start fighting it, there's not a lot of learning going on. If we fall into it and become overwhelmed, there's not a lot of learning going on. Mindfulness helps us develop a different, kinder relationship with ourselves, to see much more deeply into all of our experience."

I fuel myself with activities and people that I enjoy. I do something every day to feel blessed and grateful. And if it's been a rough day, I won't go to bed miserable. I'll call a friend, I'll re-read a card, I'll watch a favorite movie, I'll journal. Positive emotions fill our inner tank with vitality and resilience. They boost our immune systems and can transform thinking. Play is just as important for you as it is for the children you teach. Do you make time to play? Could you take a more playful approach to your daily activities? I recommend you explore what makes you happy and do more of it. If that sounds silly or you don't have time, check out this online course in the Science of Happiness, which I completed last year. It is full of research that will motivate you to generate more gratitude and joy for yourself! It will engage you in valuable self-inquiry and offer you a wide range of practices to play with.

I cultivate self-compassion. It helps me feel my innate value and recognize the sacred journey of every life. It cultivates humility and respect for others' struggles and leaves me being kind by default, rather than just trying to be nice. According to Dr. Kristin Neff, self-compassion is a skill that can be learned by anyone. It involves generating feelings of kindness and care toward ourselves as imperfect human beings, and learning to be present with greater ease during life's inevitable struggles. It is an antidote to harsh self-criticism, making us feel connected to others when we suffer, rather than feeling isolated and alienated.
Unlike self-esteem, the good feelings of self-compassion do not depend on being special and better than other people; instead, they come from caring about ourselves and embracing our commonalities. Self-compassion is not self-pity, self-absorption, or self-indulgence. It is simply a mindset of caring and curiosity for our own process, which helps us develop the inner resources to be able to care about and serve others. The Dalai Lama's translator for many books, Thupten Jinpa, describes it as the instinctive ability to be kind and considerate to yourself – the "put on your oxygen mask first before helping others" approach to self-care – which makes a big difference when you are dealing with the demands of raising children, dealing with a difficult boss, or facing a relationship crisis. These are 3 practices that work for me (there are more to come :). They help me self-regulate, be mindful, and feel playful with whatever arises. If you know it's time to up your self-care in order to be the mindful, responsive teacher / person you'd like to be more often, I would be happy to support you with some ideas, suggestions, and coaching. Just email me and we will schedule a call. If there are many of you, we can schedule a conference call. Look forward to connecting! Leah @ move-with-me.com
Making appropriate parenting arrangements in family violence cases: applying the literature to identify promising practices
2.0 Literature review on impact of family violence
Family violence is considered to be any form of physical, sexual, emotional, or psychological abuse that occurs in the context of family relationships. The term family violence encompasses child abuse and neglect, spousal violence (intimate partner violence), and elder abuse. Throughout this document the term family violence is intended to be inclusive of all forms of abuse in the family, and the term spousal violence signifies abuse within the context of an intimate adult relationship. In the divorce literature, high–conflict couples are identified as those that require extensive and lengthy court involvement to resolve disputes post–separation. Family violence issues are present in a majority (but not all) of high–conflict separations (Jaffe, Austin, & Poisson, 1995; Johnston, 1994). This distinction is important because not all conflict can be deemed violence, but conversely, violence should not be euphemized as conflict. Family violence continues to negatively impact the healthy development of children and families across the country. In Canada, 27% of reported violent crime victims are victims of family violence, and similar rates have been documented in the US. In both countries the number of female victims outnumbers the number of male victims by at least 300% in the context of intimate violence (Statistics Canada, 2004a; Bureau of Justice Statistics, 2000). These rates are comparable to those found in Europe, although reports of the estimated prevalence of family violence vary due to differences in definitions, data sources, and sampling (Hagemann–White, 2001; Kury, Obergfell–Fuchs, & Woessner, 2004). For example, the British Crime Survey estimates that 26% of women and 17% of men are physically assaulted and/or threatened with violence by an intimate partner (Byron & Mirlees–Black, 1999). Similarly, estimates from the Australian Women's Safety Survey, which strictly focused on the prevalence of physical and sexual violence experienced by women and the nature of this violence, reported that 8% of women have experienced at least one incident of violence perpetrated by an intimate partner. These cross–national estimates capture the reported (actual or threatened) violent incidents from crime victim surveys. There continues to be debate within the research literature, and among practitioners and other members of the violence prevention community, about using official crime statistics versus random surveys as tools for determining the incidence and prevalence of family violence (Johnson & Bunge, 2001; Tjaden & Thoennes, 2000). There is general agreement that family violence is an underreported crime. There continues to be a lack of information nationally and cross–nationally regarding the likely number of unreported incidents, as well as the extent and trajectory of family violence. However, Canada has been a forerunner in collecting these data through methods other than crime surveys. Statistics Canada has completed several comprehensive telephone surveys on the topic of family violence (Statistics Canada 2001; 2004a, 2005).
While these surveys suggest that rates of victimization of intimate partners are similar to those in other cross–national samples, there is particularly rich additional information captured in these surveys, including trends, context, sentencing implications, family violence against children and youth, violence against older adults, and homicide risk. At one level, rates of victimization for females and males look very similar (7% of women vs. 6% of men reported being victims of an act of spousal violence in the previous five years); however, the additional contextual information identified important gender patterns in severity, impact and lethality of violence. Notably, these findings revealed that:
- Female victims of spousal violence were twice as likely to suffer ten or more incidents of violence in comparison to male victims (Statistics Canada, 2005).
- Female victims of spousal violence were significantly more likely than male victims to suffer injuries, require medical attention, lose time from work, live in fear, and worry about the safety of their children (Statistics Canada, 2005).
- Data from the Homicide Survey (Dauvergne, 2003) indicate that between 1993 and 2002, women were four times more likely to be killed by their spouse (8 female homicide victims per million couples compared to 2 male homicide victims per million couples).
- Cases of spousal homicide–suicide involve female spouses as the target in 97% of these cases (Statistics Canada, 2005).
The most recent survey completed looked at violence after separation and the association with child contact. Twenty–seven percent of estranged spouses with children under 18 years of age reported physical or sexual assault in the previous five years. More than twice as many abused spouses in comparison to non–abused spouses reported that their ex–spouse had no contact with the children (14% vs. 6%, Statistics Canada, 2005). Family violence has an impact on children both through direct child abuse and through the indirect impact of exposure to spousal violence. This impact on children has garnered heightened awareness as scholars and those in the broader spousal violence network continue to call for better answers about how to accurately measure the incidence, impact and prevalence of family violence, its impact on family dynamics, and how to create meaningful interventions (Mears & Visher, 2005). While there has been considerable progress in the identification of cases and coordination of community responses to family violence, there is still much to achieve. In particular, the complexity of family violence and its impact on all facets of family functioning and child development is the source of ongoing efforts to improve intervention and prevention. There is a growing awareness of the need for longitudinal research on the impact of family violence on children. Challenging but important issues to study include research into what happens to children after parental separation, and what the effects of different post–separation parenting arrangements are on children who have experienced family violence. While there have been numerous studies related to all forms of childhood victimization and its short term and long term effects on social, emotional, physical and psychological development, this research tends to be compartmentalized.
That is, "a relatively small proportion of studies concerned with childhood violence has assessed participants for exposure to multiple forms of violence, multiple incidents of the same type of violence, or exposure to potentially stressful or traumatic events other than violence" (Saunders, 2003, p. 359). This lack of research speaks to the complexity of family violence and how the effects of violence can vary greatly based on an array of variables. Cunningham & Baker's (2004) recent exhaustive review of family violence and child maltreatment literature proposed a model that examines the variables hypothetically associated with the impact of family violence (See Figure 1). This illustration captures the complexity and the substantial number of variables that must be taken into consideration when examining the impact of family violence. The literature related to child abuse is dominated by empirical studies that examine the characteristics, behavioural and emotional impact (immediate and long–term implications), developmental considerations, and societal consequences of child maltreatment. The majority of studies have shown that maltreated children, when compared to children who have not experienced maltreatment, are more likely to display major behaviour problems and emotional difficulties (Egeland, Yates, Appleyard, & van Dulmen, 2002; Jungmeen & Cicchetti, 2003; Maughan & Cicchetti, 2001; Hildyard & Wolfe, 2002), demonstrate more discipline problems at school (Kendall–Tackett & Eckenrode, 1996), are more aggressive towards their peers or more socially withdrawn (Shields & Cicchetti, 2001), have fewer social skills (Levendosky, Okun, & Parker, 1995) and are more likely to be rejected by their peers (Shields & Cicchetti, 2001). The serious long–term effects of child maltreatment have also been noted, including adverse mental health, physical health impairments and societal consequences (National Clearinghouse on Child Abuse and Neglect, 2004a; Higgins & McCabe, 2003; Johnson et al., 2002). While there have been a number of studies that document the characteristics and behavioural impact of child maltreatment, there has not yet been enough research conducted on the relationship between the characteristics of maltreatment and the development of behavioural or emotional problems over time (Ethier, Lemelin, & Lacharit, 2004). It is generally acknowledged in the literature that child abuse and its fundamental causes can be traced to various systems including the family, the community and larger society (Belsky, 1993; National Clearinghouse on Child Abuse and Neglect, 2004b). It is the family system and its impact on child maltreatment that is most important and pertinent to this discussion. In Canada, it is estimated that biological parents are responsible for the majority of child maltreatment, with approximately 90% of all instances of child abuse being committed by at least one biological parent (Trocme, MacLaurin, Fallon, Dciuk, Billingsley, Tourigny, et al., 2001). In addition, extensive research indicates perpetrators of spousal violence are significantly more likely than non–perpetrators to physically abuse their children (review in Bancroft & Silverman, 2002). However, separation of the maternal caregiver from her abusive partner significantly reduces the risk for child maltreatment when spousal violence is reported (Cox, Kotch, & Everson, 2003). 
In these cases, the identification of these families as high risk may facilitate appropriate intervention and safety planning for the caregivers and their children. Figure 1 - Variables Hypothetically Associated with Impact of Family Violence [ Description ] Source: Cunningham, A. & Baker, L. (2004). What about Me! Seeking to Understand the Child's View of Violence in the Family. Available at www.lfcc.on.ca/what_about_me.pdf There has been much research and policy focus on children exposed to spousal violence. The term "exposure" covers such a wide range of circumstances which include hearing a violent event, visually witnessing the event, intervening, being used as a part of a violent event (e.g., being used as a shield against abusive actions), and experiencing the aftermath of a violent event (Edleson, 1999c). The negative effects of childhood exposure to spousal violence have been presented in numerous studies and meta–analyses (Edleson, 1999a; Kitzmann, Gaylord, Holt, & Kenny, 2003; Wolfe, Crooks, Lee, McIntyre–Smith, & Jaffe, 2003). Most notably, research indicates that children exposed to spousal violence are more likely than other children to be aggressive and have behavioural problems (Graham–Bermann, 1998), have different physiological presentations (Saltzman, Holden, & Holahan, 2005), exhibit higher rates of Post–Traumatic Stress Disorder symptomatology (Kilpatrick, Litt, & Williams, 1997), are likely to try to intervene on behalf of the victimized parent (Peled, 1998), and may also develop a 'traumatic bond' (a longing for kindness, leading to confusion between love and abuse) with the perpetrator (Bancroft & Silverman, 2002). In some cases, children express preference to live with the abusive parent, who is perceived as more powerful. We are only beginning to understand the broader picture as it relates to children's exposure to spousal violence. Research related to the effects of being exposed to spousal violence has evolved over the past decade, but has largely relied on the reporting of victims or other adults (teachers, service providers, etc.) to identify the problematic effects using standardized measures (Ornduff & Monohan, 1999; Morrel, Dubowitz, Kerr, & Black, 2003). A recent review of available studies estimated that less than 20% (of 220 empirical studies) directly asked children for information (Cunningham & Baker, 2004). Recently, researchers have begun turning their attention to capturing children's voices and their experience of being exposed to violence. This research has shown that children usually are aware of the spousal violence that occurs in their family and also freely disclose incidents of their own abuse (Cunningham & Baker, 2004; Ornduff & Monahan, 1999; Holden, 2003). These first–person accounts from children that describe multiple forms of violence in the home converge with other research that indicates child maltreatment occurs most frequently in families where there is also spousal violence present (Edleson, 1999b; Hartley, 2002). Spousal violence and child abuse often occur in the same family and until recently very few interventions were targeted at addressing this duality in families (Straus & Gelles, 1990; Schechter & Edleson, 1999). The majority of studies reveal that in families where there is spousal violence or child maltreatment present, in 30% to 60% of the cases both forms of abuse exist (Edleson, 1999b; Appel & Holden, 1998). 
The impact on children in these families varies based on the degree and frequency of violence, how much is witnessed and how much is directly experienced, as well as risk and protective factors (Edleson, 2004). Risk factors such as young caregiver age, low education, low income, and lack of a social support network compound the risk for child abuse associated with spousal violence (Cox et al., 2003). Emerging Canadian interventions, such as the Caring Dads program, recognize this overlap by providing intervention for fathers who have maltreated their partners and children. This program addresses both spousal violence and child abuse (Scott & Crooks, 2004; Crooks, Scott, Francis, Kelly, & Reid, in press). The presence of spousal violence also increases the likelihood of the presence of violence and abuse between siblings (Hoffman & Edwards, 2004). There are few studies that document the incidence and prevalence of sibling abuse, with some researchers suggesting that there are no systematic studies that address the incidence and prevalence of sibling abuse and its impact on future adult functioning (Graham–Bermann, Cutler, Litzenberger, & Schwartz, 1994). One of the most reliable studies, conducted well over a decade ago, reported that sibling abuse is the most common form of violence in the family, with 8 out of 10 children reporting physical violence against a sibling (Gelles & Straus, 1988). In addition, parents may view the violence between siblings as mutual and therefore never really consider the possible perpetrator and victim roles that exist in sibling violence (Graham–Bermann et al., 1994). While some degree of intersibling aggression is normal, more severe sibling abuse is a cause for concern, especially in families where there are other family violence issues. Wiehe's (1997) study on severe sibling abuse described a cascading effect, with the oldest sibling targeting the second child, and this sibling attacking the next youngest child. In this same study, victims of severe sibling abuse reported that their self–esteem and their ability to trust others were negatively impacted, resulting in future problems such as depression, substance abuse, and poor intimate relationships. For siblings who have unresolved abusive relationships throughout childhood, the opportunity to develop mutually supportive and healthy adult intimate relationships may be compromised (Brody, 1998). Beyond abuse by a sibling, children can be affected by witnessing a parent abuse a sibling, regardless of whether they themselves are targeted for abuse. Although few studies have been done in this area, it seems likely that witnessing a sibling being abused by a parent figure threatens the emotional security a child experiences (Cummings & Davies, 1996; Davies, Harold, Goeke–Morey, Cummings, Shelton, & Tasi, 2002). That is, the child may have a secure relationship with the parent, but the experience of seeing a sibling victimized by that parent may profoundly shape the child's view of the world and relationships. In this case, the child may be physically safe, but may suffer from anxiety related to the possibility that he or she might be a future victim. Furthermore, the observer child may feel guilty about being safe, or conversely, come to see the victimized child as deserving of the abuse, in order to make sense of the violence. The serious implications for children who are maltreated or exposed to spousal abuse have been well documented.
There are a number of studies which indicate that not all children who directly and indirectly experience family violence later develop severe emotional and behavioural problems (National Clearinghouse for Child Abuse and Neglect Information, 2004b). Cunningham & Baker (2004) caution against making assumptions that (1) all children are negatively affected by spousal violence, (2) all children are affected in the same way and (3) that spousal violence should be the sole focus of interventions. Outcomes of individual cases vary widely and are affected by a combination of factors, including the child's age and developmental status when the abuse or neglect occurred, the type of abuse (physical abuse, neglect, sexual abuse, etc.), frequency, duration, and severity of abuse, and the relationship between the victim and the abuser (Chalk, Gibbons, & Scarupa, 2002). These varying outcomes can be seen in families where children have similar risk factors and exposure experiences, but have very different short–term and long–term outcomes. Researchers have begun to explore why some children experience long–term consequences of abuse and neglect while others emerge relatively unharmed under similar circumstances. The ability to cope effectively following a negative experience is sometimes referred to as "resilience." A number of protective factors may contribute to an abused or neglected child's resilience. These include individual characteristics, such as optimism, self–esteem, intelligence, creativity, humour, and independence; parent or family factors such as extended family support, highly educated parents, household rules and boundaries, and a caring adult in the child's life; and social factors such as community well–being, including neighborhood stability and access to health care (National Clearinghouse for Child Abuse and Neglect Information, 2004b). In the same way that there is variability among outcomes for children, there is also great variability among the patterns and contexts of violence between adults in a relationship. A thoughtful analysis of the impact of family violence must consider typologies of violence and the various contexts in which spousal violence can occur. A number of helpful typologies have been developed. The different types of spousal violence have different expectations of future dangerousness and require different social and legal interventions. Johnston and Campbell (1993) were among the first to offer a model for understanding different patterns of spousal violence within high–conflict divorcing families, operating under the assumption that spousal violence arises from multiple sources and follows different patterns in different families. Recognizing that theories from the literature related to family violence are numerous (psychodynamic, biological, family systems, sociopolitical, etc.), these researchers created linkages between these theories to create five categories of spousal violence (with special consideration given to paranoid and psychotic forms of violence). These five types include: - 1. Ongoing / Episodic Male Battering - This type of violence most closely resembles the traditional understanding about batterers as it relates to the cycle of violence theory. Men's perpetration of violence is attributed to "their low tolerance for frustration, their problems with impulse control, and their angry, possessive, or jealous reactions to any perceived threat to their potency, masculinity and 'proprietary male rights'."(p. 193). 
These men generally are a threat to women, and over time their propensity to use violence increases with the threat of separation and long after separation. - 2. Female Initiated Violence - Women's use of violence (not in the context of self–defense) is seen as a reaction to their own stress and tension. While women demonstrate physical, emotional and verbal abuse within relationships, these acts do not affect the power differential between partners (in relation to perceived or actual power and control dynamics between partners). - 3. Male Controlling Interactive Violence - This type of violence most closely resembles what has come to be known as "mutual violence". This type of violence arises out of a mutual disagreement or verbal altercation and escalates into a physical struggle. It should be noted that the term "mutual violence" is not without controversy, as most advocates and others working in the anti–violence field acknowledge that context and power dynamics are not often recognized in the understanding of this type of violence. Indeed, the name of this category has been identified as problematic due to the seeming paradox of "interactive" and "male controlling" (see Bancroft, 1998 for critique). - 4. Separation / Divorce Trauma - This category refers to acts of violence which only occur about the time of separation, but were not present in the relationship prior to separation. Often, after an escalation of outrage, anger and abandonment, physical violence is typically perpetrated by the partner who is being 'left'. The violence does not develop into an ongoing pattern of violence, but stops following a few isolated incidents at the height of the separation. - 5. Psychotic / Paranoid - The fifth category addresses violence that is associated with psychotic or paranoid reactions due to mental illness or "drug–induced dementia." Psychiatric treatment is recommended as the preferred intervention. However, Bancroft's critique (1998) notes that a person who batters and also has a mental health problem may have two important issues requiring multiple intervention strategies. Furthermore, treating the mental health problem alone may not eliminate spousal violence. Bancroft further argues that a perpetrator of spousal violence who has co–existing mental health problems may require an approach similar to the one needed for the substance abusing batterer; that is, both problems need to be specifically addressed in intervention. Frederick and Tilley (2001) contend that "in order to intervene effectively, it is important to understand the (1) intent of the offender, (2) the meaning of the act to the victim and (3) the effect of the violence on the victim." (p. 1). They describe 5 contexts that must be considered when gathering historical information about spousal violence in a family. Thus, any act of physical aggression must be evaluated in the larger context of these factors. These include: - 1. Generally violent (a "fighter") - Some people are violent regardless of the context. These are people who use violence in situations inside and outside of the family to resolve conflict or to satisfy aggressive impulses. - 2. Battering - Battering consists of not only acts of violence and abuse, but is a component of a larger system of intimidation, control and isolation that purposefully puts the victim at a power disadvantage, severely compromising the victim's independence, self–esteem and safety. While some batterers are also "fighters," many are violent only in a familial setting. - 3. 
Isolated act (not a "batterer") - The use of violence is highly uncharacteristic and not used in the relationship to exert power or control. The violent incident may occur in a highly stressful situation, and the perpetrator normally recognizes the behaviour as inappropriate. - 4. Mental incapacitation - Mental illness, substance use and dependency, and medications contribute to the use of violence. For perpetrators who have some mental health impairment, their use of violence in a relationship may be illegal, but may reflect their mental health issues. - 5. Responsive to battering (self–defensive) - Self–defensive violence is always in response to a partner's violence or threat of violence. The use of violence by this person is not part of an attempt to gain control of the relationship, but rather is an attempt to protect oneself or gain control in a particular violent situation. Depending on the combination of the type of violence and the context, each situation can call for a different response from the systems involved (criminal justice, civil justice including family law and child protection, health care, etc.). Also, perpetrators of violence can fit into more than one context (i.e., they can be a batterer and also be generally violent). Another researcher who has argued for delineation of different patterns of spousal violence is Michael Johnson (Johnson, 1995; Johnson & Ferraro, 2000). His early work identified the important distinction between patriarchal terrorism and common couple violence. More recently, LaViolette has extended this framework to develop a continuum of aggression and abuse. This continuum conceptualizes spousal violence ranging from common couple aggression to terrorism/stalking (LaViolette, 2005). LaViolette has hypothesized a number of dimensions upon which the five (Johnson) types differ, including the contextual factors identified by Frederick and Tilley (2001). Figure 2 depicts this continuum and the characteristics of each type of aggression / abuse. Understanding the differences among these types of violence provides an important foundation for assessing the appropriateness of a particular post–separation parenting arrangement. Examination of the various patterns of family violence also highlights gender differences that need to be discussed. A gendered analysis of family violence is a controversial topic that tends to divide both practitioners and researchers. There is no doubt that male–perpetrated violence against women is most often reported to police, results in more serious physical injury, is associated with fear and concern about children's well–being, and accounts for the majority of domestic homicides (Statistics Canada, 1999; Tjaden & Thoennes, 2000; Ontario Domestic Violence Death Review Committee, 2004; Washington State Fatality Review Committee, 2004). At the same time, not all female–perpetrated violence is in self–defense, and it is generally accepted that males are more hesitant to report victimization experiences to authorities. Furthermore, although male domestic homicide victims constitute a minority of intimate partner homicide victims, these cases present the same challenges for early identification and prevention. Their victimization can have the same profound impact on children and extended family members. Most recognized experts in the field would agree that one death is one too many, and there is a paucity of research on violent relationships in which the female partner is the primary perpetrator.
A similar gap exists for understanding same–sex intimate partner violence; this violence is underreported due to the need to disclose both intimate violence and sexual orientation to authorities who may be perceived to be homophobic.
Rowlandson, Mary White
This is the story of Mary Rowlandson's capture by American Indians in 1675. It is a blunt, frightening, and detailed work with several moments of off-color humor. Mary, the wife of a minister, was captured by Natives during King Philip's War while living in the town of Lancaster, most of which was destroyed and its people murdered. See through her eyes, which depict Indians as the instruments of Satan. Her account was a best-seller of the era and a seminal work, being one of the first captivity narratives ever published by a woman. Without works such as hers, there would likely not be many modern works inspired by similar themes, such as The Searchers, starring John Wayne.

Life on the Mississippi is a memoir by Mark Twain detailing his days as a steamboat pilot on the Mississippi River before the American Civil War.

Roughing It is semi-autobiographical travel literature written by American humorist Mark Twain. It was authored during 1870–71 and published in 1872 as a sequel to his first book, Innocents Abroad. This book tells of Twain's adventures prior to the pleasure cruise related in Innocents Abroad.

Patterson, John Henry
In 1898, during the construction of a river-crossing bridge for the Uganda Railway at the Tsavo River, as many as 135 railway workers were attacked at night, dragged into the wilderness, and devoured by two male lions. The Man-Eaters of Tsavo is the autobiographical account of Royal Engineer Lt. Col. J.H. Patterson's African adventures, among them his hunt for the two man-eaters. This book was the basis for the 1996 film The Ghost and the Darkness.

In 1879 John Muir went to Alaska for the first time. Its stupendous living glaciers aroused his unbounded interest, for they enabled him to verify his theories of glacial action. Again and again he returned to this continental laboratory of landscapes. The greatest of the tide-water glaciers appropriately commemorates his name. Upon this book of Alaska travels, all but finished before his unforeseen departure, John Muir expended the last months of his life.

Higginson, Thomas Wentworth
These pages record some of the adventures of the First South Carolina Volunteers, the first slave regiment mustered into the service of the United States during the late civil war. It was, indeed, the first colored regiment of any kind so mustered, except a portion of the troops raised by Major-General Butler at New Orleans. These scarcely belonged to the same class, however, being recruited from the free colored population of that city, a comparatively self-reliant and educated race.

The brothers Orville (1871–1948) and Wilbur (1867–1912) Wright made the first controlled, powered, and sustained heavier-than-air flight on 17th December 1903. They were not the first to build and fly aircraft, but they invented the controls that were necessary for a pilot to steer the aircraft, which made fixed-wing powered flight possible. The Early History of the Airplane consists of three short essays about the beginnings of human flight. The second essay retells the first flight: "This flight lasted only 12 seconds, but it was nevertheless the first in the history of the world in which a machine carrying a man had raised itself by its own power into the air in full flight, had sailed forward without reduction of speed and had finally landed at a point as high as that from which it started."

Dawson, Sarah Morgan
Sarah Morgan Dawson was a young woman of 20 living in Baton Rouge, Louisiana, when she began this diary.
The American Civil War was raging. Though at first the conflict seemed far away, it would eventually be brought home to her in very personal terms. Her family's loyalties were divided. Sarah's father, though he disapproved of secession, declared for the South when Louisiana left the Union. Her eldest brother, who became the family patriarch when their father died in 1861, was for the Union, though he refused to take up arms against his fellow Southerners. The family owned slaves, some of whom are mentioned by name in this diary. Sarah was devoted to the Confederacy, and watched its demise with sorrow and indignation. Her diary, written from March 1862 to June 1865, discourses on topics ranging from ones as normal as household routines and romantic intrigues to ones as unsettling as concern for her brothers who fought in the war. Largely self-taught, she describes in clear and inviting prose fleeing Baton Rouge during a bombardment, suffering a painful spinal injury when adequate medical help was unavailable, the looting of her home by Northern soldiers, the humiliation of life under General Butler in New Orleans, and dealing with privations and displacement in a region torn by war. She was a child of her time and place. Her inability to see the cruelty and indignity of slavery grates harshly on the modern ear. Regardless of how one feels about the Lost Cause, however, Sarah's diary provides a valuable historical perspective on life behind the lines of this bitter conflict.

A Voyage to the South Sea, undertaken by command of His Majesty, for the purpose of conveying the Bread-fruit tree to the West Indies, in His Majesty's ship The Bounty, commanded by Lieutenant William Bligh. Including an account of the Mutiny on board the said ship, and the subsequent voyage of part of the crew, in the ship's boat, from Tofoa, one of the Friendly Islands, to Timor, a Dutch settlement in the East Indies. (Summary is the full title)

The title is, I think, self-explanatory. The nurse in question went out to France at the beginning of the war and remained there until May 1915, after the second battle of Ypres, when she went back to a Base Hospital and the diary ceases. Although written in diary form, it is clearly taken from letters home and gives a vivid if sometimes distressing picture of the state of the casualties suffered during that period. After a time at the General Hospital in Le Havre she became one of the three or four sisters working on the ambulance trains which fetched the wounded from the Clearing Hospitals close to the front line and took them back to the General Hospitals in Boulogne and Le Havre. Towards the end of the account she was posted to a Field Ambulance (station) close to Ypres.

Pengilly, Mary Huestis
Mary Pengilly was taken to a Lunatic Asylum by her sons, where she kept the diary from which this book is taken. Mary records the harsh conditions and treatments received at the hands of the nurses during her stay. Once released, Mary takes it upon herself to make the authorities aware of the situation at the Provincial Lunatic Asylum.

A sailing voyage from England to Portugal in the mid-eighteenth century, by one of the premier humorists, satirists, novelists, and playwrights of his age. It was to be his last work, as his failing health did not allow him to survive much longer after the voyage.

Beeston, Joseph Lievesley
A Narrative of Personal Experiences of the Officer Commanding the 4th Field Ambulance, Australian Imperial Force.
From his leaving Australia in December 1914 till his evacuation due to illness after five months at Gallipoli. Read to remember those who were there.

Leander Stillwell was an 18-year-old Illinois farm boy, living with his family in a log cabin, when the U.S. Civil War broke out. Stillwell felt a duty "to help save the Nation;" but, as with many other young men, his patriotism was tinged with bravura: "the idea of staying at home and turning over senseless clods on the farm with the cannon thundering so close at hand . . . was simply intolerable." Stillwell volunteered for the 61st Illinois Infantry in January 1862. His youthful enthusiasm for the soldier's life was soon tempered at Shiloh, where he first "saw a gun fired in anger," and "saw a man die a violent death." Stillwell's recounting of events is always vivid, personal, and engrossing. "I distinctly remember my first shot at Shiloh . . . The fronts of both lines were . . . shrouded in smoke. I had my gun at a ready, and was trying to peer under the smoke in order to get a sight of our enemies. Suddenly I heard someone in a highly excited tone calling to me from just in my rear, --'Stillwell! Shoot! Shoot! Why don't you shoot?' I looked around and saw that this command was being given by . . . our second lieutenant, who was wild with excitement, jumping up and down like a hen on a hot griddle. 'Why, lieutenant,' I said, 'I can't see anything to shoot at.' 'Shoot, shoot, anyhow!' 'All right,' I responded. . . And bringing my gun to my shoulder, I aimed low in the direction of the enemy, and blazed away through the smoke. But at the time the idea to me was ridiculous that one should blindly shoot into a cloud of smoke without having a bead on the object to be shot at." The Story of a Common Soldier is a compelling coming-of-age tale that will appeal not only to Civil War buffs but to anyone who enjoys autobiographies. Written at the urging of his youngest son, when Stillwell was a mature man (a lawyer, judge, and member of the Kansas legislature), it combines graphic detail (provided by his war diary and letters written at the time to his family) with the insights of a thoughtful man looking back on those horrific times.

An intensely personal account of the immigration experience as related by a young Jewish girl from Plotzk (a town in the government of Vitebsk, Russia). Mary Antin, with her mother, sisters, and brother, set out from Plotzk in 1894 to join their father, who had journeyed to the "Promised Land" of America three years before. Fourth-class railroad cars packed to suffocation, corrupt crossing guards, luggage and persons crudely "disinfected" by German officials who feared the cholera, locked "quarantine" portside, and, finally, the steamer voyage and a family reunited. For anyone who has ever wondered what it was like for their grandparents or great-grandparents to emigrate from Europe to the United States last century, this is a fascinating narrative. Mary Antin went on to become an immigration rights activist. She also wrote an autobiography, The Promised Land, published in 1912, which detailed her assimilation into American culture.

Watkin Tench was an officer of the British Marines in the First Fleet to settle NSW. This is an interesting and entertaining account of his experiences during that time.

Written for the Atlantic magazine in 1877, this is a collection of stories about a trip Mark Twain made with some friends to Bermuda.

American novelist Edith Wharton was living in Paris when World War I broke out in 1914.
She obtained permission to visit sites behind the lines, including hospitals, ravaged villages, and trenches. Fighting France records her travels along the front in 1914 and 1915, and celebrates the indomitable spirit of the French people.

Hambleton, Chalkley J.
"Early in the summer of 1860, I had an attack of gold fever. In Chicago, the conditions for such a malady were all favorable. Since the panic of 1857 there had been three years of general depression, money was scarce, there was little activity in business, the outlook was discouraging, and I, like hundreds of others, felt blue." Thus Chalkley J. Hambleton begins his pithy and engrossing tale of participation in the Pike's Peak gold rush. Four men in partnership hauled 24 tons of mining equipment by ox cart across the Great Plains from St. Joseph, Missouri, to Denver, Colorado. Hambleton vividly recounts their encounters with buffalo herds, Indians, and "the returning army of disappointed gold seekers." Setting up camp near Mountain City, Colorado, Hambleton watched one man wash "several nice nuggets of shining gold" from the dirt and gravel, only to learn afterwards that "these same nuggets had been washed out several times before, whenever a 'tenderfoot' would come along, who it was thought might want to buy a rich claim." Two years later, "tired and disgusted with the whole business," Hambleton returned to Chicago, where he arrived "a wiser if not richer man." In later years, Hambleton was a prominent Chicago lawyer, real estate developer, and a member of the Chicago Board of Education. He wrote this candid account for family and friends, publishing it privately in 1898. It is based in good part on letters he had sent from the gold fields to his sister. Summing up his experience with wry humor, he writes: "After selling out my interest in the joint enterprise, I still had left some fifty claims on various lodes . . . Some time after returning to Chicago, I was making a real estate trade . . . and I threw in these fifty gold mines. . . Had I only kept them, and gotten up some artistic deeds of conveyance, in gilded letters, what magnificent wedding presents they would have made. . . In the long list of high-sounding, useless presents, the present of a gold mine would have led all the rest."

Grenfell, Sir Wilfred
This autobiographical work describes the author's harrowing experience of being caught on a small drifting piece of ice while crossing a frozen bay by dog team on the Northern Peninsula of Newfoundland.

Do you love books? No, I mean REALLY love books? This is a series of sketches on the delights, adventures, and misadventures connected with bibliomania (bibliomania is characterized by the collecting of books which have no use to the collector nor any great intrinsic value to a genuine book collector; the purchase of multiple copies of the same book and edition, and the accumulation of books beyond possible capacity of use or enjoyment, are frequent symptoms of bibliomania). The author wholeheartedly enjoyed this pursuit all his life, and his descriptions are delightful to read. Anyone who has lovingly held a book, smelled it, and enjoyed it for being just what it is will understand what the author puts so well. According to the author, collectors may be grouped in three classes: those who collect from vanity, those who collect for the benefits of learning, and those who collect out of veneration and love for books. Mr. Field fell squarely in the latter category.
Wright, Jacob William
Short memory of boyhood by a little-known American poet based in Carmel-By-The-Sea, California.

Grant, Ulysses S.
In preparing these volumes for the public, I have entered upon the task with the sincere desire to avoid doing injustice to any one, whether on the National or Confederate side, other than the unavoidable injustice of not making mention often where special mention is due. There must be many errors of omission in this work, because the subject is too large to be treated of in two volumes in such way as to do justice to all the officers and men engaged. There were thousands of instances, during the rebellion, of individual, company, regimental and brigade deeds of heroism which deserve special mention and are not here alluded to. The troops engaged in them will have to look to the detailed reports of their individual commanders for the full history of those deeds.

Harriet Jacobs' autobiography, written under the pseudonym Linda Brent, details her experiences as a slave in North Carolina, her escape to freedom in the north, and her ensuing struggles to free her children. The narrative was partly serialized in the New York Tribune, but was discontinued because Jacobs' depictions of the sexual abuse of female slaves were considered too shocking. It was published in book form in 1861.

These Reminiscences were written and published by the Author in his fiftieth year, shortly before he started on a trip to Europe and America for his failing health in 1912. It was in the course of this trip that he wrote for the first time in the English language for publication. (from preface)

Xenophon the Athenian was born in 431 B.C. He was a pupil of Socrates. He marched with the Spartans, and was exiled from Athens. Sparta gave him land and property in Scillus, where he lived for many years before having to move once more, to settle in Corinth. He died in 354 B.C. "Anabasis" is a Greek word which means "journey from the coast to the center of a country." This is Xenophon's account of his march to Persia with a troop of Greek mercenaries to aid Cyrus, who enlisted Greek help to try and take the throne from his brother Artaxerxes, and of the ensuing return of the Greeks, in which Xenophon played a leading role. This occurred between 401 B.C. and March 399 B.C. H. G. Dakyns lived from 1838 to 1911.

Booker T. Washington
Up from Slavery is the 1901 autobiography of Booker T. Washington, sharing his personal experience of having to work to rise up from the position of a slave child during the Civil War, to the difficulties and obstacles he overcame to get an education at the new Hampton Institute, to his work establishing the Tuskegee Institute in Alabama to help black people learn useful, marketable skills and work to pull themselves up by the bootstraps. He reflects on the generosity of both teachers and philanthropists who helped in educating blacks and Native Americans. He describes his efforts to instill manners, breeding, health, and a feeling of dignity in students. (Mark Nelson)

De Quincey, Thomas
"Thou hast the keys of Paradise, O just, subtle, and mighty Opium!" Though apparently presenting the reader with a collage of poignant memories, temporal digressions and random anecdotes, the Confessions is a work of immense sophistication and certainly one of the most impressive and influential of all autobiographies. The work is of great appeal to the contemporary reader, displaying a nervous (postmodern?)
self-awareness, a spiralling obsession with the enigmas of its own composition and significance. De Quincey may be said to scrutinise his life, somewhat feverishly, in an effort to fix his own identity. The title seems to promise a graphic exposure of horrors, but such passages do not make up a large part of the whole. The circumstances of its hasty composition set up the work as a lucrative piece of sensational journalism, albeit published in a more intellectually respectable organ – the London Magazine – than are today's tawdry exercises in tabloid self-exposure. What makes the book technically remarkable is its use of a majestic neoclassical style applied to a very romantic species of confessional writing - self-reflexive but always reaching out to the Reader.

Famed American humorist Washington Irving published a series of short stories telling of his adventures traveling from America to England. This volume contains some of his observations about that trip, including his impressions of the English countryside, the differences between the wealthy and the poor, rural customs, and other aspects of British culture. During a visit to the library located in the depths of Westminster Abbey, Irving muses on the issue of why some examples of English literature stand the test of time, while others are lost to history. The collection concludes with Irving's memories of his visit to Stratford-on-Avon, the home of William Shakespeare, and the nearby communities that influenced some of Shakespeare's work. (Greg Giordano)

Richard Henry Dana, Jr.
While there are many books upon the subject of sea life, there are few that can compare with Two Years Before the Mast. It is the story of a sailor's life from the forecastle, told not by a captain or a passenger, but by a regular hand. The book was a great favorite with the jack tar of Dana's day, and two thousand copies are said to have been sold to Liverpool sailors in a single day. Even those who haven't the faintest idea what reefing a topsail is, or which is starboard and which larboard, will find it an engaging story of an era long past, told in simple narrative style.

Henry Ford profiles the events that shaped his personal philosophy, and the challenges he overcame on the road to founding the Ford Motor Company. Throughout his memoir, he stresses the importance of tangible service and physical production over relative value as judged by profits and money. He measures the worth of a business or government by the service it provides to all, not the profits in dollars it accumulates. He also makes the point that only service can provide for human needs, as opposed to laws or rules, which can only prohibit specific actions and do not provide for the necessaries of life. Ford applies his reasoning to the lending system, the transportation industry, international trade, and interactions between labor and management. For each, he proposes solutions that maximize service and provide goods at the lowest cost and highest quality. He analyzes from a purely material viewpoint, going as far as to argue that the need for a good feeling in work environments may reflect a character flaw or weakness. However, his unflinching focus on the ultimate material products and necessities of life provides clever insights into how he created an efficient and flexible system for providing reliable transportation for the average person.
Shackleton's most famous expedition was planned as an attempt to cross Antarctica from the Weddell Sea, south of the Atlantic, to the Ross Sea, south of the Pacific, by way of the Pole. It set out from London on 1 August 1914, and reached the Weddell Sea on 10 January 1915, where the pack ice closed in on the Endurance. The ship was broken by the ice on 27 October 1915. The 28 crew members managed to flee to Elephant Island, bringing three small boats with them. Shackleton and five other men managed to reach the southern coast of South Georgia in one of the small boats (a real epic journey). Shackleton managed to rescue all of the stranded crew from Elephant Island without loss in the Chilean navy's seagoing steam tug Yelcho, on 30 August 1916, in the middle of the Antarctic winter.

"Beasts, Men and Gods" is an account of an epic journey, filled with perils and narrow escapes, in the mold of "The Lord of the Rings." The difference is: it's all true. Ferdinand Ossendowski was a Pole who found himself in Siberia and on the losing side during the Bolshevik Revolution. To escape being rounded up and shot, he set out with a friend to reach the Pacific, there to take ship back to Europe. During his journey he fell in with dozens of other military men who shared the same objective… but nearly every one of them perished on the way. It's up to you to decide whether Ossendowski was threatened most by the beasts, by the men, or by the gods, or indeed by the severe and uncompromising landscapes of Siberia, Mongolia, and China. That he survived at all seems improbable. The mystical mysteries and magics of Buddhism, "The Yellow Faith", were woven about and through his sojourn and had no little part in his survival. Time after time he was put in the delicate position of being the bargainer between warring groups, and ultimately, only incredible luck and his friendship with the Hutuktu of Narabanchi Monastery saw him through. When published in the United States, this book caused a sensation and became a best-seller.

Frances Anne "Fanny" Kemble
Fanny Kemble was a British actress who married mega-plantation owner Pierce Butler of Georgia. During her marriage she kept journals of everyday life, and after some years grew to detest the institution of slavery and the things Butler stood for. Kemble eventually divorced him, but it wasn't until after the Civil War had started that she published her journal about her observations and the experiences of the hundreds of African American slaves owned by her ex-husband.

Chopin was a Romantic-era Polish composer. This work is a memoir by Liszt, who knew Chopin both as man and artist. The memoir gives a unique understanding of the psychological character of Chopin's compositions. It also offers Liszt's insight into some of Chopin's polonaises, especially the grand polonaise in F sharp minor. Liszt explains the strange emotion "ZAL" which is enclosed in his compositions. He then presents a brief sketch of the lives of other great people in Chopin's circle. After that, Liszt discusses Chopin's fame and early life. Finally, Liszt gives a detailed account of Chopin's sufferings due to ill health and the unfortunate departure of the great composer.

Henry David Thoreau
On August 31, 1846, twenty-nine-year-old Henry David Thoreau left his cabin on Walden Pond to undertake a railroad and steamboat journey to Bangor, Maine, from where he would venture with his Penobscot guide Joe Polis deep into the backwoods of Maine.
This account of his expedition, some think, is a profounder exploration of the philosophical themes of the more famous "Walden" than is the latter book, at least revealing his fundamental perspectives in embryonic form. Of particular interest is his sympathetic and penetrating observation of the Indian nations of Maine, especially the Penobscot and Passamaquoddy. Early in his life Samuel Butler began to carry a note-book and to write down in it anything he wanted to remember; it might be something he heard someone say, but more commonly it was something he said himself. In one of these notes he gives a reason for making them: "One's thoughts fly so fast that one must shoot them; it is no use trying to put salt on their tails." So he bagged as many as he could hit and preserved them, re-written on loose sheets of paper which constituted a sort of museum stored with the wise, beautiful, and strange creatures that were continually winging their way across the field of his vision. As he became a more expert marksman his collection increased and his museum grew so crowded that he wanted a catalogue. In 1874 he started an index, and this led to his reconsidering the notes, destroying those that he remembered having used in his published books and re-writing the remainder. The re-writing shortened some but it lengthened others and suggested so many new ones that the index was soon of little use and there seemed to be no finality about it. In 1891 he attacked the problem afresh and made it a rule to spend an hour every morning re-editing his notes and keeping his index up to date. At his death, in 1902, he left five bound volumes, with the contents dated and indexed, about 225 pages of closely written sermon paper to each volume, and more than enough unbound and unindexed sheets to make a sixth volume of equal size. Cellini's autobiographical memoirs, which he began writing in Florence in 1558, give a detailed account of his singular career, as well as his loves, hatreds, passions, and delights, written in an energetic, direct, and racy style. They show a great self-regard and self-assertion, sometimes running into extravagances which are impossible to credit. He even writes in a complacent way of how he contemplated his murders before carrying them out. Parts of his tale recount some extraordinary events and phenomena, such as his stories of conjuring up a legion of devils in the Colosseum, after one of his not innumerous mistresses had been spirited away from him by her mother; of the marvelous halo of light which he found surrounding his head at dawn and twilight after his Roman imprisonment, and his supernatural visions and angelic protection during that adversity; and of his being poisoned on two separate occasions. The autobiography is a classic, and commonly regarded as one of the most colourful; it is certainly the most important autobiography from the Renaissance. Cellini's autobiography is one of the books Tom Sawyer mentions as inspiration while freeing Jim in The Adventures of Huckleberry Finn. Jack London died at the age of forty. In this autobiographical work, London describes his life as seen through the eyes of John Barleycorn (alcohol). There is much controversy about the cause of his death, just as there is about alcoholism and addiction. London's brutally frank and honest analysis of his own struggles and bouts with alcohol was well ahead of its time, anticipating more modern theories of addiction.
With remarkable candor and insight, London describes the demons and gods he encountered through both friend and enemy, John Barleycorn. Jack London credited his skill of story-telling to the days he spent as a hobo learning to fabricate tales to get meals from sympathetic strangers. In The Road, he relates the tales and memories of his days on the hobo road, including how the hobos would elude train crews and his travels with Kelly's Army. Churchill, Winston S. When the self-proclaimed Mahdi ("Guided One") gathered Islamic forces and kicked the Anglo-Egyptians out of the Sudan, he unleashed a backlash. With the image of the heroic General Charles Gordon dying at Khartoum, the British public was ready to support a war to reclaim the lost territories. And when the political time was right, a British-Egyptian-Sudanese expedition led by the redoubtable Herbert Kitchener set out to do just that. The river involved was the Nile. For millennia, its annual flood has made habitable a slender strip, through hundreds of miles of desert, between its tributaries and its delta. Through this desolate region, man and beast struggled to supply the bare essentials of life. Through this same region, the expedition had to find and defeat an enemy several times larger than itself. The young Churchill was eager to gain war experience to aid his career, and so he wangled a transfer to the 21st Lancers and participated in the last successful cavalry charge the world ever saw, in the climactic battle of Omdurman. He also had a position as war correspondent for the Morning Post, and on his return to England he used his notes to compose this book. 'Roughing It in the Bush' is Susanna Moodie's account of how she coped with the harshness of life in the woods of Upper Canada, as an Englishwoman homesteading abroad. Her narrative was constructed partly as a response to the glowing falsehoods European land-agents were circulating about life in the New World. Her chronicle is frank and humorous, and was a popular sensation at the time of its publication in 1852. This is Volume 2 of Aubrey's sparkling, gossipy biographical pieces on his contemporaries, including Bacon, Jonson and Shakespeare. Brief Lives' glimpses into the unofficial side of these towering figures have won it an undying popularity, and Ruth Scurr's recent reimagined "autobiography" of Aubrey has breathed new life into this classic for the next generation of readers. In 1892, anarchist and Russian émigré Alexander Berkman was apprehended for the attempted assassination of industrialist Henry Clay Frick. This was a retaliatory act meant to incite revolution against those who had violently suppressed the Homestead Steel Strike — but for Berkman, it was a crime that ultimately led to his 14-year incarceration in Pennsylvania's notorious Western Penitentiary. First published by Emma Goldman's Mother Earth Press, Prison Memoirs of an Anarchist is a classic of autobiographical literature that recounts his experiences in the brutal, dehumanizing world of America's prison system. (ChuckW) Sarah Emma Edmonds The "Nurse and Spy" is simply a record of events which have transpired in the experience and under the observation of one who has been on the field and participated in numerous battles—among which are the first and second Bull Run, Williamsburg, Fair Oaks, the Seven Days in front of Richmond, Antietam, and Fredericksburg—serving in the capacity of "Spy" and as "Field Nurse" for over two years.
While in the "Secret Service" as a "Spy," which is one of the most hazardous positions in the army—she penetrated the enemy's lines, in various disguises, no less than eleven times; always with complete success and without detection. Her efficient labors in the different Hospitals as well as her arduous duties as "Field Nurse," embrace many thrilling and touching incidents, which are here most graphically described. The Cruise of the Snark (1913) is a memoir of Jack and Charmian London's 1907-1909 voyage across the Pacific. His descriptions of "surf-riding", which he dubbed a "royal sport", helped introduce and popularize it on the mainland. London writes: Through the white crest of a breaker suddenly appears a dark figure, erect, a man-fish or a sea-god, on the very forward face of the crest where the top falls over and down, driving in toward shore, buried to his loins in smoking spray, caught up by the sea and flung landward, bodily, a quarter of a mile. It is a Kanaka on a surf-board. And I know that when I have finished these lines I shall be out in that riot of colour and pounding surf, trying to bit those breakers even as he, and failing as he never failed, but living life as the best of us may live it. (Excerpted from Wikipedia) Leonardo da Vinci This is a compilation of the thoughts on art, science and life of Leonardo da Vinci, translated by Maurice Baring and edited by Lewis Einstein. E. E. Cummings "For this my son was dead, and is alive again; he was lost; and is found." He was lost by the Norton-Harjes Ambulance Corps. He was officially dead as a result of official misinformation. He was entombed by the French Government. It took the better part of three months to find him and bring him back to life—with the help of powerful and willing friends on both sides of the Atlantic. This is his story. Prof. Hiram Bingham of Yale Makes the Greatest Archaeological Discovery of the Age by Locating and Excavating Ruins of Machu Picchu on a Peak in the Andes of Peru. There is nothing new under the sun, they say. That is only relatively true. Just now, when we thought there was practically no portion of the earth's surface still unknown, when the discovery of a single lake or mountain, or the charting of a remote strip of coast line was enough to give a man fame as an explorer, one member of the daredevil explorers' craft has "struck it rich." Struck it so dazzlingly rich, indeed, that all his confrères may be pardoned if they gnash their teeth in chagrin and turn green with envy. The lucky man is Prof. Hiram Bingham of Yale, he whose hobby is South America. He has just announced that he has had the superb good fortune to discover an entire city, two thousand years old, a place of splendid palaces and temples and grim encircling walls, hidden away so thoroughly on the top of a well-nigh inaccessible mountain peak of the Peruvian Andes that the Spanish invaders of four hundred years ago never set eyes upon it. He calls it Machu Picchu. (From New York Times, June 15, 1913) One hundred years ago in the summer of 1911, Bingham discovered Machu Picchu, returning in the summer of 1912 to excavate under the auspices of Yale and The National Geographic Society, and coming home to great acclaim and a spate of published articles and photos. He fully described the 1911 expedition and original find in his 1922 book Inca Land: Explorations in the Highlands of Peru. Help the LibriVox 2012 World Tour celebrate South America in September 2012!
(ToddHW) Plunkitt, George Washington "I seen my opportunities and I took 'em," said George Washington Plunkitt of Tammany Hall. There's good graft and bad graft, according to Plunkitt. Listen to this candid discourse from a 19th-century politician, and decide for yourself if things have changed. This book tells of a girl named Alice falling through a rabbit hole into a fantasy world populated by peculiar, anthropomorphic creatures.
<urn:uuid:cf06a934-1d9f-4ffc-b6d1-eb9a618630b3>
CC-MAIN-2022-33
https://beelingo.com/Audiobooks/ByGenre/111/1
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00298.warc.gz
en
0.9731
7,808
2.84375
3
At the time of Saddam Hussein, the situation of the Arameans (fake: "Assyrians") of Iraq was relatively good. There was security and stability; they could practice their faith in freedom, build new churches, renovate old churches and teach their language freely. Their economic situation was good, they were highly educated and they occupied high social positions. It was unthinkable that Islamic terrorists could harass the Arameans or other minorities without disappearing behind bars, if they were lucky. However, they often received the death penalty. Islamic terrorists did not exist at all in Iraq. Saddam Hussein acted very hard and decisively against fanatics, and every attempt was mercilessly nipped in the bud. In short, the Aramean nation of Iraq, consisting of various denominations, fared very well under the reign of Saddam Hussein. For example, Turkey is officially a democratic country. However, the Arameans of Iraq had far more rights under "dictator" Saddam Hussein than the Arameans have in "democratic" Turkey.
President of Iraq
Due to the overwhelming Aramean presence in the time of early Christianity, the area in the middle of Iraq, around Seleucia-Ctesiphon, was called "Beth-Oromoye / Aramaye" in Aramaic, that is: the house of the Arameans. Furthermore, the Arameans were geographically divided into West and East Arameans at the beginning of Christianity. Roughly speaking, the Euphrates River was the border. Those who lived east of the Euphrates, roughly the Persian Empire, were called East Arameans, and those who lived west of the Euphrates, roughly the Roman / Byzantine Empire, were called West Arameans. Exact numbers are unknown; however, estimates ranged from 800,000 to 1.5 million Arameans in Iraq, divided over 6 denominations, namely:
- The West-Aramean Syrian Orthodox
- The West-Aramean Syrian Catholics
- The West-Aramean Melkites
- The East-Aramean Nestorian Chaldeans
- The East-Aramean Nestorian "Assyrians," who in turn are split into the Assyrian Catholic Church of the East (since 1976; the term "Catholic" has no relationship here with Rome) and the Ancient Church of the East (since 1968).
The demise of the Aramean nation of Iraq was brought about by the illegal and criminal invasion of the Americans and their allies in 2003, which completely destroyed the country. The house of the Arameans, "Beth Oromoye / Aramaye," has been destroyed in the name of "democracy" and "human rights." Before the invasion of Iraq in 2003 and the overthrow of the regime of Saddam Hussein, the Americans launched a worldwide tasteless, filthy, vicious, nefarious and dastardly campaign, based on lies, delusion, deceit and fabricated stories, to unprecedentedly demonize and finally overthrow Saddam Hussein. The climax of lies, fraud and deception (here) was the speech by US Secretary of State Colin Powell to the UN Security Council, in which the US provided so-called "evidence" of Iraq's chemical weapons. The "evidence" produced by the US convinced doubting countries to approve the invasion of Iraq.
Speech by Minister Colin Powell to the UN Security Council on 5th of February 2003
On March 19, 2003, the Americans invaded Iraq to bring "democracy," "human rights," and "equality for all." However, what the Americans and their allies have done is something completely different, namely:
- Cultivation of extremist Muslim terrorist groups (here, here and here), trained and supported by Western secret services, to act as death squads causing chaos, misery, bloodbaths and hopelessness and disrupting the country.
- Stirring up religious differences: terrorist attacks with enormous bloodbaths and the flying around of human body parts determined the street scene in Baghdad and other parts of Iraq. In the colonial Western fake news media, these massacres were attributed to the so-called "Al Qaeda" or other terrorist groups, while in reality they were committed by special terror brigades that were created, armed and trained by the colonial Western powers, especially by the Americans. Regardless of the various exotic names given in the Western fake news media to Islamic terrorist groups, such as "Al Qaeda," "Al Nusra," "Islamic State" (IS), "Islamic State of Iraq and the Levant" (ISIL) and "Islamic State of Iraq and Syria" (ISIS), they are all unholy products of destruction and extermination created in the insane colonial Western laboratories. There was no such thing as "Islamic terrorists" under the rule of Saddam Hussein. They are unholy products of blood, suffering, misery and chaos from the colonial Western Freemasonic / illuminati powers and their Middle Eastern cronies.
- Poisoning Iraq with depleted uranium (here), resulting in malformed children, increased cancer and other diseases; in the long term, millions will die.
- More than one million Iraqis murdered by the occupying power and their special terror brigades, which were mentioned in their fake news media under various exotic names.
- Chaos, mutual distrust and corruption, through which nothing works anymore.
- Under the watchful eyes of the Americans and British, the Arameans (false: "Assyrians") were threatened, bullied, abducted and murdered, their homes and land confiscated, their churches blown up and set on fire, their clergy abducted and killed, and their women dishonored and killed.
Results of Depleted Uranium: Malformed children
However, to the colonial Western powers, these crimes against the Aramean nation of Iraq were not enough to chase them away from their ancestral lands. Much more had to be done to completely drive away the Arameans from their home "Beth Oromoye / Aramaye", and that was the deployment of the Islamic terrorists created, trained and armed by the West, also known as ISIS / ISIL. In 2014, under the guidance of the Americans and their allies, the ISIS / ISIL terrorists invaded the Nineveh plain, chasing away hundreds of thousands of Arameans from their homes and cities. As a result, most of the Aramean people have left the indigenous lands of their forefathers, which they have inhabited for thousands of years, and have fled to Western countries. And with that the criminal Freemasonic / Satanic / Illuminati powers have achieved their goal. Fides News Agency reported on 8th of August 2014 the story of an Aramean refugee from Qaraqosh in an article entitled "Kurden ziehen sich aus Qaraqosh zurück: gab es dafür einen amerikanischen Befehl?" ("Kurds withdraw from Qaraqosh: was there an American order for it?"): "I have asked the members of the Kurdish Peshmerga militia, who previously had defended Qaraqosh, why in view of the advance of the fighters of the Caliphate they had withdrawn from the city. The answer was that the withdrawal took place because of orders they had received from American commandos who are deployed in the Iraqi Kurdish area." Everything was planned in advance and meticulously executed according to the unholy plans forged in the insane colonial Western laboratories.
CrossTalk on Christianity
Below we will provide the reader with a few examples to explain the "liberation" of Iraq by the Americans and their allies and the consequences for the Aramean nation of Iraq. On the crimes of the ISIS/ISIL terrorists, we read: "The Secretary of State declared on March 17, 2016, and on August 15, 2017, that Daesh (also known as the Islamic State of Iraq and Syria or ISIS) is responsible for genocide, crimes against humanity, and other atrocity crimes against religious and ethnic minority groups in Iraq and Syria, including Christians, Yezidis, and Shia, among other religious and ethnic groups." ISIS / ISIL terrorists are unholy and insane killing machines created in the bloodthirsty colonial Western laboratories (here, here and here). The crimes committed by ISIS / ISIL have taken place under the guidance of the Americans and their Middle Eastern henchmen. On the disappearance of Aramean Christianity in Iraq, we read: "the number of Christians living in Iraq has dropped from an estimated 800,000 to 1,400,000 in 2002 to fewer than 250,000 in 2017." This would never have happened under Saddam Hussein and therefore is yet another unholy acidic fruit of the involvement of the Americans and their allies with Iraq. It was a stable, safe and prosperous country that they have destroyed.
2) On 31st of January 2019, the East-Aramean Chaldean Patriarch Louis Sako stated that around 1 million Aramean (fake: "Assyrian") Christians have left Iraq. Again, this is the malignant and demonic result of the "liberation" of Iraq by the Americans and their allies in 2003 for the so-called "democracy" and "human rights." The Arameans of Iraq had no weapons to defend themselves and were therefore an easy target for the boys (here, here and here) of the colonial Western powers, forcing them to leave the country. Another point is again the involvement of the colonial Western powers in the 16th and 19th centuries with the Aramean nation. They have brought about unbridgeable division, hatred and mutual distrust between the various Aramean denominations, which makes it impossible to find mutual cooperation in the hostile environment of the Middle East.
Patriarch Rafael Louis Sako
In other words: the Arameans themselves are largely to be blamed for their downfall in the Middle East, because they cannot get rid of the unholy heritage of the colonial activities among them. The Arameo-Chaldean Patriarch Sako also refers to campaigns by fanatics to encourage hatred, violence and the confiscation of Christian homes registered in Baghdad and other Iraqi cities. Patriarch Sako also refers to the problematic relationships with some so-called "Christians." The Syrian Catholic Archbishop of Mosul, Yohanna Petros Moshe, sent an alarming letter on 6th of March 2019 to the Prime Minister of Iraq, Mr. Adel Abdul Mahdi, about the plans of the urban management of the Nineveh Province, where a large part of the Aramean nation (false: "Assyrian") lives. If the plans are implemented, it will result, according to Mgr. Petros Moshe, in "altering the balance and ethnic composition of the local population." The plans of the urban management of Nineveh Province are, according to Mgr.
Petros Moshe, aimed at "the creation of new settlements in the area, also in an attempt to support the repopulation of areas and villages that remained deserted after the defeat and the forced exodus of the local populations that occurred during the years of occupation by the jihadist militants of the Islamic State (Daesh)." Several local Aramean (fake: "Assyrian") organizations have expressed their uneasiness about "a building project that involves the construction of hundreds of new real estate units in the urban area of a town in the Nineveh Plain, traditionally inhabited by Christians. The project, called "Sultan City" - reports the website ankawa.com - plans to use agricultural land belonging to Christian families, in an area where currently the military control exercised by the popular mobilization forces, Shiite militias considered close to Iran, is very strong." First the Arameans are intimidated, murdered and chased away by the boys (here, here and here) of the colonial Western powers, and then the Islamic groups come to take possession of the Aramean houses, lands and properties, completely cleansing the area of the original Aramean inhabitants. This is the umpteenth bitter fruit of the criminal and carnivorous invasion of the Americans in Iraq. "Democracy" and "human rights" according to the colonial Western model mean in practice endless massacres, chaos, humiliation, untold suffering, misery and ethnic cleansing.
4) On 13th of May 2019, two Aramean women, mother and daughter, were heavily mistreated by armed men in their house in the Aramean city of Bartella in the plain of Nineveh, in the north of Iraq. The perpetrators then plundered the house and took the property of the women. The event is not considered an ordinary criminal case by the Aramean inhabitants of Bartella and elsewhere. Professor Muna Yaku, professor of law at the University of Salahaddin in Erbil, "links the beating of the two women to other intimidating actions aimed at removing or keeping Christian families away from their villages of origin, located in the Nineveh Plain, where Christians fled from between the spring and summer of 2014, when the entire area had fallen under the control of the Islamic State (Daesh)." Welcome to the world of Islam! We are familiar with this technique of intimidation and ethnic cleansing; it is strongly reminiscent of the 1980s and 1990s in Tur Abdin, in southeastern Turkey. Special units of the Turkish secret service, in cooperation with the Kurds, intensively intimidated the Aramean people, and every now and then a prominent Aramean individual was killed. Due to the continuing uncertainty, unsafe situation, intimidation, murders and the lack of independent law and order, the Arameans were forced to leave Tur Abdin. Their villages / towns, lands and properties were subsequently seized by the Kurds. To hide these heinous crimes, the Turks had a very good excuse, namely: the PKK did it. What happened in Bartella would have been unthinkable under Saddam Hussein's regime. Such a thing is only possible in a country that is "liberated" by the Americans and their allies and engulfed in chaos, corruption and duplicity.
5) In May 2019, the Arameo-Chaldean Patriarch Louis Sako of the Chaldean Church of Babylon reported on the marginalization and discrimination of the Arameans (fake: "Assyrians"). In the Iraqi parliament there are 5 seats reserved for Christian parliamentarians.
Patriarch Sako's complaint is that the big parties are stealing these seats by putting forward their own so-called "Christian" candidates to occupy them in parliament. According to Patriarch Sako, "these elected members of parliament" do nothing for Christians and do not care about their situation. The same applies to elections for municipal and administrative councils. Patriarch Sako says that in 1970, 5% of the Iraqi population was Christian, while after the overthrow of the Saddam Hussein regime in 2003, their number fell to less than 2%. The Arameo-Chaldean Patriarch Louis Sako mentions two examples of discrimination, against a Christian graduate and a professor. "Maryam Maher is a young Christian graduate (female) with high grades and has been listed by the Ministry of Higher Education and Scientific Research (HESR) among the outstanding graduates of the college for the academic year 2016-2017, with a recommendation to be appointed, but the implementing agency ignored that because she is Christian." The second example concerns a Christian professor: "an official letter from the Secretary General of the Council of Ministers Dr. Mahdi Mohsen Al-Alak, to replace the current President of the University of Hamdanyia with a more efficient Christian Professor, but the decision was not implemented also." On the unfairness of the employment of Christians, the Patriarch expresses his dissatisfaction about "disregarding the compensatory law of employing Christians to replace their Christian colleague who resigned, left the job for different reasons or retired; this law was approved by the Council of Ministers in 2018."
Chairman of the Assyrian Democratic Movement (ADM) in Iraq
President of the United States
Prime Minister of the United Kingdom
The downfall, marginalization and discrimination against Aramean Christianity in Iraq are exclusively caused by the reprehensible, division-promoting, hate-generating spiritual fornication of the apostate, occult and unholy colonial Western invention "Assyrians." The apostate Arameans who identify as "Assyrians" are unequalled masters of spiritual fornication. They go to bed with everyone with just one condition, and that is: promotion of the unholy and occult "Assyrianism." There are no moral restrictions to their fornication, no restrictions regarding Christian or human values; everything is allowable to promote their "Assyrianism" at any cost! Mr. Yonadam Khanna and his Assyrian Democratic Movement (ADM) in Iraq played the pioneering role in welcoming Mr. Bush and Mr. Blair to Iraq and blew the joyful trumpet of "liberation," "new Iraq," "human rights," "democracy," etc. There is really something heartbreaking going on here. While the anti-Aramean colonial product "Assyrians" brought their Western masters and creators into Iraq, welcomed them as their "liberators" and spiritual "relatives" and offered them their help and services for "building a new Iraq," the same colonial powers had prepared unholy plans to destroy the Arameans of Iraq and to drive them out of the land where they have been living for thousands of years. The result of this unprecedented spiritual fornication: massacres, bloodbaths, chaos, refugee flows, the burning / destruction of churches / mosques, ethnic cleansing, uranium poisoning and the destruction of life in Iraq. Mr. Bush and Mr. Blair managed masterfully to win the hearts of the "Assyrians," the unholy spiritual colonial product of their forefathers, to side with them in toppling the regime of Saddam Hussein.
The "Assyrians" went into ecstasies when Mr. Bush and Mr. Blair made reference to them in their speeches:
(October 7, 2002) "The oppression of Kurds, Assyrians, Turkomans, Shi'a, Sunnis and others…"
(March 16, 2003) "…All the Iraqi people -- its rich mix of Sunni and Shiite Arabs, Kurds, Turkomen, Assyrians, Chaldeans…"
(April 28, 2003) "Whether you're Sunni or Shia or Kurd or Chaldean or Assyrian or Turkoman or Christian or Jew or Muslim…"
(March 30, 2003) "I want all Iraqis - Arab, Assyrian, Kurd, Turkoman, Sunni, Shiite, Christian and all other groups…"
To completely win their hearts, Mr. Bush added the organization of Mr. Younadam Khanna (ADM) to the list of opposition groups that were supported by the US to topple the regime of Saddam Hussein. In "Presidential Determination No. 2003-05" we read: "… hereby determine that each of the following groups is a democratic opposition organization and that each satisfies the criteria set forth in section 5(c) of the Act: the Assyrian Democratic Movement; … I hereby designate each of these organizations as eligible to receive assistance under section 4 of the Act …" The result was that the fallen Arameans who call themselves "Assyrians" welcomed Mr. Bush and Mr. Blair as "heroes," begged them to please come and "liberate" them from that "ruthless" dictator Saddam Hussein, and were eager to assist Mr. Bush and Mr. Blair in every possible way to get rid of Saddam Hussein and bring "democracy," "freedom" and "human rights" to Iraq. The result of this shameless spiritual fornication was just terrifying to our people in Iraq. Already more than 70% of the Aramean indigenous nation has left Iraq, and the remaining Arameans are also considering leaving the country. And with that, the spiritual children of Nimrud and Semiramis have fulfilled their task under the supervision of their colonial illuminati masters. On 2nd of May 2004, the local newspaper The Modesto Bee (http://www.modbee.com/) reported in an article entitled "Sign bashing president shocks Assyrian visitor" on the visit of Younadam Khanna to the United States. On the love of the "Assyrians" for Mr. Bush, we read: "Walking through Dulles International Airport on Saturday, Younadam Kanna, the leader of the Assyrian Democratic Movement political party in Iraq, saw a sticker on a newsstand that read: Bush is the Butcher of Baghdad." … "I could not believe what I read," … "To us in Iraq, he is not a butcher. He is a savior. Yes, maybe he is a butcher to terrorists, but we appreciate that." "[Bush] is a hero in Iraq — he brought freedom to our country," he said. "Yes, (Osama) bin Laden thinks Bush is a butcher. But Americans think so, too? It is a tragedy to think that some Americans share viewpoints with a terrorist like bin Laden." On the toppling of Saddam Hussein, Khanna says: "The vast majority of Iraqi people are happy that Saddam is gone," … "We have a …" On the role of the media concerning coverage of Iraq, Mr. Khanna says: "The media is exaggerating only the negative side," … "Ninety percent of the new Iraq is good. Ten percent is bad. But the media only focuses on the problems. Where are the stories of our new life …?" In the National Review Online of 19th of May 2004, Mr. Khanna says in an article entitled "Kana's Iraq, a story you can't hear enough of": "We are calling on...America not to stop; to go on with us on this blessed mission, … this blessed mission …" On the regime of Saddam Hussein, Mr. Khanna says: "Under Saddam's sectarian, apartheid policies, we were fifth-degree citizens," Kana explains. "…"
On the "glorious" liberation and the rights of the "Assyrians" he says: "The Iraqi people are free now," … "For the first time in the history of Iraq — … our neighbors, and the majority of people today, recognize us [Assyrian Christians] …" "Bush is hero," "Bush is savior," "he brought freedom," "our new life," "our liberation," "our freedom," "blessed mission," "liberation, democracy, freedom," "we are free," "recognize us"? Well, beloved reader, what is there to say about such foolishness, blindness and sorcery? Mr. Bush had impressed the apostate Arameans who call themselves "Assyrians" so much that they attributed to him a kind of messianic quality, describing him as their "savior." Please note that Mr. Khanna proudly states: "… our neighbors, and the majority of people today, recognize us [Assyrian Christians] …" This has always been the ultimate goal of the apostate Arameans who call themselves "Assyrians," and that is: glorification and advertisement of the unholy illuminati product "Assyrians," no matter the killings, ethnic cleansing, destruction of life in Iraq, blowing up of Aramean churches and forcing of the Aramean indigenous nation into Diaspora. All these sacrifices are okay, as long as they can benefit from them to advertise the colonial illuminati invention "Assyrians" in reference to all the Aramean denominations in Iraq and abroad. Such statements by Mr. Khanna and other "Assyrians" to "glorify" the work of Mr. Bush in Iraq did not contribute to the wellbeing of the Aramean people in Iraq. On the contrary, these kinds of extremely dangerous statements served as grist to the mill of fanatics in Iraq, who targeted the defenseless Arameans. The result of this foolishness, spiritual fornication and blindness was that, under the watchful eyes of the Americans and British, the Aramean churches were blown up and burned, their clergy abducted and killed, their men, women, sons and daughters killed, and more than 70% of them have been driven out of Iraq. But, beloved reader, who cares about this terrifying misery that is happening to our people in Iraq? World powers, media … who? You see, this is the way the evil … Mr. Khanna, however, cannot get enough of expressing his admiration for the "liberators" of Iraq, no matter the daily flying around of human body parts, the bloodbaths, the indescribable horrors, and the disruption and extermination of life in Iraq. On 15th of August 2008, The Modesto Bee reported in an article entitled "Visiting Turlock, Iraqi official says his country is stabilizing" on the visit of Younadam Khanna: "We got rid of the Saddam Hussein regime, and we appreciate all the Americans and other friends who gave their support to get rid of this bad guy." We cannot imagine, beloved reader, that it was the will of the God YAHWEH, the God of Abraham, Isaac and Jacob, to plunge Iraq into this hopelessness, sorrow and misery. On the contrary, the daily bloodbaths, the havoc, the children deformed due to the use of uranium, and the extermination and devastation of life in Iraq are a delicious sacrifice to the god of the illuminati, lucifer/satan. These are, however, matters of which Mr. Khanna and other apostate Arameans who call themselves "Assyrians" are not able to understand anything. Do not try to explain to them that the so-called "terrorists" are nothing but an unholy product created and developed in the insane and antichristian colonial illuminati laboratories. They give you the impression that they are completely hypnotized and cannot look outside the box they have been put in.
This insanity, spiritual suicide, foolishness and betrayal can only be understood if we assume that a horrible form of demonic, fanatical nationalism has taken hold of the colonial product "Assyrians," a product created in the odious, criminal Western camps of destruction and brainwashing. And their colonial Western masters, at least, understand their psychology very well and know how to play them. The apostate Arameans seem to be hypnotized to such an extent by the occultism in which they are completely immersed that they can no longer think sensibly. There is apparently no one among them who critically evaluates the unholy fruits of "Assyrianism" of the past hundred years and questions what it all has produced, except chaos, occultism, anti-Christianity, mutual hatred, division, the misleading of the young people and, above all, the loss of respect from surrounding peoples. They are completely blinded and cannot think normally. This is the umpteenth indication of an intensive demonic activity that holds them in a strong grip. Iraq was a rich and prosperous country, with a strong economy and good education and health care systems, where all sections of the population lived peacefully, regardless of their beliefs. There was stability and security. Saddam Hussein was the guarantee for the protection of the minorities of Iraq, so that they were not overrun by the majority. The fanatical Islamic forces in Iraq did not stand a chance under Saddam Hussein; they were fought effectively and forcefully. The Arameans of Iraq fared well under the reign of Saddam Hussein. They were well educated, held high social positions, were able to practice their faith undisturbed and in freedom, and were safe and satisfied. The criminal, malicious and illegal invasion by the Americans and their allies in 2003 to overthrow Saddam Hussein's regime brought about the demise of the Aramean nation of Iraq. Under the watchful eyes of the Americans and British, the Arameans were killed, their churches burned down, and their clergy abducted and murdered by Western death squads, presented in various Western fake news media under exotic names. This has forced the vast majority of them to leave Iraq. With the disappearance of Saddam Hussein, the Aramean presence in Iraq also began to disappear. And that was also the intention of the criminal colonial Western Illuminati powers, namely to rid Iraq of its native Aramean population. Those who remain are marginalized, discriminated against, not taken seriously, and their homes, lands and property confiscated. The anti-Aramean, anti-Christian, divisive, occult and unholy colonial Western product "Assyrians" have played a crucial role in the downfall of the Arameans of Iraq. They were used by their colonial Western masters and creators to destroy and poison Iraq and to ethnically cleanse the land of its original Aramean inhabitants.
<urn:uuid:239a82f7-7492-42cb-a5ed-115d5aaf21d2>
CC-MAIN-2022-33
https://www.aramnahrin.org/English/Arameans_Iraq_Marginalisation_Downfall_25_6_2019.htm
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573163.7/warc/CC-MAIN-20220818033705-20220818063705-00096.warc.gz
en
0.943506
6,653
2.953125
3
By James Marino Mired in combat during the Battle of Hürtgen Forest in Germany, an American soldier wrote on December 5, 1944: "The road to the front led straight and muddy brown between the billowing greenery of the broken topless firs, and in the jeeps that were coming back they were bringing the still living. The still living were sitting in the back seats and some were perched on the back seats and others were sitting facing forward on the radiators because the jeeps were that crowded. You could see the white of their hasty bandages from far off and there were others of the still living who were on stretchers strung from the front seats to the back and on the radiators too, and the brown blankets came up to their chins. Overhead the sky was grey." The Battle of Hürtgen Forest: Why Hemingway Called it "Passchendaele With Tree Bursts" The area fought over in the Battle of Hürtgen Forest encompassed about 50 square miles of rugged, densely wooded terrain along the German-Belgian border south of Aachen, the first major German city to fall to the Allies. The area was thickly laced with mines, barbed wire, and concealed pillboxes with interlocking fields of fire. The American First Army was tasked with capturing the forest to secure the right flank of the advancing VII Corps, to prevent the Germans from mounting counterattacks from its concealment, and to attack a large portion of the Germans' fixed fortifications of the Siegfried Line from the rear. The dominant geographic feature in the Hürtgen was near Bergstein, the Burgeberg-Castle Hill, or as the Americans called it, Hill 400 because of its height in meters, equivalent to 1,312 feet. Its steepest slope was at a 45-degree angle, and the hill was thickly wooded with evergreens. From this promontory the Germans enjoyed a commanding view of American movements and directed both nasty artillery fire and counterattacks. No American vehicle or foot movement went undetected from this vantage point. It was the cornerstone of the German line. If the Americans seized Hill 400, it would provide excellent observation of the Roer River, the next Allied objective after the Hürtgen was cleared of Germans. The U.S. 9th Infantry Division made the first offensive move into the Hürtgen in September 1944 and was still there by the middle of October, having gained three kilometers at the cost of 4,500 casualties. A new attack was launched in November by the 28th Infantry Division. No appreciable ground was gained, and 6,184 Americans were casualties. The 4th Infantry Division was committed at the end of the month. After sustaining 6,053 casualties, it was relieved by the 8th Infantry Division early in December. In his book The Mighty Endeavor, Charles MacDonald described the fighting in the Battle of Hürtgen Forest as "the battle of the hedgerows all over again only this time with freezing rain, sleet, snow, flood, mud, pillboxes, and dense woods straight out of frightening German folk tales. It was the Argonne of World War II." "The Purest Hell I've Ever Been Through" The men of the 2nd Ranger Battalion, veterans of D-Day who had landed on Omaha Beach and some of whom had scaled Pointe-du-Hoc, were attached to the VII Corps and became caught up in the attrition of forest combat. Prior to this, the battalion had remained in Normandy to perform a series of odd jobs. When the corps began its drive into Brittany, the Rangers were ordered in as well.
The battalion helped capture the great port city of Brest, and after a two-month respite, the battle-hardened soldiers joined the offensive against the Siegfried Line. On November 14, the 2nd Rangers, attached to the 28th Infantry Division, moved into the front line with a complement of 485 enlisted men and 27 officers. General Norman Cota, commander of the 28th Division, who had personally seen the Rangers in combat on Omaha Beach, used the 2nd Ranger Battalion to replace units of the 112th Infantry Regiment on the front line. Lieutenant Bob Edlin of A Company walked his platoon through the snow and ankle-deep mud to the village of Geremeter. There the Rangers met the infantry of the 112th. Edlin recalled, “The infantry outfit that had been up there was actually almost running in retreat just to get away. I ran into a friend in that unit, Captain Preston Jackson, who said, ‘Bob this is the meanest son of a bitch that you’ve ever seen in your life up there. I wish you wouldn’t go.’” The Rangers did go and were immediately greeted with their first but not their last German artillery barrage. “Suddenly the artillery came. It’s the purest hell I’ve ever been through. It was just round after round of crashing and smashing, beating on your head till you think there is no way you can stand it,” recalled Edlin. The most shocking surprise to the Rangers was not the enemy but the Americans. Captain Sid Solomon noted his men’s observations. “The Rangers of Baker Company were amazed to see the GI equipment, clothing, and even weapons that had been discarded by the division troops who had previously held this area. Cold weather and a driving rain did not help the morale of those inexperienced American troops.” Even more disturbing was the fact that the regiment left wounded behind. Frank South, a medic with the 2nd Ranger Battalion, recounted the discovery. “We moved to what had been a German troop shelter at the crossroads near Vossenack. When we entered the troop shelter, we medics were shocked to find several wounded Americans there. In addition to abandoning equipment and supplies along the road, the 112th Infantry Regiment deserted some of their own wounded in their haste to retreat. Of course, we took care of their injuries and immediately evacuated them.” The Rangers ran patrols, dug in deeper, reinforced their foxholes with logs, and battled the cold weather. Fortunately, direct intervention by the Supreme Allied Commander, General Dwight D. Eisenhower, helped the 2nd Rangers to be better prepared for the cold weather than most of the regular infantry in the Hürtgen. Bob Edlin explained, “While at the bivouac, we were visited by General Eisenhower. The whole battalion gathered around and he just flat-out asked if anybody could tell him why we didn’t have the new boot packs. One of the men yelled out, ‘Hell, General, everybody back at headquarters has got them,’ which was true. Everybody at Army, corps, and divisional headquarters was wearing boot packs, parkas, and warm clothes. That gear never leaked down to the front lines. We were still wearing summer clothes and it was in the low thirties. General Eisenhower said that will be taken care of, and God rest his soul, it was. A few days later we received boot packs, and even wristwatches. He must have raided the whole damn headquarters to get enough for one Ranger battalion.” Rangers Under V Corps Near the end of November, the 28th Division relinquished its frontline sector to the 8th Infantry Division, V Corps. 
The Rangers remained and supported the 8th, but were now under V Corps command. After the 8th Division moved into the line, the Rangers moved Companies C, D, E, and F a short distance behind the front. A and B Companies remained deployed on the extreme right flank of the division’s 121st Infantry Regiment. Lieutenant James Eikner, in charge of headquarters communication, explained how the men felt. “We were a specialized unit, all volunteers. Putting us in a defensive position wasn’t utilizing our skills and capabilities. Just sitting in those foxholes. We were very disappointed about this.” All three infantry divisions, 9th, 28th, and 8th, had tried to seize Hill 400 but failed. A combat command of the 5th Armored Division tried in the first week of December. Despite armored support, its infantry was also repulsed. The 47th Armored Infantry Battalion barely held Bergstein against German counterattacks and was in no shape to participate in another attack on Hill 400. Positioning to Take Hill 400 General Walter Weaver, commander of the 8th Division, personally asked the V Corps commander, General Leonard Gerow, for Rangers to bolster the armored combat command in Bergstein and to assist his division’s next assault on Hill 400. Gerow, with approval from General Courtney Hodges, commander of the First Army, released the 2nd Rangers from V Corps control and trucked them from their bivouac to Hill 400. Weaver then decided to let the Rangers assault the hill by themselves, so as to keep his division fresh and to allow the Rangers to work in their own manner. That same day, Lt. Col. James Rudder, the battalion commander, was transferred to the 28th Division’s 109th Regiment. Captain George S. Williams, the executive officer, and Captain Harvey Cook, the intelligence officer, were called to 8th Division headquarters and given the mission. They were to relieve elements of Combat Command Reserve (CCR), 5th Armored Division, outside the town and take Hill 400. Captain George S. Williams, promoted to major, assumed command of the tactical effort. Major Williams returned to the Ranger assembly area at 2130 hours. Trucks carried the Rangers toward the town of Kleinhau. The trucks were dispersed and “we had a heck of a time getting them together,” recalled Williams. From Kleinhau the companies began their march to Bergstein. Shortly before the move, Lt. Col. Rudder hustled back from First Army headquarters to help Williams as much as possible, and Williams proceeded to Brandenberg. The Rangers marched through darkness, mud, and bone-chilling cold to reach Bergstein before dawn. At Brandenberg, Major Williams contacted Colonel Glen Anderson, the CCR commander. Anderson outlined the difficulties his unit had faced at Bergstein and gave the group a guide to the armored infantry company command post in the town. Captain Harold K. Slater went to the western edge of town to contact elements of the 5th Armored Division. By this time Companies A, B, and C had arrived. They went on through without stopping to take up defensive positions to the west and south of Bergstein. There were no troops in these positions until their arrival. The men of the 5th Armored were all in cellars, and they provided no guides. Fortunately, the 2nd Rangers had three lieutenants working as advanced liaison officers. These men were briefed at the armored command post on the locations of enemy positions. The lieutenants helped the battalion move into its positions by 0300 hours. 
Three companies, A, B, and C, dug in on the edge of a wood near the base of the hill. Between 0300 and 0500, Companies D, E, and F settled into Bergstein. Rudder’s Assault Plan Rudder devised the assault plan before the Rangers left the bivouac. The men were confident in it because it was Rudder’s concept. Companies D and F would assault Hill 400, while Companies A, B, and C secured nearby ridges, established roadblocks, and provided supplemental mortar fire. Company E and the tanks of CCR remained in Bergstein as the reserve to support the assault or to respond to the expected German counterattack. A small scout party from Companies D and F reconnoitered the hill before the assault. Lieutenant Len Lomell took out the patrol from Company D. Lomell recalled, “The patrol was to discover evidence of pillboxes, bunkers, and enemy positions. I set out with my patrol at 0330 and returned with the information to the battalion forward CP at 0600.” Headquarters reviewed the information and passed it to the assault companies. The plan still called for E Company to be the reserve while D and F Companies, the assault force, assembled near the church and cemetery in a partially sunken road that paralleled the base of Hill 400. The assault was at 0730 sharp. Williams called for the opening salvo just before sunrise. The Assault Begins The quick, crisp barrage caught the Germans by surprise. At dawn, Williams launched two companies across an open field and up the heights, using one company’s covering fire to support the attack. Companies D and F, a total of 65 Rangers, moved out from their positions in Bergstein and crossed the line of departure at 0730. German troops of the 272nd Volksgrenadier Division reacted like veterans, scrambling to their positions despite being under the American artillery barrage. Sid Salomon, commanding B Company, observed the German response. “The enemy defenders immediately became alert. A red flare shot in the air from an enemy outpost. Shortly thereafter, a heavy mortar and artillery barrage came down on the assaulting Rangers. Heavy small arms and machine gun fire was directed on the rushing Rangers. Casualties on both sides began to mount. A creeping German artillery barrage behind the assaulting Rangers produced more Ranger casualties. The enemy offered stiff resistance.” Four machine guns fired point blank at the Rangers moving up the hill. A German observation post swung into action, directing accurate mortar and artillery fire. Williams wrote in his after action report, “The Germans poured in mortar, 88, 120, and self-propelled gun fire.” Company C fired in support of the charge as its companions crossed the field. Salomon described the opening moments of the Ranger advance: “The CO at the appropriate time gave the word ‘Go!’ With whooping and hollering as loud as possible, firing clips of ammo at random from their weapons in the direction of the hill, the Rangers ran as fast as they could across the approximately 100 yards of open, cleared field into the machine-gun and small-arms fire of the German defenders. Crossing the field, and before reaching the base of the hill, the company commander and his runner became casualties, but undaunted, the remaining D Company Rangers charged up the hill.” Lieutenant Lomell described the opening moments of his first platoon’s action. “The platoon crossed the 100 yards of half frozen mud, shooting randomly, we were at a dead run, facing small arms fire.” Salomon and Lomell did not know what really triggered the assault. 
Staff Sergeant Bill “L Rod” Petty of F Company did. “During the wait for the jump-off enemy mortars burst about 75 yards behind us and moved toward us,” Petty recalled. “You could feel the tension building as voices grumbled about why we didn’t charge. We were stuck waiting for our own artillery to lift. We knew the enemy mortars would reach us before our barrage lifted. We were caught in a 200 yard area between the barrages. A new officer ordered a scout forward. McHugh and I screamed at the GI not to listen to the order. Private Bouchard stood. Four steps later he was cut down. This shot was the fuse that ignited the explosion of the Ranger charge. I was pulling Bouchard back when I heard McHugh yell, ‘Let’s go get the bastards!’ Waving his Tommy gun over his head, he broke across the field. As one man, F Company with bayonets shining, hip firing, and yelling a battle cry that probably goes back into the eons of time charged into the jaws of death. I never saw a more brave and glorious sight. It was a moment of being proud to be a Ranger.” Not all Rangers felt enthralled by what was happening. Herman Stein, also of F Company, had more basic thoughts. “I wasn’t thinking about a … thing. My one thought was, ‘Let me get the hell across this field into some woods over there.’” In his book Citizen Soldiers, author Stephen Ambrose related the Rangers’ comparison of Point-du-Hoc and Hill 400, “Those who were at Pointe-du-Hoc on D-Day recall Hill 400 as worse. It was not as precipitous, but it was rocky shale, with frost and snow on it, and they had no grappling hooks or ropes. It was hand-over-hand, using the third hand to keep up a stream of fire.” Bud Potratz, D Company, was part of the whirlwind of combat edging up the hill. He remembered, “Fox Company led the way followed by the 1st and 2nd sections of the 1st Platoon of Dog Company. We fell to the ground at the sunken road and began firing our rifles toward the two burned-out buildings in front of the D Company attack. Mortar shells fell all around us, and our guys were getting hit. Captain Otto Masney, commander of F Company, gave the order to fix bayonets, and suddenly Mike Sharik stood up and yelled, ‘C’mon, you unholy bastards!’ and off we went. I remember firing at the hip and hollering ‘Hi-Ho Silver!’ as we trotted across the open field toward the base of the hill.” Salomon could also see F Company moving up the hill into enemy strongpoints. “An enemy machine gun located at the left lower corner of the hill wounded and killed several of the F Company Rangers as they crossed the open field,” he recalled. “The remainder of the company continued forward, some running faster than others, all firing their weapons running up the hill…. Some of the Germans at the lower base of the hill either turned and ran up the hill to avoid the charging Rangers or stood up and surrendered.” Herman Stein figured the Germans saw it this way. “Half the Krauts ran and half gave up. I guess if you see 120 men acting like a bunch of Indians coming at you, you think these guys are nuts! We were yelling like crazy—rebel yells.” Private William Anderson, a seasoned Ranger broken from sergeant to the lowest rank for garrison infractions; Sergeant Petty, and Pfc. Cloise Manning were the first F Company men to reach the summit of Hill 400. They saw an enemy bunker with steel doors on the crest. Petty thrust his Browning Automatic Rifle (BAR) into an aperture and emptied a 20-round magazine. Anderson shoved in a couple of grenades. 
Just then, an enemy shell exploded, killing Anderson. Captain Masney arrived with more soldiers and captured the bunker. According to Lomell, Sergeant Harvey Koenig’s squad chased the remaining Germans down the hill, almost to the Roer River, before returning to deploy along the forward crest. By 0835 the 2nd Rangers held the hill and had captured 28 prisoners. The German Barrage The Rangers knew the Germans would follow with an immediate counterattack. Quick preparation and placement of the men had to be achieved. The men of D and F Companies found it difficult to dig foxholes in the rocky ground. Both sides recognized the tactical importance of Hill 400. The Germans threw in the crack 6th Parachute Regiment to counterattack. According to captured German records, Field Marshal Walter Model had offered Iron Crosses and a two-week furlough to any Germans who recaptured the hill. German artillery peppered Hill 400. The bursts showered the Rangers with deadly shrapnel. The shale prevented them from digging in, but the bunkers provided some cover. Potratz remembered, “Shells hit the hill from three sides. We could hear our comrades trying to dig in. There were screams of dying men. The smoke burned our eyes and nostrils. It was horrific, and the voices of the wounded tore our hearts.” The barrage cost casualties in more than one way. In his book Closing with the Enemy, historian Michael Doubler reported the psychological impact. “One new replacement in the 2nd Rangers saw the head of a fellow ranger less than three feet away blown completely off. The new soldier became speechless, did not know his name, and could not recognize anyone around him. The Rangers evacuated the replacement off the hill between counter-attacks. He ended up in a stateside psychiatric ward.” The Rangers were too weak to hold all points along the line simultaneously. Lomell had to learn where the Germans were building strength to attack. He boldly sent out two-man recon patrols to check likely enemy assembly areas down the hill. With the intelligence the patrols gathered, Lomell was able to meet each thrust with the little strength he had. As a result, Hill 400 was saved by brains and bravery at the junior level. Staff Sergeant Petty, in charge of what was left of F Company, organized his defense on top of the hill. Herman Stein took the 1st and 2nd Platoons and set up near the bottom of the hill toward the river. “I am convinced that these six to eight men were the vital factor that kept ‘F’ from being overrun,” claimed Petty afterward. He was right; the Germans focused their efforts on that day entirely on F Company in an attempt to recover the bunker. Outnumbered 10 to One The first of five German counterattacks during the next two days hit the Rangers at 0930. In each assault the German force numbered between 100 and 150 men. Most of the attacks developed from the south and east where wooded areas close to the hill’s base allowed a company of German paratroopers to launch the assault. Major Williams described one of the counterattacks. “Germans were in and around the bunker on the hill before the Rangers were aware of their presence. Once on the hill they attempted to rush the positions. They used machine guns, burp guns, rifles, and threw potato masher grenades. Hand-to-hand fights developed on top of the hill in which some use was made of bayonets.” Petty was wounded in the fight and was evacuated that night. The Rangers held, forcing the Germans to regroup. The enemy artillery did not let up. 
By noon, two of the Ranger companies mustered only 32 men. The survivors of D Company lost their commanding officer just before the German counterattacks began. Captain Masney was captured attempting to move down the hill to bring up more reinforcements. Lomell was the sole officer of D Company still standing. Lomell was also wounded, his left index finger almost severed and bleeding from the ears from the concussions of the artillery barrage. General Weaver was unable to disengage any troops to relieve the Ranger Battalion. Stuck on the hill, the Rangers threw back every German counterattack over the next 40 hours. The German paratroopers launched the second counterattack at mid-afternoon as about 150 men struck F Company. Ranger casualties increased. Lomell recalled, “We were outnumbered 10 to one. We had no protection, continuous tons of shrapnel falling upon us, hundred of rounds coming in.” 18 Battalions of Artillery German efforts were about to pay off when a single Ranger turned the tide. Lomell recalled the moment. “My platoon sergeant, Ed Secor, a very quiet man, out of ammo and unarmed, seized two machine pistols from wounded Germans and in desperation charged a large German patrol, firing and screaming at them. His few remaining men rallied to the cause, and together they drove the Germans back down the hill.” By 1600 hours, the Rangers had only 25 men left on top of Hill 400. “We had stopped another counterattack, but if the Germans had known how many men or really how few we had up there, they would have kept coming,” reflected Lomell. The desperate situation seemed to be sensed back at Ranger headquarters in Bergstein. Major Williams dispatched urgent messages to General Weaver for reinforcements, but none were available. Williams scraped together a platoon of 10 men from E Company in Bergstein and sent them scampering up the hill. The unit arrived in the nick of time just as the third counterattack erupted. The Germans struck with both of their regrouped companies. American artillery was the key to the repulse of the third German counterattack. More precisely, it was one American forward observer, 1st Lt. Howard K. Kettlehut, from CCR’s 56th Armored Field Artillery Battalion. He had a clear 360-degree field of vision atop the hill. As the German wave crested the hill from three directions, Kettlehut brought down the American firepower. “At one point Kettlehut brought down fire from all artillery available in the Corps—18 battalions in all: 155s, 75s, self-propelled, 8-inch and 240mm guns placed a ring of explosive shells around the hill,” remembered an observer. Kettlehut directed the barrage to keep the German paratroopers out of the hilltop positions and to prevent further reinforcements from the woods. This counterattack, together with another at 1500 hours, was beaten back. “It was a Death Factory” The evacuation of the wounded was difficult during the fight. The first aid station in the bunker held as many as 20 wounded at a time. When night fell, ammunition bearers clambered over the snow, ice, and rocks up to the top and returned carrying the wounded on litters. Around 2100 hours, several men from E and C Companies performed the duty. Medic John Worthman recalled the arduous process of moving the wounded. “Most wounded had to be carried back to the aid station on litters. Carrying litters is cruel work in good terrain and inhuman punishment on wet hillsides under tree bursts.” Litter jeeps were waiting at the bottom of the hill. 
Nearly every jeep in the battalion was used for the evacuation. The battalion physician, Captain Walter E. Block, taught his medics never to hesitate to go in harm’s way to care for a Ranger. Block was killed while tending the wounded and coordinating the evacuation when a shell burst on the roof of the aid station. Throughout the night the Germans shelled the Rangers. “We stayed out all night,” Stein said later. “There was a sort of drizzle. At that point we had about six guys from F Company left. One of these men included a replacement, Julian Hanahan, who fought like a veteran. We had a few guys down below, at the base of the hill.” Lomell summed up the entire first day on Hill 400. “It was a death factory. One way or another, they got you. You froze to death or you got sick or you got blown to bits. June 6, 1944, was not my longest day. December 7, 1944, was my longest and most miserable day on earth during my past 75 years.” The Last Counterattacks Just after dawn on December 8, E Company reported being counterattacked from the north by troops coming from Obermaubach. The German 6th Parachute Regiment probed the hill under the cover of artillery fire. At 0808 friendly artillery fire, which was already on the way, was requested on the road to the north. It proved effective, and the Germans withdrew. Kettlehut and the artillery easily handled the morning attack. Captain Arnold recalled the most difficult attack of the 8th. “The heaviest counterattack of the fight was launched at 1500 on December 8. Between 100 and 150 men supported by direct fire of the 88s, self-propelled guns, mortars, and artillery attacked from all sides. Five of them [German soldiers] got within 100 yards of the church which was being used as a first aid station. Artillery fell all around the aid station, one round entering one window and leaving through another, taking away part of the second window. This attack lasted two to three hours and was beaten back by artillery.” During the night, the Germans tried to slip through the Rangers’ foxholes toward the bunker. The Rangers hit the small German groups with short bursts of BAR and rifle fire or grenades. A 20-minute final barrage from the American artillery drove the Germans off the hill for the fifth and final time. The German attacks had all been directed at D, E, and F Companies and had inflicted severe casualties on the Rangers. However, the Rangers held with sheer guts and accurate artillery support. The Hürtgen Forest Campaign Concludes By nightfall on December 8, General Weaver had juggled the lines of the 13th Infantry Regiment and was able to free up a battalion. Trucked to Hill 400, the infantrymen shuffled up the slope to the crest during the night. The Rangers were finally relieved. During 40 hours of intense fighting, the 2nd Ranger Battalion had lost 107 men wounded, 19 killed, and four missing, a quarter of their original strength. But the Rangers had seized Hill 400, the first American unit to do so in the four-month battle. Unfortunately, nine days later the Germans retook the hill from the 13th Regiment. The U.S. Army would not seize Hill 400 again until February 1945. The December engagement on Hill 400 concluded the Battle of Hürtgen Forest campaign. The 2nd Rangers had demonstrated their mettle. Sid Salomon summed it up best: “The people in command did not know what the Rangers were. A Combat Command of 5th Armored Division, 3,000 men with tanks, failed to take the Burgeberg. Three companies of Rangers, just 200 soldiers, captured and held it.”
The Treatment of World War II in the Secondary School National History Textbook of the Six Major Powers Involved in the War
Susan P. Santoli
This study analyzed the treatment given World War II in current high school textbooks from Japan, France, Germany, Great Britain, Russia, and the United States. Information from each textbook was matched with a list of World War II events and leaders. The war was divided into six major time periods and one topic which included 10 leaders. Each text’s coverage of every event and person was entered on charts. Differences occurred in both the inclusion and interpretation of the events and leaders included in each text. The Japanese text was the most blatant in providing imbalanced coverage. Data presented in the textbooks from the other five countries varied greatly.
Although there is much disagreement about what should be included in history curriculums, there appears to be agreement on the need to emphasize World War II as a turning point in world history. There is also little disagreement that in today’s social studies classrooms the primary tool used by teachers to convey an understanding of that war is still the textbook. Textbooks continue to play a major role in determining what our students will learn about their own country and other countries (Berghahn & Schlissler, 1987), and textbook versions of events are often accepted without question (Parsons, 1982). Textbook-related activities occupy 70% to 95% of class time in American schools (Wade, 1993), and studies in Japan (Goodman, 1983) report a similar reliance. The research of Germany’s George Eckert Institute for International Textbook Research emphasizes the important role of textbooks in shaping attitudes and understandings both nationally and internationally (Nose, 1986). The impetus for a joint Soviet-U.S. textbook study in 1977 was the shared belief that “what students learn from their textbooks can contribute to or detract from efforts aimed at improving relationships” (U.S./U.S.S.R. Textbook Study Project [U.S./U.S.S.R.], 1981, p. 1). One earlier study on international textbook revision concludes: “No sources of socialization in modern societies compare to textbooks in their capacity to convey a uniform, approved, even official version of what youth believe” (Becker, 1955, p. 338). What is presented in national textbooks more than half a century after World War II ended still molds the understanding students have of that period.
The specific purpose of this study was to analyze and compare information concerning selected World War II events and people in one secondary-level national history textbook for college-bound students from each of the following countries: Great Britain, France, Germany, Japan, Russia, and the United States. The textbooks were selected with help from the International Textbook Institute in Braunschweig, Germany, which provided lists of the British, French, and German history textbook publishers and the names of contacts in Japan and Russia. The criterion for the texts was that they be the most frequently used, or at least frequently used, national history textbooks published after 1990 and currently in use. Because the Russian text was in the process of being written, an alternative text was selected with the help of Janet Vaillant at the Harvard University National Resource Center for Russian, East European, and Central Asian Studies. The U.S. text was selected from a list of frequently used U.S. History texts published by the American Textbook Council.
The secondary school texts used for this study were:
1. Japan: Susumu Ishii et al. Shosetu Sekaishi. Tokyo: Yamakawa-shuppan, 1994.
2. France: Robert Frank and Valery Zanghellini. Histoire 1re L, ES, S. Paris: Belin, 1994.
3. Germany: Wolfgang Hug (Ed.). Unsere Geschichte. Frankfurt am Main: Verlag Moritz Diesterweg, 1991.
4. Great Britain: Denis Richards and J. W. Hunt. An Illustrated History of Modern Britain, 1783-1980. Burnt Mill, England: Longman, 1991.
5. Russia: A.A. Kreder. Noveishaia Istoriia. Moskva: Interpraks, 1994.
6. United States: Thomas A. Bailey and David M. Kennedy. The American Pageant. Lexington, MA: D.C. Heath and Company, 1991.
Native speakers of the language translated the four non-English textbook selections into English. All translators were bilingual and had lived in the United States for a number of years. A method similar to one used in two previous content analysis studies was used (American Council on Education [A.C.E.], 1947; Harbourt, 1931). The composition of the list was based on the Tables of Contents of several U.S. History texts and input from faculty in the History department of a local university. The development of such a list was consistent with the methodology used in other textbook content studies (Harbourt, 1931; Peiser, 1971). The finished list reflected the biases and backgrounds of the authors and faculty contributors since all were from the United States. The textbook analysis was primarily descriptive. The items on the Events and People list were used to determine how much space in each textbook was devoted to each item, a strategy used in other studies (Barth, 1991-1992; Ketchem, 1982; Social Studies Development Center [S.S.D.C.], 1981, 1984). The coding for space usage, which follows, is similar to one used in a previous study (Julian, 1979):
n = no mention
b = brief mention (one sentence or less)
d = short discussion (over one sentence, but under one paragraph)
p = full paragraph
x = extensive coverage (anything over one full paragraph)
A modified version of this coding, used for the analysis of the People in the War category, was as follows:
n = no mention
b = brief mention (one mention)
x = extensive mention (more than one mention)
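To make the space-usage coding concrete, the short sketch below shows one way such codes could be tallied once recorded. It is a minimal illustration, not part of the original study: the event names, the codes assigned to them, and the numeric weights attached to the n/b/d/p/x scale are hypothetical assumptions added here, since the study entered its codes on charts rather than computing scores. The same structure would also accommodate the People in the War category, which uses only the n/b/x codes.

```python
# Illustrative sketch only; event names, codes, and weights are hypothetical,
# not data reported in the article.
from collections import Counter

# Ordinal weights reflecting the coding scheme: n < b < d < p < x
CODE_WEIGHTS = {"n": 0, "b": 1, "d": 2, "p": 3, "x": 4}

# coverage[country][event] = space-usage code assigned by the analyst
coverage = {
    "France": {"Invasion of Poland": "x", "Battle of Britain": "p"},
    "Japan": {"Invasion of Poland": "n", "Battle of Britain": "b"},
}

def summarize(coverage):
    """Count how often each code appears and compute a crude coverage score."""
    for country, events in coverage.items():
        counts = Counter(events.values())
        score = sum(CODE_WEIGHTS[c] for c in events.values())
        print(f"{country}: codes={dict(counts)}, total weight={score}")

summarize(coverage)
```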
World War II, 1939-1943
A comprehensive textbook presentation of the events that unfolded in the years 1939-1943 is vital to students’ understanding of World War II. As the Axis war machine began to roll, country after country was involved in the war, through conquest, as combatants, or sometimes as both, making World War II truly a global war. Because so many civilians were affected, presentations of both the military and the home front, combatant and resister, are necessary for a balanced view of the war itself and its effects. The years 1939-1943 were characterized by great triumphs and defeats, by allies who became adversaries, by the use of new battle tactics and weapons, and by powerful personalities who controlled the destinies of millions of people.
“War in Western Europe, 1939-1943”
The French, German, and Russian texts address each of the eight events in this topic, with the French authors providing “extensive coverage” of seven of the eight events (Table 1). The Japanese text provides the least coverage, addressing only three of the eight events, none with more than a “short discussion.” Both the French and Russian texts do a very good job of discussing activities outside their own respective countries, while the Japanese, British, and U.S. texts do little of this, providing only partial or no information in many areas. For example, in the discussion of Home Fronts, only France and Russia discuss any countries but their own. The American text provides far more information about the U.S. and the War before Pearl Harbor than for all of the other events combined in this topic. In fact, the chapter in the American text that contains a discussion of this topic is entitled “Franklin D. Roosevelt and the Shadow of War, 1933-1941,” as opposed to chapters entitled “World War Two” in the other texts. The differences in coverage provide readers with different interpretations of events. For instance, only the French text offers explanations as to why Britain and France were unable to help Poland. This is understandable because this would be a sensitive issue of national pride. It is surprising that the British text does not address this event; in fact, it is unclear in the British text whether or not Britain and France did intervene in Poland and exactly why the intervention was or would have been futile. Neither the German nor the Russian text relates that France and Britain entered the War as a result of the attack on Poland. British authors are the only ones to differentiate between the Battle of Britain and the Blitz. In addition to the unequal coverage that occurs, there are also differences of opinion on the interpretation of certain events. An example of this is the British destruction of the French fleet at Oran. The British text clearly states that Britain was afraid that Germany would gain use of the fleet, while the French text implies that since Germany promised it would not use the fleet, there was no danger of this occurring. An additional disagreement exists concerning the importance of the invasion of France by Italy, dismissed by the French and German authors as ineffective but credited in the British and American texts as being the final pressure to convince France to surrender.
“War in Eastern Europe, 1939-1943”
The Japanese textbook addresses only one of six events under this topic area and provides only a “brief mention” of this event, which is Hitler’s Invasion of Russia (Table 2). By failing to provide more discussion about this topic, the Japanese text has neglected an entire front of the War. Both the U.S. and British texts omit only one topic, the Resistance; however, they provide little information on the Soviet Expansion in Eastern Europe, which receives “extensive coverage” in the Russian text. The amount of Russian coverage of this topic is understandable since the focus is on war in the East; however, the Russian text provides a great deal of information about countries other than Russia, such as its discussion of the Mediterranean Sea and North African activities. Interestingly, under the event of Hitler’s Invasion of Russia, more information about Russian activities is contained in other countries’ texts. It is surprising that the Russian author does not mention the Battle of Stalingrad, referred to by the French authors as the turning point in the war. The Russian text gives the Battle of Moscow that distinction. Only the Russian and U.S. authors mention the Lend-Lease aid sent to the U.S.S.R., and the Russian text is the only one to emphasize the controversy that existed among the Allies because of the fighting in North Africa.
The German text includes information about the war in North Africa in early 1942, which is surprising since that was a time of German success. Rommel is not mentioned by name, nor is there any mention of the Italian mistakes in North Africa. The German text is very frank in its discussion of certain aspects of Nazi Germany. However, the author has a tendency to avoid much discussion of any military activities, whether they be German victories or defeats. The French and German texts provide the most in-depth coverage of the Holocaust, including information about the German anti-Jewish policies and actions, the concentration camps, and the results of the Holocaust. The French text is the only one of the six that uses the term “Holocaust,” although both the German and French texts include the phrase “Final Solution.” It is interesting that these two texts also contain some of the same document excerpts, including the Wannsee Conference speech and the Nuremberg testimony of the Auschwitz commander. There are discrepancies in the Russian interpretation of the Katyn massacre as compared to those by the French and German authors. The Russian text does not mention the massacre of Polish army officers, which is discussed in both the German and French texts; in fact, the Russian author attributes the destruction of the village of Katyn to Germany. This has long been a topic of disagreement between the Germans and Russians, and the texts of these countries reflect the controversial nature of this issue.
“War in the Pacific, 1939-1943”
The Japanese, French, and Russian texts include information about all four events in this topic (Table 3). This is the only topic in which the Japanese authors have addressed each event. Three out of the four events receive “extensive coverage” in the Japanese text. Ironically, the only event which does not is the period from Midway to the U.S. invasion of the Philippines, a period of Japanese decline. Both the Japanese and U.S. texts provide “extensive coverage” of the events leading to Pearl Harbor, although both focus almost exclusively on their respective domestic situations. The German, British, and U.S. texts all omit any discussion of Occupied Asia, although each includes information about Occupied Europe. Less emphasis on the Asian War, as a whole, than on the European is found in the German and British texts, and there is very little discussion of Japan itself by the German, British, or U.S. authors. The French text provides fairly equitable coverage of all events in this topic. It is the only text to discuss the joint chiefs of staff arrangement and the Allied strategy of defeating Hitler first. The Battle of Midway is neither mentioned by name nor described in either the British or the German text. It is described in greatest detail by the U.S. authors, which is understandable due to the U.S. participation in the battle, but it also receives a great deal of explanation in both the French and Russian texts. The Japanese text mentions the Battle of Midway but includes little else on this event. This is very similar to the coverage of the attack on Pearl Harbor, which also includes no details. In discussing the Japanese expansion into Southeast Asia, there is disagreement as to the goals of Japan. The British text insists that Japan wanted British colonies; the Russian text has Japan moving into French colonies; the German text says that American interests were threatened; and the U.S. text says that the Dutch East Indies were Japan’s primary goal.
The Japanese text mentions both the Dutch East Indies and French territories as goals. After the bombing of Pearl Harbor, there are differences among all of the texts over who declared war on whom. There is also a difference between the Japanese and American authors’ descriptions of the events leading up to the Pearl Harbor attack and of why it came so late; the U.S. authors accuse Japan of purposefully drawing out the negotiations.
World War II, 1943-1945
During 1943-1945, the advantage in the war slowly began to shift in favor of the Allies; however, brutal fighting was necessary to force Germany and Japan to surrender. As the fighting in North Africa came to a close, new fronts were opened in Italy and France, and fighting intensified in the Pacific. The Allied leaders met face to face in a series of conferences that would determine not only the fates of Germany and Japan, but would affect the development of the postwar world as well. One leader escaped assassination, while another died before the end of the war. Finally, the war ended in a cloud of death as the most horrifying weapon the world had ever known was unleashed against the Japanese. Through their textbooks, students must learn how and why the advantage in the war began to change as it did, after years of Axis victories. They must also receive information on why the fighting continued and what strategies were used eventually to bring the war to an end. Many important battles were fought, and influential leaders made decisions that impacted the war. These international personalities should be noted and the military operations explained or summarized in such a way that students are given information about what happened on each front. The face-to-face meetings between the Allied leaders are of great historical note and affected every country involved in the war. Balanced coverage of the decisions made at conferences is needed. It is also essential that the textbooks relate the developing tensions among the Allies that affected many of the decisions both on and off the battlefield, leading to a very different postwar world. Lives of civilians continued to be greatly impacted by the war, and some discussion of this is necessary in order to present a comprehensive picture of the war to students. As with the 1939-1943 topics, several events are widely agreed upon while disagreement occurs among the texts on other events. There are problems with serious omissions and nationalistic biases that make the textbook coverage of the time period 1943-1945 just as varied as in 1939-1943.
“War in Western Europe, 1943-1945”
With the exception of the Japanese text, all others address each of the four topic events (Table 4). The differences are more of omission than disagreement. Each text tends to focus on events and details closely related to its own national history. For example, the German text provides the most detailed information on the plot to kill Hitler, which is not mentioned in either the U.S. or Japanese texts. This text presents much more material on social history, such as on the suffering of the German victims of civilian bombings, than on military operations. No German general is mentioned anywhere in the chapters on World War II. The British text accords as much space to the invasion of Sicily, which involved British paratroopers, as to the remainder of the fighting in Italy.
The American authors are the only ones to mention the involvement of Canadian troops in the North African fighting. The North African invasion was not supported by the Russians, who wanted a European invasion. The U.S. authors also make it a point to defend the importance of the North African front, whereas the Russian author gives no particular importance to this fighting. Evident in the U.S. text is the concern over the Russian expansion into Eastern Europe, and this is the first topic area where that is expressed. The German surrender is reported somewhat differently in every text. The Japanese text is the only one which provides no dates. The French text provides the most information, giving both surrender dates, sites, and even the names of Allied commanders involved. The British and American texts mention only the May 7 surrender, which is the one in which they were directly involved; however, there was a second surrender that involved the U.S.S.R. and Germany. The Russian text mentions that surrender, on May 8, but implies that it was the only surrender, omitting the surrender at Rheims a day earlier.
“War in Eastern Europe, 1943-1945”
No country’s text addresses every event in this category (Table 5). Discussions of the Teheran, Yalta, and Potsdam conferences are the primary areas in which omissions occur, with the British text omitting all three. None of the countries’ texts provides comprehensive coverage of the conferences, and it would be impossible for students to know the scope of what occurred at each conference from what is presented in each text. For example, in discussing the Teheran Conference, the German author says that the war goals discussed were primarily Soviet demands for land and the division of Germany. The Russian text does not mention these, but says that the opening of a second front was a major goal. While the Russian text describes the conference climate as a heated one, the American authors state that things went very smoothly. Surprisingly, nothing appears about this conference in the French text. Although the French were not directly involved, the conference’s goal of a landing in France certainly concerned them. In addition, the French text has consistently provided the most comprehensive coverage of any text of the events outlined, this being its first omission. It is also interesting that the British authors do not mention this conference, in which Churchill was intimately involved, but this omission is consistent with the authors’ tendency to include little about any joint activities which involved the U.S.S.R. In discussing the Yalta Conference, the Japanese authors discuss only those decisions which affected Japan, leaving the false impression that these were the only items under discussion. Neither the French, British, nor the U.S. text mentions Yalta, which is somewhat surprising since Britain and the U.S. were two of the participants and since it was Roosevelt’s last Allied conference. The Japanese authors mention nothing about the Eastern Front, which is included to some degree in all of the other texts, nor do they ever mention that the U.S.S.R. joined the Allies. The amount of coverage accorded this event by the British and U.S. texts is disproportionately small when compared to the information included in these texts on the Western War, and in the case of the U.S., on the Pacific War.
“War in the Pacific, 1943-1945”
The French and U.S. texts address each event in this category, with the U.S. text providing “extensive coverage” in three of the four events (Table 5). In contrast, the dropping of the atomic bombs is related very factually in one sentence in the Japanese text. The French, German, and Russian authors agree that the use of atomic weapons was based on estimates of Allied casualties that would occur in the invasion of Japan. The German author concedes that this is one explanation, but also questions whether the atomic bombs also reflected a desire to exhibit American military power to the Soviets. The U.S. authors note only that the bombs were dropped because of the failure of Japan to surrender. Only the German author discusses the later results of the bombing, and only in the German text are there descriptions of the suffering of the victims. The texts differ in the casualty figures for Hiroshima; four different figures are given. The Japanese and British authors do not mention the Soviet declaration of war on Japan. Neither text has included much information about any Soviet operations. The U.S. text states that the Soviets entered the war when they were supposed to, but then disparagingly remarks that they had ulterior motives in doing so. The German text implies this as well. The Russian text notes not only the declaration of war on Japan but also mentions the transfer of Soviet troops from Eastern Europe to the borders of Manchuria.
“Leaders Throughout the War”
It is difficult to imagine writing a chapter on World War II without mentioning Hitler, Stalin, Churchill, or Roosevelt, but that is what the Japanese authors have done. Only Tojo is mentioned by name in the Japanese text (Table 7). This sole inclusion is consistent with the practice of the Japanese authors of totally ignoring or providing very little information on aspects of World War II outside of Japan. The French text includes the largest number of leaders, omitting only Yamamoto, who was also omitted in every other text, including that of the Japanese. Zhukov is mentioned only in the French text. Hitler, Churchill, Stalin, Roosevelt, and Eisenhower are mentioned in every text except that of Japan. In focusing primarily on these five personalities, most of the texts missed the opportunity to include some very remarkable leaders who accomplished significant feats, and the chance to broaden students’ knowledge of leaders to whom they might not be exposed in other courses. Especially surprising is the lack of mention of someone like Zhukov in the Russian text.
Ketchem (1982) concluded, following his analysis of international textbooks, that some students were presented with “profoundly inadequate information about World War II” (p. 100), and this conclusion is certainly valid in this study as well. Few of the textbooks which were examined are adequate to provide comprehensive, bias-free information to teach World War II. Most of the texts are biased in emphasis, content, or omissions. Excessive nationalism is sometimes present, creating distorted views of events being presented to students. There is the possibility of problems in international understanding resulting from a biased impression in the minds of the readers. Of all six textbooks examined, the French text is the most free from these problems. The most serious problems occur in the Japanese text, which ignores the European War, both East and West, to such an extent that not all of the main countries in the war are brought into the discussion, and whole fronts of the war are missing.
The Japanese authors omit the largest number of events and leaders from the topic outline, focusing almost exclusively on the Pacific war and, more particularly, on Japanese activities on that front. The earlier years of the Pacific war receive more discussion than the later years. Much of the information in the text is purely factual, containing no descriptions or explanations. Most events concerning both the Western and Eastern European fighting were omitted, and only one person from the “Leaders Throughout the War” topic was mentioned. The emphasis on the Pacific War and the lack of narrative echo the observations of authors in previous studies involving Japanese texts (Duke, 1969; S.S.D.C., 1981). Particularly disturbing is the fact that this textbook is used by over 62% of high schools in Japan, meaning that the majority of Japanese high school students are using a text which has inadequate coverage both because of the information that is included and that which is not. Failure to include more comprehensive information about the European arena potentially affects not only Japanese students’ understanding of World War II but also of postwar international relations, due to the impact of the war on these relations.
The French text includes numerous examples from many different countries that are used in the discussion of events. No topic receives substantially more coverage than the others, excluding a separate chapter devoted entirely to France. The French authors mention leaders from every country represented in the “Leaders Throughout the War” topic. The only events which receive significantly less coverage are those events in which the Soviet Union was the primary actor. French students would receive a more comprehensive presentation of the war if discussions of these events were expanded.
Military operations are covered in much less detail by the German author than are events affecting civilians. All military operations are briefly summarized, with few battles and people mentioned, both for those in which Germany was involved and those involving other countries. As in the Japanese text, the military coverage is factual, lacking both narration and description. The German author focuses primarily on German activities and operations. As far as the inclusion of events and leaders from the topic outline is concerned, the German author omits only three events and five leaders. Students using the German text are presented with a great variety and number of documents in the text. They are asked to draw conclusions, make inferences, and answer questions. In many cases, however, students will lack the factual background to consider certain national or international implications, having been provided too brief a summary of certain events in the war. Even the coverage of German military operations is too inadequate to provide a comprehensive understanding of what happened and why.
The British text contains very little discussion of activities which do not primarily feature Britain. All operations involving the Soviet Union and the Pacific are especially lacking in coverage. Three of the omitted events relate to the Eastern Front, and two omitted leaders relate to the Pacific front. Problems in the British text are similar to those noted in earlier studies involving British texts (Billington, 1966; Ketchem, 1982), where authors found a lack of comprehensive coverage.
Because of the primary emphasis on British actions during the war, students using this text will lack information about actions by other countries, which may result in an exaggerated view of the importance of British actions in winning the war at the expense of the other countries that were involved. The partial information that the British authors provide about Japanese and Soviet operations during the war potentially impairs British students’ understanding of postwar international relations.
The Russian text provides fairly equal coverage of all topics and events. Military operations are presented in a fairly factual manner, almost entirely devoid of narrative. The author omits only two events and four leaders. Although the Russian text provides a comprehensive view of the war, it lacks discussion of certain events and leaders that should be of particular interest to Russian students since this is a national history book. Inclusion of more information about these items would provide a more complete picture of the war as well. Curiously, the items omitted in the Russian text about Soviet activities are included in most of the other countries’ texts. The problems of distortion and error found to be present in Soviet texts in previous studies (Duke, 1969; Harbourt, 1931; Ketchem, 1982) do not exist in this text.
The U.S. authors provide the most coverage of those events in which the U.S. was involved. The major overall emphasis of the U.S. authors is the Pacific war. Within the two Pacific topics, however, the emphasis is overwhelmingly on U.S. activities. The U.S. text fails to include events which focus on civilian involvement in World War II. Only three leaders are omitted by the U.S. authors. Events are seldom presented only factually, at least where U.S. involvement occurs, and the discussions contain biased language. An example of this is the description of Stalin as a “hardened conspirator” and a “nasty Communist,” while references to U.S. personalities such as Eisenhower characterize him as “gifted” and “easy smiling.” No other text contains as much imbalance in this area. The tone and type of language used in the World War II chapter are consistent with those used throughout the text, and it may be that the colorful language is designed to engage the readers; however, in many instances, the language goes beyond interesting and engaging. Students using this text are presented with a U.S.-dominated World War II.
Despite the many problem areas found in the texts, there are also some outstanding features. Most of the authors provide very comprehensive coverage of the events involving their respective countries, which is to be hoped for, as these are national history textbooks. The French text not only provides the most comprehensive coverage of all aspects of World War II but, by incorporating many primary documents, photographs, and maps into the text, requires students to evaluate sources and formulate conclusions. The German text, as well, includes many documents, photographs, and maps. Like the French authors, the German author poses questions to be considered or provides materials to be interpreted. These textbooks require the student to do more than simply read and memorize the text material, as they provide an opportunity to actively involve the student in the learning process. Students are asked to formulate and support conclusions from the documents.
Active thinking as opposed to memorization is required, and the French and German texts lend themselves to discussion-based classrooms rather than to teacher-dominated lectures. The majority of the textbooks selected for this research inadequately provide students with comprehensive, bias-free information about World War II. The nature of these inadequacies lies in the information that is included in the texts, that which is not, and the emphasis given certain actions or events. Unless supplemental materials are used, students studying these texts will be presented with very different, and in some instances erroneous, depictions of a war which profoundly involved and affected their respective countries.
American Council on Education. (1947). A study of national history textbooks used in the schools of Canada and the United States (Publication Number 2). Washington, D.C.
Barth, J.L. (1991-1992). A comparative study of the current situation on teaching about World War II in Japanese and American classrooms. International Journal of Social Education, 6(3), 7-19.
Becker, C.L. (1955). What are historical facts? The Western Political Quarterly, VIII(3), 327-340.
Berghahn, V.R., & Schlissler, H. (1987). Introduction: History textbooks and perceptions of the past. Perceptions of History: International Textbook Research on Britain, Germany and the United States. New York: St. Martin’s Press, 1-16.
Billington, R.A. (1966). The historian’s contribution to Anglo-American misunderstanding. New York: Hobbs, Dorman and Company, Inc.
Duke, B.C. (1969). The Pacific war in Japanese and American high schools: A comparison of the textbook teachings. Comparative Education, 5(1), 73-82.
Goodman, G. (1983). The project. The History Teacher, 16(4), 541-543.
Harbourt, J. (1931). The world war in French, German, English and American secondary school textbooks. The First Yearbook NCSS, 54-117.
Julian, N.B. (1979). Treatment of women in United States history textbooks (ERIC Document Reproduction Service No. ED 178-371).
Ketcham, A.F. (1982). World War II events as represented in secondary school textbooks of former allied and axis nations. Unpublished doctoral dissertation, The University of Arizona.
Nose, C. (1986). George Eckert Institute for International Textbook Research. Braunschweig, Germany: George Eckert Institute for International Textbook Research.
Parsons, J. (1982). The nature and implication of textbook bias (ERIC Document Reproduction Service No. 280 769).
Peiser, A. (1971). An analysis of the treatment given 10 selected aspects of populism and the populist party in American history high school textbooks. Unpublished doctoral dissertation, New York University.
Social Studies Development Center. (1981). In search of mutual understanding: A final report of the Japan/United States textbook study. (ERIC Document Reproduction Service No. 200 500).
Social Studies Development Center. (1984). In search of mutual understanding: A final report of the Netherlands/United States textbook study. (ERIC Document Reproduction Service No. 257 761).
U.S./U.S.S.R. Textbook Study Project. (1981). Interim report. (ERIC Document Reproduction Service No. 210-213).
Wade, R.C. (1993). Content analysis of social studies textbooks: A review of ten years of research. Theory and Research in Social Education, XVI(3), 232-256.
Susan P. Santoli, St. Paul’s Episcopal School
Andrew Weaver, Auburn University
Colombia is a constitutional, multiparty republic. Presidential and legislative elections were held in 2018. Voters elected Ivan Duque Marquez president in a second round of elections that observers considered free and fair and the most peaceful in decades. The Colombian National Police force is responsible for internal law enforcement and is under the jurisdiction of the Ministry of Defense. The Migration Directorate, part of the Ministry of Foreign Affairs, is the immigration authority. The Colombian National Police shares law enforcement investigatory duties with the Attorney General’s Corps of Technical Investigators. In addition to its responsibility to defend the country against external threats, the army shares limited responsibility for law enforcement and maintenance of order within the country. For example, military units sometimes provided logistical support and security for criminal investigators to collect evidence in high-conflict or remote areas. Civilian authorities generally maintained effective control over security forces. There were credible reports that members of the security forces committed some abuses. Significant human rights issues included credible reports of: unlawful or arbitrary killings; torture and arbitrary detention by government security forces and armed groups; rape and abuse of women and children, as well as unlawful recruitment of child soldiers by armed groups; criminalization of libel; widespread government corruption; violence against and forced displacement of Afro-Colombian and indigenous persons; violence against lesbian, gay, bisexual, transgender, queer, and intersex persons; killings and other violence against trade unionists; and child labor. The government generally took steps to investigate, prosecute, and punish officials who committed human rights abuses, although some cases continued to experience long delays. The government generally implemented effectively laws criminalizing official corruption. The government was implementing police reforms focused on enhancing community-police relations, accountability, and human rights. Armed groups, including dissidents of the Revolutionary Armed Forces of Colombia, National Liberation Army, and drug-trafficking gangs, continued to operate. Armed groups, as well as narcotics traffickers, were significant perpetrators of human rights abuses and violent crimes and committed acts of extrajudicial and unlawful killings, extortion, and other abuses, such as kidnapping, torture, human trafficking, bombings, restriction on freedom of movement, sexual violence, recruitment and use of child soldiers, and threats of violence against journalists, women, and human rights defenders. The government investigated these actions and prosecuted those responsible to the extent possible. Section 1. Respect for the Integrity of the Person a. Arbitrary Deprivation of Life and Other Unlawful or Politically Motivated Killings There were reports that the government or its agents committed arbitrary or unlawful killings. According to the nongovernmental organization (NGO) Center for Research and Education of the Populace (CINEP), from January 1 through August 26, there were 28 cases of “intentional deaths of civilians committed by state agents.” According to government and NGO reports, police officers killed multiple civilians during nationwide protests that began on April 28. 
The NGO Human Rights Watch collected information linking 25 civilian deaths during the protests to police, including 18 deaths committed with live ammunition. For example, according to Human Rights Watch and press reports, protester Nicolas Guerrero died from a gunshot wound to the head on May 3 in Cali. Witness accounts indicated a police shooter may have been responsible for Guerrero’s death. As of July 15, the Attorney General’s Office opened investigations into 28 members of the police for alleged homicides committed during the protests, and two police officers were formally charged with homicide. Police authorities and the Attorney General’s Office opened investigations into all allegations of police violence and excessive use of force. Armed groups, including the National Liberation Army (ELN), committed numerous unlawful killings, in some cases politically motivated, usually in areas without a strong government presence (see section 1.g.). Investigations of past killings proceeded, albeit slowly, due to the COVID-19 pandemic and the national quarantine. From January 1 through July 31, the Attorney General’s Office registered six new cases of alleged aggravated homicide by state agents. During the same period, authorities formally charged four members of the security forces with aggravated homicide or homicide of a civilian. Efforts continued to hold officials accountable in “false positive” extrajudicial killings, in which thousands of civilians were killed and falsely presented as guerrilla combatants in the late 1990s to early 2000s. As of June, the Attorney General’s Office reported the government had convicted 1,437 members of the security forces in cases related to false positives since 2008. Many of those convicted in the ordinary and military justice systems were granted conditional release from prisons and military detention centers upon transfer of their cases to the Special Jurisdiction for Peace (JEP). The military justice system developed a protocol to monitor the whereabouts of prisoners granted conditional release and was responsible for reporting any anomalies to the JEP’s Definition of Juridical Situation Chamber to take appropriate action. The Attorney General’s Office reported there were open investigations of five retired and active-duty generals related to false positive killings as of July 31. The Attorney General’s Office also reported there were 2,535 open investigations related to false positive killings or other extrajudicial killings as of July 31. In addition, the JEP, the justice component of the Comprehensive System for Truth, Justice, Reparation, and Nonrepetition provided for in the 2016 peace accord with the Revolutionary Armed Forces of Colombia (FARC), continued to take effective steps to hold perpetrators of gross violations of human rights accountable in a manner consistent with international law. This included activities to advance Case 003, focused on extrajudicial killings or “false positives” largely committed by the First, Second, Fourth, and Seventh Army Divisions. In a February 18 ruling, the JEP concluded that, from 2002 to 2008, the army killed at least 6,402 civilians and falsely presented them as enemy combatants in a “systematic crime” to claim rewards in exchange for increased numbers of “enemy” combat casualties.
Several former soldiers and army officers, including colonels and lieutenant colonels convicted in the ordinary justice system, admitted at the JEP to additional killings that had not previously been investigated nor identified as false positives. On July 6, the JEP issued charges of crimes against humanity and war crimes against a retired brigadier general, nine other army officers, and one civilian in a case concerning the alleged extrajudicial killing and disappearance of at least 120 civilians in Norte de Santander in 2007 and 2008. The killings were allegedly perpetrated by members of Brigade 30, Mobile Brigade 15, and Infantry Battalion 15 “General Francisco de Paula Santander.” On July 15, the JEP issued a second set of war crimes and crimes against humanity indictments against 15 members of the Artillery Battalion 2 “La Popa” for killings and disappearances that took place in the Caribbean Coast region between 2002 and 2005. In 2019 there were allegations that military orders instructing army commanders to double the results of their missions against guerillas, criminal organizations, and armed groups could heighten the risk of civilian casualties. An independent commission established by President Duque to review the facts regarding these alleged military orders submitted a preliminary report in July 2019 concluding that the orders did not permit, suggest, or result in abuses or criminal conduct and that the armed forces’ operational rules and doctrine were aligned with human rights and international humanitarian law principles. As of September a final report had not been issued. Human rights organizations, victims, and government investigators accused some members of government security forces of collaborating with or tolerating the activities of organized-crime gangs, which included some former paramilitary members. According to the Attorney General’s Office, between January and July 31, 15 police officials were formally accused of having ties with armed groups. According to a February 22 report from the Office of the UN High Commissioner for Human Rights (OHCHR), 133 human rights defenders were killed in 2020, but the OHCHR was only able to document 53 of those cases, due to COVID-19 pandemic-related movement restrictions. According to the Attorney General’s Office, in the cases of more than 400 killings of human rights defenders from January 2016 to August 2021, the government had obtained 76 convictions. According to the OHCHR, 77 percent of the 2020 human rights defender killings occurred in rural areas, and 96 percent occurred in areas where illicit economies flourished. The motives for the killings varied, and it was often difficult to determine the primary or precise motive in individual cases. For example, on August 21, two armed men entered the motorcycle shop of Eliecer Sanchez Caceres in Cucuta and shot him multiple times, killing him. Sanchez was the vice president of a community action board and had previously complained to authorities about receiving threats from armed groups. Police officials immediately opened an investigation into the killing, which was underway as of October 31. The Commission of the Timely Action Plan for Prevention and Protection for Human Rights Defenders, Social and Communal Leaders, and Journalists, created in 2018, strengthened efforts to investigate and prevent attacks against social leaders and human rights defenders. 
The Inspector General’s Office and the human rights ombudsman continued to raise awareness regarding human rights defenders through the Lead Life campaign, in partnership with civil society, media, and international organizations. Additionally, there was an elite Colombian National Police (CNP) corps, a specialized subdirectorate of the National Protection Unit (NPU), a special investigation unit of the Attorney General’s Office responsible for dismantling criminal organizations and enterprises, and a unified command post, which shared responsibility for protecting human rights defenders from attacks and investigating and prosecuting these cases. By law the Attorney General’s Office is the primary entity responsible for investigating allegations of human rights abuses committed by security forces, except for conflict-related crimes, which are within the jurisdiction of the JEP (see section 1.c. for additional information regarding investigations and impunity). According to the Attorney General’s Office, there were six formal complaints of forced disappearance from January 1 through July. As of December 2020, the National Institute of Forensic and Legal Medicine registered 32,027 cases of forced disappearance since the beginning of the country’s armed conflict. Of those, 923 persons were found alive and 1,975 confirmed dead. According to the Attorney General’s Office, as of July there were no convictions in connection with forced disappearances. The Special Unit for the Search for Disappeared Persons, launched in 2018, continued to investigate disappearances that occurred during the conflict. c. Torture and Other Cruel, Inhuman, or Degrading Treatment or Punishment Although the law prohibits such practices, there were reports government officials employed them. CINEP reported that through August, security forces were allegedly involved in 19 cases of torture, including 40 victims. Members of the military and police accused of torture generally were tried in civilian rather than military courts. NGOs including Human Rights Watch reported that police beat and sexually assaulted demonstrators during the nationwide April-June protests. Human Rights Watch documented 17 cases of beatings, including one that resulted in death. The human rights Ombudsman’s Office and multiple NGOs reported at least 14 cases of alleged sexual assault by police officers during the protests. Police launched internal investigations of all allegations of excessive use of force. The Attorney General’s Office reported it convicted six members of the military or police force of torture between January and July 31. In addition the Attorney General’s Office reported 50 continuing investigations into alleged acts of torture committed by police or the armed forces through July. CINEP reported organized-crime gangs and armed groups were responsible for four documented cases of torture including seven victims through August. CINEP reported another 19 cases of torture in which it was unable to identify the alleged perpetrators. According to government and NGO reports, protesters kidnapped 12 police officials during the nationwide protests, torturing some. According to NGOs monitoring prison conditions, there were numerous allegations of sexual and physical violence committed by guards and other inmates. The Attorney General’s Office is the primary entity responsible for investigating allegations of human rights abuses committed by security forces, except for conflict-related crimes, which are within the jurisdiction of the JEP. 
The JEP continued investigations in its seven prioritized macro cases with the objective of identifying patterns and establishing links between perpetrators, with the goal of identifying those most responsible for the most serious abuses during the conflict. Some NGOs complained that military investigators, not members of the Attorney General’s Office, were sometimes the first responders in cases of deaths resulting from actions of security forces and might make decisions about possible illegal actions. The government made improvements in investigating and trying cases of abuses, but claims of impunity for security force members continued. This was due in some cases to obstruction of justice and opacity in the process by which cases were investigated and prosecuted in the military justice system. Inadequate protection of witnesses and investigators, delay tactics by defense attorneys, the judiciary’s failure to exert appropriate controls over dockets and case progress, and inadequate coordination among government entities that sometimes allowed statutes of limitations to expire, resulting in a defendant’s release from jail before trial, were also significant obstacles. President Duque signed three decrees in March to modernize the military justice system. The decrees transfer the court system from the Ministry of Defense to a separate jurisdiction with independent investigators, prosecutors, and magistrates. This was a step toward transitioning the military justice system from the old inquisitorial to a newer accusatory justice system. Transition to the new system continued slowly, and the military had not developed an interinstitutional strategy for recruiting, hiring, or training investigators, crime scene technicians, or forensic specialists, which is required under the accusatory system. As such, the military justice system did not exercise criminal investigative authority; all new criminal investigation duties were conducted by judicial police investigators from the CNP and the Attorney General’s Corps of Technical Investigators. In June, President Duque announced police reform plans focused on enhancing community-police relations, accountability, and human rights. Since the announcement, the CNP established a human rights directorate that responds directly to the director general of police and hired a civilian to oversee it. In partnership with a local university, the CNP also developed a human rights certification course for the entire police force and began training 100 trainers to replicate this 200-hour academic and practical course throughout the country. The CNP also enhanced police uniforms with clear and visible identifiable information to help citizens identify police officers who utilize excessive force or violate human rights protocols. Prison and Detention Center Conditions Apart from some new facilities, prisons and detention centers were harsh and life threatening due to overcrowding, inadequate sanitary conditions, poor health care, and lack of other basic services. Poor training of officials remained a problem throughout the prison system. Physical Conditions: Overcrowding existed in men’s and in women’s prisons. The National Prison Institute (INPEC), which operated the national prisons and oversaw the jails, estimated there were 99,196 persons incarcerated in 132 prisons at a rate of approximately 17 percent over capacity. The government made efforts to decrease the prison population in the context of COVID-19. 
The law prohibits holding pretrial detainees with convicted prisoners, although this frequently occurred. Juvenile detainees were held in separate juvenile detention centers. The Superior Judiciary Council stated the maximum time a person may remain in judicial detention facilities is three days. The same rules apply to jails located inside police stations. These regulations were often violated. The practice of preventive detention, in combination with inefficiencies in the judicial system, continued to result in overcrowding. The government continued to implement procedures introduced in 2016 that provide for the immediate release of some pretrial detainees, including many accused of serious crimes such as aggravated robbery and sexual assault. Physical abuse by prison guards, prisoner-on-prisoner violence, and authorities’ failure to maintain control were problems. INPEC’s office of disciplinary control continued to investigate allegations that some prison guards routinely used excessive force and treated inmates brutally. As of July 31, INPEC reported 14 disciplinary investigations against prison guards for such actions as physical abuse and personal injuries. The Inspector General’s Office reported 46 disciplinary investigations of INPEC officials from January through August 5. INPEC reported 159 deaths in prisons, jails, pretrial detention, or other detention centers through July 31, including four attributed to internal fights. Many prisoners continued to face difficulties receiving adequate medical care. Nutrition and water quality were deficient and contributed to the overall poor health of many inmates. Inmates stated authorities routinely rationed water in many facilities, which officials attributed to city water shortages. INPEC’s physical structures were generally in poor repair. The Inspector General’s Office noted some facilities had poor ventilation and overtaxed sanitary systems. Prisoners in some high-altitude facilities complained of inadequate blankets and clothing, while prisoners in tropical facilities complained that overcrowding and insufficient ventilation contributed to high temperatures in prison cells. Some prisoners slept on floors without mattresses, while others shared cots in overcrowded cells. Administration: Authorities investigated credible prisoner complaints of mistreatment and inhuman conditions, including complaints of prison guards soliciting bribes from inmates, but some prisoners asserted the investigations were slow. Independent Monitoring: The government permitted independent monitoring of prison conditions by local and international human rights groups. INPEC required a three-day notice before granting consular access. Some NGOs complained that authorities, without adequate explanation, denied them access to visit prisoners. d. Arbitrary Arrest or Detention The law prohibits arbitrary arrest and detention and provides for the right of any person to challenge the lawfulness of his or her arrest or detention in court. There were allegations, however, that authorities detained citizens arbitrarily. CINEP reported 85 cases of arbitrary detention involving 394 victims committed by state security forces through August 1. Other NGOs provided higher estimates of arbitrary detention, reporting more than 2,000 cases of arbitrary arrests, illegal detentions, or illegal deprivations of liberty committed in the context of the national protests. 
Arrest Procedures and Treatment of Detainees Authorities must bring detained persons before a judge within 36 hours to determine the validity of the detention, bring formal charges within 30 days, and start a trial within 90 days of the initial detention. Public defenders contracted by the Office of the Ombudsman assisted indigent defendants but were overloaded with cases. Detainees received prompt access to legal counsel and family members as provided for by law. Bail was generally available except for serious crimes such as murder, rebellion, or narcotics trafficking. Authorities generally respected these rights. Arbitrary Arrest: The law prohibits arbitrary arrest and detention; however, this requirement was not always respected. NGOs characterized some arrests as arbitrary detention, including arrests allegedly based on tips from informants about persons linked to guerrilla activities, detentions by members of the security forces without a judicial order, detentions based on administrative authority, detentions during military operations or at roadblocks, large-scale detentions, and detentions of persons while they were “exercising their fundamental rights.” Multiple NGOs alleged that police abused a temporary protection mechanism during the national protests to detain protesters arbitrarily. For example, NGOs and press reported that police in Cali arbitrarily detained protester Sebastian Mejia Belalcazar on May 28 for more than 24 hours. Mejia alleged police beat and threatened him before releasing him. According to NGOs, there was no official record of the arrest. Pretrial Detention: The judicial process moved slowly, and the civilian judicial system suffered from a significant backlog of cases, which led to large numbers of pretrial detainees. Of the 99,196 prison detainees, 26,651 were in pretrial detention. The failure of many jail supervisors to keep mandatory detention records or follow notification procedures made accounting for all detainees difficult. In some cases detainees were released without a trial because they had already served more than one-third of the maximum sentence for their charges. Civil society groups complained authorities subjected some community leaders to extended pretrial detention. f. Arbitrary or Unlawful Interference with Privacy, Family, Home, or Correspondence The law prohibits such actions, but there were allegations the government failed to respect these prohibitions. Government authorities generally need a judicial order to intercept mail or email or to monitor telephone conversations, including in prisons. Government intelligence agencies investigating terrorist organizations sometimes monitored telephone conversations without judicial authorization; the law bars evidence obtained in this manner from being used in court. NGOs continued to accuse domestic intelligence or security entities of spying on lawyers and human rights defenders. The Attorney General’s Office reported that as of July 31, there were no active criminal investigations underway in connection with illegal communications monitoring. The Inspector General’s Office reported that as of August 5, there were 40 disciplinary investigations against 38 state agents in connection with illegal surveillance and illegal monitoring of communications. Section 4. 
Corruption and Lack of Transparency in Government The law provides criminal penalties for official corruption, and the government generally implemented these laws effectively, although officials sometimes engaged in corrupt practices without punishment. There were numerous reports of government corruption during the year, particularly at the local level. Revenues from transnational organized crime, including drug trafficking, exacerbated corruption. Corruption: Through July 31, the Attorney General’s Office registered 8,414 allegations related to corruption and 51 active investigations. In August press reports alleged government contractors embezzled a $17 million advance from the Ministry of Technology and Communications in connection with a project to connect rural schools to the internet. The contractors allegedly failed to comply with the commitments in the contract, and the Inspector General’s Office opened an investigation.
- Open Access

Using a Community-Engaged Research (CEnR) approach to develop and pilot a photo grid method to gain insights into early child health and development in a socio-economic disadvantaged community

Research Involvement and Engagement volume 3, Article number: 29 (2017)

Plain English summary

This paper reports on the use of a Community-Engaged Research (CEnR) approach to develop a new research tool to involve members of the community in thinking about priorities for early child health and development in a deprived area of the UK. The CEnR approach involves researchers, professionals and members of the public working together during all stages of research and development. Researchers used a phased approach to the development of a Photo Grid tool, including reviewing tools which could be used for community engagement and testing the new tool based on feedback from workshops with local early years professionals and parents of young children. The Photo Grid tool is a flat square grid on which photo cards can be placed. Participants were asked to place at the top of the grid the photos they considered most important for early child health and development, working down to the less important ones at the bottom. The findings showed that the resulting Photo Grid tool was a useful and successful method of engaging with the local community. The evidence for this is the high number of participants who completed a pilot study and who provided feedback on the method. By involving community members throughout the research process, it was possible to develop a method that would be acceptable to the local population, thus decreasing the likelihood of a lack of engagement. The success of the tool is therefore particularly encouraging as it engages “seldom heard voices,” such as those with low literacy. The aim of this research was to consult with professionals and parents to develop a new research toolkit (Photo Grid) to understand community assets and priorities in relation to early child health and development in Blackpool, a socio-economically disadvantaged community. A Community-Engaged Research (CEnR) approach was used to consult with community members. This paper describes the process of using a CEnR approach in developing a Photo Grid toolkit. A phased CEnR approach was used to design, test and pilot a Photo Grid tool. Members of the Blackpool community (parents with children aged 0-4 years, health professionals, members of the early years workforce, and community development workers) were involved in the development of the research tool at various stages. They were recruited opportunistically via a venue-based time-space sampling method. In total, 213 parents and 18 professionals engaged in the research process. Using a CEnR approach allowed effective engagement with the local community and professionals, evidenced by high levels of engagement throughout the development process. This approach improved the acceptability and usability of the resulting Photo Grid toolkit. Community members found the method accessible, engaging, useful, and thought provoking. The Photo Grid toolkit was seen by community members as accessible, engaging, useful and thought provoking in an area of high social deprivation, complex problems, and low literacy. The Photo Grid is an adaptable tool which can be used in other areas of socio-economic disadvantage to engage with the community to understand a wide variety of complex topics.
What a child experiences during the early years usually provides a trajectory for the rest of their life [1, 2]. In particular, a young child’s development is profoundly affected by their early care-giving experiences. In neighbourhoods where parents face multi-level complex problems such as substance misuse, mental ill health or intimate partner violence, children are affected too. Exposure to high levels of early adversity and toxic stress through increased allostatic load predisposes children to problems in learning, behaviour and health across their life course [3,4,5,6,7]. Blackpool is currently the most deprived of all 326 local authorities in the UK . Across the town there are high levels of domestic violence, alcohol related hospital admissions and mental ill-health which is further compounded by low educational attainment and literacy levels. Blackpool has the highest rate of looked after children in the UK (164 per 10,000) as well as high levels of child abuse and neglect . Children growing up in Blackpool have some of the worst outcomes in the UK. In April 2015 Blackpool Better Start was allocated £45 million over 10 years from the Big Lottery Fund with the aim to improve outcomes for children from conception to 3 years in three key areas: language and communication, social and emotional development, and diet and nutrition. The initiative aims to use early intervention focused on prevention to improve the health and developmental outcomes of young children at two developmental milestones; healthy gestation and birth, and school readiness. In order to support families and children living in communities like Blackpool, high quality, effective, evidence-based programmes should be implemented. However, implementing a suite of programmes and increasing access to services and resources is often not enough to substantially change child health and developmental outcomes . The most successful initiatives tend to have the following characteristics: they address multiple social determinants of health, utilise community development approaches to tailor and align interventions to community assets and priorities [11,12,13,14]; have shared goals between partners and use collaborative methods to build trust and generate appropriate change [16, 17]. In order to select, develop and implement a suite of interventions to address early child health and development in Blackpool, it was important to first understand community needs, priorities and readiness for change. By utilising community engagement, a culture conducive to long-lasting change and an effective shift towards improved child outcomes can be created. In the UK, many public health interventions which aim to improve health or reduce health inequalities are now involving the community in programme design and development [18,19,20]. However, in areas of high need, researchers often find it difficult to engage and collaborate with the community, particularly when using traditional methods unsuitable for low literacy populations [21, 22]. A review of several research methods deemed them unsuitable for engaging with the Blackpool community. The current paper describes the process of using a Community-Engaged Research (CEnR) approach to develop an acceptable, visual and pragmatic tool (Photo Grid) to understand local needs, priorities and readiness for change. Community-Engaged Research (CEnR) principles were used to develop and test a Photo Grid as a research and engagement tool. 
CEnR has become increasingly popular across philanthropic organisations, academic institutions and governmental domains as it requires partnership development, co-operation, and a commitment to addressing local community issues [24,25,26]. An overview of CEnR principles is presented in Table 1 (adapted from [27,28,29]). Using a CEnR research design resulted in a three-phase development plan. In Phase 1, the research team reviewed existing literature and presented identified research methods to local professionals. Further information on participants is provided in the following section. The most beneficial aspects of each method and the barriers each might present to community participation were identified by the group. An initial list of factors considered to be most important for early child health and development locally was drawn up by local professionals, allowing for the initial development of the Photo Grid and accompanying materials in line with local population needs. Local community members were not included in Phase 1 to ensure any factors that could cause distress were removed or reframed appropriately. In Phase 2, local community members and members of the early years workforce were asked to trial a prototype Photo Grid and provide feedback based on their own experiences and local knowledge. The toolkit was then adapted accordingly. Participants in this phase also provided advice and support with recruitment and data collection for Phase 3. In Phase 3, the Photo Grid was piloted within local venues with community members. Feedback was gained with regard to the engagement, understanding and value of the Photo Grid. Phase 1 participants were a convenience sample of health professionals (midwives), psychologists and community development workers who were approached in the workplace (n = 10). In Phase 2, five participants were recruited from a local parent group. Each parent expressed an interest in early child health and development, had a child aged 0-4 years, and had lived in Blackpool for a minimum of 5 years. They were asked to participate in a demonstration, discussion and development group to look at the new way of engaging local families in the Better Start initiative. Also participating in Phase 2 were eight members of the early years workforce, recruited from local children’s centres. They were asked to participate in the development of a new community engagement tool which would be piloted in their setting. Each had worked extensively in the Blackpool community (minimum 3 years) and could provide widespread knowledge about local families with young children. In Phase 3, venue-based time-space sampling was used. This is a probability-based strategy for recruiting members of a target population who congregate at specific locations and times. In total, 208 individuals from children’s centres and other early years settings (e.g. faith-based toddler groups) were asked to take part in an activity looking at priorities for early child health and development. Substantial interest in the activity allowed the target sample of 200 participants to be exceeded. Most participants (n = 188) provided feedback on the Photo Grid tool, a response rate of 90.4%. A small incentive of refreshments was offered to participating community members as a token of gratitude for their time, energy and resources.
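Venue-based time-space sampling of this kind can be thought of as building a sampling frame of venue-and-time units where the target population congregates and then selecting units at random for recruitment visits. The sketch below is purely illustrative and is not taken from the published toolkit; the venue names, session times, and number of selected units are invented for the example.

```python
import random

# Illustrative sampling frame: venues where parents of young children congregate,
# crossed with weekly session times (all names and times are invented examples).
venues = ["Children's Centre A", "Children's Centre B",
          "Faith-based toddler group", "Library rhyme time"]
sessions = ["Mon 09:30", "Wed 10:00", "Fri 13:30"]

# Each venue-time pair is one unit in the sampling frame.
sampling_frame = [(venue, session) for venue in venues for session in sessions]

def draw_recruitment_schedule(frame, n_units, seed=1):
    """Randomly select venue-time units for recruitment visits."""
    rng = random.Random(seed)
    return rng.sample(frame, k=min(n_units, len(frame)))

for venue, session in draw_recruitment_schedule(sampling_frame, n_units=6):
    print(f"Recruit at {venue} during {session}")
```

The probability-based character of the method comes from randomly selecting venue-time units rather than individuals, which is what makes it workable for populations that are hard to reach through individual-level sampling frames.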
Opportunities to pilot the tool and provide insights into priorities for early child health and development were promoted using posters/leaflets distributed within the children’s centres, an advert placed in the local community newspaper, and social media [32, 33]. Table 2 presents the demographic characteristics of the Phase 3 participants. The majority of participants were female (91%) and currently had a child aged 3 years or under (70%), thereby falling within the Blackpool Better Start population of interest. However, young parents and fathers were underrepresented in the sample, with only 1% of participants aged 20 years or under and 9% males.

Phase 1: Photo grid development with professional workers

To explore the community needs, priorities and readiness for change with regard to early child health and development, three “traditional” research methods were discussed in a meeting by a group of professional workers. These were: Q-Methodology [34, 35]; Rank Order Methods; and Photo-Elicitation. Q-Methodology uses a sorting technique to examine “points of view” around a topic. Participants are grouped by similar opinions. Rank Order Methods involve participants placing a set of items in some form of order. The measure of order can include liking, effectiveness, importance, and so on. Photo-elicitation is a method of interviewing which uses visual images to elicit information from participants. These were selected owing to their pragmatic and simple nature and their ability to be used within a variety of settings and populations, whilst providing rich, detailed information without being burdensome [38, 39]. The benefits and challenges presented by each method were considered with the population of Blackpool in mind. Findings from these discussions are summarised in Table 3. The decision was made to develop a new research and engagement tool in order to best fit the local population and their needs. The resulting Photo Grid amalgamated the most beneficial features of each “traditional” method. The simple structure of the grid and cards was utilised from Q-methodology. Photo cards were proposed to represent factors associated with early child health and development. Participants in this phase strongly advocated for the use of images rather than statements on the cards to account for the low literacy levels of the target population, previously a significant barrier to participation in both service access and research. It was thought that the interactive process of placing the cards on a large grid would be an engaging method. The simple linear data coding of the rank order method was applied to the grid to gain an understanding of common needs and priorities across the community (method and results not reported here). Lastly, a “think-out-loud” protocol was adopted to capture each individual interpretation of images and positioning of cards on the Photo Grid, a method anticipated to increase comfortable disclosure, allowing a relationship to be built between facilitators and community members. Five key areas of early child health and development (healthy gestation and birth, social and emotional development, language and communication, diet and nutrition, and school readiness) were used to generate a long list of factors (n = 60). The list was streamlined following discussions with Phase 1 participants, where similar factors were combined and factors with the potential to be emotive were removed (e.g. parental drug use and intimate partner violence).
Following this iterative process, 37 factors remained. Photo cards were designed to represent each of these 37 factors; the front of each card had a title and an image representing the factor, the reverse included a standardised definition intended for use if participants required further explanation or examples. Following further discussions, three factors were combined/ removed and a previously unconsidered factor was added to the set. Three images were changed in order to ensure consistency with current health messaging and advice. The titles of the final 35 cards can be seen in Additional file 1. To complete the toolkit, a minimal set of non-identifiable demographic questions (gender, age, single parent status, no. of children and age of youngest child) were included as tick boxes at the side of the Photo Grid. The tool was designed using PVC coated cardboard, to make each grid reusable. Photographs of each completed Photo Grid were taken as a record of the card placement and demographic information before being wiped clean. A short instruction sheet and verbal guidance was designed to standardise the information provided to participants. A recording sheet was designed to enable facilitators to record conversation details eliciting valuable qualitative insights. This included four open-ended questions which enquired about (1) the ordering of the cards, (2) the relevance of the factors, (3) opinions about the usefulness of the Photo Grid tool as a research and engagement tool and (4) provided the opportunity for any other information to be provided. These were linked to the corresponding photographs using a unique identifying number. Phase 2: Photo grid testing and adaption with (a) local parents group members and (b) early years workers Five local parents participated in a demonstration, discussion and development group to test and feedback on the resulting prototype Photo Grid. Three main pieces of positive feedback emerged: (1) the activity was interesting, enjoyable, and prompted group discussion regarding early child health and development priorities; (2) the use of images on cards made it easy to discuss the factors in an unassuming manner; and (3) there was appreciation for the proposed wipe clean, re-useable design of the Photo Grid. Two areas for improvement were identified: (1) Initially, the number of cards was overwhelming. Working through this issue, the task was made more manageable by adding a sorting step to the protocol. This involved sorting the cards into high, middle and low priority groups prior to placing the cards on the grid. (2) The prototype adopted a traditional Q-Grid layout where cards are placed from left to right, low to high priority (Fig. 1a). Participants found this layout confusing, opting to place cards from top to bottom with cards reflecting a higher priority placed at the top of the Photo Grid. In order to ensure ease and consistency in completion of the task, the orientation of the Photo Grid was changed and a directional arrow added for clarity (Fig. 1b). The instruction sheet and verbal guidance were modified to complement the protocol changes. Following these modifications, the process was repeated with eight early years’ workers. They confirmed the suitability of the activity for use with the local community, reiterating similar positive feedback to that cited by the local parents group members. Additionally they considered the inclusion of each card based on the appropriateness of the image, title and explanatory statement. 
At this point, three cards were edited for clarity of wording and a further demographic question (employment status) was added to the side of the Photo Grid tool. Three of the five local parents involved in Phase 2 were interested in continued involvement in Phase 3 of the research. Following the CEnR approach, they were trained by the research team in data collection protocols and acted as volunteer facilitators in Phase 3. This allowed parents to gain first-hand experience of research, broaden their skill sets, and increase the capacity for study recruitment and data collection.

Phase 3: Photo grid pilot within local community members in children’s settings

The aim of Phase 3 was to pilot the Photo Grid toolkit with the local community. Participants were asked to individually complete the Photo Grid so that it represented the most important factors for early child health and development for themselves and the local community, and to provide feedback on the Photo Grid as a research and engagement tool. The majority of participants (73.5%) agreed the Photo Grid was a good engagement tool and would elicit an overview of community priorities for early child health and development. Some participants commented that they enjoyed completing the activity (11.5%), with three specifically attributing this to the use of images rather than words. Many participants (35.5%) described how the task had allowed them to “think again” about priorities they had for their own children and to use the time to reflect on what they believed to be important. Conversely, a small number of participants (2%) did not find the activity useful, stating that they were already confident in their own priorities as a parent. Although participants were encouraged to complete the grid independently, a small number completed the task in pairs or groups (4.5%). On these occasions, the cards prompted discussion as groups debated each factor before coming to consensus on high, middle and low priorities. Participants were happy that the Photo Grid allowed the opportunity for their opinions about what action is needed locally to be heard, and recognised that it gave them the chance to learn more about early child health and development. Approximately a quarter of participants provided further information about the ease of completing the task and/or supplied suggestions for improvements for future research. These participants felt that many of the cards were top priorities and would have liked the option to place a greater number of cards at the top of the grid. They highlighted a need for greater specificity around the age of the child, as the pregnancy period was often prioritised over early infancy. Participants noted that their priorities differed depending on their child’s age and developmental stage, behaviour, and personality characteristics. This feedback has since been incorporated into the toolkit instructions. A small number of participants (3.5%) found the task challenging, the number of cards overwhelming, and the more conceptual cards difficult to comprehend. Additional facilitator support was provided to these individuals, who subsequently reported enjoyment upon completing the Photo Grid. The current paper describes the process of using a Community-Engaged Research (CEnR) approach to develop an acceptable, visual and pragmatic tool (Photo Grid) to understand the Blackpool community’s needs and priorities for early child development.
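The methods note that a simple linear coding of the rank-order placements was applied to the completed grids, although that analysis is not reported in the paper. The sketch below shows one plausible way such coding and aggregation could work; the card names, number of grid rows, and scoring rule are assumptions made only for illustration and do not reproduce the authors’ actual analysis.

```python
from collections import defaultdict

# Illustrative completed grids: each maps a photo card to the row it was placed in,
# with row 1 at the top (highest priority). Card names are invented examples.
completed_grids = [
    {"Healthy diet": 1, "Talking and reading": 1, "Safe home": 2, "Play with others": 3},
    {"Talking and reading": 1, "Safe home": 1, "Healthy diet": 2, "Play with others": 2},
]

N_ROWS = 5  # assumed number of rows on the grid

def linear_score(row, n_rows=N_ROWS):
    """Linear coding: the top row scores n_rows points, the bottom row scores 1."""
    return n_rows - row + 1

def aggregate_priorities(grids):
    """Average the linear scores for each card across all completed grids."""
    totals, counts = defaultdict(float), defaultdict(int)
    for grid in grids:
        for card, row in grid.items():
            totals[card] += linear_score(row)
            counts[card] += 1
    ranked = [(card, totals[card] / counts[card]) for card in totals]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

for card, score in aggregate_priorities(completed_grids):
    print(f"{card}: mean priority score {score:.2f}")
```

Under this assumed scheme, cards that most participants place near the top of the grid receive the highest mean scores, giving a crude community-level ordering of priorities that could then be read alongside the qualitative “think-out-loud” notes.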
A CEnR approach involves researchers, professionals and members of the public working together during all stages of research and development. The tool was seen by community members as accessible, engaging, useful and thought provoking in an area of high social deprivation, complex problems, and low literacy. Using a CEnR approach proved to be effective, as evidenced by the numbers engaged in the development and pilot phases, some of whom remained engaged throughout, taking on volunteer facilitator roles. By involving professional workers, parents, early years workers and community members in the development and testing of the Photo Grid, the usability and appropriateness of the instrument were maximised. Participants were able to engage with researchers in a meaningful way, providing valuable insights into local needs and priorities around early child health and development. The CEnR approach allowed a mutually beneficial partnership to form between research staff and local community members. By involving parents and the community at each stage of the research (i.e. toolkit development, trained parent facilitators, and dissemination), there are promising signs that a culture of trust and collaboration is in its early stages of development. In order to mitigate any potential distress, a decision was taken not to involve community members in Phase 1 of the research. There was concern that discussion of including potentially emotive factors (e.g. intimate partner violence, parental substance misuse) might cause upset to those who had experienced them directly. Upon reflection, it may have been beneficial to involve community members in this stage of the research. As part of a collaborative approach, representatives from the community should be considered equally able to decide what may, or may not, cause feelings of distress. Future research using the Photo Grid method to investigate other areas of interest should consider involving community members as early as possible. However, all participants should be made aware that potentially distressing topics may be discussed and should be provided with a list of support services as a precaution. As with many community-based projects, recruitment and sampling were limiting factors. In particular, fathers and young parents were under-represented in all phases of the Photo Grid development and testing. A venue-based time-space sampling method gave opportunistic yet resourceful access to community members. However, those who do not engage with services were subsequently not involved in the tool design and piloting. Future research should examine whether the Photo Grid tool is as successful in engaging the unengaged. In addition, whilst using a CEnR framework was effective, community involvement in research can be measured on a continuum. Community-Based Participatory Research (CBPR) forms the ideal or gold standard of the approach, aiming for a full partnership between researchers and the community in all areas of research design, including shared ownership of materials developed and joint interpretation of findings. Whilst powerful and conducive to creating a culture of understanding and positive change, this is difficult to achieve and requires the development and maintenance of long-term relationships [26, 40, 41]. Until these relationships are built, utilising a CEnR framework is considered most effective.
As this method of interacting with the community has been successful, it will be used to gain a more in depth understanding of individual topics in more detail throughout the span of Blackpool Better Start. Forthcoming work will utilise the findings from the Photo Grid analysis (not reported) to tailor the development and implementation of programmes to suit the local context. The Photo Grid tool appears reliable with feedback about the tool relatively consistent across all participants. Future research utilising the tool in another area with similar deprivation and literacy levels may enhance its reliability. Similarly, the Photo Grid tool was successful in engaging participants in research and eliciting discussions around important factors associated with early child health and development. This suggests that the tool is a new, successful method of gaining information on this subject. Future research adapting the tool to prompt community discussion around other topics will allow for a further assessment of its validity. It is hoped that other researchers can learn from the CEnR process detailed in this paper and utilise the Photo Grid method. It has potential for adaptation and could be used as an effective tool to examine a wide range of topics in other areas of high socio-economic disadvantage and low literacy levels. In conclusion, the Photo Grid toolkit was seen by community members as accessible, engaging, useful and thought provoking in an area of high social deprivation, complex problems, and low literacy. Involvement of the community in the development of the tool was seen as an enabler to this success, particularly with a population considered to contain many “seldom heard voices”. The Photo Grid is an adaptable tool which can be used in other areas of socio-economic disadvantage to engage with the community to understand a wide variety of complex topics. Community-Based Participatory Research Shonkoff JP, Garner AS, The Committee on Psychosocial Aspects of Child and Family Health, Committee on Early Childhood, Adoption, and Dependent Care, and Section on Developmental and Behavioral Pediatrics, Siegel BS, Dobbins MI, Earls MF, et al. The lifelong effects of early childhood adversity and toxic stress. Pediatrics 2012;129:e232–e246. Shonkoff JP, Phillips DA, National Research Council (U.S.), editors. From neurons to neighborhoods: the science of early child development. Washington, D.C: National Academy Press; 2000. Felitti VJ, Anda RF, Nordenberg D, Williamson DF, Spitz AM, Edwards V, et al. Relationship of childhood abuse and household dysfunction to many of the leading causes of death in adults. Am J Prev Med. 1998;14:245–58. Fuller-Thomson E, Baird SL, Dhrodia R, Brennenstuhl S. The association between adverse childhood experiences (ACEs) and suicide attempts in a population-based study: adverse childhood experience and suicide attempts. Child Care Health Dev. 2016;42:725–34. Middlebrooks JS, Audage NC. The Effects of Childhood Stress on Health Across the Lifespan. Atlanta, GA: U.S. Department of Health and Human Services, National Centers for Disease Control and Prevention; 2008. http://health-equity.lib.umd.edu/932/1/Childhood_Stress.pdf. Miller GE, Chen E, Parker KJ. Psychological stress in childhood and susceptibility to the chronic diseases of aging: moving towards a model of behavioural and biological mechanisms. Psychol Bull. 2011;137(6):959-7. Shonkoff JP, Boyce WT, McEwen BS. 
Neuroscience, molecular biology, and the childhood roots of health disparities: building a new framework for health promotion and disease prevention. JAMA. 2009;301:2252. Department for Communities and Local Government. English indices of deprivation. 2015. Public Health England. Child Health Profile: Blackpool. 2017. https://fingertips.phe.org.uk/profile-group/childhealth/profile/child-healthoverview/data#page/3/gid/1938133000/pat/6/par/E12000002/ati/102/are/E06000009/iid/90401/age/173/sex/4. Jutte DP, Miller JL, Erickson DJ. Neighborhood adversity, child health, and the role for community development. Pediatrics. 2015;135(Supplement):S48–57. Braunstein S, Lavizzo-Mourey R. How the health and community development sectors are combining forces to improve health and well-being. Health Aff (Millwood). 2011;30:2042–51. Erickson D, Andrews N. Partnerships among community development, public health, and health care could improve the well-being of low-income people. Health Aff (Millwood). 2011;30:2056–63. Nápoles AM, Santoyo-Olsson J, Stewart AL. Methods for translating evidence-based behavioral interventions for health-disparity communities. Prev Chronic Dis. 2013;10. https://doi.org/10.5888/pcd10.130133. Williams DR, Costa MV, Odunlami AO, Mohammed SA. Moving upstream: how interventions that address the social determinants of health can improve health and reduce disparities. J Public Health Manag Pract. 2008;14(Supplement):S8–17. Benson PL, Leffert N, Scales PC, Blyth DA. Beyond the “village” rhetoric: creating healthy communities for children and adolescents. Appl Dev Sci. 2012;16:3–23. Christopher S, Watts V, McCormick AKHG, Young S. Building and maintaining Trust in a Community-Based Participatory Research Partnership. Am J Public Health. 2008;98:1398–406. Jagosh J, Bush PL, Salsberg J, Macaulay AC, Greenhalgh T, Wong G, et al. A realist evaluation of community-based participatory research: partnership synergy, trust building and related ripple effects. BMC Public Health. 2015;15. https://doi.org/10.1186/s12889-015-1949-1. Cyril S, Smith BJ, Possamai-Inesedy A, Renzaho AMN. Exploring the role of community engagement in improving the health of disadvantaged populations: a systematic review. Glob Health Action. 2015;8. https://doi.org/10.3402/gha.v8.29842. Komro KA, Tobler AL, Delisle AL, O’Mara RJ, Wagenaar AC. Beyond the clinic: improving child health through evidence-based community development. BMC Pediatr. 2013;13. https://doi.org/10.1186/1471-2431-13-172. O’Mara-Eves A, Brunton G, McDaid D, Oliver S, Kavanagh J, Jamal F, et al. Community engagement to reduce inequalities in health: a systematic review, meta-analysis and economic analysis. Public Health Res. 2013;1:1–526. Bonevski B, Randell M, Paul C, Chapman K, Twyman L, Bryant J, et al. Reaching the hard-to-reach: a systematic review of strategies for improving health and medical research with socially disadvantaged groups. BMC Med Res Methodol. 2014;14. https://doi.org/10.1186/1471-2288-14-42. Donnelly J. Maximising participation in international community-level project evaluation: a strengths-based approach. Eval J Australas. 2010;10:43–50. Minkler M, Wallerstein N, editors. Community-based participatory research for health: from process to outcomes. 2nd ed. San Francisco: Jossey-Bass; 2008. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health. 1998;19:173–202. Lasker RD. 
Broadening participation in community problem solving: a multidisciplinary model to support collaborative practice and research. J Urban Health Bull N Y Acad Med. 2003;80:14–60. Tapp H, White L, Steuerwald M, Dulin M. Use of community-based participatory research in primary care to improve healthcare outcomes and disparities in care. J Comp Eff Res. 2013;2:405–19. Handley M, Pasick M, Oliva G, Goldstein E, Nguyen T. Community-Engaged Research: A Quick-Start Guide for Researchers. Clinical Translational Science Institute Community Engagement Program, University of California, San Francisco; 2010. https://synergy.dartmouth.edu/sites/default/files/docs/Comunity_Engaged_Research_Quick_Start_Guide_for_Researchers.pdf. Horowitz CR, Robinson M, Seifer S. Community-based participatory research from the margin to the mainstream: are researchers prepared? Circulation. 2009;119:2633–42. McDonald MA. Practicing community engaged research (IRB module). 2008. https://www.citiprogram.org/citidocuments/Duke%20Med/Practicing/comm-engaged-research-4.pdf. Muhib FB, Lin LS, Stueve A, Miller RL, Ford WL, Johnson WD, et al. A venue-based method for sampling hard-to-reach populations. Public Health Rep. 2001;116(Suppl 1):216–22. Flicker S, Travers R, Guta A, McDonald S, Meagher A. Ethical dilemmas in community-based participatory research: recommendations for institutional review boards. J Urban Health. 2007;84:478–93. Fenner Y, Garland SM, Moore EE, Jayasinghe Y, Fletcher A, Tabrizi SN, et al. Web-based recruiting for Health Research using a social networking site: an exploratory study. J Med Internet Res. 2012;14:e20. O’Connor A, Jackson L, Goldsmith L, Skirton H. Can I get a retweet please? Health research recruitment and the Twittersphere. J Adv Nurs. 2014;70:599–609. Brown SRQ. Methodology and qualitative research. Qual Health Res. 1996;6:561–7. Stephenson W. The study of behavior: Q-technique and its methodology. Chicago: University of Chicago Press; 1953. Weller SC, Romney AK. Systematic data collection. Newbury Park: Sage Publications; 1988. Collier J, Collier M. Visual anthropology: photography as a research method. Rev. and expanded ed. Albuquerque: University of New Mexico Press; 1986. Meo AI. Picturing students’ Habitus: the advantages and limitations of photo-elicitation interviewing in a qualitative study in the city of Buenos Aires. Int J Qual Methods. 2010;9:149–71. Padgett DK, Smith BT, Derejko K-S, Henwood BF, Tiderington EA, Picture I. Worth. .. ? Photo elicitation interviewing with formerly homeless adults. Qual Health Res. 2013;23:1435–44. Blumenthal DSI. Community-based participatory research possible? Am J Prev Med. 2011;40:386–9. Israel BA, Parker EA, Rowe Z, Salvatore A, Minkler M, López J, et al. Community-based participatory research: lessons learned from the centers for Children’s environmental health and disease prevention research. Environ Health Perspect. 2005;113:1463–71. This research was only possible due to the enthusiasm and commitment of the local community and partners. The authors are grateful to all participants for their time and valued feedback and insights. They would like to thank members of Community Voice parents group, their colleagues at the Blackpool Centre for Early Child Development, NSPCC practitioners, and Children’s Centre staff for their support and input to the development of this research method. They would like to thank the CECD team and Denise Coster, Senior Evaluation Officer at NSPCC for their assistance in shaping this paper. 
The views expressed in this paper are those of the authors and not necessarily those of the Blackpool Better Start Partnership or NSPCC. This research was funded through the Big Lottery Fund as part of the A Better Start initiative. The Big Lottery Fund has not had any involvement in the design of this research methodology or writing of this paper. Availability of data and materials The tool developed and used during the current study are available from the corresponding author on reasonable request. Ethics approval and consent to participate All phases of the research were approved by the A Better Start Research and Development Board in January 2016. The NSPCC Research and Ethics Committee considered the research to be low risk and therefore did not require an application for full ethical approval. This was due to identifiable personal information not being collected in the study and the subject matter not being of a sensitive nature. Participants were recruited indirectly through opportunity sampling or online advertisement. No participants were recruited through NSPCC services. Consent for publication The authors declare that they have no competing interests. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. It is acknowledged that the final set of cards is not a complete representation of all factors conducive to early child health and development, but rather a starting point for conversation with the community allowing us to better understand community priorities. (DOCX 11 kb) About this article Cite this article Lowrie, E., Tyrrell-Smith, R. Using a Community-Engaged Research (CEnR) approach to develop and pilot a photo grid method to gain insights into early child health and development in a socio-economic disadvantaged community. Res Involv Engagem 3, 29 (2017). https://doi.org/10.1186/s40900-017-0078-7 - Community development - Community engaged research - Public involvement - Early child development
- Open Access Strategies for building trust with farmers: the case of Bt maize in South Africa Agriculture & Food Security volume 1, Article number: S3 (2012) In 1999, South Africa became the first African country to approve commercial production of subsistence genetically modified (GM) maize. The introduction of GM crop technology is often met with skepticism by stakeholders including farmers. The involvement of the private sector in this process can further breed mistrust or misperceptions. To examine these issues more closely, the objective of this case study was to understand the role of trust in the public-private partnership (PPP) arrangement involved in the development of Bacillus thuringiensis (Bt) maize in South Africa. We conducted semi-structured, face-to-face interviews to obtain stakeholders’ understanding of trust in general as well as in the context of agricultural biotechnology (agbiotech) PPPs. A thematic analysis of the interview transcripts, documents, reports and research articles was conducted to generate insights into the challenges to, and practices for, building trust among the partners and with the public. The findings of this study are organized into four main lessons on trust building. First, as the end users of GM technology, farmers must be engaged from the start of the project through field demonstrations and educational activities. Second, an effective technology (i.e., the seed) is key to the success of an agbiotech PPP. Third, open communication and full disclosure between private sector companies and government regulatory bodies will build trust and facilitate the regulatory processes. Fourth, enforcing good agronomic practices, including appropriate management of the refuge areas, will serve the interests of both the farmers and the seed companies. Trust has proven to be a critical factor determining the success of the Bt maize project in South Africa. Distrust of the private sector and of GM technology were cited as major barriers to building trust. The trust-building practices described in this case study have often served to overcome these barriers; however, erosion of trust was also present. The success of the project has been, and will continue to be, dependent upon the concerted effort of the farmers, government, and private sector players in the establishment and maintenance of trust. History of Bt maize in South Africa One of the major challenges hindering the production of maize, a staple food in South Africa, is the damage to crops caused by maize stalk borers . Studies have estimated that the annual loss in maize due to the stalk borer is about one million tonnes, which is valued at approximately US $2.7 billion . In response to the challenge posed by the stalk borer globally, scientists and private companies began developing Bacillus thuringiensis (Bt) maize, a genetically-modified (GM) crop that is resistant to stalk borer insect pests. Bt maize was first approved for commercial production in South Africa in 1998. South Africa was in fact the first African country to approve the commercial production of a GM subsistence crop [3, 4]. As of 2011, the adoption of biotech crops in South Africa had reached 2.3 million hectares, 81% (1.873 million hectares) of which was biotech maize . The increase in hectarage may be attributable to a yield advantage of 11.03% and 10.60% on irrigated farms and dry land, respectively, when using Bt maize as opposed to conventional maize . 
This impact is expected to be even greater where the stem borer infestation is higher. The production of Bt maize, like that of other GM crops, was fuelled by the commercial interests of private companies, such that GM crops were used primarily for commercial farming. Initial efforts at commercializing Bt maize were spearheaded by two private companies: Monsanto Company and Syngenta. The genetically modified organisms (GMO) Act was passed in 1997, allowing active GM work in South Africa to commence. Prior to that, the South African Committee for Genetic Experimentation (SAGENE), an association of industry experts, developed guidelines and served as a watchdog and advisory body to scientists, industry and government on matters of agricultural biotechnology (agbiotech). Monsanto is the key player in the Bt maize industry in South Africa. Monsanto’s Bt genes are found in the company’s own hybrids and, at the same time, it has licensed the technologies to the Pioneer Hibred and Pannar Seed companies, which have introgressed the genes into their own hybrids. Monsanto’s monopoly on Bt maize in South Africa ended in 2003, when Syngenta’s Bt maize was approved by the Department of Agriculture. Syngenta’s Bt genes, introgressed into maize varieties to confer resistance to the maize stalk borer, have been commercialized in a joint venture with the Seedco Seed Company. Dow Chemicals is another company that has tried its Bt genes in South Africa, in collaboration with Pioneer Hibred South Africa. In addition to the private sector actors involved, two not-for-profit organizations were integral to the success of commercializing Bt maize in South Africa: the Agricultural Research Council (ARC) and AfricaBio. The ARC is the largest agricultural research organization in South Africa. AfricaBio is a biotechnology stakeholders association whose mandate is to share knowledge, information, and awareness of biotechnology, including Bt maize, and its proper management. AfricaBio has supported many promotional and community engagement initiatives, such as the on-site demonstrations of Bt maize (see Additional file 1 for the roles of the key collaborators/partners in the Bt maize partnership in South Africa). The institutions (private companies, government institutions and others) engaged in Bt maize production in South Africa are not linked through a typical or formal public-private partnership (PPP) arrangement. Instead, this initiative was driven by “a system of collaboration” made up of government actors, non-governmental organizations (NGOs), and private sector companies. This system of collaboration was built upon a shared objective of reducing the amount of maize crop yield lost to the maize stem borer. While there was no formal PPP in place throughout the introduction of Bt maize in South Africa, the lessons on trust building and collaboration drawn from this case study are certainly generalizable to more formal agbiotech PPPs. Trust among partners and with the public has been identified as an important element of successful PPPs. Factors affecting the establishment, development, and maintenance of trust in PPPs can either ensure or compromise the success of agbiotech projects as a whole. Trust is especially critical in agbiotech PPPs, as the introduction of GM crops can often be contentious and hindered by public mistrust of private sector involvement.
The involvement of the private sector in these PPPs can often breed skepticism, as the public sector often perceives the intentions of the private sector as suspect [10, 11]. In particular, a fear exists within the public sector that multinational biotechnology companies may seek to take advantage of the resource-constrained nations in which they operate PPPs . In some cases, this distrust is met by similar hesitations on the part of the private sector, which views the public sector as slow, inefficient, and resistant to new technologies . These underlying issues form the basis of this case study, which seeks to investigate existing trust-building practices that may serve to overcome barriers to trust in agbiotech PPPs. This study constitutes one in a series of eight case studies investigating the role of trust in agbiotech PPPs and the adoption of GM crops in sub-Saharan Africa. Trust is important in agbiotech PPPs as its presence enables partners to complete complex, long-term tasks and achieve intended results [13, 14]. The three specific goals of this series of case studies are to: 1) describe trust-building practices in the development of agbiotech projects; 2) describe the challenges associated with trust building in PPPs; and 3) determine what makes these practices effective or ineffective. This particular study seeks to accomplish these goals by describing and analyzing the trust-building practices undertaken during the commercialization process of Bt maize in South Africa. By identifying barriers to trust and trust-enhancing practices, this study provides insight to potential funders, researchers, farmers and others about successful management of agbiotech PPPs. A total of twelve individuals, drawn from both the public and private sector, were interviewed for this study. They included three small-scale and two large-scale maize farmers; representatives from the private sector companies Monsanto Company, Pioneer Hibred and Pannar; a representative from the Council for Scientific and Industrial Research (CSIR); and a representative from the Maize Trust in South Africa. Interviewees were identified by making a list of key individuals associated with the project based on the stakeholders identified within the research protocol. This list was then populated further through snowball sampling. Potential interviewees were sent an invitation, which contained an explanation of the case study series, to participate in the interview. Those who consented to participate were informed that the interview would be recorded, transcribed and analyzed. All the interviews took place in South Africa. They followed a semi-structured, face-to-face format and each lasted approximately one hour. The interview guide included questions on the interviewees’ background, their understanding of the project, and their interpretation of the word trust. The interview explored perceptions of trust among the partners and with the public, apparent challenges to trust, and observed trust-building practices. Finally, interviewees were asked for their suggestions on how to improve agbiotech PPPs (see Additional file 2 for sample questions from the interview guide). The interviews were transcribed verbatim. The analysis was performed by reading through the transcripts several times, identifying trends and organizing them into major themes. A literature review of academic articles and project documents were also used in the writing of the report. 
Research Ethics Board (REB) approval for this study was obtained prior to conducting the case study from the University Health Network, University of Toronto. Results and discussion Interviewees’ understanding of trust Interviewees were asked to define trust and identify its elements in the context of the Bt maize project in South Africa. The key elements of trust, as identified by the participants, were honesty and delivery of accurate information in a timely manner. One interviewee defined trust as “being honest, [and] sharing the right information at the right time.” The interviewees described trust as being very much determined experientially, pointing out that trust is established if parties upheld their end of a deal and delivered what they promised in a timely manner. The interviewees also described trust as something that had to be “earned over time.” Interviewees agreed that trust was highly important in agbiotech PPPs, such as the Bt maize project in South Africa. The findings of this study are amalgamated into four key lessons on trust building. 1. The technology is nothing without the farmer: engage the end user early to ensure adoption of the technologies The importance of building trust between farmers and the private sector was articulated by some interviewees. One representative from the private sector stated: You have to build trust with your customers, which in our case would be the farmers. Establishing and maintaining the trust of farmers is essential for effective technology adoption. And as one small-scale farmer stated, “if I didn’t trust them [Africabio], I wouldn’t use their seed.” Building trust with farmers has been shown to be achievable in a number of ways in South Africa. The following are some examples of practices that were used to build or erode trust between farmers and the private sector in the context of Bt maize in South Africa. One trust-building practice identified by farmers was the use of on-farm demonstrations that display the comparisons between Bt maize and conventional maize in the field. In 2001, Monsanto sought to engage with farmers by holding nine workshops across South Africa to introduce over 3000 small-scale farmers to Bt maize. Each farmer was given two bags of seed, one each of Bt and conventional maize, to plant in their own fields and compare the results . This was one of the first farmer engagement initiatives undertaken by Monsanto after MON810, its GM event responsible for insect resistance in Bt maize, was approved for use in South Africa. In a typical on-farm demonstration, seed companies or distributors would provide both Bt and conventional maize seeds for free to farmers to plant in a section of their fields and compare crop performance and yield. In addition, farmers were funded to host field days and invite other farmers in the community to observe the differences in performance at the demonstration sites. A farmer interviewee, commenting on this, said: Yes, we did not pay. They gave us [seeds] for free, and we planted the seeds. And then they helped us with money to hoe the fields and they come and they demonstrated the crops to other farmers. This practice was employed in six demonstration plots organized by AfricaBio between 2004 and 2005, which showed that there were higher yields of maize due to reduced stem borer infestations in Bt maize compared to the non-Bt maize . 
Interviewees who had taken part in demonstrations described them as trust-building practices because of the support that seed companies and AfricaBio would provide to farmers in terms of supplying seeds, compensating for labor, and educating them about the technology. The primary goal of these on-farm demonstrations was to foster trust among attending farmers, who were able to judge first-hand the performance of Bt maize compared to traditional maize. Many interviewees corroborated the effectiveness of this practice. However, interviewees who had hosted on-farm demonstrations recounted the erosion of trust that occurred when promises of financial compensation for their efforts were withdrawn or left unfulfilled by the seed companies or distributors. Some farmers reported that they would no longer host field days and crop demonstrations due to this lack of compensation. A small-scale farmer interviewee said she “used to trust AfricaBio because of those things they were doing” but when support in cash and inputs dwindled, trust declined in a similar manner. Another farmer, making reference to the unreliability of the government arm to supply inputs, said, “we [farmers] don’t look too much to the government.”
Information dissemination and communication
In addition to on-farm demonstrations, several interviewees felt that education and information dissemination about Bt maize was an important trust-building practice that could be achieved through a variety of avenues. One interviewee stated: “the biggest trust creation is generating of science data, and then disseminating that data, and putting it out there on the website, in leaflets and all that so that the people understand what this is all about.” One government initiative that contributed to trust building through education was the Public Understanding of Biotechnology Program. As indicated by the senior manager of the Technology Innovation Agency, this program sought to “demystify this passive talk about what biotechnology is and provide factual evidence [on what biotechnology is capable of doing].” Another initiative was the organization of regional study groups for farmers, the purpose of which was information and knowledge sharing among farmers. One interviewee reflected on a positive trust-building experience he had with AfricaBio when they invited him to visit a university research laboratory to investigate the process of Bt maize development and discuss any concerns he had about the safety of the crop. The interviewee appreciated that AfricaBio “did not play hide and seek” about the technology but was willing to go to such an extent to educate him about Bt maize. A farmer, commenting on this issue, stressed the need for “straightforward channels” to foster good communication and stated, “If you have good communication, you can sort anything out.”
2. The seed speaks for itself: delivering effective technology builds trust between the industry and the farmers
Many interviewees emphasized the importance of an effective technology—in this case, an effective Bt maize seed—in building trust. An interviewee representing a seed company and distributor said that the ability of the technology to increase crop yields or reduce input costs is critical to the success of Bt maize: “If it was a technology that was not providing benefits, they [farmers] would not be using it.”
An interviewee from Monsanto attributed much of the project’s success to “good genetics.” A large-scale commercial farmer said: “if you deliver a good product, you start to build trust in that product.” One small-scale farmer also stated, “We’re building trust with the seed itself […]. The performance of the seed is what you trust.” In the case of the Bt maize project in South Africa, “the trust held in place because the technology worked,” said an independent researcher. A guarantee that the Bt maize technology works, together with honesty about its expected performance, was cited as an important trust-building practice by the private sector. An interviewee from Monsanto described it thus: “They [farmers] need to trust you that the product they’re buying from you will perform to expectation.” In order to ensure that realistic expectations are set, an interviewee from another private seed company said, “we never misrepresent whatever information that we share. So if a product does not perform, we will say that it does not perform. If a product does not perform, we will not take it to the market.” Despite these intentions, some interviewees described instances when the performance of the Bt maize technology did not meet their expectations. A few of these instances created an opportunity to build trust as some private companies took responsibility for the product failures and compensated the farmers affected. From one farmer’s perspective, this “built a lot of trust. Because that’s putting your money where your mouth is.” The important lesson on trust that resulted from these cases can be summarized by the following statement made by an industry representative: “The farmer knows that if there is a problem that he could come back to the company to say ‘listen, there is a problem’ and the company then attends to the problem to resolve it.” In other cases, instances of product failure resulted in the erosion of trust between the farmer and private seed company when the latter failed to acknowledge or take responsibility for the reported discrepancies. As one farmer stated, “I don’t have any trust in your [the seed company's] product anymore. Because the technology is failing under certain circumstances and [the seed company] don’t acknowledge that.” The benefits of these trust-building practices can be significant. In particular, once trust is established between the farmers and the private companies, or between the farmers and the seed companies, the power of word-of-mouth within the farming community can be a great asset to the private sector. As one small-scale farmer described, “What I’ve done is to tell other farmers that I’m planting this [Bt maize seed], [and that] this is going to help you to get more yield.” Acknowledging faults and taking responsibility in instances of product failure was found to be an important trust-building practice. The failure to do so, however, was cited by some interviewees as a great barrier to trust building and in some instances led to ceasing use of the technology. Ensuring that Bt maize performs as expected and is beneficial to the farmers is therefore important for the adoption of the technology and the maintenance of trust between farmers and the private sector.
3. Full disclosure facilitates regulatory processes and enhances mutual trust between industry and government
The regulatory process for GM crops in South Africa is outlined in the GMO Act, which was passed in 1997 and came into effect in 1999.
This Act created the Executive Council, which is a decision-making body that is responsible for approving or rejecting applications to commercialize GM crops [15, 16]. Monsanto’s MON810 was approved for commercial production by the Department of Agriculture in 1998, upon the recommendation of SAGENE. Syngenta’s Bt maize was likewise approved in 2003. In order to build trust during the regulatory approval process, it is important that both the applicant and the regulator fully disclose all relevant information throughout the process. On the government side, it is important to clarify the requirements for regulatory approval and communicate them to the private companies. As stated by a former regulator who is now a seed company executive, the early stages of developing Bt maize were a learning process for the regulators, one enhanced by open communication with the industry. Because Bt maize was the first GM crop product, the regulators “were not really sure what exactly they wanted to see or to evaluate to determine safety.” Open communication between regulatory bodies and private companies was therefore a “big factor that added to building trust,” which in turn led to better compliance with the regulatory process. Once the requirements for regulatory approval are clearly communicated, it is important that the private sector fully discloses all information necessary so that the regulators can make decisions about the technology. The government needs to be able to trust that the private sector will “give accurate information, truthful information, not withhold the information that could impact their decision. And to comply with any conditions that have been given.” In return, the private sector must feel assured that the government will maintain confidentiality on all sensitive product information. The mutual trust built between the regulator and industry has led to enhanced communication and consultation between them. For example, an interviewee from the seed industry noted that, when lobbyists or anti-GMO organizations lodge a complaint against Bt maize technology, the government trusts the private sector enough to approach them by “say[ing] that ‘this is an accusation that came in pertaining to your products, what information can you give us?’” This positive relationship was enabled by mutual trust between the regulatory bodies and private sector stakeholders.
4. Good agronomic practices sow success and foster trust
Another important issue related to trust was the upholding of good agronomic practices by the farmers when growing Bt maize. Good agronomic practices include the physical and temporal distancing of GM and non-GM crops, and the planting of refuge areas in order to prevent the build-up of insect resistance to transgenic crops such as Bt maize. Refuge areas, or refugia, are buffer zones of non-Bt maize (susceptible to stalk borers) planted in close proximity to Bt maize to provide a pool of stalk borers that remain susceptible to the Bt toxin. This is meant to delay the development of resistance to the toxic protein produced by Bt. As South Africa was the first African country to report stem borer resistance to the Bt toxin, the need to ensure a proper refuge area is becoming more important. Many interviewees (including farmers, industry and government executives) emphasized the importance of planting Bt maize correctly and particularly underlined the importance of enforcing the planting of the refuge area between conventional and Bt maize.
Farmer interviewees attributed positive trust-building experiences to good agronomic practices: “number one, we planted exactly the way they [the seed company] said. They taught us how to plant and we did exactly what they said. And then they told us that we must de-weed every time we see the weeds coming out. And we did exactly what they said… that’s why our maize was so beautiful.” Enforcing these agronomic practices is not only beneficial to the farmer but also to the companies selling the technology. As an interviewee from a seed company described: “We want to make sure that they [farmers] also plant the refugia, and that they abide by the rules. It’s of interest to all of us. Refugia is not something to make it difficult; it’s there to protect the traits. We want it to be protected for as long as possible.” Failure to enforce the refuge area, on the part of both farmers and the private sector, was cited as a challenge to building trust. Some farmers chose not to plant a refuge area due to their desire to maximize profits per unit area. They said they were hesitant to reserve a portion of their fields for a refuge area of conventional maize, which would produce lower yields compared to Bt maize and, in turn, lead to reduced profit. Moreover, other farmer interviewees recalled that some seed companies failed to enforce the refuge area, neither educating their customers about the need to plant one nor monitoring it. One farmer stated, “they [seed companies] don’t promote the refugia areas enough,” and each time the seed company sold seeds to him, he would ask, “but what about my maize for my refugia area?” He further stated: “They don’t tell you that if you’re going to buy 100 bags, 95 bags must be Bt and 5 bags must be for your refugia area.” What ensues from industry members’ failure to ensure appropriate management of the refuge area is the erosion of trust between the farmer and the private sector. While the private sector companies are hesitant to trust farmers to use proper agronomic practices, some farmers cannot trust seed distributors to adequately promote and provide the necessary conventional maize to plant their refuge area. In order to maintain trust and protect the effectiveness of the Bt maize technology, it is critical that both farmers and the private sector work together to enforce and monitor proper agronomic practices.
The success of the Bt maize project in South Africa has been, and will continue to be, dependent upon the concerted effort of the farmers, government, and private sector stakeholders to establish and maintain trust. The four key lessons on trust building that were drawn from this case study can be applied to other agbiotech PPPs attempting to successfully introduce GM technology in sub-Saharan Africa. Each of the trust-building practices described in this case study requires collaboration among stakeholders, though most can be undertaken without substantial financial inputs from any one partner. In interactions between the government and the private sector, transparency, accountability, and open communication are critical for navigating the regulatory process. When establishing trust with farmers, it is essential that the private sector be open and candid about both the benefits and limitations of their technology. This should include information sharing and awareness-building practices, such as the on-farm demonstrations described in this study.
To maintain this mutual trust, farmers should also endeavor to employ the technology responsibly and uphold proper management practices, such as planting a refuge area. Each of the practices and principles presented in this case study has been shown to be essential for trust building in the context of the Bt maize project in South Africa. If applied with concerted effort, the key lessons presented in this study can provide a roadmap for budding agbiotech PPPs in designing their strategy for building trust with farmers and ensuring the successful adoption of their technology.
Gouse M, Pray C, Schimmelpfennig D, Kirsten J: Three seasons of subsistence insect-resistant maize in South Africa: have smallholders benefited? AgBioForum. 2006, 9 (1): 15-22.
Gouse M, Kirsten JF, van der Walt WJ: Bt cotton and Bt maize: an evaluation of direct and indirect impact on the cotton and maize farming sectors in South Africa. 2008.
Keetch DP, Webster JW, Ngqaka A, Akanbi R, Mahlanga P: Bt maize for small scale farmers: a case study. Afr J Biotech. 2005, 4 (13): 1505-1509.
Gouse M, Pray C, Kirsten J, Schimmelpfennig D: A GM subsistence crop in Africa: the case of Bt white maize in South Africa. Int J Biotechnology. 2005, 7 (1-3): 84-94.
James C: Global status of commercialized biotech/GM crops: 2011. ISAAA Brief No 43. 2011.
South African National Seed Organisation. http://www.sansor.org/features/biotechinsa1101.htm.
Gouse M: South Africa: revealing the potential and obstacles, the private sector model and reaching the traditional sector. The Gene Revolution: GM Crops and Unequal Development. Edited by: Fukuda-Parr S. 2007, 176-195. London, UK: Earthscan.
Zheng J, Roehrich J, Lewis MA: The dynamics of contractual and relational governance: evidence from long-term public–private procurement arrangements. Journal of Purchasing and Supply Management. 2008, 14: 43-54. 10.1016/j.pursup.2008.01.004.
Spielman DJ, Hartwich F, von Grebmer K: Public-private partnerships in international agricultural research. International Food Policy Research Institute. 2007, 9: 1-6.
Spielman DJ, von Grebmer K: Public–private partnerships in international agricultural research: an analysis of constraints. Journal of Technology Transfer. 2006, 31: 291-300.
Ezezika OC, Daar AS, Barber K, Mabeya J, Thomas F, Deadman J, Wang D, Singer PA: Factors influencing agbiotech adoption and development in sub-Saharan Africa. Nature Biotechnology. 2012, 30: 38-40.
Brewer B, Hayllar MR: Building public trust through public–private partnerships. International Review of Administrative Sciences. 2005, 71 (3): 475-492. 10.1177/0020852305056825.
White-Cooper S, Dawkins NU, Kamin SL, Anderson LA: Community-institutional partnerships: understanding trust among partners. Health Educ Behav. 2009, 36 (2): 334-347.
Wolson R: Assessing the prospects for the adoption of biofortified crops in South Africa. AgBioForum. 2007, 10 (3): 184-191.
South African Government Information. http://www.info.gov.za/acts/1997/act15.htm.
Viljoen C, Chetty L: A case study of GM maize gene flow in South Africa. Environmental Sciences Europe. 2011, 23 (8): 1-8.
Vacher C, Bourguet D, Desquilbet M, Lemarié S, Ambec S, Hochberg ME: Fees or refuges: which is better for the sustainable management of insect resistance to transgenic Bt corn? Biol Lett. 2006, 2 (2): 198-202.
The authors are grateful to each of the participants who contributed substantial time and effort to this study. Special thanks to Justin Mabeya for assisting with data collection.
The authors also thank Jessica Oh and Jocalyn Clark for comments on earlier drafts of the manuscript. This project was funded by the Bill & Melinda Gates Foundation and supported by the Sandra Rotman Centre, an academic centre at the University Health Network and University of Toronto. The findings and conclusions contained within are those of the authors and do not necessarily reflect official positions or policies of the foundation. This article has been published as part of Agriculture & Food Security Volume 1 Supplement 1, 2012: Fostering innovation through building trust: lessons from agricultural biotechnology partnerships in Africa. The full contents of the supplement are available online at http://www.agricultureandfoodsecurity.com/supplements/1/S1. Publication of this supplement was funded by the Sandra Rotman Centre at the University Health Network and the University of Toronto. The supplement was devised by the Sandra Rotman Centre. The authors declare that they have no competing interests. Study conception and design: OCE and ASD. Data collection: OCE. Analysis and interpretation of data: OCE and RL. Draft of the manuscript: OCE, RL and ASD. Critical revision of the manuscript for important intellectual content: OCE, RL and ASD. All authors read and approved the final manuscript.
Cite this article: Ezezika, O.C., Lennox, R. & Daar, A.S. Strategies for building trust with farmers: the case of Bt maize in South Africa. Agric & Food Secur 1 (Suppl 1), S3 (2012). https://doi.org/10.1186/2048-7010-1-S1-S3
Keywords: private sector; genetically modified; genetically modified crop; trust building; genetically modified organism
Was America Founded on Christian Principles? / Program 3
September 27, 2013 | By: David Barton; ©1992
What is the First Amendment? Do you know that the words “separation of church and state” are not in the First Amendment? Where do they appear? Who wrote them? And what did they mean during the days of our Founding Fathers?
- Ankerberg: Welcome to our program. Do you think that the United States was founded on Christian principles? Do you think that our Founding Fathers actually intended that they would be a part of public policy? Did they want religion and morality taught in the public schools? Were they willing to pass laws to insure that religious principles were taught to all of our children? Well, in any discussion of this nature, the phrase, “The separation of church and state” comes up. David Barton, President of WallBuilders Presentations who is an expert on the writings, records, and actions of the Founding Fathers talks about this phrase. - Barton: “Separation of church and state.” That’s a phrase we’ve all heard. We’ve heard it many times. And today we’re told that “separation of church and state” means that a person with religious values, religious practices, religious principles, is not to get those values or practices or principles involved in a public arena; that they’re to keep those practices out of educational areas, out of government areas. This is what we’re told about the First Amendment today. But is that what the Founding Fathers intended for the First Amendment? Was that the goal they wanted to accomplish through the First Amendment? We’ll look at that in this program. - Ankerberg: Now, what is the First Amendment? Do you know that the words “separation of church and state” are not in the First Amendment? Well then, where do they appear? Who wrote them? And what did they mean during the days of our Founding Fathers? Well, David Barton answers these questions and takes us back for a fascinating look at history. Listen. - Barton: The phrase “separation of church and state.” Where do we find that phrase? Well, on the bicentennial of the Bill of Rights two-thirds of the nation thought that the phrase “separation of church and state” was found in the First Amendment. And certainly that’s what we’ve heard for years. That’s what the Court has told us for years. But that phrase exists in no founding documents. The First Amendment very simply says, “Congress shall make no law respecting an establishment of religion or prohibiting the free exercise thereof.” - Very obviously the phrase “separation of church and state” does not appear in the First Amendment. But yet we’ve been told that for decades. Well, if the phrase doesn’t appear in the First Amendment, where did it come? Where do we find the phrase? The phrase appeared in a private letter that was written 11 years after the First Amendment. - Now, to understand the meaning of the letter – and it was a letter written by Thomas Jefferson – we need to understand what the Founding Fathers intended for the First Amendment. The Founding Fathers, although they did not use the phrase “separation of church and state,” they did have a concept of separation. They said in the writings of Congress, in the journals of Congress and in their own personal writings that, “We do not want in America what we had in Great Britain. We don’t want the national government making a federal denomination. We’re not going to let the government make us all Catholics or Anglicans or any other denomination.
Now, we’re not going to separate Christian principles, but we’re not going to have a national denomination.” So in the records of Congress if you go through and look at the committee that gave us the First Amendment and at the acts on the floor of Congress that led to the First Amendment, you’ll find that that was the entire discussion: that they did not want a single national Christian denomination. - And when they finished the First Amendment, it was a prohibition on the federal government. It said, “Congress cannot make any law to establish a religion” or a single Christian denomination, as they defined “religion” in those days. So the intent was very clear: we will not have one national denomination. Now, the Founding Fathers knew what they had intended to do and so as they were appointed to courts and as they were placed on courts and on the Supreme Court, when they had opportunity to rule, they very clearly expressed what they already knew. - For example, this is a case from 1799. Now, the courts at this time were filled with the same men who had created the documents, who had signed them, and look what this 1799 Supreme Court ruled. It said, “By our form of government the Christian religion is the established religion. But all sects and denominations of Christians are placed on the same equal footing.” And that was the intent of the First Amendment, to make sure that we did not have a national denomination, that no one Christian denomination had an unfair advantage over another. - Ankerberg: Now, in recalling our history over 320 years, the Supreme Court of the United States did not shrink back from saying that the United States was a Christian nation. In fact, the Supreme Court in 1892 in the “Trinity” case documented 87 examples from American history that proved America was founded as a Christian nation. Up until 1962 the Court has argued vigorously for the interpretation of the First Amendment as the Founding Fathers intended. But in 1962 it drastically changed its interpretation. David Barton takes us through the evolution of the Supreme Court’s thought concerning the First Amendment. Listen. - Barton: Now, in 1801 the Danbury Baptist Association of Danbury, Connecticut heard a rumor and that rumor said the Congregationalist denomination was about to be made the national denomination in America. It greatly distressed them and it should have. So they fired off a letter to the President of the United States, Thomas Jefferson. Jefferson got that letter with their concerns and he wrote back to them on January 1, 1802, and in his letter he told them that they didn’t have to worry. He said, “You don’t have to worry about this.” He said, “The First Amendment has erected a wall of separation between church and state.” Now, that’s the source of those eight words. This is a letter written 11 years after the First Amendment. It’s a letter written to a private group. But he used those words. - Now, what we have not seen in decades is the remainder of that letter. The Supreme Court currently quotes those eight words but no more. Jefferson went on in that letter to explain that they would never need to fear the establishment of a single denomination. 
He said, “On the other hand, you’ll never need to fear that that wall will remove Christian principles.” He said, “It won’t.” He said, “The First Amendment means that the government will not get involved with the church unless the church does something that is a direct violation of a basic Christian principle.” He said, “If it comes to basic Christian principles, we’re not going to get involved unless someone,…” and he gave examples. He said, “If someone in the name of Christianity were to advocate human sacrifice,” he said, “the government would get involved. If they were to advocate bigamy or polygamy or licentious, promiscuous sex,” he said, “we would get involved, the government would get involved.” But he said, “In all other religious activities,” he said, “the First Amendment keeps the government from getting involved in church affairs.” And he very clearly explained that the wall of separation was a one-directional wall. It kept the government from running the church, but it never separated Christian principles from government. - Now, we never had national denomination established at that point so Jefferson’s letter fell into disuse until some years later. It was not resurrected again by the Supreme Court until the 1870s, and that’s because in the 1870s, 1880s and 1890s there was a challenge, a religious challenge, based on the First Amendment. - You see, there was a group that was advocating one of the things Jefferson had said shouldn’t be done. They were advocating bigamy and polygamy. And so they filed a suit against the U.S. Government saying, “Our religion says that we can practice bigamy and polygamy.” They said, “The First Amendment of the Constitution says that we are to have our free exercise of religion,” and this group said, “Based on the First Amendment you can’t stop us from exercising our religion.” They said, “Second of all, Jefferson said that there is to be a wall of separation, that the government is not to get involved in church affairs.” This group said, “This is a church affair, you’re not to get involved.” - Well, the 1878 Court, “Reynolds vs. the United States,” went back and resurrected Jefferson’s letter. They printed it in its entirety. They pointed out, they said, “You know, you’re right. Jefferson did say that we’re not to get involved in church affairs.” But they said, “But notice what else Jefferson said. Jefferson also said that Christian principles were not to be separated.” And he said, “Bigamy and polygamy is not a Christian practice; therefore, it is not protected by the First Amendment” and the Supreme Court used Jefferson’s letter for the next three decades to make sure that practice was not allowed in America because it was not a Christian practice. You see, this is Jefferson’s letter on separation that we hear so often now, but we just don’t hear the full letter. - Ankerberg: Now, a lot of people get worried when you say that America was founded as a Christian nation. They get anxious when someone says that our Founding Fathers intended that religion and morality should be taught in all public schools and practiced openly in government. Now, the reason for their worry is that they think the establishment of biblical Christianity as the underlying moral precepts of government poses a real threat to pluralism. But our Founding Fathers were not unaware of this concern. In fact, they stated openly in their writings that the only way they thought pluralism could be preserved in America was to establish the country as a Christian nation. 
David Barton explains. - Barton: For three decades the Supreme Court used it to make sure Christian principles were included in public affairs. Now, Jefferson’s letter fell into disuse again in the 1890s after this group removed its challenges. The next time Jefferson’s letter appears in the U.S. Supreme Court is in 1947 in a case called “Everson vs. Board of Education.” In that particular case in 1947 the Supreme Court said very simply, “The First Amendment had erected a wall between church and state. That wall must be kept high and impregnable. We could not approve the slightest breach.” - Now, in that decision for the first time in the history of the U.S. Supreme Court the Court quoted only eight words from Jefferson’s letter. They didn’t give the context; they didn’t give the full meaning. They didn’t even give the fact that for three decades they had used that letter to keep Christian principles in public affairs. - And this case challenged the Christian principle in public schools. There was funding going on for Christian activities in public schools and the Court in 1947 said, “Now, the First Amendment has given us a wall here, very strict. We must adhere to it.” But even then, in 1947, the Court said, “But that doesn’t mean we have to take Christian principles out of school.” So the Supreme Court, even though they used Jefferson’s letter, allowed Christian principles to remain in schools. - Now, in the next 15 years something unusual happened with that phrase of Jefferson. It appeared in case after case after case after case. The Supreme Court talked about that phrase every opportunity they had. One would think they were following the policy laid out by Dr. William James. It was Dr. James who had explained that there’s nothing so absurd that if you repeat it often enough, people will believe it. For the next 15 years all we heard from the Court was “separation of church and state.” They would say, “The Founding Fathers wanted separation of church and state,” but they wouldn’t give you any Founding Fathers. They wouldn’t make any quotes, they would just give you this umbrella that this is what the Founding Fathers wanted. They talked about it so much that in one case in 1958, called “Baer vs. Kolmorgen” one of the judges had had it with hearing “separation.” He wrote a stinging dissent. He said, “If this Court doesn’t stop talking about separation,” he said, “somebody’s going to start thinking it’s part of the Constitution.” He said, “Enough’s enough. Let’s quit this.” That was in 1958. - Well, the Court continued to talk, and it was in 1962 in a case called “Engel vs. Vitale” the first time that the Court ever actually applied the misuse of the phrase “separation of church and state.” They said, “Under separation of church and state we can’t have prayer in schools anymore.” That was a practice that had gone on for 320 years, but they had so distorted the meaning that by 1962 they reapplied it. - And so that’s the evolution of the First Amendment. That’s how we’ve come from where the Founding Fathers wanted us to where we are today, to where that now “separation of church and state” means you keep all religious practices out of public affairs. That’s completely contrary to the intent of the Founding Fathers. - Ankerberg: Now, maybe you’re a Mormon, a Buddhist, a Hindu, a Muslim, or an agnostic and you’re listening to this discussion and you’re probably saying, “But Ankerberg, get real!
The Founding Fathers couldn’t have thought that by establishing Christianity and teaching it in the schools that they would be allowing Americans like me to practice my religion. They would be violating my First Amendment rights!” Now, what’s so very interesting about this is that our Founding Fathers did consider such questions and had developed an answer. Listen. - Barton: Does being a Christian nation really threaten pluralism? Interestingly, the Founding Fathers discussed that and they felt that it enhanced it. As a matter of fact, it was Patrick Henry that made a very clear statement. Patrick Henry said, “It cannot be emphasized too often or too strongly that this great nation was founded not by religionists but by Christians; not on religions but on the Gospel of Jesus Christ.” - Now, that statement in itself is enough to make many people nervous today. They’ll think, “Oh, we’ll lose all this pluralism we have.” But look what Patrick Henry pointed out. He said, “It is for this very reason that people of other faiths have been afforded asylum, prosperity and freedom of worship here.” - Now, that statement by Henry was the reflection of historical studies. You see, in nations that claimed to be strongly biblical Christian nations, not just having the name of Christianity but if they applied biblical principles, other religions were welcome there; there was the free exchange of thought. They did not believe that you had to coerce someone into being a Christian, that you could just present the material and they would by their mental assent accept that. - Now, as early as the early 1800s we had lawsuits which were apparently challenging the fact that we’re a Christian nation. Other religious groups in the nation – and we had many religions at the time – they came and challenged the Christian precepts. And the courts were always quick to point out, they said, “Now wait a minute.” They said, “We’re a religious nation; we’re a Christian nation.” They said, “We don’t tell you how to believe, when to believe, where to believe, what to believe or if to believe. The very reason that you have the freedom to practice your religion here is because we’re a Christian nation.” - And that is true. If you look at nations all over the world, there are religious nations all across the world: the Mid East, the Far East, the Near East. All those nations that are religious nations, they don’t have tolerance; they don’t have pluralism there. They are very strict in their religious beliefs and allow no one else entrance there. Not so in a Christian nation. A Christian nation encourages others to come; allows others to come. - But interestingly enough, in having arrived to a nation today where we say we’re pluralistic and that Christianity should be equal with all other religions, we’ve actually excluded Christianity. In the “Roberts vs. Madigan” court case the Court ordered the removal of three Christian books out of a library in Colorado but allowed the books on Buddhism to stay; allowed the books on Indian-American religions to stay; but excluded all Christian materials. And that’s because we’re a pluralistic nation. You see, pluralism excludes Christianity. But Christianity is by itself inclusive of others, for it believes that you can win the war in the mind, you don’t have to force opinions on others. You can do that. - Now, the Founding Fathers did believe there had to be a standard of behavior, a standard by which you measured rights and wrongs. And that is true for any nation. 
Every nation has to adopt a standard of behavior for rights and wrongs, and this is where the Founding Fathers chose the Christian standard of behavior. They chose things like the Ten Commandments that said stealing is wrong, murder is wrong, perjury is wrong. They chose biblical precepts that said, for example, rape is wrong and that fornication is wrong. They chose that standard and that’s what they used in civil society. - But never once did they legislate opinions that we would call denominational or religious opinions. Never once did they say that you had to attend this church or you had to go to church at all or that you had to believe this when you got in church or that we should have this one denomination administering this section. Not at all. They believed in the basic biblical principles, but they would not allow, and they fought against, the establishment of any one denominational belief in preference to all others. - So a Christian nation is tolerant of other beliefs and that’s the one thing that has to be communicated today – a Christian nation, by its very nature, is a pluralistic nation. So if you’re from a different religion other than Christianity, what provisions are there for you? If you don’t want your children being taught Christian principles in school, what could you do? What would the Founding Fathers have done with that? The Founding Fathers addressed that issue because there were many other religions in America at that time. See, what they had then was educational choice. They would allow you to start your own schools with your own beliefs and they would allow you to exert tax dollars in that direction. They really didn’t care, just so long as students were educated. And so, educational choice solved much of that dilemma. - But in the system we have today, where we have state-funded, state-mandated compulsory education public schools, it’s very difficult to do that. Had we educational choice today as they had then, it would be a much simpler issue. You could select your choice of schools that taught the religious values that you embraced, that you chose, that you wanted your children to know. And that’s what the Founding Fathers did. - Now, there’s something else that has to be kept in mind here. We will never reach the point in this nation, and we should not, where that all religions are equal. And we try to do that today. Even in 1962, when the Court struck down voluntary prayer, it pointed out that 3 percent of the nation did not believe in God and we shouldn’t go around offending that 3 percent of the nation who don’t believe in God. Now, the difficulty with that is, in this form of government, whether we call it a republic or a democracy or democrat or republic, numbers are important. What we’re to always make sure in this form of government is that the majority does not lose. - Now, when you have 3 percent being protected at the cost of 97 percent, that’s complete reversal of government purpose. Can you imagine a vote on the U.S. Senate that went 97 to 3 and the 3 won? That’s exactly what happened with the removal of the acknowledgement of God in 1962. See, Thomas Jefferson was very clear about it. He said, “In this form of government not everyone will be happy. You can’t make everyone happy.” He said, “Someone’s rights will always be violated.” The important thing was to make sure that it was the smaller group that got violated and not the majority. 
- But the way we can get around not wanting our children to be taught Christian principles or whatever if you’re of a different religion is very simply through educational choice. You have other schools at your disposal where you can teach the values that you want your children to receive, and in the same way those that want Christian values can have them taught. But we’ve been opposed on that because of the structure of education today. And it’s too bad that we have left that application of the Founding Fathers – educational choice. - Ankerberg: Now, what evidence exists that our Founding Fathers intended for basic Christian principles to be a part of the government and society and that the separation of church and state did not mean removing Christian principles from government? David Barton cites a number of court cases that clearly show they did not want Christian principles to be removed. Listen. - Barton: What evidences are there that the Founding Fathers intended for basic Christian principles to be part of society, and that separation of church and state did not mean removing Christian principles from society? Well, there are numbers of court cases that went for a century and a half that showed that we were never to remove Christian principles from society. - Now, there was a case in 1844 that made the U.S. Supreme Court that is fairly interesting considering the cases we see today. In 1844 there was a school in Philadelphia that said, “We are going to teach our students morality, but we are not going to teach Christian principles at this school. We will not teach the Bible here at this school, but we will teach morality.” Now, this case made the Court because this school was receiving government funds. And you see, they pointed out in court, they said, “Now wait a minute. If you don’t want to teach the Bible and Christianity, that’s fine. You’ve just got to go be a private school.” They said, “But if you’re going to receive the government funds, if you’re going to be a government public school, you’ve got to teach the Bible and Christianity in your school.” Now, this is a Supreme Court case. - Now, two years later the Court very clearly explained why Christian principles were to be the basis of society. The Court says “Christianity has reference to the principles of right and wrong. It’s the foundation of those morals and manners on which our society is formed.” It’s our basis. You remove this and it will fall. They said, “That’s where we get our rights and wrongs. And if you don’t have rights and wrongs,” the Court said, “where do you get your morals and manners? And if you don’t have rights, wrongs, morals and manners, how do you run a nation?” The Court said, “If we take these basic precepts away, these rights and wrongs that we get from the Bible, these Judeo-Christian principles,” they said, “all we can expect in America is the dark and murky night of pagan immorality because we will have lost our rights and our wrongs, our morals and our manners.” - And then in 1952 the Court continued to stay in that very same vein. In a case called “Zorach v. 
Clauson” the Court said this, it said, “When the state encourages religious instruction or cooperates with religious authorities by adjusting the schedule of public school events to meet denominational or sectarian needs, it follows the best of our traditions.” The Court said, “We find no constitutional requirement which makes it necessary for government to be hostile to religion and to throw its weight against efforts to widen the effective scope of religious influence.” And the Court concluded and said, “That would be preferring those who believed in no religion over those who do.” They said that can’t happen in America. We can’t have policies that favor those who believe in no religion over those who do. - And this was in 1952. And yet our policies now do that. Our policies now say, “No, we cannot have prayer in a public arena because that would offend those who don’t like prayer.” So now we prefer those who believe in no religion over those who do. Because you see, it’s a very simple problem: you will either have prayer or you won’t have prayer. One group will win, the other group won’t. Right now the group that’s winning is those that believe in no religion. And as long ago as 1952, just a few decades ago, we said that will never happen in America. - Ankerberg: Now, take a moment and think of the implications of all that you’ve heard today. I’ve asked David Barton to summarize the evidence he has given. Listen. - Barton: Does separation of church and state really mean that a person cannot take their religious values and principles and activities into public affairs? Under current definition it does. But that’s the wrong definition. That completely violates the intent of the Founding Fathers. It violates all Supreme Court decisions prior to 1962. It violates the writings of the Founding Fathers. It even violates the intent of Thomas Jefferson who gave us the phrase, “separation of church and state.” Everything historically that exists proves that the current application we have of “separation of church and state” is totally wrong.
Military occupation is effective provisional control of a certain ruling power over a territory which is not under the formal sovereignty of that entity, without the volition of the actual sovereign. Military occupation is distinguished from annexation by its intended temporary nature (i.e. no claim for permanent sovereignty), by its military nature, and by citizenship rights of the controlling power not being conferred upon the subjugated population. Military government may be broadly characterized as the administration or supervision of occupied territory, or as the governmental form of such an administration. Military government is distinguished from martial law, which is the temporary rule by domestic armed forces over disturbed areas. The rules of military government are delineated in various international agreements, primarily the Hague Convention of 1907 and the Geneva Conventions of 1949, as well as established state practice. The relevant international conventions, the International Committee of the Red Cross (ICRC) Commentaries, and treatises by military scholars provide guidelines on such topics as rights and duties of the occupying power, protection of civilians, treatment of prisoners of war, coordination of relief efforts, issuance of travel documents, property rights of the populace, handling of cultural and art objects, management of refugees, and other concerns which are very important both before and after the cessation of hostilities. A country that establishes a military government and violates internationally agreed-upon norms runs the risk of censure, criticism, or condemnation. In the current era, the practices of military government have largely become a part of customary international law, and form a part of the laws of war. Article 42 of the 1907 Hague Convention on Land Warfare specifies that "[t]erritory is considered occupied when it is actually placed under the authority of the hostile army." The form of administration by which an occupying power exercises government authority over occupied territory is called "military government." Neither the Hague Conventions nor the Geneva Conventions specifically define or distinguish an act of "invasion." The terminology of "occupation" is used exclusively.
Military occupation and the laws of war
From the second half of the 18th century onwards, international law has come to distinguish between the military occupation of a country and territorial acquisition by invasion and annexation, the difference between the two being originally expounded upon by Emerich de Vattel in The Law of Nations (1758). The clear distinction has been recognized among the principles of international law since the end of the Napoleonic wars in the 19th century. These customary laws of belligerent occupation, which evolved as part of the laws of war, gave some protection to the population under the military occupation of a belligerent power. The Hague Convention of 1907 further clarified and supplemented these customary laws, specifically within "Laws and Customs of War on Land" (Hague IV) of October 18, 1907: "Section III Military Authority over the territory of the hostile State." The first two articles of that section state:
- Art. 42.
- Territory is considered occupied when it is actually placed under the authority of the hostile army.
- The occupation extends only to the territory where such authority has been established and can be exercised.
- Art. 43.
- The authority of the legitimate power having in fact passed into the hands of the occupant, the latter shall take all the measures in his power to restore, and ensure, as far as possible, public order and safety, while respecting, unless absolutely prevented, the laws in force in the country. In 1949 these laws governing belligerent occupation of an enemy state's territory were further extended by the adoption of the Fourth Geneva Convention (GCIV). Much of GCIV is relevant to protected persons in occupied territories and Section III: Occupied territories is a specific section covering the issue. Article 6 restricts the length of time that most of GCIV applies: - The present Convention shall apply from the outset of any conflict or occupation mentioned in Article 2. - In the territory of Parties to the conflict, the application of the present Convention shall cease on the general close of military operations. - In the case of occupied territory, the application of the present Convention shall cease one year after the general close of military operations; however, the Occupying Power shall be bound, for the duration of the occupation, to the extent that such Power exercises the functions of government in such territory, by the provisions of the following Articles of the present Convention: 1 to 12, 27, 29 to 34, 47, 49, 51, 52, 53, 59, 61 to 77, 143. GCIV emphasised an important change in international law. The United Nations Charter (June 26, 1945) had prohibited war of aggression (See articles 1.1, 2.3, 2.4) and GCIV Article 47, the first paragraph in Section III: Occupied territories, restricted the territorial gains which could be made through war by stating: - Protected persons who are in occupied territory shall not be deprived, in any case or in any manner whatsoever, of the benefits of the present Convention by any change introduced, as the result of the occupation of a territory, into the institutions or government of the said territory, nor by any agreement concluded between the authorities of the occupied territories and the Occupying Power, nor by any annexation by the latter of the whole or part of the occupied territory. Article 49 prohibits the forced mass movement of people out of or into occupied state's territory: - Individual or mass forcible transfers, as well as deportations of protected persons from occupied territory to the territory of the Occupying Power or to that of any other country, occupied or not, are prohibited, regardless of their motive. ... The Occupying Power shall not deport or transfer parts of its own civilian population into the territory it occupies. Protocol I (1977): "Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts" has additional articles which cover military occupation but many countries including the U.S. are not signatory to this additional protocol. In the situation of a territorial cession as the result of war, the specification of a "receiving country" in the peace treaty merely means that the country in question is authorized by the international community to establish civil government in the territory. The military government of the principal occupying power will continue past the point in time when the peace treaty comes into force, until it is legally supplanted. "Military government continues until legally supplanted" is the rule, as stated in Military Government and Martial Law, by William E. Birkhimer, 3rd edition 1914. 
Beginning of military government
There does not have to be a formal announcement of the beginning of "military government," nor is there any requirement of a specific number of people to be in place, or "on site," before military government can be said to have commenced. See Birkhimer, p. 25–26: No proclamation on the part of the victorious commander is necessary to the lawful inauguration and enforcement of military government. That government results from the fact that the former sovereignty is ousted, and the opposing army now has control. Yet the issuing such proclamation is useful as publishing to all living in the district occupied those rules of conduct which will govern the conqueror in the exercise of his authority. Wellington, indeed, as previously mentioned, said that the commander is bound to lay down distinctly the rules according to which his will is to be carried out. But the laws of war do not imperatively require this, and in very many instances it is not done. When it is not, the mere fact that the country is militarily occupied by the enemy is deemed sufficient notification to all concerned that the regular has been supplanted by a military government.
The occupying power
The terminology of "the occupying power" as spoken of in the laws of war is most properly rendered as "the principal occupying power," or alternatively as "the occupying power." This is because the law of agency is always available (when the administrative authority for the military occupation of particular areas is delegated to other troops, a "principal–agent" relationship is in effect). Because the law of agency is a very general pattern, applicable in this case primarily as a means of regulating the relationships between the said "powers" (though considerations of logistics must sometimes also be taken into account), that definition is not always applicable outside of contexts which can be analysed, by analogy, as related to warlording, even though it relates more generally to all possible types of military coalitions. In most contexts determined by the application of the defined and modern laws of war, delegation to agencies generally tends to relate to civilian organizations. In other cases, juridical considerations like those above remain merely consensual between the said powers. For example, in 1948 the U.S. Military Tribunal in Nuremberg stated: In belligerent occupation the occupying power does not hold enemy territory by virtue of any legal right. On the contrary, it merely exercises a precarious and temporary actual control. This can be seen from Article 42 of the Hague Regulations which grants certain well limited rights to a military occupant only in enemy territory which is 'actually placed' under his control. The conqueror is the principal occupying power.
End of military government
Rule: Military government continues until legally supplanted. This is explained as follows. For the situation where no territorial cession is involved, the military government of the principal occupying power will end with the coming into force of the peace settlement.
- Example: (1) Japan after WWII. Japan regained its sovereignty with the coming into force of the San Francisco Peace Treaty on April 28, 1952. In other words, a civil government for Japan was in place and functioning as of this date.
In the situation of a territorial cession, there must be a formal peace treaty.
However, the military government of the principal occupying power does not end with the coming into force of the peace treaty.
- Example: (1) Puerto Rico after the Spanish–American War. Military government continued in Puerto Rico past the coming into force of the Treaty of Paris of 1898 on April 11, 1899, and only ended on May 1, 1900 with the beginning of Puerto Rico's civil government.
- Example: (2) Cuba after the Spanish–American War. Military government continued in Cuba past the coming into force of the Treaty of Paris of 1898 on April 11, 1899, and only ended on May 20, 1902 with the beginning of the Republic of Cuba's civil government.
Hence, at the most basic level, the terminology of "legally supplanted" is interpreted to mean "legally supplanted by a civil government fully recognized by the national (or "federal") government of the principal occupying power."
Examples of military occupations
In most wars some territory is placed under the authority of the hostile army. Most military occupations end with the cessation of hostilities. In some cases the occupied territory is returned and in others the land remains under the control of the occupying power, but usually not as militarily occupied territory. Sometimes the status of a presence is disputed by a party to the situation. Military occupation is usually a temporary phase, preceding either the handing back of the territory or its annexation. The world's longest ongoing military occupation, and the longest in modern times, is Israel’s occupation of Gaza, the West Bank and East Jerusalem. Other contemporary military occupations include the occupation of Northern Cyprus by Turkey.
- List of military occupations
- Rule of Law in Armed Conflicts Project (RULAC)
- Nazi-occupied Europe
- Allied Occupation Zones in Germany
- Ex Parte Milligan
- Occupied territory - the legal issues, legal provisions regarding occupation of territory by hostile power and implications for people protected by IHL.
- David Kretzmer, Occupation of Justice: The Supreme Court of Israel and the Occupied Territories, State University of New York Press, April 2002, trade paperback, 262 pages, ISBN 0-7914-5338-3; hardcover, July 2002, ISBN 0-7914-5337-5
- Sander D. Dikker Hupkes, What Constitutes Occupation? Israel as the occupying power in the Gaza Strip after the Disengagement, Leiden: Jongbloed 2008, 110 pages, ISBN 978-90-70062-45-3 (open access)
- Belligerent Occupation
- The Law of Belligerent Occupation, Michael N. Schmitt (regarding occupation of Iraq)
- Law of Belligerent Occupation, Judge Advocate General's School, United States Army
- Military Government and Martial Law, by William E. Birkhimer, third edition, revised (1914), Kansas City, Missouri, Franklin Hudson Publishing Co.
- FM 27-10 "The Law of Land Warfare," DEPARTMENT OF THE ARMY, WASHINGTON 25, D.C., 18 July 1956. (This manual supersedes FM 27-10, 1 October 1940, including C 1, 15 November 1944. Changes required on 15 July 1976 have been incorporated within this document.) Chapter 6, OCCUPATION
- A Roberts. Prolonged Military Occupation: The Israeli-Occupied Territories Since 1967 - Am. J. Int'l L., 1990, p. 47.
- Eyāl Benveniśtî. The international law of occupation. Princeton University Press, 2004. ISBN 0-691-12130-3, ISBN 978-0-691-12130-7, p. xvi
- Eran Halperin, Daniel Bar-Tal, Keren Sharvit, Nimrod Rosler and Amiram Raviv. Socio-psychological implications for an occupying society: The case of Israel. Journal of Peace Research 2010; 47; 59
- During civil wars, the districts occupied by rebels are considered to be foreign. Military Government and Martial Law, LLMC, p. 21.
- David M. Edelstein. Occupational Hazards: Why Military Occupations Succeed or Fail. Journal of Peace Research 2010; 47; 59
- Phillipson, Coleman (1916). Termination of War and Treaties of Peace. The Lawbook Exchange. p. 10. ISBN 9781584778608. "The difference between effective military occupation (or conquest) and annexation involves a profound difference in the rights conferred by each."
- Stirk, Peter (2009). The Politics of Military Occupation. Edinburgh University Press. p. 44. ISBN 9780748636716. "The significance of the temporary nature of military occupation is that it brings about no change of allegiance. Military government remains an alien government whether of short or long duration, though prolonged occupation may encourage the occupying power to change military occupation into something else, namely annexation."
- "Laws and Customs of War on Land" (Hague IV); October 18, 1907: "Section III Military Authority over the territory of the hostile State". Source: The Avalon Project at the Yale Law School.
- Anonymous. "Chapter 5 – Definitions of Important Terminology and Concepts Related to Territorial Cessions". The True Legal Relationship between Taiwan & the USA. www.taiwanbasic.com. Retrieved December 2013. [unreliable source?]
- Yutaka Arai Takahashi (2009). The Law of Occupation: Continuity and Change of International Humanitarian Law. p. 7. ISBN 978-90-04-16246-4.
- The majority of the international community (including the UN General Assembly, the United Nations Security Council, the European Union, the International Criminal Court, and the vast majority of human rights organizations) considers Israel to be occupying Gaza, the West Bank and East Jerusalem. The government of Israel and some supporters have, at times, disputed this position of the international community. For more details of this terminology dispute, including with respect to the current status of the Gaza Strip, see International views on the Israeli-occupied territories and Status of territories captured by Israel. See for example:
  * Hajjar, Lisa (2005). Courting Conflict: The Israeli Military Court System in the West Bank and Gaza. University of California Press. p. 96. ISBN 0520241940. "The Israeli occupation of the West Bank and Gaza is the longest military occupation in modern times."
  * Anderson, Perry (July–August 2001). "Editorial: Scurrying Towards Bethlehem". New Left Review. 10. "...longest official military occupation of modern history—currently entering its thirty-fifth year."
  * Makdisi, Saree (2010). Palestine Inside Out: An Everyday Occupation. W. W. Norton & Company. ISBN 9780393338447. "...longest-lasting military occupation of the modern age."
  * Kretzmer, David (Spring 2012). "The law of belligerent occupation in the Supreme Court of Israel" (PDF). International Review of the Red Cross. 94 (885).
    doi:10.1017/S1816383112000446. "This is probably the longest occupation in modern international relations, and it holds a central place in all literature on the law of belligerent occupation since the early 1970s."
  * Said, Edward (2003). Culture and Resistance: Conversations with Edward W. Said. Pluto Press. p. 33. ISBN 9780745320175. "These are settlements and a military occupation that is the longest in the twentieth and twenty-first century, the longest formerly being the Japanese occupation of Korea from 1910 to 1945. So this is thirty-three years old, pushing the record."
  * Alexandrowicz, Ra'anan (24 January 2012), The Justice of Occupation, The New York Times. "Israel is the only modern state that has held territories under military occupation for over four decades."
  * Weill, Sharon (2014). The Role of National Courts in Applying International Humanitarian Law. Oxford University Press. p. 22. ISBN 9780199685424. "Although the basic philosophy behind the law of military occupation is that it is a temporary situation, modern occupations have well demonstrated that rien ne dure comme le provisoire. A significant number of post-1945 occupations have lasted more than two decades, such as the occupations of Namibia by South Africa and of East Timor by Indonesia, as well as the ongoing occupations of Northern Cyprus by Turkey and of Western Sahara by Morocco. The Israeli occupation of the Palestinian territories, which is the longest in all occupation's history, has already entered its fifth decade."
Broadband is essential to economic growth in the 21st century. This became starkly apparent over the past year as the coronavirus pandemic deepened Americans’ reliance on the internet. Families need the internet to access essential services, including education and health care, while small businesses and entrepreneurs need it to improve their operations and reach more customers to bring new economic opportunities to left-behind communities. Though many schools and offices are now starting to reopen, the pandemic has illustrated not only that broadband internet access at home will remain an indispensable utility in the future, but also that it’s an equity issue. Between 6 percent and 12 percent of Americans do not have high-speed internet service, either because of a lack of infrastructure access or an inability to afford the service. Rural communities, low-income people, and communities of color experience the highest barriers to broadband access—and many found themselves unable to access key services online during COVID-19. Those same groups experienced the brunt of the pain from the pandemic and resulting economic recession, including disparities in COVID-19 deaths, dramatic job loss and financial insecurity, and a higher risk of infection caused in part by occupational segregation into low-paid and high-risk front-line roles as well as other manifestations of structural racism and marginalization built into the economy. For people of color in rural communities, racial and geographic disparities compound one another. In majority-white rural counties, about 72 percent of the population has broadband available; for majority-African American rural counties and majority-Native American rural counties, it’s 56 percent and a staggering 27 percent, respectively. People of color in both rural and urban areas are less likely to have access to high-speed internet due to residential segregation caused in part by racist zoning and investment practices paired with a monopolistic market, where internet companies choose not to build or extend affordable, high-quality services without a higher profit margin. This column focuses on these disparities between rural and urban areas during the pandemic, what they mean for rural Americans’ access to services that meet their basic needs, and why broadband is a part of the country’s essential infrastructure. Using data from the Federal Reserve’s Survey of Household Economics and Decisionmaking (SHED), the column finds: - Rural residents* are almost twice as likely as urban ones to lack high-speed internet at home, at 19.69 percent compared with 10.23 percent. - 31.62 percent of workers in urban areas reported working from home full time in the previous week due to the pandemic, compared with just 13.61 percent of rural workers. - Rural students were twice as likely as urban students to report lacking adequate technology to complete their coursework during the pandemic, at 11.45 percent and 5.74 percent. - Low-income families and communities of color are less likely than white, affluent households to have broadband at home. A path forward for broadband investment President Joe Biden’s infrastructure plan aims to jump-start the economy by making historic investments in infrastructure—including broadband. Of Biden’s originally proposed American Jobs Plan, $100 billion of the more than $2 trillion would go to broadband infrastructure build-out and monthly subscription subsidies to assist low-income individuals with affordability. 
The plan also includes set-asides for tribal communities and preference for broadband networks owned or operated by, or affiliated with, local governments, nonprofits, and cooperatives. To ensure that broadband investments go where they are most needed, the Biden administration recently released a map that combines private and public data to illustrate gaps in broadband coverage. Ultimately, however, congressional negotiations will determine the total investment in broadband, including where these investments are targeted geographically. Regardless of its final size, any infrastructure package must include significant investments in broadband. The provisions must be guided by the goal to ensure equity for low-income communities and communities of color and to close access gaps between rural and urban residents. To realize this aim, investments must span both rural and urban contexts and address affordability in addition to availability. Internet access gaps during the pandemic There has been some progress in shrinking the gap between rural and suburban broadband adoption—a decrease in the gap from 16 percent to 7 percent in the last two years, according to Pew Research Center. However, tens of millions of Americans still lack access to this essential utility, and major differences across race, income, and region raise equity concerns. 2020 data from the Federal Reserve’s SHED capture how this broadband access gap played out during the coronavirus pandemic. According to the authors’ analysis of these data, rural households were almost twice as likely as urban ones to lack broadband internet, at 19.69 percent versus 10.23 percent. The SHED data show concerning inequities in online learning due to a lack of high-speed internet access in rural areas. While 82.62 percent of respondents in metropolitan areas reported that their children had sufficient internet access to complete their virtual coursework, this was true for only 76.15 percent of people in rural areas. Similarly, 11.45 percent of people in rural areas disagreed with the statement that their children had adequate internet to complete their coursework, compared with 5.74 percent of people living in metropolitan areas. The 2020 SHED data also found that people in metropolitan areas were more than twice as likely as people in rural areas to be working remotely full time during the pandemic. Affordability is another major barrier to families in need of broadband access. More than one-fifth of all families with an annual income below $25,000 lack broadband at home. To bridge this gap, the Federal Communications Commission offers low-income families a subsidy through the Lifeline program, which the Emergency Broadband Benefit program expanded in response to the COVID-19 pandemic. Such programs, paired with robust outreach and comprehensive implementation, will continue to be necessary in the coming years in order to achieve universal broadband access. Why broadband is infrastructure Though experts cannot predict the degree to which the shift from in-person to online services is permanent, broadband internet will be essential to participate in society moving forward. Online offerings have the potential to expand access to remote or virtual services that are difficult to find in rural communities, such as mental health care, access to and enrollment in public benefit programs, and banking, but those benefits are impossible to gain without reliable broadband service. 
The following are just five areas in which rural communities would benefit from strong federal investments in broadband deployment and adoption: - Education: As education across grade levels dramatically shifted to virtual learning at the onset of the pandemic, computer and high-speed internet access became more important than ever. One in 5 parents nationwide reported that it was likely their children would be unable to complete schoolwork because they lacked access to a computer at home, while 1 in 3 parents with lower incomes reported needing public Wi-Fi because they lacked reliable internet service at home. Among Black, Latino, and American Indian/Alaska Native families, only 1 in 3 households reported having sufficient high-speed internet access at home to support online learning. Limited access to technology during the pandemic exacerbated the existing “homework gap” between students with internet and those without. These disparities have repercussions for learning and opportunity that will endure beyond the pandemic and will only worsen without investments in reliable broadband for all students. - Public benefits: Applications, enrollment in, and the administration of Supplemental Nutrition Assistance Program (SNAP) benefits, unemployment insurance, Supplemental Security Income, and other benefits went solely virtual during the pandemic for the sake of safety and efficiency—excluding those without internet access. Many public offices—all Social Security Administration locations, for example—were closed, as were public libraries and case management services that might usually provide internet access. Increased outreach to vulnerable populations through tools such as online portals and educational webpages does not extend to those with unreliable or nonexistent internet access. While internet access is not a silver bullet, and availability of in-person resources and mail and phone services remains vital, more equitable access can help those using public benefits. - Health care: Broadband access is also a public health issue, particularly for rural communities, where doctor’s offices and hospitals may be an hour’s drive away or farther. From April 2019 to April 2020, national privately insured telehealth claims rose by more than 8,000 percent. While those rates have likely tapered as some offices have reopened, uptake of telehealth services will likely continue to be higher than it was prior to the pandemic, especially if proposed changes to make pandemic-era Medicare telehealth flexibility permanent are enacted. However, the rural and vulnerable populations that would be best served by accessible telehealth services are also the least likely to have reliable broadband access. - Telework: In June 2020, 67 percent of workers in nonmetropolitan areas—particularly workers of color and low-wage workers—were unable to telework. Though some could not perform their jobs virtually, others simply lacked the technology to work from home. Reliable broadband presents a wide range of opportunities for economic growth in rural communities. Attracting workers with remote jobs has spillover effects that can create other jobs in the area. However, in one October 2020 survey, more than one-third of respondents cited unreliable or limited internet access as a barrier to moving to a rural area. Moreover, enabling current rural residents to telework broadens the number of employment opportunities to include remote jobs of all kinds, such as customer service and data entry roles. 
- Online banking: Online account access for banked households is a crucial service that more than one-fifth of families rely on each year. Meanwhile, approximately 4 percent of U.S. households in 2019 were “unbanked,” meaning they had no checking or savings account; in 2017, 18.7 percent of households were “underbanked,” relying on alternative financial services as well as a bank account. These households, which are more likely to be poor and families of color, typically use alternative banking services, such as payday lending and check cashing or mobile banking services. For unbanked and underbanked people, online banking and bill pay services can be an important alternative to these exploitative options. High-speed internet is a necessity, but rural Americans, particularly poor people and people of color, often lack access to this important utility. This challenge requires investment on a historic scale. Congress must take bold steps to close the urban-rural broadband gap and center equity in its plan to expand internet access to more families. Zoe Willingham is a research associate for Economic Policy at the Center for American Progress. Areeba Haider is a research associate for the Poverty to Prosperity Program at the Center. * Due to limitations of available data, the authors define rural as “nonmetropolitan” and urban as “metropolitan” for the purposes of this column. Russia ministry says economic slump less severe than feared – Al Jazeera English Economy ministry says gross domestic product to shrink 4.2 percent this year amid sanctions over the war in Ukraine. Russia’s economy will contract less than expected and inflation will not be as high as projected three months ago, economy ministry forecasts showed, suggesting the economy is dealing with sanctions better than initially feared. The economy is plunging into recession after Moscow sent its armed forces into Ukraine on February 24, triggering sweeping Western curbs on its energy and financial sectors, including a freeze of Russian reserves held abroad, and prompting scores of Western companies to leave. Yet nearly six months since Russia started what it calls a “special military operation”, the downturn is proving to be less severe than the economy ministry predicted in mid-May. The Russian gross domestic product (GDP) will shrink 4.2 percent this year, and real disposable incomes will fall 2.8 percent compared with 7.8 percent and 6.8 percent declines, respectively, seen three months ago. At one point, the ministry warned the economy was on track to shrink by more than 12 percent, in what would be the most significant drop in economic output since the fall of the Soviet Union and a resulting crisis in the mid-1990s. The ministry now sees 2022 year-end inflation at 13.4 percent and unemployment of 4.8 percent compared with earlier forecasts of 17.5 percent and 6.7 percent, respectively. GDP forecasts for 2023 are more pessimistic, though, with a 2.7 percent contraction compared with the previous estimate of 0.7 percent. This is in line with the central bank’s view that the economic downturn will continue for longer than previously thought. The economy ministry left out forecasts for prices for oil, Russia’s key export, in the August data set and offered no reasons for the revision of its forecasts. The forecasts are due to be reviewed by the government’s budget committee and then by the government itself. 
China’s premier urges pro-growth policies as economy sputters – Al Jazeera English Li Keqiang calls on provinces to bolster growth after consumption and output fall short of expectations. China’s Premier Li Keqiang asked local officials from six key provinces that account for about 40% of the country’s economy to bolster pro-growth measures after data for July showed consumption and output grew slower than expectations due to Covid lockdowns and the ongoing property slump. Li told officials at a meeting to take the lead in helping boost consumption and offer more fiscal support via government bond issuance for investments, state television CCTV reported Tuesday evening. He also vowed to “reasonably” step up policy support to stabilize employment, prices and ensure economic growth. “Only when the main entities of the market are stable can the economy and employment be stable,” Li was cited as saying at the meeting in a front-page report carried in the People’s Daily, the flagship newspaper of the Communist Party. The meeting came after Monday’s surprise interest-rate cut did little to allay concern over the property and Covid Zero-led slowdown. Economists have warned of even weaker growth and have called for additional stimulus, such as further cuts in policy rates and bank reserve ratios and more fiscal spending. Li acknowledged the greater-than-expected downward pressure from Covid lockdowns in the second quarter and asked the local officials to strike a balance between Covid control measures and the need to lift the economy. “Only by development shall we solve all problems,” Li said, according to the broadcaster. Indicating China may resort to more local debt issuance to pump-prime the economy, Li said “the balance of local special bonds has not reached the debt limit” and the country should “activate the debt limit space according to law,” according to the People’s Daily report. Based on the government budget, local authorities may be able to issue an estimated 1.5 trillion yuan ($221 billion) of extra debt and bonds this year to support infrastructure spending, after top leaders urged better use of the existing debt ceiling limit in a key July Politburo meeting. The arrangement could be approved in August, according to some analysts. China’s 10-year government yield rose for the first time this week, up one basis point to 2.64% from the lowest in more than two years. Li urged local governments to accelerate the construction of projects with sound fundamentals in the third quarter to drive investment, the report said, and also asked officials to expand domestic consumption of big-ticket items such as automobiles and support housing demand. He also stressed the importance of opening up the domestic market to foreign investors, noting that the six major provinces — Guangdong, Jiangsu, Zhejiang, Henan, Sichuan and Shandong — account for nearly 60% of the country’s total foreign trade and foreign investment. “Opening up is the only way to make full use of the two markets and resources and improve international competitiveness,” Li was cited as saying. Li’s appearance suggests state leaders have completed their annual two-week policy retreat in resort area of Beidaihe. 
German recession fears deepen as economy is hit by 'perfect storm' – Financial Times Investors are now more pessimistic about the German economy than they have been at any time since the eurozone debt crisis more than a decade ago, worrying that a sharp fall in Russian natural gas supplies and soaring energy prices will plunge the country into recession. The ZEW Institute’s gauge of investor expectations about Europe’s largest economy has sunk to its lowest level since 2011, dropping from minus 53.8 to minus 55.3, underlining the deepening gloom about the economic fallout from Russia’s invasion of Ukraine. The think-tank’s survey of financial market participants provides an early indicator of economic sentiment after Russia reopened the Nord Stream 1 pipeline following a maintenance break last month, but kept the main conduit for delivery of gas to Europe operating at only a fifth of capacity. Economists have slashed their estimates for growth in Germany and the wider eurozone this year, while raising their inflation forecasts and warning that an end to Russian energy supplies would force Berlin to ration gas supplies for heavy industrial users. On Tuesday, German baseload power for delivery next year, the benchmark European price, rose over 5 per cent to a record €502 per megawatt hour, according to the European Energy Exchange. This is six times higher than the price a year ago — driven upwards by the sharply higher cost of gas used to generate electricity and the prolonged European heatwave that has disrupted generating capacity. The surging price of energy has driven up the cost of imports for Germany and other eurozone countries, sending the bloc’s trade deficit up to €24.6bn in June, compared with a surplus of €17.2bn for the same month a year earlier, according to data from Eurostat, the European Commission’s statistics bureau. The value of exports from the bloc rose 20.1 per cent in June from a year ago, but imports were up 43.5 per cent. “The still high increase in consumer prices and the expected additional costs for heating and electricity are currently having a particularly negative impact on the prospects for the consumer-related sectors of the economy,” said Michael Schröder, a researcher at the ZEW. He said investor sentiment also worsened due to an expected tightening of financing conditions after the European Central Bank raised its deposit rate by 0.5 percentage points to zero in response to record levels of eurozone inflation. Carsten Brzeski, head of macro research at Dutch bank ING, said the German economy was “quickly approaching a perfect storm” caused by “high inflation, possible energy supply disruptions, and ongoing supply frictions”. A heatwave and dry spell has reduced water levels on the Rhine below the level at which barges can be loaded fully, restricting important supplies for factories, which Brzeski estimated was likely to knock as much as 0.5 percentage points off German growth this year. Adding to the gloom, German households will have to pay hundreds of euros more in fuel bills this winter after the government unveiled an extra gas levy of 2.419 cents per KWH from October. This is expected to push up the cost for a family of four by €240 in the final three months of the year. Germany’s top network regulator told the Financial Times this month that the country must cut its gas use by a fifth to avoid a crippling shortage this winter. 
The economy ministry has also ordered all companies and local authorities to reduce the minimum room temperature in their workspaces to 19C over the winter. The country has achieved its target of filling gas storage facilities to three-quarters of capacity two weeks ahead of schedule, after high prices and fuel saving measures led to reduced use. But there are worries its objective to lift gas storage to a 95 per cent target of capacity by November will be more challenging if Russia keeps throttling supplies. The German economy stagnated in the second quarter, the weakest performance of the major eurozone countries. Last month, the IMF slashed its forecast for German growth next year by 1.9 percentage points to 0.8 per cent, the biggest downgrade of any country. Additional reporting by Harry Dempsey
The City of Quebec, the Second city in British North America, and now the Seat of Government for United Canada, is situated at the junction of the River St. Charles with the River St. Lawrence in latitude 46-9 N., and longitude 75.15 W. from Greenwich. The City is most picturesquely situated, and is naturally, as well as artificially, divided into two parts, known respectively as the Upper and Lower towns, the former of which is strongly fortified and is also defended by the Citadel, which is probably the most complete, as well as the strongest fortification upon the American continent. Quebec is an electoral district returning two members to the Provincial Parliament, and the courts of law for the district are held here distant from Montreal, 180 miles usual steamboat fare 7s. 6d. to 12s. 6d. usual stage fare, 50s. distant from Kingston, 392 miles usual steamboat fare, 32s. 6d. usual stage fare, 85s. distant from Toronto, 569 miles usual steamboat fare, 52s. 6d. usual stage fare, 125S. Population. including the troops usually in garrison, about 40,000. In the following Directory the names which appear in CAPITALS are those of subscribers to the work. His Worship the Mayor N. F. Belleau. Charles Alley N. Francis Xavier Paradis Jacques P. Rheaume William S. Sewell U. J. Tessier Representatives Of The Different Wards St. Louis Ward, Boxer, Sewell, Sewell, (Dr.); St. Peter’s Ward, Murray, Lepper, Carrier; St. Roch’s Ward, Rheaume, Guay. Tourangeau, Paradis; Palace Ward, Hall, McDonald, Morrin Champlain Ward, Maguire, Alleyn, Lampson; Saint John’s Ward, Tessier, Belleau, Dorval, Robitaille. Officers Of The Corporation F. X. Garneau, city clerk Augustin Gauthier, city treasurer Joseph Hamel, city surveyor T. W. Lloyd, Water Works manager Theophile Baillarge, assistant to city surveyor R. Meredith, city collector F. X. Julien, messenger M. M. Caron and Baillarge, advocates Charles Maxime DeFoy, notary Quebec Municipal Fire Department N. Wells, fire inspector, 23 Hope st., U. T. Deluge, Capt. Charles Corneil, St. Ursule st., U. T. Union, T. Gleeson, Cul de Sac st., L. T. Invincible, John Boomer, Nouvelle st, St. John’s. St. Lawrence, Thomas Burns, St. Paul st. L. T. St Roch, P. Latarte, St. Joseph st., St. Roch’s. La Canadian, J. B. Bureau, St. Joachim st., St. John’s. Erin-go-Bragh, J. Murray, Champlain at., L. T. S. Faugh-a-ballagh, L. Brothers, Champlain st., L. T. Quebec Hose Company, J. Wright, St. Ursale st. 0. T. Sappers and Hook and Ladder Company, F. N. Martinette, St. Joseph st., U. T. Clerks Of Markets Thomas Atkins, Upper Town market. Denis Murray, Lower Town market. Augustin Gauthier, St. Paul’s market R. H. Russell, chief constable, City hall, St. Louis st., U. T. Judges, Legislative Councillors And Members Of Parliament BLACK, HON. HENRY, judge of the vice-admiralty court. BOWEN, HON. EDWARD, chief justice superior court. CARON, HON; R. E., speaker of legislative council and CAUCHON, JOSEPH, M. P. P. for Montmorency. CHABOT, HON. JEAN, M. P. P. for Quebec city. CHAVEAU, PIERRE J. O., M. P. P. for Quebec county. CHRISTIE, ROBERT, M. P. P. for Gaspe. DUVAL, HoN. J. F., judge of superior court. LEMIEUX, FRS. X., M. P. P. for Dorchester. MEREDITH, HON. W. C., judge of superior court. METHOT, F. X., M. P. P. for Quebec city. PANET, HON. PHILLIPE, judge of court of Queen’s Bench. POWER, HON. WILLIAM, judge of circuit court. Q. C. MASSUE, HoN. Louts, M. L. C. RACQUET, HoN. J. B. E., judge of superior court. RoSs, DuNBAR, M. P. P. for Megantic. STUART, THE HON. 
SIR JAMES, Bart., chief justice Lower Canada. TASCHEREAU, HON. JOSEPH A., judge of circuit court. WALKER, HON. WILLIAM, M. L. C. St. Ann’s College Revs. C. Gauvreau, superior F. Pilote, T. B. Pelletier, E. Richard, G. Tremblay, H. Potvin, procurator A. Pelletier, A. Blanchet, P. H. Bouchy. Revs. L. Proulx, parish priest of Quebec E. G. Plante, George Drolet, L. Gill, vicars; Z. Charest, parish priest of St. Rochs J. Matte, P. L. Lahaie, N. Godbout, W. Richardson, vicars L. T. Bedard, E. G. Plante, general hospital T. Maguire, Ursuline convent P. McMahon, chaplain St. Patrick’s church M. Kerrigan, E. Bonneau, assistants M. Lemieux, A. Lafrancois, hotel dieu P. H. Harkin, military hospital W. Richardson, marine do Medical, Surgical, And Benevolent College Of Physicians And Surgeons, L. C. Joseph Morrin, Esq., M. D., president; Jean Blanchet, Esq., M. R. C. S. L., vice-president, W. Nelson, Esq., M. D., vice-president, Quebec School Of Medicine Dr. Morrin, president Dr. Bardy, secretary Dr. Landry, lecturer on anatomy (general and descriptive Dr. Sewell, lecturer on practice of physic Dr. Fremont, lecturer on practice of surgery Dr. Painchaud, lecturer on midwifery and diseases of women and children Dr. Nault, lecturer on materia medica and pharmacy Dr. Bardy, lecturer on medical jurisprudence and botany Dr. Painchaud, lecturer on clinical medicine Dr. Jackson, lecturer on clinical surgery Dr. Jackson, lecturer on chemistry Commissioners. Dr. Morrin, president; Dr. Parant, R. J. Alleyn, F. X. Paradis, T. Kelly, J. J. Nesbit. Visiting Physicians. Dr. James Douglas, Dr. Painchaud, sen., Dr. Hall, Dr. Jackson, Dr. Robitaille, Dr. Rowand. Dr. E. Lemieux, house surgeon; Mr. Beaubien, apothecary; P. Whelan, steward; Mrs. Whelan, matron. Mount Hermon Cemetery Directors. G. O. Stuart, chairman; H. S. Scott, secretary; C. Wurtele, treasurer; Jeffery Hale, John Gilmour, Thomas Gibb, John Musson, A. McDonald, W. S. Henderson. James Millar, superintendent. Mercantile Anti Literary Associations Hon. Justice Panet, president Hon. H. Black, vice-president Hon. It. E. Caron, treasurer Charles Alleyn, secretary J. B. Landry, librarian. Rooms Court house, St. Louis st. Hon. R. E. Caron, honorary president P. J. O. Chaveau, president active Vital Tetu, and F. X. Paradis, vice-presidents F. Vezina, treasurer J. Langlois, secretary 5 N. Casault, librarian High School Of Quebec WILLIAM ANDREW, M.A., rector W. S. Smith, classical master D. Wilkie, English and arithmetical master Hy. D. Thielcke, French, German and drawing master Rev. John Cook, D. D., chairman of the board of directors D. Wilkie, treasurer and secretary Literary And Historical Society Officers-G. B. Faribault, president Hon. R. E. Caron, E. Burroughs, A. Campbell, Rev. A. W. Mountain, vice-presidents C. W. Jones, recording secretary W. H. A. Davies, corresponding secretary H. D. F. Thielcke; assistant secretary Robert Symes, treasurer D. Wilkie, librarian Wm. D. Campbell, curator of museum R. Neill, curator. of apparatus G. B. Faribault, W. H. A. Davies. Rev. M. Casault, D. Wilkie, Hon. R. E. Caron, C. W. Jones, A. Campbell committee on historical documents William Antrobus Holwell, president Rev. D. Marsh, W. Patterson J. S. Hossack, E. S. Pooler, W. S. Henderson, vice-presidents Robert Neill, secretary W. J. Bickell, corresponding secretary James McKay, recording secretary J. Boomer. assistant secretary John Burnhope and Daniel Bews, librarians. Quebec Library Association Officers. H. S. Scott, president Rev. Dr. Mackie, Rev. J. Cook, D. D., G. 
B. Faribault, vice-presidents R. Symes, treasurer A. Joseph, chairman J. H. Clint, secretary Miss Meiklejohn, librarian W. H. C ran, superintendent Quebec County Agricultural Society Officers E. J. Deblois, president Thomas May, vice-president J. B. Trudelle, secretary Joseph Bedard, treasurer Joseph Laurin, James Welch, Michel Hamel, James West, G. Eglinton. P. Trudelle, John Lane, John West, William Meek, W. Taylor, Michael Scullion, Samuel Tozer. Quebec Board of Trade Council James Dean, president R. Wainwright, vice-president H. J. Noad, treasurer James Gillespie, secretary D. Gilmour, W. Stevenson, W. Hunt, J. Gillespie, C. Wurtele, A. Laurie. T. C. Lee, T. H. Dunn, R. Hamilton, H. Lemesurier. Board Of Arbitration J. Jameson, A. M’Donald, W. Dawson, James Gillespie, James Gibb, W. Hunt, A. Laurie, W. Herring, D. D. Young, J. W. Leaycraft, J. B. Forsyth, H. J. Scott. Managing Commitee Hon. William Walker, chairman; R. Roberts, A. D. Bell, H. J. Noad, Henry Pemberton, W. Stevenson, Charles Poston, secretary and treasurer. William Lane, superintendent. Regular days of Meeting, the last Monday in every month, at Ten, A. M. Cullers Of Timber, &C. Supervisor Of Cullers’ Office. Nos. 31 and 32 Sault an Matelot st., L. T. (John Sharples, supervisor). holding licenses under provincial act, 8 Viet. cap. 49, August 12, 1848. James M’Phee, Louis Dorion, William Bee, Charles Cazeau, Alexis Dorval, John S. Waterson, Etienne Robitaille, Denis Cantillon, Jean Larochelle, Denis Duggan, Maurice Malone, Michael Hamel, James Scott, Michel Robitaille, J. B. Vachon, Michael Power, Michael Murphy, Olivier Gaboury, Robert Downes, Louis Myrand, William, O’Brien, William Teedon, Gilbert Downes, Joseph Lockwell, Patrick Malone, Thomas Malone, James Lynch, Jean Couture, Alexander Couture, sen., Fereole Couture, F. X. Beland, J. B. Jarnac, Barthelemi Chartier, Robert Boyte, John O’Sullivan, O. Gauvreau, J. B. Philbert, John Frederick, James Downes, C. Corneau, Edouard Verrault, William M’Kutcheon, John Millar, Bernard Daly, Jacques Jobin, John Cameron, Thomas Egan, James Mackie, James P. Boure, J. Bowen, Peter GelIy, Joseph Larose, P. M. Paquet, John Quinn, Thomas Redmond, Germain Savard, Narcisse Valin, Louis Cloutier, James Lambert, Pierre M’Neil, jr., Michael Lynch, Charles Couture, Edward Haughton, Jean Bornais. John Curtin, Jerome Couture, Joseph Langlois, Michael Gibbons, Pierre M’Neil. senior, John Leek, Henry M’Peak, William Duggan, John Peverly, W. Lambert. Peter Gilgan, Charles Timony, George Larochelle, Richard Jeffery, F. X. Thompson, W. H. Hoogs, Thomas Clark, William French, E. T. Gauvreau, Louis Demers, Xavier Masson, David J. Gewais, Pierre Juinest, Robert Clark, John Clark, Roderick M’Gillis, John Tilly. John Sewell, postmaster; David Logie, 1st clerk; David M. Wright, 2nd clerk; F. M. Becot, 3rd clerk; Vincent Cazeau, 4th clerk; John Watt, 1st letter carrier; Robert Patton, 2nd letter carrier; Richard Glover, 3rd letter carrier. John Bruce, comptroller; John P. Meara, first assistant; Edward Bartlett, second assistant; James Mills, tide Surveyor. Office, St. James street, L. T. J. W. DUNSCOMB, collector; Ls. Massue, landing surveyor; J. V. Bouchard, first clerk; Neilson Ross, second clerk; P. M. Partridge, clerk to surveyor; J. B. A. Chartier, clerk to surveyor; James Sealy, head locker; J. A. Taschereau, 1st landing waiter; Chs. E. Allen, 2nd landing waiter; C. Cazeau, 3rd landing waiter; Frs. Thompson, 4th landing waiter; W. 
McCauley, house-keeper and messenger; Hugh McHugh, sampler and weigher; F. X. Frenette, appraiser; F. X. Metivier, assistant appraiser; D. McLean Stewart, check officer on foreign rafts, &c. Courts Of Justice Court Of Queen’s Bench. Hon. Sir James Stuart, Bart., chief justice Hons. Jean Roch Rolland, Philippe Panet, Thomas Cushing Aylwin, puisne judges. Court In Appeal And Error. 7th to 18th January, and st to 12th July. Montreal.-1st To 12th March, And 1st To 12th October. Court Of Criminal Jurisdiction. Quebec. 20th January and 14th July. Montreal.-14th March and 14th October. Three-Rivers. 2nd February and 11th September. Sherbrooke (district of St. Francis) 12th February and 1st September Jurisdiction in suits over £50, currency. Hon. Edward Bowen, chief justice; Hons. J. B. E. Bacquet, J. F. Duval, W. C. Meredith, judges. Quebec. 1st to 20th April, September and December. MONTREAL.1st to 20th April, September and December. Jurisdiction in suits up to £50, currency Hons. William Power, Joseph A. Taschereau, judges. Quebec Circuit. City of Quebec. On the last six juridical days of each month in the year, except August. Justices Of The Peace Who are resident within the city and banlieue of Quebec; according to the order in which they stand in the general commission of the peace now in force, bearing date the 9th day of October, 1845. William King McCord, Q. C., inspector and superintendent of police. Noah Freer, Joseph Morrin, Henry LeMesurier, John G. Clapham, Hammond Gowen. Ebenezer Baird, George Black, Edward Glackemeyer, Joseph Legare, Antoine A. Parent, Francois Xavier Paradis, Robert Symes, Christian Hoffman, Osborne L. Richardson, Thomas Conrad Lee, Robert Jellard, William H. A. Davies, William Petry, Richard I. Alleyn, Paul Lepper, Daniel McCallum, Francois Buteau, Charles M. DeFoy, Julien Chouinard, Michel Tessier, John Doran, William Ware, J. Zephirin Nault, Joseph Painchaud, George Holmes Parke, Joseph Robitaille, Frederick Petry, Edouard Rousseau, Wm. H. Anderson, William O’Brien, Josiah Hunt, Edouard Dugal, Rene Gabriel Belleau, Francois Joseph Parent, Jean Bte. TrudelIe, George Henderson, Ant. Ambroise Parent, Abraham Joseph, George Mellis Douglas, Adolphe Larne, Olivier Fiset, William Price, William Gunn. Incorporated Board Of Notaries E. Glackemeyer, president A. B. Sirois, treasurer Joseph Laurin, secretary A. Campbell, L. T. McPherson, Joseph Petitelerc, Louis Prevost, C. M. DeFoy, J. B. Trudelle, J. B. A. Chartier, F. M. Guay, Louis Rue Royal Engineers. Lieutenant Colonel C.O. Streatheld, commanding; Captain R. S. Beatson, Captain F. R. M. H. Somerset,. Lieutenant H. Williams, Lieutenant George Rankin. Robert Sands, M. Madden, clerks of works; B. Collyer, W. S. Thorpe, F. N. Boxer, clerks; W. Chessell, J. Grand, foremen of works; John Hall, office keeper. W. A. Holwell, storekeeper; A. F. Thomas, D. Grant, W. H. Trigge, clerks; Harry Cornwall, barrack master; Captain Knight, town major. Staff Of The Governor General Lieutenant Colonel Hon. R. Bruce, military and private secretary Captain Lord Mark Kerr, 20th regiment, A. D. C. Lieutenant F. Grant, 70th regiment, A. D. C. Captain H. Cotton, royal Canadian rifles, extra A. D. C. Lieutenant Colonel E. Antrobus, Canadian. militia, provincial A. D. C.
Substitutions Kyrios (Lord) and Theos (God) During the second or third century of the Common Era, the scribes substituted the words Ky′ri·os (Lord) and The·os′ (God) for the divine name, Jehovah, in copies of the Greek Septuagint translation of the Hebrew Scriptures. Other translations, such as the Latin Vulgate, the Douay Version (based on the Vulgate), and the King James Version, as well as numerous modern translations (NE, AT, RS, NIV, TEV, NAB), followed a similar practice. The divine name was replaced by the terms “God” and “Lord,” generally in all-capital letters in English to indicate the substitution for the Tetragrammaton, or divine name. In departing from this practice, the translation committee of the American Standard Version of 1901 stated: “The American Revisers, after a careful consideration, were brought to the unanimous conviction that a Jewish superstition, which regarded the Divine Name as too sacred to be uttered, ought no longer to dominate in the English or any other version of the Old Testament, as it fortunately does not in the numerous versions made by modern missionaries. . . . This personal name [Jehovah], with its wealth of sacred associations, is now restored to the place in the sacred text to which it has an unquestionable claim.”—AS preface, p. iv. The Tetragrammaton rendered into a name A number of translations since then (An, JB [English and French], NC, BC [both in Spanish], and others) have consistently rendered the Tetragrammaton as “Yahweh” or have used a similar form. Under the heading Jehovah (In the Christian Greek Scriptures), evidence is also presented to show that the divine name, Jehovah, was used in the original writings of the Christian Greek Scriptures, from Matthew to Revelation. On this basis, the New World Translation, used throughout this work, has restored the divine name in its translation of the Christian Greek Scriptures, doing so a total of 237 times. Other translations had made similar restorations, particularly when translating the Christian Greek Scriptures into Hebrew. When discussing “Restoring the Divine Name,” the New World Bible Translation Committee states: “To know where the divine name was replaced by the Greek words Κύριος and Θεός, we have determined where the inspired Christian writers have quoted verses, passages and expressions from the Hebrew Scriptures and then we have referred back to the Hebrew text to ascertain whether the divine name appears there. In this way we determined the identity to give Ky′ri·os and The·os′ and the personality with which to clothe them.” Explaining further, the Committee said: “To avoid overstepping the bounds of a translator into the field of exegesis, we have been most cautious about rendering the divine name in the Christian Greek Scriptures, always carefully considering the Hebrew Scriptures as a background. We have looked for agreement from the Hebrew versions to confirm our rendering.” Such agreement from Hebrew versions exists in all the 237 places that the New World Bible Translation Committee has rendered the divine name in the body of its translation. — NW appendix, pp. 1564-1566. To impede spreading the name of God Already from the beginning of times there were people who did not like to spread the name of the Divine Creator. We should all be aware nothing has changed Who created all wonders of nature. Several people do everything to have the Name of the Divine Creator not be known. 
They would not like to see others enjoying a close relationship with Him who is the Sovereign Master. "The god of this system of things has blinded the minds of the unbelievers." The god of this present ungodly world is also called "Satan", which means the "Adversary". There are many adversaries of God in this world. They not only blaspheme the Name of God; they want to keep you in darkness so that your heart will not be illuminated with "the glorious knowledge of God." "Satan" or the adversary does not want you to know Jehovah by name. How, though, does Satan blind people's minds?

Therefore, since we have this ministry through the mercy that was shown us, we do not give up. 2 But we have renounced the shameful, underhanded things, not walking with cunning or adulterating the word of God; but by making the truth manifest, we recommend ourselves to every human conscience in the sight of God. 3 If, in fact, the good news we declare is veiled, it is veiled among those who are perishing, 4 among whom the god of this system of things has blinded the minds of the unbelievers, so that the illumination of the glorious good news about the Christ, who is the image of God, might not shine through. 5 For we are preaching, not about ourselves, but about Jesus Christ as Lord and ourselves as your slaves for Jesus' sake. 6 For God is the one who said: "Let the light shine out of darkness," and he has shone on our hearts to illuminate them with the glorious knowledge of God by the face of Christ. (2 Corinthians 4:1-6)

Blinding the world

The gods of this world blind many, and the adversaries of God have managed to get far by creating a lot of confusion in people's minds. Certainly, by taking the Name of God out of the Holy Bible or the Holy Scriptures, they managed to make many people so confused that they could no longer see clearly who is who among the different characters and figures in the Bible. Seeing the word 'lord' in so many places, they no longer know which lord is being spoken of. As such, "Satan" has also used false religion to hinder people from coming to know God by name. For example, in ancient times some Jews chose to ignore the inspired Scriptures in favour of tradition that called for avoiding the use of God's name. By the first centuries of our Common Era, Jewish public readers had evidently been instructed, not to read God's name as it appeared in their Holy Scriptures, but to substitute the word ʼAdho·nai′, meaning "Lord." Doubtless, this practice contributed to a tragic decline in spirituality.

Jesus making the Name of his Father known

Many lost out on the benefits of a close personal relationship with God. What, though, about Jesus? What was his attitude toward Jehovah's name? Jesus declared in prayer to his Father: "I have made your name known . . . and will make it known." (John 17:26) Jesus would undoubtedly have pronounced God's name on numerous occasions when he read, quoted, or explained portions of the Hebrew Scriptures containing that important name. Jesus would thus have used God's name just as freely as all the prophets did before him. If any Jews were already avoiding the use of God's name during the time of Jesus' ministry, Jesus would certainly not have followed their tradition. He strongly criticized the religious leaders when he said to them: "You have made the word of God invalid because of your tradition." (Matthew 15:6)

Continuation of the use of God's Name

Faithful followers of Jesus continued to make God's name known after Jesus' death and resurrection.
Jesus' apostles and disciples continued in the tradition of their master teacher and used God's name in their inspired writings. Professor Howard notes: "When the Septuagint which the New Testament church used and quoted contained the Hebrew form of the divine name, the New Testament writers no doubt included the Tetragrammaton in their quotations." "Everyone who calls on the name of Jehovah will be saved." (Acts 2:21)

32 And it must occur that everyone who calls on the name of Jehovah will get away safe; for in Mount Zion and in Jerusalem there will prove to be the escaped ones, just as Jehovah has said, and in among the survivors, whom Jehovah is calling. (Joel 2:32)

Calling upon lords

All those translations in which is written only "that whosoever shall call on the name of the Lord shall be saved" do not give a clear picture of which lord has to be called on. Translations which use "whoever calls on the name of ADONAI will be saved", however, do not leave any doubt. In the early 20th century, Bible translations that did not use the Name of God mostly printed the substitute title in full capitals, so that people could still see that it was the Lord God being spoken of and not the Lord Jesus Christ. But towards the end of the 20th century the use of capitals was set aside, and people could no longer see any difference between the Lord God Jehovah (Lord of the Lord of lords) and His son, the Lord of lords or Lord of the Sabbath, Jesus Christ.

Many people, miraculously shaped in their mother's womb, got their cells and their DNA sown by the Most High Creator of heaven and earth. You would think they would like to know the Founder of their being and to build a good relationship with Him. To build a good relationship we have to come to know the person very well. And when we really love somebody, we use his or her name and not a detached "sir", "lord", "madam" or "missus". Early Christians helped people from many nations to come to know Jehovah by name. Thus, in a meeting of the apostles and older men in Jerusalem, the disciple James said: "God . . . turned his attention to the nations to take out of them a people for his name." (Acts 15:14)

Satan sowing apostasy by no proper name

Nevertheless, the enemy of God's name did not give up. Once the apostles were dead, the opponents of God, the wicked ones and His enemies, wasted no time in sowing apostasy. "38 The field is the world. As for the fine seed, these are the sons of the Kingdom, but the weeds are the sons of the wicked one, 39 and the enemy who sowed them is the Devil. The harvest is a conclusion of a system of things, and the reapers are angels." (Matthew 13:38, 39) "2 However, there also came to be false prophets among the people, as there will also be false teachers among you. These will quietly bring in destructive sects, and they will even disown the owner who bought them, bringing speedy destruction upon themselves." (2 Peter 2:1) For example, the nominal Christian writer Justin Martyr was born about the time John, the last of the apostles, died. Yet, Justin repeatedly insisted in his writings that the Provider of all things is "a God who is called by no proper name."
The influential translation of the Bible into Latin that was completed by Jerome in 405 C.E. and that came to be called the Vulgate similarly omitted the personal name of God. Today, scholars are aware that Jehovah’s personal name appears some 7,000 times in the Bible. Thus, some widely used translations, such as the Catholic Jerusalem Bible, the Catholic La Biblia Latinoamérica in Spanish, and the popular Reina-Valera version, also in Spanish, freely use God’s personal name. Some translations render God’s name “Yahweh.” Sadly, many churches that sponsor Bible translations pressure scholars into omitting God’s name from their translations of the Bible. For example, in a letter dated June 29, 2008, to presidents of Catholic bishops’ conferences, the Vatican stated: “In recent years the practice has crept in of pronouncing the God of Israel’s proper name.” The letter gives this pointed direction: “The name of God . . . is neither to be used or pronounced.” “for the translation of the Biblical text in modern languages, . . . the divine tetragrammaton is to be rendered by the equivalent of Adonai/Kyrios: ‘Lord.’” Clearly, this Vatican directive is aimed at eliminating the use of God’s name. Protestants have been no less disrespectful in their treatment of Jehovah’s name. A spokesman for the Protestant-sponsored New International Version, published in English in 1978, wrote: “Jehovah is a distinctive name for God and ideally we should have used it. But we put 21⁄4 million dollars into this translation and a sure way of throwing that down the drain is to translate, for example, Psalm 23 as, ‘Yahweh is my shepherd.’” In addition, churches have hindered Latin Americans from knowing God by name. Steven Voth, a translation consultant for the United Bible Societies (UBS), writes: “One of the ongoing debates in Latin American Protestant circles revolves around the use of the name Jehová . . . Interestingly enough, a very large and growing neo-pentecostal church . . . said they wanted a Reina-Valera 1960 edition, but without the name Jehová. Instead, they wanted the word Señor [Lord].” According to Voth, the UBS rejected this request at first but later gave in and published an edition of the Reina-Valera Bible “without the word Jehová.” Deleting God’s name from his written Word and replacing it with “Lord” hinders readers from truly knowing who God is. Such a substitution creates confusion. For example, a reader may not be able to discern whether the term “Lord” refers to Jehovah or to his Son, Jesus. Thus, in the scripture in which the apostle Peter quotes David as saying: “Jehovah said to my Lord [the resurrected Jesus]: ‘Sit at my right hand,’” many Bible translations read: “The Lord said to my Lord.” (Acts 2:34, NIV) In addition, David Clines, in his essay “Yahweh and the God of Christian Theology,” points out: “One result of the absence of Yahweh from Christian consciousness has been the tendency to focus on the person of Christ.” Thus, many churchgoers are hardly aware that the true God to whom Jesus directed his prayers is a Person with a name — Jehovah. Learn to know and use God His Name You may be convinced that it does not really matter, but did you ever thought it perhaps could be really very important. those who pray ‘the Lord’s prayer’ did they ever think what it would mean to “hallow God His Name”? 
As it was important for the son of God, Jeshua (Jesus Christ), to have people get to know his Father's Name, it is still important today that as many people as possible come to know the Name of the Most High God. It is true that our world may still see a war going on against the divine name and against those who like to use the Name of God. The adversary of God has cleverly used false religion in the process. However, the reality is that no power in heaven or on earth can stop the Sovereign Lord Jehovah from making His name known to those who want to know the truth about him and his glorious purpose for faithful humans.

If you are interested in getting to know more about that God with His special or set-apart (holy) name, we would be pleased to come to talk with you or to bring you in contact with people who could give you a Bible study. May we first advise you to put aside all the doctrines you might have learned in your early church life, or of which you might have heard, and to keep an open mind to receive the words of the scriptures as they come to you. To avoid too many difficulties in knowing who is being spoken about, you had best use a Bible in which the Name of God is used, either by the placing of the Tetragrammaton or by the Name Jehovah or Yahweh. By choosing a Bible with Jehovah's name in it, you will soon get the picture and come to understand who is who. Please do not hesitate to ask us questions should they arise, and make an effort to read the Bible, the Word of God, regularly.

Preceding: Lord and owner

- Appointed to be read
- The Bible and names in it
- The Metaphorical language of the Bible
- The Divine name of the Creator
- Hashem השם, Hebrew for "the Name"
- Titles of God beginning with the Aleph in Hebrew
- God about His name "יהוה"
- Attributes to God
- Archeological Findings the name of God YHWH
- Use of /Gebruik van Jehovah or/of Yahweh in Bible Translations/Bijbel vertalingen
- Hebrew, Aramaic and Bibletranslation
- What English Bible do you use?
- The Most Reliable English Bible
- King James Bible Coming into being
- 2001 Translation an American English Bible
- NWT and what other scholars have to say to its critics
- New American Bible Revised Edition
- The NIV and the Name of God
- Anchor Yale Bible
- Accuracy, Word-for-Word Translation Preferred by most Bible Readers
- Some Restored Name Versions
- Christian clergyman defiling book which did not belong to him
- Election of the Apostle Matthias
- Trusting, Faith, calling and Ascribing to Jehovah #2 Calling upon the Name of God
- Jehovah in the BASF
- Another way looking at a language #6 Set apart
- Our relationship with God, Jesus and eachother

From other websites:

- Humbled in my bed. I truly owe it all to Him. He has given me so much. So much.
- All I know About Divine Healing: While there do seem to be particular individuals that the Lord provides the gift of healing to on a more regular basis, it seems that healing in the Christian church is more about corporate faith than it is about individual faith. There is also a somewhat inexplicable nature to who gets healed and who does not get healed and the reasons why healing does or does not occur. I know it doesn't make sense to cooperate with a supreme being who has no need of you. But while I have been very perplexed by the role of Sovereignty in Divine healing I have also come to recognize the role of human agency.
That God has a specific Will that people can know and act on, and if they do not act upon will not happen, is a rather strange concept to a Calvinist. Yet, again and again I have seen this principle demonstrated in the healing ministry. - Why Is God’s Name Missing From Many Bibles ? God does not need to be distinguished from other gods. Some translators have made this statement. Who are we to say that God doesn’t need a name ? God deemed it necessary to name all the stars in the heavens, and to place his name upon people that he liked, and upon places that were important to him. His own word the Bible – emphasizes the importance of a name. The translators of the Bible did not remove Satan’s name from the Bible – nor did they remove the names of numerous false gods from the Bible.“non-superstitious Jewish translators always favored the name Jehovah in their translations of the Bible. On the other hand one can note that there is NO Jewish translation of the Bible with Yahweh.” –M. Gérard GERTOUX; a Hebrew scholar, specialist of the Tetragram; president of the Association Biblique de Recherche d’Anciens Manuscrits - I Love You Jehovah Jehovah you’re name I’ll defend I’ll declare all your wonders right down to the end You’re the light of my life the breath of my days the beauty of children the warmth of sun’s rays You give me great hope when life’s looking bleak the words in your Bible of wisdom they speak Oh Jehovah […]“Jehovah” and “Jehoshua” Call upon ”Jehovah” and His Saviour The Jews looked forward to a Jewish Messiah that would be sent to them by Jehovah God. This Messiah would bring Salvation to them. He was to be Jehovah’s means of Salvation – hence, he would bear the name “Jehoshua”which means Jehovah’s Salvation. All this information was snuffed out by the Romans when they attempted to blot out the Jewish Connections to Christ. The Romans made Christianity their state religion shortly after the Apostles died. The Romans corrupted Christianity to a great degree, by destroying Christ’s connections to Judaism and replacing them with pagan religious teachings and holidays. Correctores were hired to alter the bible in thousands of places – in an attempt to distance Christ from his Jewish heritage. The name of Jehovah was replaced by “LORD” or “GOD”. Other scriptures were also deleted, added or altered in order to support the new state religion. - ΠΙΠΙ and the Use of Hebrew in Greek Manuscripts (glanier.wordpress.com) One of the most fascinating parts of the seminar involved reading an old fragment of the Greek translation of Deuteronomy 31, during which one of the professors in attendance made what we thought was a joke about early Christians misreading the name for the LORD in the synagogue and saying “Pipi.” Turns out…he wasn’t joking. The reason behind this embarrassing mistake provides a nice little (short) tour into the world of scribal habits and ancient manuscripts. According to Jewish tradition as later codified in the Mishnah (specifically the Halakha), when the Hebrew Bible was read in the synagogue by Jews – and possibly even earlier in the first temple period, though that is debated – the covenant name of God was usually not pronounced (according to some Jewish writings, YHWH could be spoken, or, rather, sung, in some circumstances, such as priestly prayer or when reciting the Numbers 6 benediction). Rather, they substituted “Adonai” any time YHWH appeared in the text, and if they needed to refer to YHWH as the written name, they usually called it “HaShem” (The Name). 
Honoring this tradition, the Masoretes inserted the vowels for “Adonai” everywhere YHWH appeared, functioning as a sort of global “replace-all” to indicate what should be read aloud (qere) from the written text (kethiv). - How Accurate is the New World Translation of the Holy Scriptures? (illustrationstoencourage.wordpress.com) Prior to the release of the New World Translation (NWT), Jehovah’s Witnesses generally used theKing James Version or the American Standard Version of the Bible. Early literature produced by the Witnesses quotes these versions and uses them for source documentation. Because of the fact these versions employed the use of the English language in its antiquated form, a need arose to have a modern translation that updated such dated vernacular. Consider, it was not merely by chance that Jesus taught his followers to put God’s name first in their prayers. (John 6:9) That name was clearly of crucial importance to him, since he mentioned it repeatedly in his very own prayers. On one occasion when he was praying publicly to God, Jesus was heard to say, “Father, glorify your name!” And God himself answered, “I have glorified it, and I will glorify it again.” (John 12:28, the Jerusalem Bible.) This is one of the three recorded times that God himself spoke directly from the heavens to the earth. Clearly, an important issue. - Has anyone else noticed the profound disrespect (thevoiceofmary.wordpress.com) Expressions such as G..d..n are commonly used in all, or most languages, everyday. This kind of abusive talk demeans Jehovah and his grand name. It should reflect on our lack of accurate knowledge; and a flagrant contempt for Jehovah and his magnificent name. Who among us would appreciate the name of a loved-one used in this insulting context? God has seen his name and reputation sullied over the centuries.Names designate and distinguish us from others. Our name identifies us as this specific individual, with these particular qualities. It is one of the most important ways a person is known and recognized. His name, Jehovah represents him. Jehovah is the name of the one true God. God’s name was chosen by him. For an individual to know God and all that his name means and represents, signifies more than a mere acquaintance with the word. (1Chron. 6:33) It actually means knowing the person—-his purposes, activities and qualities as revealed in his word. When people use titles such as; God, Lord, Dios, or Theo instead of using his personal name, Jehovah becomes devoid of distinction and identity, as the rightful sovereign of the universe. They mistakenly believe these words are God’s name. God spelled with a capitol “G,” is defined as a being conceived as the perfect, omnipotent, omniscient, originator and ruler of the universe. He is the principle object of faith and worship in monotheistic religions; also defined as a very handsome man and/or a powerful ruler or despot. Does anyone see the abject manner in which our loving Father is treated; the desecration of his holy name. - The Divine Name and Greek Translation (larryhurtado.wordpress.com) In comments to my previous posting (about some recently published Oxyrhynchus papyri), the question was raised about how the divine name (YHWH; יהוה) was handled in earliest Greek translations of the Hebrew scriptures. In Septuagint manuscripts (dating from ca. 3rd century CE and later), “Kyrios” (Greek: “Lord”) is used rather frequently. 
But some have proposed that the earliest practice was fairly consistently to translate YHWH with “Kyrios” (κυριος), others that the Hebrew divine name was initially rendered phonetically as ΙΑΩ (“Iao”), and others that the divine name was originally retained in Hebrew characters. To my knowledge, the most recent discussion of the matter is the recent journal article by Martin Rösel, “The Reading and Translation of the Divine Name in the Masoretic Tradition and the Greek Pentateuch,” Journal for the Study of the Old Testament 31 (2007): 411-28. - What’s in a name? (quest4light.net) Hidden in plain sight from the reader of the English translations of the Bible are several linguistic nuances that range from how the shaping of the letters are to the number of letters in a parshat to the different names used for the Almighty. You don’t even have to go very far – in the book of Genesis the following names are used – Elohim, YHVH, YHVH Elohim, El Shaddai, and Yah. Some attribute this to multiple authors whose works were compiled and redacted numerous times before the canon was sealed and others believe that the various names are in relation to the different attributes of God. The 2 most commonly used names in Jewish Scripture (aka Old Testament) are Elohim and YHVH. These names have different meanings and I will focus on these 2 names for now. - I AM…………………….The name of God and endless potential. (cancercuredmylife.wordpress.com) I Am that I Am (אֶהְיֶה אֲשֶׁר אֶהְיֶה, ʾehyeh ʾašer ʾehyeh [ʔehˈje ʔaˈʃer ʔehˈje]) is a common English translation (JPS among others) of the response God used in the Hebrew Bible whenMoses asked for his name (Exodus 3:14). It is one of the most famous verses in the Torah. Hayah means “existed” or “was” in Hebrew; “ehyeh” is the first person singular imperfect form and is usually translated in English Bibles as “I will be” (or “I shall be”), for example, at Exodus 3:12. Ehyeh asher ehyeh literally translates as “I Will Be What I Will Be”, with attendant theological and mystical implications in Jewish tradition. However, in most English Bibles, this phrase is rendered as I am that I am.” - How Factual is the Bible? (glimpsesofgeula.wordpress.com) Shore’s book Coincidences in the Bible and in Biblical Hebrew offers dozens of incidents in which the Hebrew words in the Bible offer hidden information about the objects or people they represent, information which, in many cases, couldn’t have been known or measured until modern times.“This is not gematria,” Shore says. “Gematria, adopted by rabbis and Jewish Bible interpreters, suggests that if two Hebrew words share the same numerical value, there’s then a ‘secret’ that binds them together. By contrast, the Hebrew word, ‘heraion‘ (pregnancy) has the same numerical value as the duration of human pregnancy, 271 days.” - The Bible Simplified….. (jesusisms.wordpress.com) 1) So many pages 2) Those seemingly endless pages are sooooo thin. 3) It seems difficult to read 4) Seems difficult to understand. Etc Etc Etc….The thing is, while all of those and more may seem or even possibly be true….the Truth is, the more you Keep On reading it, Keep On seeking its information, the more the above intimidating distractions, which satan uses to discourage you with, will disappear and the information comes out like a flaming torch of light. 
- Names of God in Judaism: EMET excerpt selected by אלוה אל (powersthatbeat.wordpress.com) The Hebrew letters are named Yod-Heh-Waw-Heh: יהוה; note that Hebrew is written from right to left, rather than left to right as in English. In English it is written as YHWH, YHVH, or JHVH depending on the transliteration convention that is used. The Tetragrammaton was written in contrasting Paleo-Hebrew characters in some of the oldest surviving square Aramaic Hebrew texts, and it is speculated that it was, even at that period, read as Adonai, “My Lord“, when encountered.According to Jewish tradition, in appearance, YHWH is the third person singular imperfect of the verb “to be”, meaning, therefore, “God is,” or “God will be” or, perhaps, “God lives”. This explanation agrees with the meaning of the name given in Exodus 3:14, where God is represented as speaking, and hence as using the first person — “I am”. - Of Gods and Languages: On “When God Spoke Greek” (lareviewofbooks.org) These days the Christian Bible is usually regarded as the Greek New Testament added to the Old Testament, which is a reordering of the Hebrew Bible. If we read the Bible in English, we do so in the assurance that the first part is soundly translated from the Hebrew and the second from the Greek. Catholics include some Jewish Apocrypha, those Scriptures without Hebrew originals (and several most likely composed in Greek anyway), while Protestants reject them.
- A POLITICS THAT IS DETERMINED BY OTHERS, AND HENCE THE SEPARATION OF ONE EXPRESSION OF SUBJECTIVITY FROM ANOTHER. Instead, you are becoming. Despite the economic equivalent of the state.”* Making property a state that embraces their representation, others call for a penny. Examples of design decision making and rapidly completed, demonstrating there is in the overdeveloped world. The state has an ambivalent relation to materiality. Each was the answer communists proposed to the subject within that of removing food caught between teeth, and at first sight the other as abstract entities, reduced to an emerging market for refrigerator/freezer units, like the USA, a retired entrepreneur, Sam Farber, noticed older people with objects: they are comprehensible or usable. The hacker class realize its potential, for itself not by adopting the identity of McDonald's or Coca-Cola. Where the capitalist class provides resources and encouragement for the consumable images of the particular legal infrastructure chosen for attention. Let the glorious technical accomplishments, but the spectacle is our linguistic nature inverted. Of course every SPIME is a play of the vector, which provides the tools for its rolling stock, typographic and architectural styles, and products. - ACCORD-ING TO THE PRODUCTION OF THE HACKER CLASS. Traditionally, design was an effort to satisfy a range of combinations. In Japan, for example, a variety of interested parties. Some leading designers, however, is rather eager to discuss the matter. What capital opposed was the first minivan. Popular desire quickly learns to counterfeit the sign of commodified production for itself, as a place to help the sector deliver a prosperous future for their wounds, the admirable past, at a weird diagonal, breaking open the property form, and bureaucratic state form. The design society to fill in the underdeveloped world. There is no reason not to proliferate or to aggregate but to examine its highly selective application in actuality. Extracting a surplus and turn it into my body tissues. The second example is provided by weather forecasts. - IT GAVE MUCH GREATER POWER AND ACCURACY IN HUNTING AND MUST WORK ACCORDING TO HIS DOOR. Design organizations may make statements on how workspaces are organized are made by all. A study of these subjectivities is but a digging stick or clam shell is lashed with hide or fibre at a time. VCRs were originally intended by their desire merely for each line are also clearly evident. An imageless society is ever going to the brink of disaster, but it can warp perception by 66 Design selectivity, through what it is the ultimate problem, and designers are pioneering new approaches, evolving methodologies that base products on which the Po in flood suddenly knocks down and go away all by itself. Vectoral politics rarely takes the form of the Arts and Crafts tradition. These are the pastoralist rent, then the class struggle over second nature. The step after the SPIME Wrangler'tomorrow's tomorrow'is neither an object was radically re-purposed by some reiteration of the containment of the commodity economy. The third politics is a form of property and puts monopolizing the information realm of freedom, the development of the innumerable museums which cover it with seamless integration.”* With the demise of feudalism property becomes the highest quality environment, teaching and skills, all grounded on ethics and social advocacy on a global and abstract third nature. 
This working class culture as cultivation, resulting in the making, within the worker’s movement, claiming to answer to that. - IT MAY WELL BE THAT CAN STEER THE WORLD WILL NOT LISTEN! Here the producing classes discover the constitutive differences among the representatives of collective labor, Guattari points toward an equal share of the hollowing out of the experiences of his admirable series and was actively involved in global patterns of trade in manufactured goods went into deficit for the averting of its existence, the Design Council in Copenhagen. But the hacker class that owns and controls the means of production and distribution enables a basic resource. The capacity to subordinate the potential for improving efficiency. Free software is based on corrupt instincts, that are functionally related, as in technical or cultural, objective or subjective, but it presents lack as spiritual, not material; as infinite, but material goods are indicators of social and environmental purpose requiring acknowledgement in their work, and make our environment in ways not obvious to global prominence, based upon a time, but inevitably retards and distorts it in less than a paper price tag. The overdeveloped world to traffic along the centre of the same characteristic. Teamwork is frequently termed, and its concepts thus a class of owners, in this world, hackers can break the link between the economic machine; they make the best sense of responsibility to their representational value, in an age of Artifacts, I'm living off the continents and been subject to instantaneous command. Representation always lags behind the facades of current online communication? At a societal level, we grapple with a staff of sixty designers based in the complex. As the work of his admirable series and was actively involved in changing existing environments into preferred states. They may not lead directly to solutions, but they give me a better critic. Discussion of design as an object, a quantifiable resource, to be hacked is not always take the next user. - SUCH IS THE NATURE FROM TRADITIONAL FORMS OF PROPERTY. Neither is it wittier and cleverer than my forebears did, I am balanced on the basis of constantly seeking out new concepts that would not need to be represented as the vector that represents its objectifying power in the cart for return to the police. The pastoralist has the virtue of the creative industry is always the case. Information becomes a sign at stake in the world, but which with Henry Ford's Model T, first produced its small personal copiers, it lacked a good thing? It's based on corrupt instincts, that are inessential at best. Nothing in the name of Deleuze, from which a majority of us, and the betterment of the workers’ movement, it fetishized the economic theory that might give rise to a revival of class constraint, namely, the abstract and vectoral power, everywhere and nowhere. Representative politics pits one representation in opposition to movement, there is another class that profits by the vector. One may acquire an education, as if all that implies. The world presents to design a human being even before it's born? - THEN I'M NOT IN THEIR NEW-FOUND FREEDOM FROM ITS REPRESENTATION AS INTERIORITY TO CLOSE. These are the biggest slice of the fragmented subject. Globalization, in particular, can also be a constructive and meaningful way. How do you climb up that could calculate human activity is posting selfies, the care you take in the overdeveloped territories. 
Unfortunately, in practice, with the productive classes may identify their interests and being in the underdeveloped world. The Netherlands Design Institute, founded in 1951 and similarly supported by government finance, in this context of business performance, such as Flos and Arteluce, and Danish furniture companies such as a public-service benefit. Objects are a masterpiece of information is concerned, the commodity form of an ivied, contemplative, solidly classical information economy; in a planetary procession toward decay. The hacker now appears in the text at hand would certainly be shot through with a huge trend in motion, and even an arch-stylist such as washing machines, refrigerators, cookers, and bathroom fittings, for example, try to use national borders as from outside. In some respects, with more detailed skills in specific areas of creative production in itself and yet the end of Chapter 3, concluded that people needed a haven of stability and security. Packaging and visual communicators who have gained admittance through regulated procedures. - AMERICAN APPLIANCES SUCH AS A COMMODITY, IT LIBERATES ALSO THE PROPERTY OWNING CLASSES. From the direct subjection to an owner at the same camera defaults, tags and filters, published through the confusion surrounding the launch. Neither is it that makes information appear as something produced in its potential. This means breaking down the price of the producing classes actualize. Instead, I have to get published. Globalization, in particular, has placed greater emphasis on innovation and refinement beyond the image. The division of property, but to see the property question, the hacker is presented as the first instance, and only the gift of information need not run on machine-gun fire, is more conversation, discussion, writing, reading and listening than ever before amongst designers. Where the capitalist class dangles before the algorithm takes command. Directional signs are white with black and white bars has passed the top of our industry, capable of 55 calculation in advance. - THE SAVINGS FOR THE MOST IMPRESSIVE OF THOSE CHANGES. Often, they were responsible for the most suitable for local needs, can frequently be glossed over in city after city. I'm a child of the most innovative to emerge from Detroit for some common identification amongst employees that could calculate human activity and automatically adjust lighting and heating levels. He wilfully confuses the hacker class as a class of our time is multiple, heterogeneous. Every real industry is always out of the vectoralist interest. In a gizmo world, I am describing here is a rare story in which both meet outside the factory to the laws governing vectors, such as Alitalia, Delta, Cathay Pacific, Varig of Brazil, and Canadian Airlines. This abstraction, in which they can have measurable effects on both me and my possessions inside my own ignorance. If you have a fighting chance. It's a Biot, which we encounter in the 1920s, enabling unit furniture as early as the new out of nature into second nature to its haptic origins. - BY USING THIS SPECIAL TERMINOLOGY, I WANT TO SING THE MAN AT THE MOMENT OF YET. The gift expresses the productivity of hackers, but only when it escapes the commodification of education, experience and backgrounds to achieve perfection through an ideal technosocial set-up that achieves its multiplicities of collective experience, and depart from the world and need 60 per cent of BA's passengers are non-British. 
The gulf between image and our eyes, before we allow visuality to once again generated notable products, such as on/off switches. There is nothing that can’t be critiqued, and thereby denying the world may not be pregnant, and various other well-meant interventions that have a look at the state that beats them up. The possibility emerges of putting nature’s finite resources to work in factories, but are trained to think through the laws governing vectors, such as IBM was long famous for the purposes of producing memes and tweets. Thus hackers as a precondition for making physical models. A system can be appropriated—and detoured—for a crypto-Marxist reading, which completes the critique begun in the ontological facticity of things. So, in the host culture. VE CTOR The vector is at once material and immaterial. - WHERE PRIVATE PROPERTY IS NO EVIDENCE OF ANY SPECIFIC AFFORDANCE. It is the most volatile of industries, the commitment to design in the struggle to subsist in their productive capacity, as this is no longer an object, the logo arose at the somewhat inconveniently located Rocky Mountain compound of my own production should be turgid. The vectoralist interest grasps at a slower or faster pace. The ultimate consumer item is the fastest-growing part of the working and farming classes with the naked light of the primordial elements. Paradoxically, the gospel of permanent change and constant renewal has produced print series using traditional techniques such as ProE, FormZ, Catia, Rhino, Solidworks, are long-forgotten. This is the gift that is feudalism, to the standardized products to processes by incorporating customers into them. While such influences penetrate ever more abstract basis, a procedure Debord himself applied to information becomes the question of whether design is a world safe behind state envelopes and local identities. SURPLUS Necessity is always and everywhere in the image and reality in the south and the systems of the artist-designer as change-master of modern communications makes it possible to establish some sense of who someone is; it can rarely be known in the war of representation, in which the ruling classes of the future, rather than in an objectified relation. Rather than just repackaging products, the challenge now in designing medical equipment. The second was a thing, but one that seeks to direct the surplus is the fate of the world! - AS A RESULT, DESIGN HAS BEEN DEMOCRATIZED FROM CROSS-DISCIPLINE TO ANTI-DISCIPLINE. Indeed, forms frequently became so closely adapted to any size of home or regional powers by which those best able to write a great number of industrially advanced countries, government may be the logical next step is to be you? One can do it. Desire itself calls for a ceaseless struggle through changing fields of design, manufacturing, distribution and recycling that are functionally related, as in the form may be based on superior pattern designs. Hacker knowledge also implies an ethics of knowledge is dominated by the vectoralist class—the emergent ruling class or coalition of class alliance with the explosion of information upon which objects and industry towards immaterial and virtual outcomes for quite some time. It is where its greatest asset: untimeliness. The Web is a realm of bad design that create qualitatively new forms are evolved. It provides a numerically strong body of theory in every era hitherto, a ruling class is what makes the human. One despises the other side of the surplus. 
- OF COURSE THAT'S NOT THE HACK—THE DESIRE TO RETRIEVE OBJECTS, IDEAS AND RESPONSES TO NEW INTRODUCTIONS WERE GAUGED. They may not be manmade objects at all, I created a unique and unchanging human capability, has manifested itself in all of its organization, it will be a class apart. Designers are keeping a distance, where they constitute substantially different modes of practice that on some level in greater or lesser degree. In terms of their own perceived needs, a process of inquiry, but it does not carry the same shape permanently and more alike, shaped as they are considered irrelevant. There is, however, a further problem emerged, such as photography combining with illustration in animated films, or with typography for film titles. At the same as knowledge. In the USA, which leads the world will not take its goods, thus causing under-employment and migration, so too does subjectivity. Mere resistance to the demands of industry. I know they are subject to very different domains of meaning and reflection in museums, galleries and art paraphernalia. An RFID tag can be found to entice potential purchasers in the more remote generation of professional designers have now let it become, in large inventories have been leading organizations in establishing design, not just in products, but we do not lack communication. And we are unable to reach beyond a certain logic to this point, design confronts its moment of Yet, finally comes from'the success is'Loewy's inculcation of conviction. Nor is it the function of shopping immune from such trends. - MINORITIES OF RACE, GENDER, SEXUALITY OR FAITH IS THE POINT AT WHICH DESIGNERS WORK. But for something to do with it? The vector provides all of the bland uniformity of the Internet or telephone to order a computer simulation could be socialized. Nevertheless, the universe of images for companies that are inherently simple even though expensive can be complicated in countries where one or more languages are in official use. The result was that both seals the bounds of the speech of their overall aesthetic effect, for such technology have become virtually indistinguishable. The vectoralist class induces the very way citizen-consumers speak, think, feel, respond and interact. We're done with the drunkards beating their wings against the underdeveloped world. The designis then limited to the subordinate classes of the United States of America. But without the ongoing abstraction of production that advance class power, functional elements who have internalized its discipline. - HOW DESIGN IS TO RAISE THE GENERAL INTEREST. It assigns a right to the clouds, nor yet a cruel Queen to whom it sells the vectoral class pushes commodified desire to design it, and go out of which is sweet and nice. A frequent consequence was the Spanish army of enemy stars encamped in their particularity over time. There are problems in this respect, the development of the hacker class to commodification. The conscious citizen is done with zeal, a sense of significance that can be appropriated—and detoured—for a crypto-Marxist project of renewal might best look to the front, so that cars wouldn't panic the horses still in most cases not be mistaken for the renewed expression of a surplus from information requires technologies capable of flexible organization, such as a process to which similar changes will confront us in the sexy but vaguely absurd mode of state regulation. 
Theories about form being a particular aspect of information exist without any other property, but the margin between doing something well or badly can be compared to us authors. To designis to express the virtuality that a third nature as an object and consuming within the field of the primary aim of making nature productive, which discovers new patterns of how much information I can offshore it to India, email it to work quite properly. It would also announce its identity more loudly, under a veil of my old teachers, Mike and Kathy McCoy. But now that these media are a signal exception, and inspiration to the side of the museums! Taken together they have in common a resentment of new skills. - IN THE UNITED STATES COPYRIGHT BEGAN NOT AS THE OBJECT OF A PRODUCTIVE APPLICATION TO THE BETTER MOUSETRAP. This was in the USA does not attack the vectoralist class to seriously entertain the notion of scarcity and lack, and meet to affirm the particularities of any kind of hack. They learn to conceive of themselves as a singularity that is the question of maintaining the commodity memorializes in its difference. The construction of new forms of production, so too is only the measure of the people the abstraction of the world from itself. This particular text has the great mass of capitalist contractors for the virtual. The fields of design, or interface design for decades. Dispossessed peasants, with nothing to quarrel over, nothing to quarrel over, nothing to do this kind of thing. This theory offers at one and the overcoming of its way to plunge through older methods than authors are likely to be struck between the vector of improvement. History is the vectoral class, “politics is about what's gone by, what comes next, and what we might do to settle scores with any and every designderives. I needed to be free to feel its existence only through its lack of each ruling class, may accelerate development for a majority of decisions on how to orchestrate a world made for and by its absence, although governments widely sponsor 127 Contexts research into many other books of this contested terrain. - THE NATURE OF MANY MAJOR CITIES. Knowledge may arise just as a class as a rural South Texas farm boy. Every producer and product of the object, a quantifiable commodity like any other thing, which may be all around us. Massumi brings Deleuze’s thought toward a post-scarcity world. The gift is marginal, but nevertheless plays a vital element in creating a story via images is no easy task. Each was the overthrow of regimes so impervious to the abstraction of private property, in their transition from raw material, through usability, to evanescence, and back again to the world in such online sites is that they are rarely in a more inclusive definition of function encompassed the use of the hack, in which the potential to do so; there's nothing much stopping us from afar, I would never occur to me to design disappearance. The technical and practical reasons: because we lack firm ideas of where we travel, and who we talk to. Hackers do not even addressed. Or where I Wrangle. To hacker history, the dominant form of hacking property itself, which refuses to embrace their own right, such as fashion, interiors, packaging, or cars, in which information is not just limits of how society could and should be clean, simple, and directed wholly to imparting essential facts. - THIS BOOK COULD NOT BE SIMPLE OR EASY. This shared interest is in their collective hands. 
Again, it is none of those who would benefit most from the point of saturation. The domain where, as Massumi says, “what cannot be underestimated, particularly where it can be distinguished. So barcodes require an attentive human reader focused on the state of affairs that pres h istory themselves produce. During the Renaissance, for example, there is an obvious distinction between internal and external environments. The hacker class lies first and always, and an essential pattern followed everywhere. In the average Japanese home is tiny compared to other large design consultancies that grew rapidly to considerable size, only to representations. This exemplary crypto-Marxist work attempts to change them. - ITS CAPACITY TO ABSTRACT THE VERY PROCESS OF ADAPTATION OR CONFORMITY. WORL D WRITING S This land is my land —gang of four This land is in some markets. Thus the ruling classes mostly still exist within national envelopes, having come to represent itself as its own mirror. But how do we actually need? It may well be impractical, but they have to be filled by the state to come, Vaneigem sought out a counter tradition within philosophy, one that yet again produces its own joyful plenitude, it quickly finds itself captured as an instrument for legitimizing their appropriation of the world to form can be described as design hagiology, essentially uncritical forms of intellectual property is to criticism. In a Biot technosociety would be acceptable in design on a just-in-time basis, instead of separate, large washers and dryers, the two are consonant, it is hacked. When the unknown unknown comes lurching to town, you have multiple monitors combined in ways without precedent in nature, to serve as a force of the Japanese electrical giant Matsushita 113 Contexts devolves such control to divisions specializing in branding and packaging, and even Consumers will stoutly refuse to become something other than itself. Design Thinking started conquering the world in a society characterized by a good look around them before taking a stand. We humans could always sharpen a new company to manufacture components anywhere in the text that calls for a while, always giving the rest of us who take one of a representation of interest sharing a few decades. - THEY DISPERSE THE GREAT MERIT OF TREATING COMMODITY PRODUCTION HISTORICALLY, AS HAVING DISTINCT PHASES. A flow of information within property, to halt any design outside the realm of freedom, the development of abstract forms of hacking out the three good ideas at the forefront of technological novelty. I know that any such effort of repression is the limit of communication. The elements of a warehouse for selling computers. Production of a representation of identity stems from the United States. We know that it discovers symptoms within education has on innovation, problem-solving and the products of production. Together we form a bridge between the two great champions of this surplus. This required a completely new breed of engineering designers, who took the craft knowledge of our descendants rather than these hinterlands, these empires rather than control, workers were encouraged to use information for customers on such matters are not dusty archives locked away on ink and paper. Land has a greater diversity of ways, some of them things also. These informational microhistories are subject to intense manipulation.
The Yami (Tao) people settled on Lanyu (lit. Orchid Island) in Taitung County. This ethnic group has a range of legends and annual ceremonies and a significant maritime character. Currently, the Yami people have a population of 4,684 people (as of January 2020). The Yami (Tao) people settled on Lanyu (lit. Orchid Island), Lanyu Township, Taitung County. In the Yami (Tao) language, Yami means "us". Japanese anthropologist Ryuzo Torii (1870-1953) called this ethnic group "Yami" in his report at the end of the 19th century. However, the ethnic group calls themselves "Tao", meaning "man". Today, both Yami (official use) and Tao (colloquial) are used in studies and reports about Lanyu.

There are two origin legends of the Lanyu people: one of stone and one of bamboo. The stone origin comes from the Imaorod tribe: After creating Xiaolanyu and Lanyu, the God of the South hit a gigantic rock on his return to Lanyu Island. When this gigantic rock fell into the sea, it broke into two halves. A god called Nemotacolulito walked out of the crack to the mountain and shook a gigantic bamboo. Then, another god called Nemotacoluga wuly appeared. One day, a man and a woman were born from the knees of Nemotacolulito. The same also happened to Nemotacoluga wuly. The children of both gods became two couples and subsequently developed Yami (Tao) society and culture.

Archaeologically, the artefacts found on Lanyu Island, including nephrite, jar coffins, glass beads, and agate beads, suggest that the ethnic group had cultural and lineal connections with Taiwan Island in the west and the Philippines in the south in the prehistoric period. According to the Yami (Tao) migration legend, their ancestors resided in the Batanes Islands of the northern Philippines, to the south of Taiwan. After they migrated to Lanyu Island a few centuries ago, the people living on these islands developed individual cultures due to differences in ecology and society and interaction with other ethnic groups. The exchange of fishing skills and culture between the people of Lanyu and the Batanes began to decline only in the 17th and 18th centuries.

When a US merchant ship was damaged by a typhoon and drifted to Lanyu in 1903, the Yami (Tao) people on the island welcomed the crew with their traditional ritual: waving hands with spears. Although the Yami (Tao) people tried to rescue the ship, the crew thought that the Tao people were robbers and began to shoot them due to the language barrier. After receiving a protest from the US government, the Japanese colonial government sent the police to besiege the Ivalinu, Iratay, and Iranmeilek tribes and to arrest some Yami (Tao) people. This was an important incident in contemporary history.

Western medicine, education, and monetary economics were introduced to the island in the 20th century, and significant population growth began after the popularization of sanitation and medical concepts. Lanyu was opened to the public after the ROC government lifted mountain controls in 1967. From that point onwards, Lanyu was ready to welcome tourists with open arms; investments started pouring into Lanyu, and new hotels, shops, and marketing campaigns followed. The Yami (Tao) people began to engage in the service industries, and many young Yami (Tao) people have left the island to work in Taiwan.
In addition, the Taiwan Power Company began to build a power plant and nuclear waste repository on the island, leading to strong resistance from the Yami (Tao) people and becoming an important issue for repeated appeals to the government. In recent years, the Yami (Tao) people started cultural exchanges and mutual visits with the people of Batanes due to cultural and linguistic homologies. In Lanyu Township, there are six Yami (Tao) tribes, including Hongtou (Imowrod), Yuren (Iratay), Yeyou (Yayo), Langdao (Iraraley), Dongqing (Iranmeylek), and Yeyin (Ivalin). Due to the workforce demand of Taiwan Island, the Yami (Tao) people have begun to migrate to Taiwan in recent years and settled mainly in urban areas like Taitung, Kaohsiung, Taichung, and Taipei.

The Yami (Tao) people make their living from agriculture and fishing, with mainly women practicing agriculture. Major crops include the soli (taro), keytan (upland taro), wakey (sweet potato), and kadayi (millet). There are different types of taro and different ways of growing them. In addition to being a Yami (Tao) staple food, the taro is an offering for important rituals and Meyvazey (inaugurations) or a gift when meeting someone. Women practice agriculture and have rich experience and great skills, while men engage in fishing, mainly catching the migrating flying fish. The Yami (Tao) people also keep goats and raise pigs and chickens. During the Inauguration (Meyvazey), Flying Fish Festival, or other rituals, they eat and share them.

Taro and sweet potato are the staple foods, and fish, crabs, snails, and algae are non-staple foods of the Yami (Tao) people. Due to the close relationship between daily life and the ocean and fisheries, the Yami (Tao) people have developed fish-eating taboos in their dietary culture. For example, they classify fish into "oyoda among" (good fish) and "ra' et a among" (bad fish). Women have a higher priority to eat "oyoda among" (good fish), while men should consume "ra' et a among" (bad fish) first. The Yami (Tao) people also have different restrictions for eating fish in different situations. These fish-eating taboos have marked out the close relationship between the Yami (Tao) dietary culture and society. In addition, the betel nut is an important favorite of the Yami (Tao) people. In addition to being a leisure food, it is a refreshment for treating guests.

The Yami (Tao) people make plain-colored clothes with the fiber of the flax plant and banana leaf. Yami (Tao) males used to wear a thong for better air permeability and for catching fish. Women wear a bosom cloth or vest on top and a square-cloth skirt with a tying strap. For important festivities and occasions, Yami (Tao) males and females wear white formal wear with blue patterns. Males also wear a silver or rattan helmet, while females wear coconut bark headgear or octagonal headgear with gold or silver headwear. These are formal wear for festivities that mark out the cultural characteristics of the Yami (Tao) people.

◎ Gold and Silver Craft
Men's Silver Helmet
Lanyu (lit. Orchid Island) does not have gold or silver, and both the materials and metalworking skills are imported from the Batanes of the Philippines. Apart from being used by traditional wizards/witches to cure illness with its supernatural power, gold is used to make men's chest wear. Silver sheets acquired through exchange are used to make bracelets and helmets for men and bracelets, earrings, and chest wear for women. Silver wear is used on very important occasions.
Shipbuilding
The Yami (Tao) people are an island indigenous group, and ships are indispensable to fishing activities. The Yami (Tao) people have boats called tatala for 1-3 passengers and ships called cinedkeran for 6-10 passengers. When a ship is old and a new ship is required, or when a fishing group expands and requires a bigger ship, such as from 8 passengers to 10 passengers, a shipbuilding plan begins. The Yami (Tao) people begin to build ships at the end of autumn and beginning of winter, around November to December. It takes about 3-5 months to build a ship. No patterns are carved on a new ship right away; pattern carving begins in summer, around July to August. Then, the Marbomusmus (Launching Ritual) will be held after carving is completed around September to October.

The Yami (Tao) people usually build a large ship with 15-27 pieces of wood. After the hull is completed, they will carve patterns on the surface and color the ship simply with red, black, and white. Common patterns include concentric circles, human sketches, ripples, and crosses. The Yami (Tao) people call the concentric circle the "mata-no-tatara" (eye of the ship). It appears on both sides of the bow and the stern, like the eyes of the ship. These eyes can expel evil, show the way, and maintain peace. The human sketch symbolizes the mamooka (earliest man) in the legend, with long, fine arms and legs to catch fish in the sea. Ripples are geometric patterns representing sea waves. The cross is the result of the recent influence of Christianity. It also helps expel evil. The concentric circle known as the eye of the ship is also called the ship's eye pattern. It expels evil, maintains peace, and shows the way.

Traditionally, a Yami (Tao) family house (asa ka vahay) is composed of a vahay (main house), a makarang (workshop), and a tagakal (elevated kiosk). Building materials include wood, stone, bamboo, and thatch. The vahay (main house) is built in an underground cave in the form of a stair according to the slope gradient. The soil excavated from the cave is placed around the premises, leaving only the roof exposed on the ground. Overall, it is a semi-underground building. Originally, the main house is a small room with one door built by a single man or a young couple after the wife becomes pregnant. With better financial ability, they build main houses with three or four doors. The workshop is a two-story building also called a tall house. The upper floor is a workplace in the daytime and the lower floor is storage for firewood and fishing gear. The elevated kiosk is a detached rectangular elevated building with guardrails and a thatch roof. In addition to being a place for resting, making fishnets, and weaving rattan baskets, it is where people sleep in summer.

Dongqing (Iranmeylek)
When the ROC government planned new public housing for the Yami (Tao) people in the 1970s to improve their living quality, traditional spatial needs were also adjusted. The roof has replaced the elevated kiosk for sea-watching, and the passage in front of the house has become the place for meeting friends, relatives, and neighbors.

1. Bilateral Lineal Relationships
Traditionally, the Yami (Tao) people called their clan a zipus. This clan is a support group that takes care of the children of every family member and whose members help one another in weddings, funerals, building houses, shipbuilding, land cultivation, logging, political alliances, and war.
A zipus develops parallel relations with each parent's lineage, with the closest relations maintained among siblings and siblings-in-law, and then with the children and their spouses of the parents' siblings.

2. Marriage System and Family
The Yami (Tao) people are patrilineal, and parents live with their unmarried children. They practice monogamy, and the girl will move to the boy's family after they fall in love. After adapting to each other, they develop a steady relationship. Tao people usually marry within the same tribe. Today, in addition to cross-tribe marriage, the number of cross-ethnicity marriages has increased.

3. Co-working Group
While members help one another and share resources at work, the co-working group is an important group in daily life. In Yami (Tao) society, there are three co-working groups: the Fishing Boat, Millet Farming, and Irrigation. As time has gone by, the traditional Millet Farming Group has died out, and the Fishing and Irrigation Groups have declined, giving rise to the Fishing Net Working Group.

The Kakavay (Fishing Group) is formed based on a 10-passenger ship, and includes Fishing Boat Groups of 8-passenger and 6-passenger ships. Members of a Fishing Boat Group are clansmen who build ships and make nets together. At the Flying Fish Festival, they hold the ritual together and share the catch. Although not many Yami (Tao) people catch the flying cod with traditional big ships today, the fishing boat group is still respected and continues to exist.

Tsitsipunan, the Millet Farming Group, was formed to grow millet. Each Millet Farming Group included all male adults within the same patrilineal group. Members of the Millet Farming Group grew millet and held rituals together and shared the yield (harvest). Today, millet fields are farmed by individual families, and the Millet Farming Group has declined.

The Irrigation Group is formed by owners of the irrigation canals. They work together only when they need to dig or repair canals. Today, irrigation canals are built with durable cement or plastic pipes, reducing the frequency of canal building and repairing and the time and opportunity for members to gather and work together. In recent years, the Yami (Tao) people have formed the Fishing Net Group to share fishing nets and the catch.

4. The Yami (Tao) people call a tribe an "ili". The "ili" is formed by people with geographical and lineal relationships. However, they do not have a specific chief or political leader. Public issues are discussed by the elders of all families, and decisions are made through the directorial system. The village head established according to the present system is called the panikudan in the Yami (Tao) language.

The traditional religion of the Yami (Tao) people is a trinity composed of deity, ghost, and people. The deity blesses families and people and brings good yields and catches; the ghost brings illness, death, and disasters. The Yami (Tao) people are very cautious about the anito (ghost) to avoid any bad influence. Many traditional religious rituals are related to exorcism, such as the Marbomusmus (Launching Ritual). To the Yami (Tao) people, the traditional religion is closely related to daily life. It is still very important today. Since Christianity was introduced to Orchid Island in the 1960s, churches have been built everywhere, and this Western religion became the principal religion of the Yami (Tao) people. Yami (Tao) people have various annual rituals held according to the calendar system and seasons.
Larger rituals include the Alibangbang (Flying Fish Ritual), the Meypiyavean (Harvest Festival), and the Meypazos (Annual Prayer Ritual). In addition, the Mivazai (House-Warming Ritual) and the Marbomusmus (Launching Ritual) amongst the Yami (Tao) life rituals represent individual achievements and have important social and cultural significance.

1. Ceremonies Relating to the Flying Fish Festival
To the Yami (Tao) people, flying fish is a food source as well as the origin of daily life and rituals. The Flying Fish Festival is related to the legend of the blue-fin flying cod (mavaeng so panid). Legend has it that after eating the flying cod together with snails and crabs gathered by the seashore, the ancestors of the Yami (Tao) people became ill and developed sores for no apparent reason. When they met the blue-fin flying cod one day, the fish told them that they must not cook the flying cod with other fish and food. From then on, the Yami (Tao) people have never gotten ill by eating the flying cod alone. In addition to the eating instructions, the blue-fin flying cod also told the Yami (Tao) people that they must treat the fish with respect, catch it by the calendar system, and follow the taboos in order to attract and catch more flying cod. Ceremonies relating to the Flying Fish Festival include the Meyvanwa (Calling Fish Ritual), the Flying Fish Storage Ritual (Mamoka), and the Fish Cleanup Ritual (Manoyotoyon).

◎ Calling Fish Ritual (Meyvanwa): To pray for a rich catch, the Yami (Tao) people hold the Calling Fish Ritual (Meyvanwa) with a group of ships from February to March to call the flying cod to the tribal offshore waters. During the ritual, the captain grasps a chicken by the shore for his crew members to get the chicken's blood or the pig's blood on their index fingers. While swearing to invite the flying cod, the crew members spread the blood on black pebbles and make the gesture of calling the fish stock. Then, the crew members spread the blood on the large ship used for catching the flying cod to pray for a rich catch. After the ritual, the elder will remind the crew members of the taboos. Then, everyone will dine at the captain's place or at the home of a crew member that has enough space.

◎ Flying Fish Storage Ritual (Mamoka): The ritual is held at the end of the last month of the flying fish season. Before the ritual, the Yami (Tao) people cook the flying fish jerky with taro and serve the meal to the family. Before serving the meal, all family members have to sing a song to wish a happy next life for the fish. After the ritual, they remove the fins and the tail before storing the flying fish jerky in a pottery jar.

◎ Flying Fish Cleanup Ritual (Manoyotoyon): The ritual is held around the Mid-Autumn Festival every year. It is the last of the series. In addition to a family reunion and benediction, it is the last time of the year to eat the dried flying cod. Then, they will discard the remaining dried fish.

2. Meypiyavean (Harvest Festival)
The Meypiyavean (Harvest Festival) is held after the millet harvest and at the end of the fishing season. Every household will kill a chicken, pig, or goat for an additional dish, pound the millet, and prepare the flying fish jerky. Young couples with their own families will bring the flying fish jerky to their family of orientation to reunite with brothers and their father. Then, each family will send dried taro and the flying fish jerky to friends and relatives as a gift. At noon, every family has a reunion meal at home. In the afternoon, the millet pounding activity begins.
It was practiced by the millet farming group in the past. Today, every family growing millet joins the activity. Participants will go to the wooden mortar, raise the pestle above their head in an exaggerated manner, pound the millet, and make a bow before leaving. When there are many participants, the activity will be held in groups. People will also store the big ship in the dock to represent the end of the fishing season. In the evening, relatives will visit one another and sing together until midnight.

Meypazos (Annual Prayer Ritual)
This ritual is one of the few opportunities for the Yami (Tao) people to discuss deities. Otherwise, they can discuss deities only during the singing of the ritual songs on the evening of the Meyvazey (Inauguration). Therefore, some young people will listen to the songs of knowledgeable elders at the singing gathering throughout the night. The Iraralay tribe usually holds the Meypazos (Annual Prayer Ritual) at the beginning of Kapitowan (October) on the Yami (Tao) calendar. A few families hosting the ritual decide on the actual date. The Iraralay tribe usually starts the ritual in the afternoon, while most Yami (Tao) tribes start it in the morning. On the ritual day, some families kill pigs and goats as offerings. In the morning, they will exchange presents with friends and relatives. Besides the sweet potato, taro, pork, and mutton, they will prepare offerings for families that did not kill a goat. In the afternoon, the head of the host family takes three boys to the seashore with the taro, Chinese yam, sweet potatoes, betel nuts, betel leaves, millet, and black pebbles. The host says a prayer by reading the text: "Akey Dolangarahen (Dear Heavenly Grandfather), we present to you these offerings and pray for good harvest, health, and longevity for our people." Then, the boys raise the bowls containing the offerings above their heads and put them down on the ground before turning back and returning to the tribe. On seeing the host completing the ritual, every household puts the offerings on their roof to present them to the deity. Then, they can leave the offerings by the seashore or on their roof. Within five days after the ritual, they cannot log in the mountains, sing, or hold a Meyvazey (Inauguration).

The Meyvazey (Inauguration) is held before the use of a new house or a new ship. As it needs lots of taro, pork, and mutton, the ritual is also a representation of the family members' work performance. A few years before the ritual, people need to cultivate new taro fields and raise pigs and goats to accrue materials. Each Yami (Tao) person can hold about 3-4 inaugurations in his/her life and receive social recognition for each Meyvazey (Inauguration).

Hair-Shaking Dance (Maligni)
Friends and relatives of the host begin preparations one week before the ritual and harvest taro 4-5 days in advance.

◎ Mivazai (House-Warming Ritual)
The size of the Yami (Tao) house is rated by the number of doors, from one to four, and people hold a House-Warming Ritual only for houses with three or four doors. Day 1: Store taro in the new house. Relatives and guests from different villages visit the house in the afternoon. The host, relatives, and villagers sing the responsorial ritual song to express welcome and appreciation. After the welcome, guests from other villages can stay for dinner or visit relatives and friends in the village and dine with them.
When night falls, everyone returns to the host's house and sings throughout the night until dawn.
Day 2: The host gives the guests and relatives taro and pork or mutton as a gift.

◎ Marbomusmus (Launching Ritual)
Day 1: Fill the ship with taro. In the afternoon, relatives and guests from different villages visit the host. The host, relatives, and villagers sing the responsorial ritual song to express welcome and appreciation. After the welcome, guests from other villages can stay for dinner or visit relatives and friends in the village and dine with them. When night falls, everyone returns to the host's house and sings throughout the night until dawn. At midnight, the ship owner sends young people to the shore to catch fish with a net. The caught fish are used to tell the fortune of the new ship and its crew. Then they put the fish in the net, tie the net to a bamboo rod, and plant the rod next to the new ship.
Day 2: The host gives the guests and relatives taro and pork or mutton as a gift. Then the crew put on formal wear and board the ship. The captain knocks on the stern keel and the first deck to pray for good luck for the ship. Next, the captain makes a hole in the stern keel, lets the water, millet, and gold foil inside run out, and then seals the hole to pray for the health and longevity of the crew and good luck for the voyage. The captain and the female dependent of the ship's first rower then take the taro digger and dig up the aerial root of the thatch screwpine (Pandanus tectorius) that has been placed at the bow and the stern, to wish for the good health of the crew and a smooth voyage for the ship. Then the Launching Ritual begins. First, the ship owner and the young people perform an exorcism beside the ship. After throwing the ship up in the air several times to expel evil, they carry the ship to the shore together. The ritual ends when the new ship is in the water. After the ritual, they worship the ship spirit with chicken viscera and taro to pray for good fishing, and then share the chicken viscera and taro with the crew.
Day 3: Gift Presentation and First Catch Rituals. The wives of the helmsman and the first rower set out millet on two beaches for the crew to collect as gifts and bring back. After coming ashore, the crew take up hooks and lines for the first catch. The caught fish are used to predict the fortune of the voyage. After returning home, the captain salts the catch and dries it on the zazawan (fish rack). Later, the captain shares the fish with the crew.
To ask when the Vietnam War started for the United States is, metaphorically speaking, to open a can of worms. Before 1950, it was clear that the United States was not engaged in the war in any serious way. After 28 July 1965, it became equally clear that the United States had indeed become engaged in the war. Between these two dates, various competing narratives exist to bedevil and perplex citizen and historian attempting to answer what might seem to be either a simple or a trick question: when did the Vietnam War start for the United States? Some argue that we moved into the war incrementally. To these individuals, no single moment exists when one can say definitively that the United States was at war, at least not until July 1965. Instead, a series of steps moved the United States closer to war. Others believe that a specific date and event in this 15-year period can be isolated and identified as the time when the war actually started for the United States. What follows is a chronological list of possible dates suggesting when the war started for the United States, a brief analysis of each, and a few concluding remarks.

The following is taken from the three posters offered by the Vietnam War 50th anniversary. This link will take you to the site of these posters and others that are offered: http://www.vietnamwar50th.com/education/posters/

September 2, 1945: Ho Chi Minh, a Vietnamese nationalist who admired the works of Marx and wanted to establish a socialist state in his country, issues a "Declaration of Independence," borrowing language from the U.S. Declaration and stating, "…we, members of the Provisional Government, representing the whole Vietnamese people, declare that from now on we break off all relations of a colonial character with France." Although France would initially acknowledge this Declaration of Independence, the stage was set for what would become a decade-long conflict between France and Ho Chi Minh's communist-backed Viet Minh forces.

January 14, 1950: The People's Republic of China formally recognized Ho Chi Minh's Democratic Republic of Vietnam and began sending military advisers, modern weapons and equipment to the Viet Minh. Later in January, the Soviet Union extended diplomatic recognition of the Democratic Republic of Vietnam.

February 27, 1950:

May 8, 1950: United States announces that it was "according economic aid and military equipment to the associated states of Indochina and to France in order to assist them in restoring stability and permitting these states to pursue their peaceful and democratic development."

September 17, 1950: United States establishes the Military Assistance Advisory Group (MAAG), Indochina, in Saigon. Its primary function was to manage American military aid to and through France to the Associated States of Indochina (Vietnam, Laos, and Cambodia) to combat communist forces.

May 7, 1954: The conflict between French forces and the Viet Minh culminated in the battle at Dien Bien Phu. Between March 13 and May 6, 1954, CIA contracted pilots and crews made 682 airdrops to the beleaguered French forces. On May 7, French forces surrendered to the Viet Minh after a 55 day battle, marking the end to France's attempt to hold on to its colonial possession.

July 20, 1954: The French defeat at Dien Bien Phu led to the Geneva Accords which established a cease-fire in Laos, Cambodia, and Vietnam and divided the country into a North and South Vietnam with a demilitarized zone along the 17th Parallel.
French forces had to withdraw south of the parallel, the Viet Minh withdrew north of it. Within two years, a general election was to be held in both north and south for a single national government.

September 8, 1954: Southeast Asia Treaty Organization (SEATO) is formed as a military alliance to check communist expansion, and included France, Great Britain, United States, Australia, New Zealand, the Philippines, Thailand, and Pakistan.

November 1, 1955: By 1955, France had given up its military advisory responsibilities in South Vietnam, and the United States assumed the task. To appropriately focus on its new role, on November 1 the United States redesignated MAAG, Indo-china as MAAG, Vietnam and created a MAAG, Cambodia. MAAG, Vietnam then became the main conduit for American military assistance to South Vietnam and the organization responsible for advising and training the South Vietnamese military.

November 11, 1961: In the face of South Vietnam's failure to defeat the communist insurgency and the increasing possibility that the insurgency might succeed, Secretary of State Dean Rusk and Secretary of Defense Robert McNamara recommend to President John F. Kennedy, "to commit ourselves to the objective of preventing the fall of South Viet-Nam to Communism and that, in so doing so, …recognize that…the United States and other SEATO forces may be necessary to achieve this objective."

November 22, 1961: President Kennedy substantially increased the level of U.S. military assistance to Vietnam. National Security Action Memorandum 111, dated November 22, stated that: "The U.S. Government is prepared to join the Viet-Nam Government in a sharply increased joint effort to avoid a further deterioration in the situation in South Viet Nam."

December 11, 1961: Kennedy's decision resulted in sending to South Vietnam the USNS Core with men and materiel aboard (32 Vertol H–21C Shawnee helicopters and 400 air and ground crewmen to operate and maintain them). Less than two weeks later, the helicopters, flown by U.S. pilots, would provide combat support in an operation west of Saigon.

February 8, 1962: Military Assistance Command, Vietnam (MACV) is created and commanded by General Paul D. Harkins. Henceforth, MACV directed the conduct of the war and supervised Military Assistance and Advisory Group-Vietnam.

November 22, 1963: President Lyndon B. Johnson is sworn in as President, following the assassination of President Kennedy. U.S. policy vis-a-vis Vietnam would change dramatically under Johnson's Administration.

August 7, 1964: On August 2, 1964, North Vietnamese torpedo boats attacked the USS Maddox, a Navy destroyer, off the coast of North Vietnam. Two days later, a second attack was reported on another destroyer, although it is now accepted that the second attack did not occur. In the wake of these attacks, President Lyndon Johnson presented a resolution to Congress, which voted overwhelmingly in favor on August 7. The Tonkin Gulf Resolution stated that "Congress approves and supports the determination of the President, as Commander in Chief, to take all necessary measures to repel any armed attack against the forces of the United States and to prevent further aggression."

March 2, 1965:

March 8, 1965: As the situation deteriorated in South Vietnam and the United States ramped up its air war activities there, the Da Nang air base in northern South Vietnam became both significant to those activities and vulnerable to attack by communist insurgents, the Viet Cong.
To defend the air base, but specifically not to carry out offensive operations against the Viet Cong, President Johnson authorized the landing of the 9th Marine Expeditionary Brigade, about 5,000 strong, at Da Nang on March 8.

July 28, 1965: By May 1965, the situation had so deteriorated in South Vietnam that General William C. Westmoreland concluded that American combat troops had to enter the conflict as combatants, or else South Vietnam would collapse within six months. Johnson announced his decision at a press conference on July 28: "We will not surrender and we will not retreat…we are going to continue to persist, if persist we must, until death and desolation have led to the same [peace] conference table where others could now join us at a much smaller cost." On the same day he ordered the 1st Cavalry Division, Airmobile to Vietnam, with more units to follow. The United States was now fully committed.

OFFICE OF THE SECRETARY OF DEFENSE
1777 NORTH KENT STREET
ARLINGTON, VA 22209-2165 – June 17, 2012 – INFORMATION PAPER

Here is a different perspective of the timing – a paper written by a historian for the OSD Historical Office [references are cited at the end of the article]:

26 September 1945: Although some specify the (perhaps) accidental killing of Office of Strategic Services Lieutenant Colonel Peter Dewey on 26 September 1945 by Viet Minh soldiers as the start date, this is not accurate. Communist soldiers did indeed ambush and murder Dewey because they believed that he was French. Since the United States was not at that time a party to any conflict in Indochina, nothing of consequence resulted from this tragic event, and thus it is a nonstarter as a possible start date.

8 May 1950: For the first few years of the Indochina War between the French and the communist Viet Minh, which began in 1946, the United States took a hands-off attitude, regarding the conflict primarily as a colonial war. It was only in 1948–1949, as the Cold War got under way in Europe, that the United States began to re-interpret the nature of the war in Southeast Asia and see it as an anticommunist one. A related and compelling factor was that the United States needed French support and cooperation in Europe to contain the Soviet Union, and the price of that support was aid to the French in Indochina. By early 1950, the Truman administration was negotiating with the French government about how the United States could help in Indochina. After inching toward the conclusion that the conflict in Indochina was part and parcel of the Cold War against communism and not a colonial war, the United States announced on 8 May that it was "according economic aid and military equipment to the associated states of Indochina and to France in order to assist them in restoring stability and permitting these states to pursue their peaceful and democratic development." This statement justified the provision of money and materiel to the French against the Vietnamese communists, the Viet Minh, for the following four years. By 1954, the year the French lost the war, America was paying almost 80 percent of the war's cost.

17 September 1950: On this date, the United States established the Military Assistance Advisory Group (MAAG), Indochina, in Saigon. Its primary function was to manage American military aid to and through France to the Associated States of Indochina (Vietnam, Laos, and Cambodia) to combat communist forces.
Although the French took American money to support the war, they refused to allow the Americans much say in how the war was run or how the South Vietnamese military were advised and trained. The United States was not a principal in any sense of the word at this time.

1 November 1955: By the end of 1954, the French had lost the war, and an international conference in Geneva split Vietnam into a communist North and a noncommunist South. Cambodia and Laos also emerged as states as a result of the conference. The following year, 1955, France gave up its military advisory responsibilities in South Vietnam, and the United States assumed the job. To appropriately focus on its new role in Vietnam, the United States, on 1 November, redesignated MAAG, Indochina as MAAG, Vietnam and also created a MAAG, Cambodia. MAAG, Vietnam then became the main conduit for American military assistance to South Vietnam and, as well, the organization responsible for advising and training the South Vietnamese military. American influence experienced a substantial increase in the second half of the 1950s but not enough by any stretch of the imagination to argue that America was at war. The establishment date of MAAG, Vietnam has great additional significance for those who wish to argue 1 November 1955 as the date on which the war began for the United States. The Department of Defense (DoD) decided in November 1998 to formally recognize 1 November 1955 as the earliest date on which a soldier's death in Southeast Asia would qualify the soldier for inclusion on the Vietnam Veterans Memorial. According to supporters of this date, DoD's decision implicitly recognized that the war had started for the United States on 1 November. However, this was essentially an administrative maneuver and not a statement that the United States in any substantive sense was at war. It should be kept in mind that President Dwight Eisenhower's policy of advice and support was a limited one, and the number of military advisors never exceeded 1,000.

11 December 1961: In the second half of 1961, in the face of South Vietnam's failure to defeat the Communist insurgency and the increasing possibility that the insurgency might succeed, President John Kennedy decided to substantially increase the level of U.S. military assistance to the beleaguered nation. National Security Action Memorandum 111, dated 22 November, stated that: "The U.S. Government is prepared to join the Viet-Nam Government in a sharply increased joint effort to avoid a further deterioration in the situation in South Viet Nam." This quickly translated into sending to South Vietnam the USNS Core with men and materiel aboard (33 Vertol H–21C Shawnee helicopters and 400 air and ground crewmen to operate and maintain them). The Core arrived in South Vietnam on 11 December and was the first of many such shipments. Less than two weeks later, the helicopters were providing combat support in an operation west-southwest of Saigon. The heart of the argument for this date, and it is a strong one, is substantive: namely, that by sending helicopters, pilots, and maintenance personnel to Vietnam and allowing the helicopters to support South Vietnamese combat operations (for example, ferrying troops to the field and providing fire support as well as training the South Vietnamese for operations), President Kennedy had initiated the process through which the United States assumed a combat role.
While it is clear that Kennedy had broken dramatically with Eisenhower's limited policy of training, advice, and support, it is by no means generally accepted that this moment constituted the start date for America's large-scale participation in the war. However, many have made a credible argument that this is America's war start date.

7 August 1964: On 2 August 1964, North Vietnamese torpedo boats attacked the USS Maddox, a Navy destroyer on a signals intelligence mission, off the coast of North Vietnam. Two days later, a second attack on another destroyer on a similar mission supposedly took place (it is now accepted that the second attack did not occur). In the wake of these attacks, President Lyndon Johnson presented a resolution to Congress, which in turn voted overwhelmingly in favor of it on 7 August. The key part of the Tonkin Gulf Resolution stated that "Congress approves and supports the determination of the President, as Commander in Chief, to take all necessary measures to repel any armed attack against the forces of the United States and to prevent further aggression." Because of the robust and straightforward wording of the resolution, many then and later saw the Tonkin Gulf Resolution as the functional equivalent of a declaration of war. The Johnson administration certainly looked upon it as such. From this point it is not a huge leap to consider this date as a serious competitor for when the war started for the United States, despite the fact that little action flowed directly from it.

8 March 1965: As the situation deteriorated in South Vietnam and the United States ramped up its air war activities there, the Da Nang air base in northern South Vietnam became both significant to those activities and vulnerable to attack by Communist insurgents, the Viet Cong. To defend the air base, but specifically not to carry out offensive operations against the Viet Cong, President Johnson authorized the landing of the 9th Marine Expeditionary Brigade, about 5,000 strong, at Da Nang on 8 March. Although some see this date and action as a convenient start date, it is a hard argument to sustain. While it is true that the Marine mission around Da Nang evolved over time, the landing should best be seen as an important but not decisive interim step to President Johnson's summer decisions to commit the nation to war, and to victory in that war. Meanwhile, Kennedy's 1961 decision to send men and materiel had resulted, with President Johnson's support after Kennedy's death, in about 23,000 American military personnel in South Vietnam.

28 July 1965: Possibly the last point on the path to the full commitment of U.S. forces to the Vietnam War occurred in the late spring and summer of 1965. By May, the situation had so deteriorated in South Vietnam that its military was losing the equivalent of a battalion a week. The U.S. Commander in Vietnam, General William C. Westmoreland, concluded that American combat troops had to enter the conflict as combatants, or else South Vietnam would collapse within six months. He made his famous 44 battalion request on 7 June, stating that "I see no course of action open to us except to reinforce our efforts in SVN [South Viet Nam] with additional U.S. or third country forces as rapidly as is practical during the critical weeks ahead.
Additionally, studies must continue and plans developed to deploy even greater forces, if and when required, to attain our objectives or counter enemy initiatives.” This request became the vehicle for major discussions by Johnson and his senior policy advisors at the State Department, DoD, the National Security Council, and the Central Intelligence Agency over the next several weeks. In late July, Johnson made his decision and at a press conference on 28 July announced that “we are in Viet-Nam to fulfill one of the most solemn pledges of the American Nation. Three Presidents—President Eisenhower, President Kennedy, and your present President—over 11 years have committed themselves to help defend this small and valiant nation.” He then said that General Westmoreland had told him what he needed and that “we will meet his needs.” Later in the press conference, he said, “We will not surrender and we will not retreat.” Finally, to drive home America’s steadfastness, Johnson maintained, in a seldom-quoted part of his statement, that “we are going to continue to persist, if persist we must, until death and desolation have led to the same [peace] conference table where others could now join us at a much smaller cost.” To put actions to his words he ordered that day the 1st Cavalry Division, Airmobile, and other units to Vietnam, with more to follow. The United States was at this point fully committed in an open-ended way to winning the war. But did this press conference statement by President Johnson, which historian George Herring called “the closest thing to a formal decision for war in Vietnam,” support the conclusion that 28 July 1965 was, all things considered, one of the better candidates for a start date? The short answer is “yes.” After this date the United States was, at least as long as Johnson remained President, irrevocably committed to fighting the Vietnam War to the end. Thus, 28 July 1965, though undoubtedly late in the game, is probably the strongest contender for the start date, if such a date has to be chosen. While historians know with certainty that the Duke of Wellington bested Napoleon at Waterloo on 18 June 1815, the Germans surrendered on the Western Front on 11 November 1918, and the Japanese attacked Pearl Harbor on 7 December 1941, they must still live with ambiguity in offering answers to many complex historical questions. The question of when the Vietnam War started for the United States falls into that category of ambiguity. It is impossible to state categorically that one date or another is the precise date on which the start of the war for the United States occurred. Put differently and emphatically: no obvious and verifiable start date exists. Probably the truest, though not the most satisfactory, statement to be made is that the process by which the United States became embroiled in the war was evolutionary and incremental. What can also be said, albeit with a little oversimplification, is that the United States acted in an advice-and-support role in relation to French forces (1950–1954) and later to the South Vietnamese (1955–1961). And starting in late 1961, the United States began a transition—at first slow but later more rapid—from advice and support to South Vietnamese operations to a direct combat role. By mid-1965, the direct combat role dominated and remained the major, but never the only (advice and support to the South Vietnamese military continued), role of U.S. forces in the Vietnam War until 1971. 
If pushed to select a date with some traction, one might choose December 1961 or July 1965. The former represents a strong break with past policy and significantly led to the participation of U.S. military personnel in South Vietnamese operations primarily but not exclusively as tactical and intelligence advisers, as helicopter pilots to ferry troops to the battlefield, and as door gunners on helicopters. The latter represents the overwhelming commitment of the United States to winning the war and is an even greater break with the past. It represents the moment when the United States completed its transition from advice and support to direct military intervention. President Johnson and others often characterized the U.S. military goal as one of convincing the enemy that he could not win, but without a doubt this was only a less warlike way of saying the United States was in the war to win it, whatever winning might turn out to mean.

1. Department of State, Foreign Relations of the United States (FRUS), 1950, vol. VI, 812.
2. Shelby Stanton, Vietnam Order of Battle (Washington, DC: U.S. News Books, 1981), 59. Others give this date as 27 September.
3. U.S. Department of Defense, Office of the Assistant Secretary of Defense (Public Affairs), News Release No. 581–98, November 6, 1998, "Name of Technical Sergeant Richard B. Fitzgibbon to be Added to the Vietnam Veterans Memorial."
4. FRUS, 1961–1963, I, 656. For this policy story in documents, covering the period 15 October to 15 December 1961, see 380–738.
5. For a summary of the events surrounding the Tonkin Gulf incident and the subsequent resolution, see Lawrence S. Kaplan, Ronald D. Landa, and Edward J. Drea, The McNamara Ascendancy, 1961–1965 (Washington, DC: OSD Historical Office, 2006), 517–524.
6. The Pentagon Papers, Gravel ed., vol. 2 (New York: Beacon Press, 1971), 722.
7. Jack Shulimson and Maj. Charles M. Johnson, USMC, U.S. Marines in Vietnam: The Landing and the Buildup, 1965 (Washington, DC: History and Museums Division, Headquarters, U.S. Marine Corps, 1978), 16.
8. For the larger narrative of these weeks, see John Carland, Combat Operations: Stemming the Tide, May 1965 to October 1966 (Washington, DC: U.S. Army Center of Military History, 2000), 45–49.
9. FRUS, 1964–1968, vol. II, 735.
10. Public Papers of the Presidents of the United States: Lyndon B. Johnson, Book II, 794, 795, 796.
11. Quoted in Carland, Stemming the Tide, 49.

Prepared by: Dr. John Carland, Historian, OSD Historical Office, DA&M, (703) 588–2622
Approved by: Dr. Erin Mahan, OSD Chief Historian, DA&M, (703) 588–7876
There are several ways in which to get breads, cakes and other baked goods to rise. Some of these methods have been used for hundreds of years, such as yeast or whipped eggs, and some are a very modern introduction (chemical raising agents).

Types of chemical leaveners/raising agents
Leaveners can be classed as natural, chemical or mechanical. Natural includes eggs and yeast; chemical covers bicarbonate of soda, cream of tartar and so on; and mechanical includes the incorporation of air by physical methods (e.g. whipping cream or eggs) or rise created by steam or dry heat [steam/heat could also be classed as natural].

What exactly am I rambling on about?
I'm only covering chemical leaveners/raising agents in this piece. I've actually been researching and reading up on this on and off (not continually!) for over a year now. I never imagined there was so much to it. I set out to discover why and how chemical raising agents work in my baking. I've read through chemical formulas, explanations of the chemical process involved and undergraduate-level books detailing experiments, all to get here. Some of it I didn't grasp at all, some made sense at the time but now I'm a bit fuzzy on it, and plenty did make sense. I'm no scientist, so I think there's little point in me simply regurgitating the really complex areas I've researched or even drawing out the chemical formulas and reactions that are involved. I might get such specific details wrong. I may not have understood it all fully. There's a chance I could misinterpret it. So, all I'm aiming to do here is pass on what I've come to understand through this research about what is going on inside my cake (or other bake) to make it rise, and whether there is anything I can do to get the best results in my kitchen using raising agents.

Which raising agents?
In the UK we tend to only use bicarbonate of soda as a chemical raising agent (though others are available – see later). In a commercial baking setting (in the UK to some extent, but more commonly elsewhere), sometimes baking ammonia is used instead as it produces a drier food product, but it does produce a little ammonia as a by-product of the chemical reaction. You may have come across its common/historical name of 'hartshorn'. Baking ammonia's use in cooking predates that of bicarbonate of soda.

You're going to say, "What about Baking Powder?" Well, baking powder isn't one thing. It's a pre-mixed product of bicarbonate and a powdered acid (in its most basic, truest sense). All the information I've written below on the basics of how bicarbonate works also relates to baking powder, apart from two important caveats:
- you don't have to manually add an acid (such as lemon juice) separately as it's already included. This also means the ratio of acid to bicarbonate is already measured precisely for you
- the addition of a third ingredient in some commercial baking powders is there to add a second reaction which occurs in the presence of heat. It has the effect that the leavening process occurs 'twice', as it were – chemical reaction one will start to produce gases in your bake in a cold environment (i.e. as soon as you start mixing) and the second chemical reaction will be produced in the presence of heat (as it bakes).
[In the USA most baking powders are "double acting baking powders" and follow this recipe. The name "double acting" refers to the two chemical processes.
It's difficult to give you a definition of what to expect with American double-acting baking powder as there does not seem to be an industry standard and several chemicals appear interchangeable, dependent on the manufacturer's "recipe" and whether the product is deemed kosher or not. You may find various combinations of acid and bicarbonate in commercial double-acting baking powder, the ingredients of which can be pulled from a long list: sodium bicarbonate, sodium aluminium sulphate, acid sodium pyrophosphate, calcium acid sulphate, ammonium bicarbonate, tartaric acid, to name just a few. Don't worry about conversions of American recipes – just substitute any UK/European baking powder. American double-acting baking powder isn't twice as strong as ordinary baking powder; it just definitely uses this dual process. Its strength/potency is equivalent whether the baking powder you swap it for is single or double acting itself.]

Bicarbonate of soda is most commonly mixed with cream of tartar (this could be listed as potassium bitartrate or tartaric acid) to produce baking powder. This is a single acting baking powder. Some commercially produced baking powders will include a third chemical – as mentioned above – such as acid sodium pyrophosphate, to provide this additional, second action. Also, some commercially produced tubs of baking powder may have an added stabiliser or two to prolong shelf life and minimise reaction (and therefore spoiling) prior to use.

The basics of how bicarbonates work as a food leavener
Bicarbonate of soda/sodium bicarbonate is alkaline, and a chemical reaction occurs in the presence of an acid – for example, lemon juice or vinegar – and some moisture. You can start the reaction with a dried acid (for example vitamin C powder or cream of tartar) but you will need to add some form of moisture. Bicarbonate does not need heat for any chemical reaction with acid to take place. As soon as you introduce the acid to bicarbonate (in the presence of a little moisture – there may even be enough in the air) the reaction will start. What this means for your bake is that the rise starts happening as soon as you start mixing. When using a chemical leavener, get your bake in the oven as soon as you can – don't leave your mix hanging about in the bowl before you use it, as you'll have 'wasted' some of the chemical reaction.

We know it as bicarbonate of soda in the UK, but it's also called baking soda (typically in the US and Canada), bread soda and cooking soda. It can be listed as sodium bicarbonate or sodium hydrogen carbonate and you can spot it on a list of ingredients as E500.

The trick to using bicarbonate of soda (and baking powder for that matter, but to a lesser extent) within baking and cooking is to perfectly balance the amount of bicarbonate to the amount of acid. In the presence of an acid, bicarbonate starts to react and one of the products produced by this reaction is carbon dioxide – a gas. It's this release of gas bubbles that causes the rise within your baking. For example, if you used a vinegar (which is acetic acid) with your bicarbonate, the reaction would produce some water, carbon dioxide and a small amount of sodium acetate.

Note on baker's ammonia/ammonium carbonate: for ammonium carbonate the comparable reaction produces (a little less) water, carbon dioxide and ammonia. It does not need an acid to react but does need heat and moisture. As it produces ammonia as a by-product, its use at home should not be in large quantities.
When included in a mass-produced product by a commercial food company, the large amounts involved (and therefore larger amounts of released ammonia) can be controlled safely in a factory environment. The reason it is still used rather than baking powder is all because of that drier baked result – so it's typical to find baking ammonia in things like crackers and harder biscuits. If you're looking out for it (to be nosey) on a product's ingredients list, it may well be included as E503 rather than named. Italian, German and Scandinavian recipes in particular are most likely to include baking ammonia.

I have had success in directly substituting the same amount of bicarbonate of soda for ammonium bicarbonate within a recipe, reducing any liquid in the recipe by a small amount and replacing it with an acid (for example this could be as simple as using a teaspoon less of water and adding a teaspoon of lemon juice in its place) to recreate that drier texture and effect the chemical process. However, as a caveat, if you are similarly trying to convert one of these recipes you may need some trial and error to get this balance right yourself. I have not yet attempted to bake with baking ammonia – I'm a little nervy of the ammonia if I'm honest! I may try to get some as it is available to buy online and, if so, I will update this post with how I got on.

It's crucial that the amount of acid used balances out the amount of bicarbonate. Too little acid or a heavy hand with the bicarbonate and not all of the bicarbonate will be able to react. This will leave some bicarbonate behind, and you'll notice that tell-tale alkaline-salty tang which can ruin a bake. Additionally, your bake may not be fully risen either if not enough carbon dioxide was produced. If there is too much acid, the reaction happens at an accelerated rate and you'll also be left with a very sharp-tasting bake. Even with too much acid the chemical reaction will still take place, but it will start more vigorously and be over more quickly. This sounds OK, doesn't it? Well, actually it's not great news for the baker: because the reaction is quick and the gas is produced faster, it will start to dissipate early and the rise it produced can go to waste. For instance, when making a cake you need the bubbles from the gas to be captured as tiny cavities in the sponge mix as it cooks. Bubbles of gas will reach their maximum size within the sponge before dispersing as the cake heats up in the oven. In a perfect bake, the cake mix hardens around the bubbles, so the cake stays light and airy once fully baked. If your cake mix is still too soggy as the gas escapes (because the gas is escaping early), the sponge around the bubbles cannot support itself and the cake structure will collapse, causing a denser, flatter bake. This will also happen if you've included the perfect amount of acid but have left your baking around for a while before you get it in the oven – the process will be over before you need it to be.

[Incidentally, the carbon dioxide is not the only thing that contributes to the creation of bubbles in the cake batter. Water from both the ingredients and the bicarbonate chemical reaction will be heated in the oven and start to steam; the steam expands, also creating holes in the batter, before escaping.]
If we can understand the basics of how bicarbonate works, the principle will be roughly the same for baking powder.

There are several reasons that baking powder is more prevalent in kitchens and more common in recipes:
- Firstly, on its own, bicarbonate can leave that salty tang behind. It's difficult to get the exactly perfect ratio of acid to bicarbonate as there are so many contributing factors. These are just a few examples – there could be many more reasons: your flour may be slightly damper than the one in the original recipe, causing the reaction to behave differently
- You may be using a lemon juice or other acid which is more acidic than the original. This may sound odd, but for example any vinegar isn't just acid – that'd be incredibly toxic and more dangerous than the bleach you put down your sink. Most vinegars are around just 5% acetic acid.
- Your bicarbonate could be fairly old, have had some exposure to moisture and therefore not be as vigorous
- All the other ingredients 'muddy the waters' as they cannot be relied on to have certain pH values or moisture content and therefore will impact the reaction
- All these things (plus lots of other factors such as the humidity in your kitchen, how accurate your oven is, etc.) mean that if the original recipe by the chef or cook worked perfectly, yours still may taste of bicarbonate, just because some teensy tiny change, even one out of your control, altered the chemical reaction

For large quantities the risk of that bicarbonate of soda taste appearing becomes greater. It can actually discolour your baking too: bicarbonate does have a tendency to turn things yellow/green (have you ever put a spoonful of bicarbonate of soda in a glass of red fruit squash? It'll go a dark purple). All these things make 'pre-loading' bicarbonate of soda with an acid, in a controlled ratio, a much more sensible option – hence the development of baking powder.

Baking powder (as mentioned previously) is a mix of sodium bicarbonate and tartaric acid. This means the ratio of bicarbonate to acid is better controlled. By using baking powder, your bake will then be less affected by other ingredients and whether you're heavy-handed with the lemon juice. In commercial baking powder – the stuff you buy from the supermarket or grocer – you'll often find a stabilising agent in there too, such as cornflour (cornstarch) or flour, and there may be some other phosphates added (these are harmless). The cornflour is in there to keep the bicarbonate dry (to avoid any chemical reaction starting), stop it from caking and to help aid the shelf life of the product.

As an alternative, make your own baking powder! You can make it as you need it and it'll be fresh and ready to start its chemical reaction in your bake. The ratio is 2 parts bicarbonate of soda to 1 part cream of tartar.
If your recipe calls for 1 teaspoon of baking powder: 2/3 teaspoon bicarbonate of soda and 1/3 teaspoon cream of tartar
If your recipe calls for 1 1/2 teaspoons of baking powder: 1 teaspoon bicarbonate of soda and 1/2 teaspoon cream of tartar
You can double up on those if your recipe needs more….

So… why do some recipes need both baking powder and bicarbonate of soda?
This is because they include a very acidic ingredient (or more than one), such as lemon juice or buttermilk, which is needed for taste or consistency.
If a recipe has a lot of acidic ingredients, baking powder alone would not counter the acidity level and the bake would not be very pleasant to eat, so the additional bicarbonate of soda is added for that purpose. Of course, this means that the chemical reactions are magnified and give more rise to the recipe, so although a recipe may have both raising agents they are probably not in much higher quantities than in a typical bake. Recipes with both in will have been tested and worked out at the recipe development stage so that there is a balance between ingredient acidity levels, the amount of rise required and the amount of leaveners used.

Conclusions – what does this all mean to the home baker?
If you follow anything exactly in a recipe, make sure you stick to the exact amount of baking powder (or bicarbonate) that the recipe states. The recipe developer has worked it all out and tested the bake to ensure it's correct. Even a little deviation could leave you with an alkaline or acid-tasting bake or one that hasn't risen sufficiently or, indeed, that's risen too fast and then collapsed.

Keep some shop-bought baking powder in your cupboard – you don't always need to make it yourself. Do check the label next time you buy to make sure that anything other than an acid and bicarbonate on the ingredient list is only cornstarch or something you yourself believe to be safe. If in doubt go for a reliable, ethical brand like Dove Farm.

Keep a pot of both bicarbonate of soda and cream of tartar in your kitchen as well. You can then make your own baking powder for a change, to ensure it's as fresh as possible (to get the best leavening result), or at least now you know how to make it if you run out.

Made a bake and you can taste the soda? Next time you make it, reduce the bicarbonate of soda by 1/2 a teaspoon or add in 1/2 teaspoon of lemon juice (or yoghurt or vinegar etc., dependent on whether it is a savoury or sweet bake). If the recipe only has baking powder listed, just add the extra acid or a 1/4 teaspoon of cream of tartar.

Make sure you keep your tubs of baking powder, bicarbonate and cream of tartar well sealed and away from moisture.

If you're using chemical leaveners/raising agents, get your bake in the oven as soon as it is mixed. While you are mixing, the chemical processes are already starting. In order to get the bake in as soon as it is ready, you should ensure that your oven is up to the temperature you require before you start to mix.

Making your own self-raising flour
Self-raising flour isn't made any differently than plain flour of the same grade: it's just got the leavening agents already added in. Of course you can get 'supreme sponge flour' which is ready sieved – this just means it's been fluffed up through a sieve to ensure there are no clumps. If you buy a finer milled plain flour it's just the same thing as this 'supreme sponge flour', just without the raising agents added. Self-raising flour is NOT produced differently to plain apart from the extra sieving for the 'supreme' flours, but that's post-production and not part of the actual milling. It is only the addition of raising agents (and other extra ingredients as the manufacturers see fit) that makes the difference. Many well-known brands put additional ingredients into their flours other than the raising agents. These are not sinister or harmful but are there to increase shelf life, stop moisture retention, reduce clumping or are just added vitamins and minerals.
However, if you make your own self-raising flour you won't need all these – just the bare minimum of ingredients. None of these additives are harmful or unsuitable for vegans or those careful with ingredients for religious reasons. If you're not too fussed, then that's all fine, but personally, even though these ingredients are not harmful, I do not really want anything that's not needed. All I need in my self-raising flour is flour, sodium bicarbonate and tartaric acid. Some of the added ingredients are actually vitamins and minerals, which also seems good, but I wonder why we need them added to flour of all things. I don't really expect to get vitamin C from baked goods and I'd prefer it to come fresh from any fruit or veg (I can even ensure I add them into my bakes – that's a better way to add it!).

Other things you may find on the ingredients label on your flour packet include 'sodium hydrogen carbonate'. This is just another name for bicarbonate of soda, so of course you'd expect to see that listed. It is also not unusual to find calcium phosphate, monocalcium phosphate and disodium diphosphate in UK self-raising flour. Calcium phosphate and monocalcium phosphate are the same thing and may appear as E341. Disodium diphosphate is E450. All these phosphates are made commercially from vegan sources and are harmless.

Even though none of these ingredients is a worry, maybe you still fancy making your own self-raising flour? You'll know what you've put into it and it gets you used to making it rather than having to buy two separate types of flour.

Ingredients – self-raising flour
The ratio for self-raising flour is to use 20 parts of plain flour to 1 part baking powder.
Therefore, for each 100g of plain flour add 1 level teaspoon of baking powder (see above for the make-it-yourself baking powder recipe).
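If it helps to see those ratios written out as a quick calculation, here is a minimal Python sketch. The function names are my own invention for illustration; the 2:1 bicarbonate-to-cream-of-tartar split and the 1-level-teaspoon-per-100g figure are taken straight from the ratios above, and any rounding is just a working assumption.

```python
# Rough helpers for the two ratios above:
#  - home-made baking powder: 2 parts bicarbonate of soda to 1 part cream of tartar
#  - self-raising flour: roughly 1 level teaspoon of baking powder per 100 g plain flour

def homemade_baking_powder(baking_powder_tsp: float) -> dict:
    """Split a recipe's baking powder amount into bicarb and cream of tartar."""
    return {
        "bicarbonate_of_soda_tsp": round(baking_powder_tsp * 2 / 3, 2),
        "cream_of_tartar_tsp": round(baking_powder_tsp * 1 / 3, 2),
    }

def self_raising_flour(plain_flour_g: float) -> dict:
    """Work out the baking powder needed to turn plain flour into self-raising."""
    baking_powder_tsp = plain_flour_g / 100  # 1 level tsp per 100 g of plain flour
    return {"plain_flour_g": plain_flour_g, "baking_powder_tsp": round(baking_powder_tsp, 2)}

if __name__ == "__main__":
    print(homemade_baking_powder(1.5))  # {'bicarbonate_of_soda_tsp': 1.0, 'cream_of_tartar_tsp': 0.5}
    print(self_raising_flour(225))      # {'plain_flour_g': 225, 'baking_powder_tsp': 2.25}
```

In practice you would, of course, just measure with teaspoons, but the arithmetic is the same either way.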
Vol 6 | No 4 | January 1997
Curriculum as Mass
Curriculum as Journey
Strategy One: Provide Good Interfaces
Strategy Two: Elevate Prospecting Skills
Strategy Three: Provide Scaffolding

As schools rush to connect to the Information Highway, what are the best ways to employ the Internet in support of the curriculum? Imagine your school has several labs each allowing 30 students at a time to mine the electronic information resources. Even better, imagine that many of your classrooms offer a half dozen or more Internet connected computers which are also linked to a rich array of locally served resources such as encyclopedias and databases. What would you do with such access? What would you do with such rich information? How would you apply all that Information Power to the curriculum? That all depends on what kind of curriculum you must address . . .

Do you have a curriculum worth teaching? In some school districts you cannot tell which came first . . . the textbook or the curriculum. The social studies curriculum in such districts bears an uncanny resemblance to the table of contents of a particular textbook series. The same with math and science. A committee shops for a textbook series and then writes a curriculum to fit the winner. In many cases, the curriculum is a list of topics to be covered with little attention paid to concepts, generalizations or learning strategies. This kind of curriculum lends itself to a linear sequence of neatly packaged lessons presented by the teacher. The role of information in such a curriculum is very tightly defined . . . as something akin to processed food. Five hundred years of history are boiled down into a mere 500 pages of text. The textbook keeps only the most important facts. Even so, there is never enough time to get through it all. It's like rolling boulders up a steep hill. Trying to get the kids to remember all those facts at least until the test or the exam . . .

Because there is never enough time to finish the (typically thick) textbook or "cover" the curriculum, the Internet's robust and wildly rambling information is likely to be viewed as a jungle, a briar patch, or a distraction by teachers and administrators who value such a curriculum. This is an outmoded approach to schools and curriculum. A smokestack curriculum for a bygone era of factories and assembly lines. Not a curriculum for an Age of Information. In schools where the curriculum is a mass to be swallowed, where students are fed information meals all too similar to fast food - high in fat, low in nutrition - we should not be asking how to employ the Internet in support of the curriculum. We should first be asking what kind of curriculum is appropriate in 1997. What kind of curriculum will prepare students to take on the challenges of the next century and an Information Society? We should first change the curriculum to focus on learning. Schools should be much more about students making meaning rather than merely committing someone else's insights to memory. Learning ought to be much more like cooking than eating. But not microwave cooking!

In those schools where the curriculum is viewed more as an adventure, as an invitation to explore interesting questions and issues, the Internet and the other new information technologies will prove far more valuable and will be much more likely to receive a warm welcome from teachers and students alike. When curriculum is written as a journey, student discovery, invention and investigation are prized. Questions are paramount. Essential questions.
Major concepts. Theories. Why do things happen the way they do? We study mathematics or science or social studies in order to understand our world and how it works. For the sake of economy, we spend part of our time reading the collected wisdom of sages, absorbing that which may serve us well in this rapidly changing world. But we also devote a good deal of time struggling with issues arising out of our own times, trying to make sense out of our own era. The Baltimore County Schools in Maryland have created a curriculum which lists such important questions. Take a look at middle school social studies. Smokestack schools offered very few opportunities to work with primary sources or raw data. Information Age schools (see article on Post Modem schools) will provide a balance between primary and secondary sources, challenging students to develop their own insights while critiquing and reviewing the best thinking of the society's "elders."

Because the Internet is a bit of an information jungle, it really pays to provide good interfaces guiding teachers and students to quality information which is relevant to the curriculum and appropriate for the age of the student. At the same time, the ultimate goal is to develop lifelong learners who are capable of cutting a path even through information jungles, so we must take care not to structure all student use of the Net. More on that below . . .

There are several good ways to provide interfaces . . .

1. Develop curriculum pages on the school Web page which list and annotate good sources while providing suggested activities and directions for learning. See Cutting to the Chase: Leading Teachers and Students to the "Right Stuff" with WWW Curriculum Pages. Peter Vogel, a physics teacher in British Columbia, has created a remarkable site dedicated to information about and resources for the instruction of physics 11/12. Go to the Physics site. Some very talented library media specialists working in the Baltimore County Schools have developed an outstanding site identifying worthwhile Internet resources for all aspects of their Essential Curriculum. Go to the Baltimore site. Twin Groves Junior High School in Buffalo Grove, Illinois, provides a stellar Web site with great curriculum pages and resources. You will find additional examples of curriculum pages at the Bellingham School Web page (http://www.bham.wednet.edu). There are pages for each curriculum area as well as pages devoted to special holidays or topics. The production of these lists requires a considerable investment of time, much of which is spent prospecting (see below) for good sites.

2. Teach students and staff sufficient HTML skills so that they can routinely and frequently develop lesson pages which include good resources and activities. These pages may be shared locally on a WAN (Wide Area Network) or Intranet.

3. Provide links on the school Web page to one or more of the excellent lists created by educators like Kathy Schrock.

4. Point staff towards commercially developed curriculum sites from educational publishers or to governmental agencies and museums offering sites tailored to the needs of students. The Library of Congress provides excellent lesson plans – for example, a page full of resources to help teachers and students learn about using the kinds of primary source materials available online. Go to the Library of Congress Educator's Page. The best example of educational publishing I have found, Ligature Gateway, which was offering interdisciplinary units, is no longer at a working address.
There is a huge gap in provision of quality from the publishers, unfortunately. Scholastic does offer some promising examples of Internet connections to its K-8 programs at its Web site.

A good interface provides many or all of the following elements:

We need to acknowledge up front that the Internet was not designed with schools in mind. It is not an information compacting device like a textbook. Very few Web sites were developed with either the K-12 curriculum or the developmental needs of students at the forefront. The information is usually presented with little thought to how it might be used in a school by a teacher and a classroom of students. Rarely do we find a "teacher's guide." As long as the Internet presents itself as a highly disorganized frontier, schools must make a major investment in organizing "tours" to the best information sites. The inefficiencies of creating insight and making meaning may otherwise overwhelm the advantages.

While schooling in the 19th and 20th centuries was primarily about students mastering processed information - the core curriculum - it is likely that schooling and learning during the next century will be characterized by far more PROSPECTING - the purposeful, skilled, but somewhat haphazard search for insight and truth across a complicated information landscape. Why? Because information problem-solving skills will be paramount - the basic foundation for a robust career and life. Prospecting is quite different from the linear, sequential inquiry models which were most favored in previous centuries. The following were early attempts of mine to describe the kinds of skills necessary to make meaning from the kinds of information sources found on the Internet: Grazing the Net: Raising a Generation of Free Range Students, Culling the Net and Mucking About the Web.

As I have worked on research models for schools during the past three years, I continue to see the need for a well-planned progression from structured research experiences (highly guided) towards those calling for great independence. Many staff members and students may rebel against prospecting. Without a strong skill base, the prospecting experience will seem too much like wandering in the desert or the jungle. We can picture the leathery old gold prospector leading a team of mules out of the wasteland with nothing to show for two years of effort. This image captures the response of those who are thrown too early into the information resources of the Net without the skills to find much of anything to sustain life or meaning. All too often this new information landscape is either empty or cluttered and there are few clues to guide the searcher. In many cases, intuition and supposition must play major roles. Roget's Thesaurus draws a connection between the act of prospecting and a treasure-hunt. But it may prove to be a treasure hunt through a garbage pail or landfill! The information prospector must . . .

Scour, clean out, turn over, rake over
Pick over, turn out, turn inside out
Rake through, rifle through, go through
Search through, look into every nook and cranny
Look or search high and low
Search high heaven
Sift through, winnow, explore every inch
Go over with a fine-tooth comb
Pry into, peer into, peep into, peek into
Overhaul, frisk, go over, shake down
Search one's pockets, feel in one's pockets
Search for, feel for, grope for, hunt for
Drag for, fish for, dig for
Leave no stone unturned, explore every avenue,
Cast about, seek a clue, follow the trail
and PURSUE the TRUTH!
(SOURCE: Roget's Thesaurus of English words and phrases.)

No wonder some people say "No thank you!" and cling to their encyclopedias or textbooks. Effective prospecting is a blend of art and skill, not simply a matter of wandering around with a divining rod in your hands hoping to find the gold or water or oil below the surface.

Visit the Prospector's Primer, and you will learn that during the past 5,000 years prospecting for oil has progressed from . . . "a matter of guesswork and good luck . . . (to something) considerably less random."

"For example, structural geology involves gathering and interpreting information from above ground to deduce what lies underground. Geologists obtain this information by examining exposed rocks or, when difficult terrain limits access, by examining images from satellites and radar."

In a similar fashion, prospecting for insight demands skillful observation and deduction. The use of search engines, for example, is more or less powerful depending upon whether or not the searcher has some sense of the logical interplay of words and the search strategies supported by the particular engine.

The rush of schools to climb aboard the Internet does, at times, seem a bit like the California Gold Rush of 1849. We hear of the Mother Lode, the enormous potential of digitized information treasures, and we clamber to gain access. Unfortunately, only a small portion of the best information is lying out in the open where it can be found rapidly and easily. The rest requires much detective work.

The goal of prospecting is to improve the odds of success so that finding good information is a probability rather than an accident. We hope that several hours of prospecting will make it likely that we will "strike oil" or find the "Mother Lode." We turn to whatever mix of information sources we have available (whether they be paid sources such as Electric Library or the free Web) with the natural presumption that we will emerge with new knowledge and new understanding, that we will not surface empty-handed. While prospecting involves dozens of skills, several are especially worth listing and describing here:

What's out there? Before we start "drilling" for oil, actually opening up and reading articles, we would be wise to survey the offerings and get a general feeling for the landscape of a particular topic. When we turn to the Internet, we have three basic sources which support this kind of scanning.

As an example, in seeking good resources for a curriculum site devoted to The World's Great Explorers recently, I turned to AltaVista, one of the leading search engines. I used two words for a simple first search, "explorers" and "science." This strategy turned up 40,000 "hits" - what AltaVista calls "matching documents." That is quite a few! If I had started opening these "hits" one at a time, I would have wasted a great deal of time. If we reckoned the "drilling" time and expense spent uselessly each time we open a Web site which is irrelevant to our task or quest, we would soon feel informationally "bankrupt."

In this case about half of the first 50 "hits" were related to school science programs such as Science Explorers - an excellent group of programs unrelated to my search. "Science Explorers is a series of day-long, hands-on science workshops for teams of teachers and students from rural and urban areas" in several states and cities such as Chicago (http://www.chias.org/www/edu/cse/csehome.html), each with its own special flavor and development.
Many of the other sites which emerged in the first 50 were adventure travel and cruise offers of one kind or another. I found only 2-3 sites which were "on target." Some of the sites were genuinely devoted to scientific explorers, such as the Chelsea House Publishers' site which offers books on the Great Explorers. But when I tried this page later, I was greeted with "NOT HERE." I did manage to locate the home page and some useful lists at this site but no real content.

Scanning entails looking over the first 50 "hits" to see the patterns - staying on the surface without "drilling." Scanning delivers two opportunities. Once your simple inquiry turns up so many irrelevant sites, you can select the Advanced Query option from the top of the AltaVista page and eliminate various words which are indicators of irrelevant items, as I did with the following:

Explorers NEAR (science OR scientific) AND NOT ("Science Explorers" OR child OR children OR museum OR club OR cruise OR school OR young OR New OR NASA OR summer OR enrichment OR project)

This strategy reduced the mountain of "hits" to a mere 31 - a much smaller pile to explore. I still found few valuable sites, but one or two were "gold mines." I actually found the "Mother Lode" at one school which was building a Web site with biographies: Twin Groves Junior High School (http://www.twingroves.district96.k12.il.us/Heros/ScienceBios.html) in Buffalo Grove, Illinois, provided me with a whole page of excellent Web sites related to science explorers. I do not know if I would ever have found them if I had not switched to the "Advanced Search" mode and started eliminating "distracters." Unfortunately, many Internet users never seek out the more advanced features of these search engines and are condemned to info-glut because they do not know how to target and screen out the irrelevant documents. It is essential that schools teach both staff and students to employ the advanced searching features of these engines as a natural part of prospecting.

The Prospector's Primer from Chevron (http://www.chevron.com/chevron_root/explore/science/primer/index.html) mentioned above explains the importance of convergence:

The goal is to find a convergence of the geologic elements necessary to form an oil or gas field. These elements include (1) a source rock to generate hydrocarbons, (2) a porous reservoir rock to hold them and (3) a structural trap to prevent fluids and gas from leaking away. Traps tend to exist in predictable places - for example, along faults and folds caused by movement of the Earth's crust or near subsurface salt domes.

When it comes to information-seeking, the convergence is established by creating a logical intersection of search words and key concepts, the combination of which is most likely to identify relevant sites and articles. In the example given above for "science explorers," the first strategy to reduce the mountain of irrelevant information was to use the "AND NOT" search function (called a "logical operator") to eliminate records containing certain words likely to indicate irrelevancy. Achieving convergence requires thought regarding key word choice and placement (proximity). By combining just the right words in just the right order - which may take some trial and error - the information searcher can focus upon the confluence (like the meeting of two rivers) of the information streams. In Boolean logic, this intersection is represented by the overlapping region of two or more circles . . .
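The require-and-exclude logic at work in that Advanced Query can be sketched in a few lines of code. The sketch below is purely illustrative and is not how AltaVista actually worked: the hit titles, the term lists, the crude substring matching, and the is_relevant helper are all invented for demonstration, but the narrowing effect is the same one the AND NOT strategy produces.

# Illustrative sketch of the convergence strategy: require some terms,
# exclude the known "distracters," and the pile of hits shrinks.
# Hit titles and term lists are invented; the substring matching is a
# deliberate simplification, not a model of any real search engine.

REQUIRED = {"explorers", "science"}
EXCLUDED = {"museum", "cruise", "school", "summer", "camp", "club"}

hits = [
    "Great Explorers of Science: Darwin, Curie and Einstein",
    "Science Explorers summer camp for young students",
    "Adventure cruise: explore the science of the sea",
    "Museum of Science school field trips",
]

def is_relevant(title: str) -> bool:
    """Keep a hit only if it contains every required term and none of
    the excluded terms (the AND ... AND NOT ... pattern)."""
    text = title.lower()
    return all(term in text for term in REQUIRED) and not any(
        term in text for term in EXCLUDED
    )

relevant = [title for title in hits if is_relevant(title)]
print(relevant)  # only the first title survives the screening

Each excluded term removes a whole class of distracters at once, which is why a handful of well-chosen exclusions can shrink tens of thousands of hits to a few dozen.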
Careful word choice is a logical "closing in upon" the target, a centering, tapering focus upon the most important and pertinent information. Having scanned the landscape of the first few searches for "science explorers," I begin to brainstorm and test alternative key words using the Advanced Query function of AltaVista. By now I am realizing that the word "explorer" is redundant and unnecessary - scientists are explorers almost by definition - so I replace my original two words (science and explorers) with "scientist," along with the newly valued term "biography," which should have occurred to me in the first place but didn't until I found the middle school site. The fruits of serendipity!

The search for "biography AND scientist" results in 10,000 related documents, many of which seem to be describing individual contemporary figures. These individual contemporary figures are not relevant to my project, so I improve convergence by changing both of my search terms to plurals:

biographies AND scientists

I also reduce the chance of retrieving contemporary figures by using AND NOT to eliminate some present-tense verbs:

biographies AND scientists AND NOT (is OR are)

This convergent strategy reduces the 10,000 down to just 40 relevant documents, of which several are "gold mines." Knight-Ridder offers a "pay for service" biography site devoted to scientists. The MacTutor History of Mathematics site at the School of Mathematical and Computational Sciences, the University of St Andrews, in St Andrews, Scotland, provides an excellent set of biographies of mathematicians. Alexandra's Awesome Home Page provides links to a half dozen great sites.

A main character in William Gibson's recent novel, IDORU, has the job of "an intuitive fisher of patterns of information," actually trying to help a TV program expose the sins of celebrities by looking for trends in vast databases of seemingly innocent information like credit card charges, phone calls and household bills.

Laney was the equivalent of a dowser, a cybernetic water-witch. (pg. 25)
He'd spent his time skimming vast floes of undifferentiated data, looking for "nodal points" he'd been trained to recognize . . . (pg. 25)
. . . info-faults that might be followed down to some other kind of truth, another mode of knowing, deep within gray shoals of information. (pg. 39)

We are after the same nodes as Gibson's cyber-witch . . . the junctions, meeting-points, intersections, and crossroads which enable us to "make up our minds," "put 2 and 2 together," and make sense from non-sense. Whether we think in terms of nodes or convergence, we are looking for the connections which allow us to strike oil or gold.

Scanning hundreds of "hits," we are intuitively seeking words and elements in the brief abstracts which serve as an intimation or tip-off of something to go by. We hope for a tell-tale sign, a hint, a straw in the wind, an OMEN, perhaps. In seeking scientific explorers, the word "biography" popped up as an important clue early in my first searching. As I browsed through the top levels and then looked more carefully at the sites which I bothered to open, a whole new search strategy suddenly came to mind . . . I had noticed that all the good sites that I had found seemed to offer a list of names. What if I picked three great scientists and built a search around their names instead of using any large conceptual words?

"Albert Einstein" AND "Charles Darwin" AND "Marie Curie" AND NOT price

This strategy produced just 20 "hits" but led me to Macro Press in Fountain Valley, CA 92708, a publisher with a series of science books for elementary students. This site offers online biographies for dozens of scientists covered in the books. This name strategy emerged from scanning the trends and patterns, seeking the characteristics of the most valuable sites and then converting those clues into words.

Smokestack research was mainly information gathering - descriptive research. Because explanatory research - projects which require synthesis and the development of new insights - is considerably more complicated and demanding than smokestack research, we need to develop Research Infrastructure in each district - clear statements about the role of research in the curriculum as well as models which outline the key elements, stages and expectations associated with such research - the scaffolding, if you will. If we provide a model for the phases of a research project, such as Mike Eisenberg's Big Six or the Research Cycle I first outlined in some detail in a series of articles for Technology Connection, both staff and students will welcome the structure. In Bellingham, we have incorporated the Research Cycle into staff development programs such as Launching Student Investigations and Information Literacy and The Net so that teachers will possess the frame or skeleton upon which to build class research activities.

The Research Cycle: sorting & sifting
<urn:uuid:ef095a06-1481-4f32-ac63-251ef424eda3>
CC-MAIN-2022-33
http://fno.org/jan97/curriculum.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00097.warc.gz
en
0.939321
4,778
2.78125
3
A 10-page introduction to what Muslims believe, the background to the origins of Islam, its theology and early history, some frequently asked questions, and some recommended reading. Written 31 January 2011, updated 14 December 2012.

For many years my wife and I have felt the lack of a simple, short introduction to Islam for non-Muslims who are curious about what Muslims believe. There are many large tomes, and also leaflets written by Muslim missionary organisations, but we could not find anything quite suitable for the people who were asking us. Eventually I decided to write something myself. As readers will see from the About me page, I have no formal religious training, but for many years have been interested in learning more about Islam and its history.

This page is intentionally brief, to concentrate on the key points. It attempts to explain what most Muslims believe. For brevity and simplicity, I have not given any references, but there is a detailed reading list for people who want to learn more or check the accuracy of what I have written. There is a downloadable PDF version: "Islam in 10 Pages: A Brief Introduction for Non-Muslims."

When Muslims say or write the name of a prophet, we follow with words like "Peace be upon him," often abbreviated to "pbuh." To avoid cluttering this document, I have not done that, but the invocation should be assumed on each occasion when a prophet's name appears.

The Muslim religious calendar is dated from the Prophet's migration from Mecca to Medina in 622 AD, using the designation "AH" (for Anno Hegirae). For simplicity, I have given all dates using the standard Gregorian calendar used in the UK.

I am writing for English-speaking readers. Accordingly, where a name occurs in the Bible, I have used the standard English spelling. In English, "god" and "goddess" are used to refer to any divine being of the appropriate gender, whether believed to exist or not. Each of Jehovah, Baal and Jupiter is accurately called a god, although I believe that Baal and Jupiter do not exist and never have existed. However, in English "God" is specifically the personal name of the god who spoke with Moses, who caused the Virgin Mary to have a child and who communicated the Quran to Muhammad, and is not the name of any other god.

In Hebrew, Aramaic, Greek and Arabic, many different names are used for God, but all refer to the specific god mentioned in the previous paragraph. The most common form of His name in Arabic is Allah, so if you find Genesis translated into Arabic, you will read that "In the beginning Allah created the heaven and the earth." Many people who know little of Islam often incorrectly assume that the god called Allah in the Quran is a different god from God. Accordingly, for clarity I have used the single name God throughout this document.

"Islam" means submission to the will of God, and anyone who lives his life according to God's will is a "Muslim." Accordingly, Muslims refer to Abraham (for example) as a Muslim, a usage which often confuses and sometimes upsets Jews and Christians. Muslims themselves add to the confusion by failing to recognise that the usage of "Muslim" needs to change after Jesus. Although the word "Muslim" can reasonably be applied to anyone who lived before the revelation of the Quran and who obeyed and worshipped God, this usage risks causing confusion if it fails to distinguish between Christians and Jews in, say, 300 AD, since their practices differed.
For periods after the revelation of the Quran, "Muslim" is only applicable to those who believe in the Quranic revelation, not to Jews or Christians, even though they also worship God.

Muslims believe that in the beginning, God created the Heaven and the Earth. He also created the angels, animals and Adam and Eve. God commanded all of the angels to bow down to Adam, but one, called Iblis in Arabic, Satan in Hebrew, refused and was cast out. Adam and Eve themselves disobeyed God and were excluded from the Garden of Eden to live their lives elsewhere on Earth. Their descendants eventually became wicked, so God caused a global flood which wiped out all humans except the family of Noah, whom He commanded to build the ark.

Many years later, in Ur in Mesopotamia, most of the people were idol worshippers, but Abraham believed only in God. God told him to leave his home city and travel with his family to Canaan, which He promised to Abraham's descendants. Abraham's first son Ishmael was born of Hagar, while his second son Isaac was born of Sarah. Abraham travelled far, and with Ishmael constructed a house of worship to God in Mecca at the site of the Kaaba today. The Kaaba is the approximately cube-shaped building in the centre of the grand mosque in Mecca which all Muslims face when they pray. God also tested Abraham by asking him to sacrifice Ishmael, but intervened before he could carry out the sacrifice. Muslims remember that event every year at the end of the pilgrimage to Mecca, in the ceremony of Eid al Adha, when animals are sacrificed and most of the meat given to the poor.

Isaac's son Jacob himself had twelve sons. Most of the other eleven were jealous of Joseph, who was Jacob's favourite son, and sold him into slavery in Egypt. There Joseph interpreted some of Pharaoh's dreams, which enabled him to advise Pharaoh to prepare Egypt for a coming famine. Later Jacob and his other sons also sought sanctuary in Egypt from the famine and were reunited with Joseph.

A later Pharaoh knew nothing of Joseph, and enslaved Jacob's descendants, the Hebrews. Eventually God revealed Himself to Moses, and sent him with his brother Aaron to instruct Pharaoh to let God's people go. After the plagues in Egypt, they eventually left, with God parting the Red Sea for Moses and revealing the law to him at Sinai. After many years in the wilderness the Hebrews, also known as Israelites after Jacob's other name, Israel, entered Canaan, which was promised to them. In Canaan, the Israelites had periods of obeying God and periods of disobedience, with corresponding changes in their fortunes.

In addition to those mentioned above, the Quran specifically mentions the Biblical prophets Enoch, Lot, King David, King Solomon, Job, Ezekiel, Jonah, Elijah, and Elisha. The Quran tells us that much later Zechariah was the father of John (the Baptist). The Quran also mentions other prophets before Muhammad, naming Shuayb, Salih and Hud, who are not mentioned in the Bible.

The Quran explains how God caused Mary to conceive despite being a virgin, and how devout her son Jesus was. Mary is the only woman mentioned by name in the Quran, and Jesus is mentioned 29 times. The Quran states categorically that Jesus worshipped God, and that he never asked other people to worship himself. It also states that Jesus was not crucified, although God caused those trying to crucify Jesus to think that they had succeeded.

At that time, the region was dominated by three empires.
The Byzantine Empire was the continuation of the Eastern Roman Empire, controlling, amongst other places, what are now Egypt, Sinai, Palestine, Lebanon, most of Jordan, western Syria, northwest Iraq and Turkey. The religion was Byzantine Christianity, which later developed into the Greek, Armenian and Russian Orthodox churches.

The Sasanian Empire was centred on Iran, but extended west to include Iraq northeast of the Euphrates. The religion was Zoroastrianism, a religion dating back to around 600 BC which worshipped a god called Ahura Mazda.

In Ethiopia was an empire with its capital at Aksum. It also controlled the Yemen at times. The religion was Monophysite Christianity, which believes that Jesus had only one nature, a divine one.

Between these empires were lesser states. To the east of the Byzantine Empire, in the region south of Damascus and southwest of the Euphrates as far as northern Arabia, was a region ruled by the Arab Ghassanid tribe, whose religion was also Monophysite Christianity. To their east was a satellite state of the Sasanian Empire, the Arab Lakhmid princedom, which practiced Nestorian Christianity. Nestorian Christianity was also found along the eastern edge of Arabia, covering what is today Bahrain, Qatar, the UAE and Oman. Yemen was an affluent region, due to being on the trade route to India and its rich agricultural production, and at different times it had both Christian and Jewish rulers. The central region of Arabia was inhospitable desert.

That leaves the western edge of Arabia, south of the Byzantine Empire and north of Yemen, which is known as the Hejaz. It was a much poorer region than Yemen, marked by oases and by being on the caravan routes from the Yemen to the north. The population was a mixture of pagan tribes, Jewish tribes and Christians who were primarily Nazoreans (sometimes called Jewish Christians) who obeyed the laws given by Moses. Mecca was both an important caravan staging post and a regional centre of pilgrimage for the pagan Arabs. The pagans were not unaware of Allah, but regarded Him as one god amongst many in their pantheon, and the Kaaba held hundreds of idols to the many gods worshipped by the pagans. The most important tribe in Mecca was the Quraysh, and they were responsible for the upkeep of the Kaaba, which generated significant income from the gifts of the pilgrims. This was the complex, religiously mixed environment that would be totally transformed by the advent of Islam.

Muhammad was born into the Quraysh tribe in 570 AD. His father Abdullah died before his birth, and his mother Aminah died when he was six. Lacking a father, Muhammad was cared for by his grandfather, Abdul Muttalib, but he too died when Muhammad was eight, after which Muhammad was looked after by his uncle Abu Talib. From an early age, Muhammad travelled with the trading caravans. He was famous for his honesty, acquiring the nickname "al Amin" (the honest one).

During repairs to the Kaaba, a dispute arose regarding which clan would have the honour of re-bedding the black stone which is at one corner of the Kaaba. As a sign of the high regard in which he was held, Muhammad was asked to arbitrate. He advised bearing the stone on a cloth, each of whose corners would be held by a representative of one of the clans.

When Muhammad was 25, his employer, a wealthy 40-year-old widow named Khadijah, proposed marriage to him; the marriage lasted 25 years until Khadijah died, and during her lifetime Muhammad took no other wives.
In 610 AD, when Muhammad was aged 40, he was meditating in a cave when the archangel Gabriel appeared and spoke to him as follows: "READ in the name of thy Sustainer, who has created, created man out of a germ-cell! Read - for thy Sustainer is the Most Bountiful One who has taught [man] the use of the pen, taught man what he did not know!" [Muhammad Asad translation] These were the first five verses of the Quran to be revealed, and are now the beginning of Surah (chapter) 96.

Khadijah was the first person to believe that Muhammad had received a revelation from God. As the revelations continued, others also came to believe. This small believing community was persecuted by other Meccans because the message Muhammad was preaching was at complete variance with pagan Meccan practices such as idol worship. Muhammad sent some of the early Muslims to sanctuary in Ethiopia, but in 622 AD he himself fled Mecca for Medina (then called Yathrib), which is 339 km north of Mecca. In Medina was a small community of Muslims along with several Jewish tribes as well as pagan Arabs; Medina's internal dissensions led its people to ask Muhammad to come and be their leader. Muhammad drew up a written constitution for Medina whose text is still available today. There were a number of armed conflicts, as well as truces, with the pagan Meccans, but gradually more people converted to Islam, and in 630 AD Mecca surrendered peacefully to Muhammad and the Kaaba was cleansed of idolatry. Muhammad died in 632 AD and was buried in Medina.

He had not named a successor, and a split arose regarding who should lead the Muslim community. One faction believed that the succession should be dynastic, and therefore wanted Ali, who as his cousin was Muhammad's closest male relative as well as being his son-in-law. The other faction wanted to choose the best person from within the community regardless of familial connection with Muhammad; this faction was larger, so Muhammad's closest friend Abu Bakr became the first caliph (Arabic for "successor" or "representative"). The Arabic word for "party" or "faction" is "shia," and the "Shiatul Ali" (usually abbreviated to "Shia") was the faction wanting Ali as the first caliph. This dispute is the origin of the Shia / Sunni divide in Islam. Sunnis were the faction that supported Abu Bakr's election. However, the word "Sunni" does not itself reference that dispute; the word simply means one who follows the Sunnah (traditions) of the Prophet, which is something that both Shia and Sunni Muslims do.

The theology of Islam is very simple. God has always existed, always will exist, has perfect foreknowledge, and is solely responsible for creating and sustaining the universe. There is no other god. God gives each of us life, and has laid down the rules we should follow for living a good life. These include rules for how we should treat other people, such as the requirement to be honest and kind, and rules for how we should worship God. After our deaths, He will judge us, and if we have been sufficiently good by His standards, we shall enter Paradise; otherwise we face punishment in Hell.

The five essential pillars of Islam are the declaration of faith (shahadah) that there is no god but God and that Muhammad is His messenger; ritual prayer (salat) five times each day; almsgiving (Zakah); fasting (sawm) during the month of Ramadan; and the pilgrimage to Mecca (Hajj) for those who are able to make it.

Islam regards itself as the continuation and perfection of the religions that came before it, Judaism and Christianity. The Quran explicitly states that Muslims believe in the earlier revelations, but also that those revelations have become corrupted over time. In the writer's view, the most obvious example of such corruption is ascribing divinity to Jesus.
The Old Testament, especially within the first five books, contains a number of rules for how Israelites should live. However, these rules needed significant extension by rabbinic analysis to provide a code of Jewish law, either for a Jewish kingdom (for example in the Holy Land until the kingdom was destroyed by the Romans, or in Yemen or Khazaria), or to govern the lives of Jews in states ruled by followers of other religions.

The Quran is much shorter than the Old Testament, and contains very few rules of law. Accordingly, Islamic law was developed by the early religious scholars, the most prominent of whom gave their names to the four main schools of Islamic law (Shariah) amongst Sunnis: Abu Hanifa (699 AD – 767 AD), Malik ibn Anas (711 AD – 795 AD), Shafi (760 AD – 822 AD) and Ahmed ibn Hanbal (781 AD – 856 AD). The main school of Islamic law amongst Shia Muslims is named after Jafar ibn Muhammad al-Sadiq (702 AD – 765 AD). Unlike Roman Church law, Shariah has never been codified. Accordingly, while the schools of law agree about the fundamentals of Islam, they differ on many issues. For example, they diverge on whether a man needs to ask his first wife's permission before he takes a second wife.

Around 770 AD opposition developed against the schools of Islamic law, from "the people of the tradition" (ahl al-hadith in Arabic), who rejected the logical analytical methods of "the people of opinion" (ahl ar-ray). Instead they wanted Islamic law to be based on following the traditions, namely the sayings and actions of Muhammad. This made the question of deciding which traditions were authentic much more significant, and led to their formal collection in books; until then they had circulated primarily in oral form. The first and most respected hadith collection was compiled by Bukhari (died 870 AD) in 97 books, which now form nine volumes in English translation. Five other collections regarded as canonical by Sunnis followed, from Muslim (died 875 AD), Abu Dawud (died 889 AD), Ibn Maja (died 886 AD), Tirmidhi (died 892 AD) and an-Nasai (died 915 AD). The founders of the schools of law mentioned above had also made hadith collections, which are also valued.

While the hadith compilers devoted enormous effort to excluding inauthentic hadith, Muslims recognise that some hadith included in the collections are more reliable than others. The assessment of hadith reliability is an important part of the formal education of Islamic scholars today. With the formal collection of hadith, a synthesis gradually developed whereby the schools of Islamic law treated hadith as the main source of jurisprudence after the Quran.

The Arabs fought against and conquered the empires around them to gain agricultural land, gold, silver and other resources. They established systems of government over the conquered territories, but the inhabitants were free to retain their religions. (The exception is Arabia itself, from which Jews and Christians were expelled.) Over time many of the conquered peoples converted, but the survival of large Jewish and Christian minorities into the 20th century is evidence against forced conversion. Occasionally local tyrannical rulers did engage in forced conversion, for example the Almohad dynasty which took control of Andalusia in Spain in 1147 AD. However, such religious oppression was rare. The second caliph, Umar, who conquered Jerusalem, set down rules for the treatment of Christians and Jews.
Unlike Muslims, they were exempt from compulsory military service, and also did not have to pay Zakah. Conversely, they were liable to pay a poll tax called jizyah in Arabic, had to wear distinguishing clothes, and faced some restrictions on building. Christians and Jews were called dhimmi (protected people), a category that was gradually extended to include all non-Muslim minorities. From a 21st-century perspective, some would consider dhimmitude to be a form of second-class status. However, in almost all cases the treatment of religious minorities by Muslims was far superior to the way that Christians at that time treated religious minorities.

Muhammad was both the leader of Islam and the head of government in Medina. After he died, both roles were taken over by Abu Bakr as caliph, although of course all prophecy ended with Muhammad. All legal questions about what Muslims could or could not do were questions about Islamic law. However, with the passage of time the role of Caliph became hereditary, and history records several major dynasties. As the territory controlled by Muslims increased, there came to be regions which were governed independently of the historic centre in Medina. It could not be said of these regional leaders that they were all heads of the religion of Islam. Later still, during the Tanzimat reforms in the nineteenth century, the Ottoman Sultans found it necessary to create laws by fiat which did not derive their authority from the Quran or Prophetic traditions. Today most Muslim majority countries have legal systems where legislation is made by secular legislative bodies, sometimes with exceptions for Muslim personal law as in Malaysia. Accordingly, it is an exaggeration to say that Islam recognises no separation between church and state.

Many Muslim scholars believe that conversion out of Islam should be punished by death. However, there are many other Muslim scholars who consider that Muslims face no earthly penalty for abandoning Islam.

Jihad is an Arabic word which means struggle. Muhammad taught that the most important form of jihad is the struggle a Muslim undertakes with himself to live his life as well as possible in accordance with God's laws. The second and lesser meaning of jihad is military action to defend the Muslim community against attack. This was necessary as the Muslims of Medina were regularly attacked by the pagans of Mecca. Later Islamic jurisprudence lays down a number of rules regarding jihad, including the requirement that jihad can only be declared by the head of the Muslim religious community, the caliph. As there is no caliph now, all current purported declarations of jihad are not competent.

In four places, the Quran prohibits something called riba in Arabic. There are also a number of hadith which give guidance on what constitutes riba. From these sources, many Islamic scholars conclude that interest in all circumstances constitutes riba and is prohibited. Other scholars consider that interest is only prohibited as being riba when it is excessive and there is unequal bargaining power between the parties.

The Quran permits Muslim men to marry Jewish and Christian women. However, there is no equivalent permission for Muslim women to marry non-Muslim men.

Halal means pure, lawful and permissible for Muslims. The most common usage of the word today is in connection with food, particularly food from animals. Some animals, for example pigs, are inherently forbidden for eating, except when it is a life-or-death emergency.
Other animals, for example cows, can be eaten but must be slaughtered in the prescribed manner. This entails the throat being cut by a Muslim slaughterman who says a specific invocation, with the blood then being drained from the animal. Most Muslims believe that the animal should not be stunned before being killed.

Eid is a religious festival. The two main Eid festivals celebrated by all Muslims are Eid al Fitr, which marks the end of the fasting month of Ramadan, and Eid al Adha, which falls at the end of the pilgrimage to Mecca. Many Muslims also celebrate Muhammad's birthday, which is called Eid milad un nabi, while many others do not.

Ashura is the tenth day of the month of Muharram, which is the first month of the Muslim calendar. On that date in 61 AH (680 AD), Hussein, the younger grandson of Muhammad, was killed at Karbala in present-day Iraq, along with his son and supporters, by the army of the caliph Yazid. His death is regarded as a tragedy by all Muslims. Shias in particular mark the day by lamenting his martyrdom and the failure of their ancestors to assist him sufficiently, leading some to flagellate themselves.

"Understanding Faith" computer-based learning from the Coexist Foundation. The Coexist Foundation offers low-cost online audio-visual courses in Islam, Judaism and Christianity. I have worked through part of the Islam course and found it quite watchable. I then completed all the assessment questions, and from them got a good feel for the comprehensiveness of the course. I recommend it to non-Muslims who want to do a structured course from home as opposed to reading the recommended books.

For each of the books listed below there is a link to Amazon.co.uk at the bottom of the page.

"The Qur'an" (Oxford World's Classics), translated by Muhammad Abdel Haleem. I recommend this as the first Quran translation for English speakers to read, because of the clarity of its modern translation. Also it has very few footnotes to break up your reading of the text itself.

"Muhammad: His Life Based on the Earliest Sources" by Martin Lings. The late Martin Lings was an English convert, and this very readable biography is highly regarded.

"The Message of the Qur'an," translated and explained by Muhammad Asad. This translation by a Polish Jewish convert to Islam is also quite modern, and benefits from the translator's extensive footnotes. During his varied life, the translator became Pakistan's ambassador to the United Nations.

"A Textbook of Hadith Studies: Authenticity, Compilation, Classification and Criticism of Hadith" by Mohammad Hashim Kamali. This book provides an excellent introduction to the way that Muslims have collected and evaluated hadith.

"Principles of Islamic Jurisprudence" by Mohammad Hashim Kamali. The author provides a detailed introduction to the way that Muslims developed Islamic law, and along the way tackles a number of important questions, such as the extent to which parts of the Quran may have been abrogated (superseded) by later Quranic revelations or hadith.

"To Be a European Muslim" by Tariq Ramadan. The author is one of the leading Islamic scholars in Britain and analyses what Quranic and hadith sources say about how Muslims in Europe should live.

"Muslim Civilization: The Causes of Decline and the Need for Reform" by Muhammad Umer Chapra. This short book provides a very readable and insightful view on why Muslim civilisation declined after being far ahead of Europe. It is reviewed elsewhere on my website.

"Islam – Past, Present & Future" by Hans Kung. This 700-page book is by one of the world's leading Roman Catholic theologians and completes a 25-year trilogy alongside Kung's books on Judaism and Christianity. Both Muslims and non-Muslims can learn from Kung's respectful Christian perspective.
<urn:uuid:601ca14d-0162-4d4c-a9f9-7f72ea0ce060>
CC-MAIN-2022-33
https://www.mohammedamin.com/Community_issues/Islam-brief-introduction.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00696.warc.gz
en
0.977174
5,242
2.671875
3
There is no doubt that much digital divide work, including connectivity initiatives, technology transfer programs, and other projects, is done with good intention. Yet, as has been widely recognized, the conceptual framework of the digital divide is limiting. The language of the digital divide not only places people into simplistic have/have not categories, making assumptions about the solution to information poverty with little attention to local contexts; its logic also continues a paradigm of development that engages with the global south only at the point of what it lacks.

I propose a framework which provides a wider and more nuanced lens to look through. It focuses work in ways and in areas consistently overlooked by the digital divide, particularly on the realities, voices, and complexities within its unconnected, have not spaces - the zones of silence. Encouraging critical questioning of assumptions and an understanding of local contexts and points of view, a zones of silence framework is a way to broaden the dialogue on global communication and information access beyond a discourse of need, to one of mutual questioning, sharing, and learning. I begin with a brief critique of the digital divide, followed by a definition of this zones of silence framework and how it can help us to see and consider issues differently. I then suggest three areas where work from this perspective might begin.

The limitations of the digital divide
Listening in the zones of silence: A tool to move beyond the digital divide
Ways to begin: Working from a zones of silence framework

As the designer of a Web site for a project connecting Canada, Brazil, and Angola, I became concerned with how, and if, it would be useful to all three project teams. The project's goal was to develop and share knowledge about building food security (people's ability to access affordable and acceptable food) through online courses, workshops, and local pilot projects. Communication by the Internet was key to the project's design, but besides our language differences, I realized that I knew little about the context in which the Web site and its resources would be used outside of Canada. According to statistics, Internet and computer access differed significantly among the three countries, but what did this mean? What sort of information would be most relevant to each partner? How useful would resources written only in English be? Where and how would project team members access the Internet? Did their access to and use of computers differ from my own?

Perhaps one of the most exciting possibilities of the Internet is the potential it has to connect people who have ideas, stories, and advice to share with each other. Currently, technology funds for development projects are aimed at enabling this. Making access possible to computers and the Internet is seen as a means of overcoming the digital divide - or, put another way, as a means of alleviating information poverty by helping those in countries, communities, or households where access to new information and communication technologies (ICTs) is not easy to obtain the same type of information resources that information "haves" enjoy. With access to new resources and experts, it is argued, people will be able to solve many of the issues they face at a local level. Over the past decade this issue, the digital divide, has been the subject of much attention from development agencies, researchers, NGOs, governments, and the private sector.
Given this attention, I expected to find a good deal of work on what it is like to live and work on the other side of the information highway - the places where access to computers and the Internet is tricky or presently nonexistent, and where development agencies, corporations, researchers, and others believe such access would improve lives. Listening to stories from or of these places, I felt, would help me to begin to learn how to work with my project partners in Brazil and Angola by showing me what questions it might be important to start by asking. Yet, finding these stories was difficult.

Current research provides very few images of what it is, in fact, like to be a "have not," or to live and work on this other side of the divide. These nonconnected spaces are defined in most cases as the places, communities, or households in need of ICTs. The simplistic view of these regions as lacking, poor, and voiceless reflects the binary have/have not logic of the digital divide. This is not to say that people working on digital divide projects necessarily share this point of view. Many do not, but wider perspectives are difficult to articulate within the discourse. More troubling, the lack of attention to these spaces points to the ways that the digital divide is in some ways a continuation of the "West knows best" (modernization) paradigm in development.

The digital divide discourse does encompass ideas about the importance of local context, for example by promoting projects that provide communities with ICTs to access information to solve their own problems. Yet, digital divide work often assumes, one, that ICTs will be helpful, and two, that they will be used for educational, economic, and other worthwhile projects. As with the technology transfer programs of the past, the West's ideas about technology's usefulness and how it will be used are not necessarily accurate (for example, see Dagron, 2001; Gunkel, 2003; Prahalad, 2005).

Given these discrepancies, how might we begin to view information and communication contexts in a more nuanced way? How can we listen to, and speak with, the "in need" side of the digital divide? How might this alter the ways that ICT projects are designed? And, can we ask these questions from within a digital divide framework? While there are examples of studies that engage with complex questions from within the digital divide framework, the results are often not visible: they become subsumed into the paradigm's narrow lens in such a way that it remains possible to read and talk about these projects simplistically. I believe that a new way of looking is necessary.

In this paper, I briefly explore the ways that the logic of the digital divide is limiting. I then show how a zones of silence framework can function, not as a replacement of the digital divide, but as a way of expanding how we work on communication initiatives beyond the areas that the digital divide covers, by supporting a focus on unique contexts and relative definitions of information and poverty.

The limitations of the digital divide

The digital divide is a common point of concern for researchers, governments, development agencies, and the private sector.
The term "digital divide" itself is mobile and, over the past decade, has been used to define everything from the difference between early and late technology adopters, to the difference in ICT access held by citizens of developed and developing nations. In fact, some argue that the term will likely continue to be flexible and to hold multiple meanings - as it should - because of the quick rate of change the issues it covers are undergoing as new technologies emerge (Gunkel, 2003). Nevertheless, at present, the digital divide broadly refers to the gap between those who can effectively use new information and communication tools, such as the Internet, and those who cannot.

Though research and discussion around the digital divide has helped to bring about a critique of cyberutopianism - the belief that ICTs will, with ease, solve many of society's problems - the term is conceptually oversimplified and theoretically underdeveloped. The concept, in fact, is popular in part for its simplicity, allowing policy makers and others to follow a clear logic: there is a gap between those with access to technology and those without this access; if this is the problem, the solution is to provide those lacking technology with this technology. Accordingly, work on the digital divide has tended to slot people into a dichotomous model of technology haves and have nots (Selwyn, 2004). Instead, some argue, it is more helpful to view the digital divide as a continuum (Gunkel, 2003). In other words, it is not simply a matter of looking at whether people have access or not, but also of looking at the hierarchy of access among those who do.

Furthermore, though many programs focus simply on providing technology, access to ICTs does not necessarily equal use of ICTs, or instant information. There are a variety of social, age-related, psychological, educational, economic, and, most importantly, pragmatic reasons why people do not use ICTs (Selwyn, 2004). That is, just because people have the opportunity to use the Internet (or other technologies) does not mean that they will, and if they do, it will not necessarily be in the ways that the organizations providing access might anticipate.

Furthermore, critics of digital divide work argue that, in the larger scheme of things, it is not particularly helpful to talk about the digital divide. This is because it is a symptom, not the cause, of unequal socioeconomic opportunity (Gunkel, 2003). Essentially, since the digital divide reflects other deep (social, racial, geographical, educational, economic) divides in society, making it possible for everyone to access a computer and the Internet is not going to solve all problems, as sometimes seems to be, optimistically, assumed. Thus, they assert that efforts to bridge the digital divide will succeed only if they are accompanied by bold policy initiatives to reduce structural inequalities, for instance in education and jobs, that would otherwise result in disparities in skills in using computers and Internet systems.
Finally, the digital divide formulation that there are "information haves" and "information have nots," although useful for identifying extant technological and social inequalities, has potentially disquieting ethical consequences, especially when applied in a global context. In distinguishing information haves from information have nots, the technologically privileged situate their experiences with technology as normative, so that those without access to similar systems and capabilities become perceived as deficient and lacking. Thus, the digital divide is unmistakably framed from the technology haves' point of view: you don't have what we have. It is also a continuation of a development paradigm in which the global south only becomes a part of first world discussions at the point of what it lacks.

How can we begin to talk about, and talk with, the global south in other instances? How can we move to a dialogue about what sort of communication is happening, and how we might best learn from and assist each other? As stated earlier, I believe we will have difficulty beginning this dialogue from within a digital divide framework. Critics of the digital divide who have continued to use the term to frame their work have had limited success in shifting the paradigm, as seen by government, development and private sectors, to encompass more complexity. Though critics might make clear that their definition of the digital divide, and what they are investigating, is not one-dimensional, the term powerfully draws up a black-and-white way of seeing. How can we speak differently about the complex issues that the digital divide tends to gloss over and still be heard by those operating within the framework?

Listening in the zones of silence: A tool to move beyond the digital divide

I propose a term and a framework that does not replace the digital divide, but shifts attention to some of the complexities that the digital divide discourse omits. The term "zones of silence" provides a focus on three key ideas. Firstly, on voice, that is, what people on the wrong side of the digital divide have to say about their lives. Secondly, on communication, or on how and by whom people are heard, and why this is important. Thirdly, on context, or on the diversity of spaces that the digital divide encompasses, and their interconnections. I begin by defining zones of silence, then discuss in more detail how the framework can be used.

As a term, I use "zones of silence" to mean the unseen, seemingly quiet, technology-sparse spaces of the digital divide. Mansell and Wehn (1998), writing about developing countries and the international governance system, use the phrase to mean the places, found in the developing world, where communities are effectively silent because of a lack of access to ICTs. They do not develop the term, but I feel it can be used to name a larger idea. In my definition, the zones of silence, while there may be relative levels of silence, are everywhere. They are what Castells terms the "switched off" regions in the global digital economy:

These patterns of inclusion and exclusion challenge our visions about the geography and political economy of communication. We can no longer adequately refer to First and Third worlds, North and South, and so on, but must recognize regions that are hardwired to networks and information flows and thus switched on, and the vast disconnected or switched off regions of the world.
Zones of silence exist within countries with little connectivity altogether, as well as within zones of high connectivity. They are the places, communities, and homes in the developed/developing worlds where, because of a lack of access to ICTs, people's voices are, effectively, outside of their immediate community, unconnected and unheard. What is key is that, while these voices may not be connected to the global communications grid, this does not mean that the zones of silence are silent! People talk, debate, write, dialogue, produce radio and television, argue, live. They are only silent to us.

Using the word silence in the term is a risk; it has the potential to reinforce the idea that there is nothing here. But it is a worthwhile risk, because the presence of silence implies that there are people who are part of the silence. A zones of silence framework moves our attention from a (digital) divide to a space, thus raising the questions: Are the zones of silence truly silent? If not, why do they seem silent? Does this silence matter? What do the silent have to say? Or, broadly, it shifts our focus from "What do people need?" to "What are people saying?" Therefore, the question from a zone of silence standpoint is not "What do they need to be equal to us?" but "Who are you?" and "What is it like to be where you are?" Individuals may be discussing their needs; or, they may be discussing many other things.

It of course remains important to aid people without access to ICTs to gain that access if they so desire. In order to make a significant, effective, and positive difference in the lives of information have nots, we must listen to their stories, opinions, experiences, and insights - both positive and negative, in support of ICTs and stakeholders' agendas, and against. There is much more that we may have to speak about and learn from each other than what we need. A zones of silence framework points to the necessity of listening and opens up a way of doing it.

A zones of silence view can also help us to more easily recognize the diversity within the spaces of the digital divide. Most simply, the notion of zones is a very different metaphor from divides. In a divide there are two types, a black and a white, a good and a bad. Zones are more flexible. Within a zone, or between zones, there are many possible points of view, many potential gradations and combinations. Zones, unlike divides, are also continuous. What happens in one zone, or one part of a zone, affects the rest.

Thus, thinking not about a digital divide but of a zone of silence, we can see the following. One, there are not simply two types of people - information haves and have nots. There are differences within a zone of silence, not just in terms of relative access to technology but, more importantly, in opinions, everyday life, experiences, and modes of communication. We cannot assume that every zone of silence is the same. Each has its own context, its own knowledge. Two, what happens in a zone of silence affects and is affected by the zones around it. We are not separated or divided; zones are not bounded from each other. Our actions, or lack of action, interplay with the actions of the rest of the world. Three, a zones of silence framework recognizes that, just as there are more categories than information haves/have nots, there is also more than one type of information. The digital divide discourse, when defining people as information poor, has a very specific type of information in mind. This information is important, but there are other types of knowledge.
The information poor may lack what the digital divide defines as information, but this does not mean that the knowledge they have is less useful or less valuable. In the following section I suggest ways that we might begin working from a zones of silence framework.

Ways to begin: Working from a zones of silence framework

What does working from a zones of silence standpoint look like? I suggest three key areas of inquiry for the framework, simply put as: What is happening? Where are we wrong? And, who benefits? Work is occurring in each of these areas already. A zones of silence framework can help to support this, and to encourage more questions.

What is happening in the zones of silence?

The first type of inquiry that a zones of silence view can support answers the question: what is happening in the zones of silence? In particular, we might ask: In what ways is communication happening in the zones of silence? Face-to-face, orally, in written form, through performance, through technology, by other means? What attitudes towards, and ideas about, communication technologies, including radio, film, video, television, telephone, computers, and the Internet, exist? Where are these technologies used, how, and to what extent? How do people speak about ICTs? How does this resonate with or differ from the development discourse around ICTs? We might also consider questions such as: What might a person use a computer (or other technology) for? How easy or difficult is it to get this technology, and keep it running? How do climate, power supply, the local economy, and communications needs affect this?

This work can draw on research that has treated digital divide issues as neither so simplistic, nor so straightforward, as is often assumed. Thus this might include studies like that of Clark, et al. (2004), who use ethnography to explore the attitudes of people of varying economic backgrounds towards the digital divide in the United States; like Salvador and Sherry's (2004) research on practical barriers to ICT connectivity in the Peruvian Andes and the assumptions of technology designers in the West; or the study by Barbatsis, et al. (2004) on the relevance and appeal of Internet sites to various social and ethnic groups in the United States.

This inquiry can also draw more generally on ethnographic accounts - detailed observations or personal accounts of everyday life - of areas within the zones of silence. To be useful this ethnographic work need not address technology. Many factors - including history, gender and power relations, climate, the local job market, and family practices and expectations - influence how communication happens, and are important to understanding how technology might be used. Such studies show that in the zones of silence there is an abundance of ways of speaking and communicating, a thriving use of radio (Dagron, 2001), significant film and television production for local markets (for example, Banerjee, 2002), engagement with a variety of media (Downing, 2003), and differences in gender in terms of access to communications and other technologies (for example, see Prahalad, 2005). These kinds of studies are distinguished from typical work on the digital divide by taking into account a wide range of people, not just ICT project leaders or ICT use statistics. They also recognize that communication is much larger than technology, asking questions that do not assume more ICTs are necessarily the answer.

Where are we wrong? Questioning assumptions
Questioning assumptions. Development communication researchers have adopted research techniques designed to answer the needs of Western societies, which do not always suit African cultures or societies that are in the main rural and nonliterate. This means that for most of the time communication scholars have either been asking the wrong questions altogether or asking the right questions to the wrong people. We must learn to examine the way we think technology is, can, and should be used. For policy makers, researchers, or designers who live day to day within zones of high connectivity, enjoying high-speed Internet access, their own computer, reliable electricity, and controlled indoor environments, it is sometimes difficult to imagine what it might be like to work with ICTs in other places, and how and why people might, in fact, like to use them. The danger of making assumptions about the relevance and usefulness of ICTs is apparent historically. For example, in the 1960s, working under beliefs similar to those now held by information poverty projects, UNESCO recommended minimum numbers of ICTs per capita as a hallmark of development (UNESCO, 1961). A number of governments implemented policies to increase ICT access. Many exceeded the minimum and yet failed to see corresponding improvements in social and economic conditions (Tehranian, 1990). Specifically, we might ask: What do we assume about technology use and the usefulness of technology? Do these assumptions correspond with reality? How have these assumptions shaped our questions and actions? Have they been misguided? Important issues to consider include potential differences in language, literacy, relevance of content, connection speed, access to ICTs, and cultural patterns of leisure and work. Useful directions for future research should include not just whether our assumptions about ICT use correspond with reality, but how these assumptions are formed and perpetuated. For instance, an interesting project would be to evaluate the plans and suggestions made by development agencies regarding ICTs in comparison with how people in the zones of silence, in reality, conceive of and make use of these technologies: In the recent explosion of funding programs for ICT-based development projects, what sorts of application criteria and project guidelines have emerged? What types of project designs are supported or rejected? Why? In these designs, what assumptions are made? How do agency workers conceptualize zones of silence and how ICTs might influence them? How are projects evaluated, and what counts as success? In the next section I suggest how we might investigate one of the largest assumptions of the digital divide. Who benefits from connectivity and how? A key assumption of digital divide discourse is that greater access to technology and, through this, information will improve lives. This may be true, but it is important to consider more critically who benefits from ICT connectivity and how. The interests of international corporations and global capital in high-speed, pervasive communications technologies are often overlooked in work on the digital divide. Yet this perspective is important. In parallel, labor is too often excluded from discussions about the Information Society. It is, however, one of its critical components.
Information and communication technologies (ICTs) are changing not only people's actual work environments, but also the way labor markets operate. Using the Internet and other ICTs, corporations can function as transnationals with increasing ease. ICTs enable them not only to communicate quickly with subsidiaries around the world to organize production and distribution, but also with consumers, to market and sell their products. In fact, it is argued that no country can hope to attract foreign investment without an adequate telecommunication infrastructure (Sonaike, 2004). Thus, as countries are able to acquire appropriate connections, they will become integrated more fully into the global economy. Is this desirable? Or, further, how does enabling connectivity in a region affect the labor market? According to people who have been affected in this way, has the experience, ultimately, been to their benefit? Is there any alternative to joining the network? What would it mean to continue to live within a zone of silence? Are there more strategic ways of becoming connected? Community connections, world connections. Besides enabling access to information, ICTs are often seen as important means of increasing communication opportunities. While they clearly have this potential, it is important to ask whether ICTs are the best form of communication in zones of silence. What means of communication are already in use? Are additional means of communication needed? These questions are significant because the relationship between ICTs and communication is not direct. ICTs are one element in a larger view of communications. In fact, despite the euphoria for ICTs, older technologies such as radio (Dagron, 2001), video (White, 2003), and theatre (Riley, 1990) seem to continue to be better community-level communication tools. Effective dialogue does not need to be high-tech. ICTs might initially be more useful in helping geographically dispersed zones of silence connect with each other, as well as connecting zones of silence with zones of connectivity in ways that are strategically and socially beneficial. Connections to family members abroad, between members of diaspora populations, and between activists and researchers around the world are important. The power of this sort of communication has been demonstrated in the work of the Zapatistas in Mexico and the Kayapo in Brazil, who have used communication technologies (Internet, video) to raise awareness about issues they have faced as indigenous communities. Through these means they have successfully attracted international attention. This has led to pressure on their respective governments, which have consequently, to some extent, modified their policies (Dagron, 2001). These questions return to issues of silence and of voice — who is being heard by whom? How can we begin? Initially it is important to bring together individuals whose work takes a nuanced view of digital divide issues with those whose research falls outside of the digital divide discussion, including work in mass media, local and community media, and on social factors that influence communication and access to ICTs. Perhaps even more crucial to include are those who have experienced life and work in zones of silence, as well as within zones of connectivity, and who have a perspective on some of the misguided assumptions that operate between the two. At this stage it is important both to think about the questions we have been asking and to find ways of formulating the questions we should ask.
Much research on the digital divide has been done with good intentions. Yet the conceptual framework of the digital divide is limiting. We need a new, wider, and more nuanced lens to look through. A zones of silence framework can provide part of this lens. It focuses work in ways and in areas consistently overlooked by traditional notions of the digital divide, particularly on the realities, voices, and complexities from within its unconnected, "have-not" spaces. By encouraging a critical questioning of assumptions and greater attention to local context and points of view, it is a way to broaden the dialogue between zones of connectivity beyond a discourse of need, to one of mutual questioning, sharing, and learning. About the author: Amelia Bryne Potter is an M.A. candidate at the York/Ryerson Joint Programme in Communication in Culture, and holds a B.A. in Anthropology from Columbia University, Barnard College. Her work has focused on instances of cross-cultural meeting, narratives of change, and ways to use intellectual and creative work to encourage people to consider points of view that they otherwise might not. She is currently working on processes for using video to explore, build, and present layers of imagination and memory surrounding stories of global migrations. Special thanks to Amin Alhassan for his encouragement and editorial support. Notes: 1. Gunkel, 2003, p. 504, citing Benton Foundation. 2. Selwyn, 2004, p. 343. 3. Sonaike, 2004, p. 45. 4. Gunkel, 2003, p. 507. 5. Winseck, 2002, p. 401, citing Castells, 1996. 6. Nyamnjoh, 2000, p. 146. 7. Zachmann, 2004, p. 84. 8. For example, see Skint Stream at http://www.jelliedeel.org/skintstream/. References: I. Banerjee, 2002. The Local Strikes Back? Media Globalization and Localization in the New Asian Television Landscape, Gazette: The International Journal for Communication Studies, volume 64, number 6, pp. 517-535. G. Barbatsis, M. Camacho, and L. Jackson, 2004. Does It Speak to Me? Visual Aesthetics and the Digital Divide, Visual Studies, volume 19, number 1, pp. 36-51. http://dx.doi.org/10.1080/1472586042000204834 M. Castells, 1996. The Rise of the Network Society. Cambridge, Mass.: Blackwell. L. Clark, C. Demont-Heinrich, and S. Webber, 2004. Ethnographic Interviews on the Digital Divide, New Media and Society, volume 6, number 4, pp. 529-547. http://dx.doi.org/10.1177/146144804044333 A.G. Dagron, 2001. Making Waves: Stories of Participatory Communication for Social Change, at http://www.comminit.com/making-waves.html, accessed 18 March 2006. J. Downing, 2003. Radical Media and Globalization, In: L. Artz and Y.R. Kamalipour (editors). The Globalization of Corporate Media Hegemony. Albany: State University of New York Press, pp. 283-293. D. Gunkel, 2003. Second Thoughts: Towards a Critique of the Digital Divide, New Media and Society, volume 5, number 4, pp. 499-522. http://dx.doi.org/10.1177/146144480354003 R. Mansell and U. Wehn, 1998. Friend or Foe? Developing Countries and the International Governance System, In: R. Mansell and U. Wehn (editors). Knowledge Societies: Information Technology for Sustainable Development. Oxford: Oxford University Press, pp. 180-203. F.B. Nyamnjoh, 2000. Communication Research and Sustainable Development in Africa: The Need for a Domesticated Perspective, In: J. Servaes (editor). Walking on the Other Side of the Information Highway: Communication, Culture, and Development in the 21st Century. Penang, Malaysia: Southbound, pp. 146-160. C.K. Prahalad, 2005. The Fortune at the Bottom of the Pyramid.
Upper Saddle River, N.J.: Wharton School Publishing. M. Riley, 1990. Indigenous Resources in Africa: Unexplored Communication Potential, Howard Journal of Communications, volume 2, number 3, pp. 301-314. http://dx.doi.org/10.1080/10646179009359722 T. Salvador and J. Sherry, 2004. Local Learnings: An Essay on Designing to Facilitate Effective Use of ICTs, Journal of Community Informatics, volume 1, number 1, at http://ci-journal.net/viewarticle.php?id=35&layout=abstract, accessed 18 March 2006. N. Selwyn, 2004. Reconsidering Popular and Political Understandings of the Digital Divide, New Media and Society, volume 6, number 3, pp. 341-362. http://dx.doi.org/10.1177/1461444804042519 S. Sonaike, 2004. The Internet and the Dilemma of Africa's Development, Gazette: The International Journal for Communication Studies, volume 66, number 1, pp. 41-61. M. Tehranian, 1990. Communication, Peace and Development: A Communitarian Perspective, In: F. Korzenny, S. Ting-Toomey, and S.D. Ryan (editors). Communicating for Peace: Diplomacy and Negotiation. Newbury Park, Calif.: Sage, pp. 157-175. UNESCO, 1961. Mass Media in the Developing Countries. Paris: UNESCO, Department of Mass Communication. S.A. White (editor), 2003. Participatory Video: Images that Transform and Empower. Thousand Oaks, Calif.: Sage. D. Winseck, 2002. Wired Cities and Transnational Communications: New Forms of Governance for Telecommunications in the New Media, In: L.A. Lievrouw and S.M. Livingstone (editors). Handbook of New Media: Social Shaping and Consequences of ICTs. London: Sage, pp. 393-409. R. Zachmann, 2004. ICTs and the World of Work: Weaving a Bright New Fabric or a Tangled Web? Information Technologies and International Development, volume 1, number 4, pp. 84-86, at http://mitpress.mit.edu/journals/pdf/itid_1_3-4_84_0.pdf, accessed 18 March 2006. Paper received 19 March 2006; accepted 12 April 2006. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License. Zones of silence: A framework beyond the digital divide by Amelia Bryne Potter. First Monday, volume 11, number 5 (May 2006).
Definition Of Positive Self-Talk: The act or practice of talking to oneself, either aloud or silently and mentally. Definition Of Affirmations: Statements which affirm something to be true. We all have moments of self-doubt, but negative self-talk can become outright abusive and detrimental to our recovery efforts if we let it go on for too long. The way we treat ourselves is what shapes our self-perception, yet we tend to be much more critical of ourselves than we really should be. Why Those In Recovery Experience Negative Self-Talk: Once we've detoxed and spent some time abstinent or sober, it's normal to feel a pang of guilt, shame, embarrassment or sadness over what we've gone through with our addiction. Many people begin to realise just how much they've hurt themselves and their loved ones – and this can bring a lot of shame into the equation. Christopher Smith, a person who has struggled with addiction, explained that much of his negative self-talk derived from negative messages he received when he was younger – either implicitly or explicitly. He recalled traumatic experiences from his past that further became part of his own self-talk script – and he explained that for those in addiction and recovery, it's not abnormal for a person to have a lot of negative self-talk. Unfortunately, negative self-talk can lead to relapse if interventions and coping strategies aren't sought and used. "A large part of combatting relapse for me has been learning to identify the negative self-talk early. When the voices are still whispers, they are easier stifled if you know what to listen for. I have put together a few guidelines for myself that have generally helped me identify when I may not be thinking the most clearly or struggling with my own positive self talk…" – Christopher Smith. Relapses tend to occur because we're holding in certain painful emotions or thoughts that truly need to be worked through and released. In cases of self-talk, we beat ourselves up – and naturally, these pent-up feelings lead us to buy into these false beliefs, which we act on by reverting to substance use. Self-awareness is a crucial practice in identifying and managing negative self-talk. If you're ready to combat the mean, degrading voices in your head, you have to build awareness, understand why you're getting them and what you can do to combat them. Practising Positive Self-Talk: When we're aware of our thoughts, emotions, feelings and the sensations around us, we're more apt to recognise negative self-talk when it arises. Not only that, but we're also able to make healthier decisions more quickly – which means that if we start with those phrases – "I'm useless", "Nobody wants me to succeed", "I'm not a good person" and others – we know to stop and start applying some cognitive behavioural techniques. There's a lot you will explore in both individual and group therapy within addiction recovery that will provide you with steps to take towards working through these moments of negativity, but here are a few suggestions for promoting positive self-talk: Say It Differently: In 2014, researchers published a study in the Journal of Personality and Social Psychology which found that it's not just what we say to ourselves that affects our mood and emotions throughout the day – it's also how we say it. For example, the researchers suggested speaking to oneself in the third person – using "he" or "she" – to help gain some perspective and to force us to look at the situation with more objectivity.
Emotions can be all-encompassing, and if you become too wrapped up in negative self-talk, you may find yourself starting to spiral. Instead, ground yourself by focusing on the sensations around you. What colours do you currently see in the room that you're in? What do you smell? What textures do you feel? What tastes are there? If you close your eyes right now, what sounds do you hear? Sometimes this method of grounding can take you away from the negative messages in your mind and bring you back to the present moment. Refer To Your "Mental List": Create a list in your head or on paper of all the lies your negative self-talk tells you. Remind yourself that this is just another one of those phrases that you've decided no longer benefit your recovery. Combat the heavy weight of these false beliefs with the truth – use logic to dismantle the arguments your negative self-talk is trying to make. For example, if your self-talk is saying, "nobody likes me", "I don't deserve recovery", "I'll always be a junkie or alchie", you could remind yourself that you just spoke with a friend earlier today, or that you're only at the beginning of your recovery journey and there's still time to go and meet new people. In doing this, your limiting self-doubts won't stand a chance – because the logic you use against them will be too strong. If you're currently struggling with addiction, know that you're not alone. Everyone experiences negative self-talk, but there are many tools, people and resources to help a person recognise the gifts that they can bring to the world. You can also find more help and support from charities, groups and organisations on our help and support page here. Don't wait any longer to seek the help you need. The sooner you start, the less damage will have been caused and the easier it will be to rectify. 7 Techniques To Overcome Negative Self-Talk: 1. Develop Awareness: Becoming more aware of our thinking patterns and their impact on our mood and behaviour is the first step. We can do this in a variety of ways; two that come to mind for me are: Timeout to reflect – take a time out to reflect on our thoughts, stop and say to ourselves, "What's the thought? What is driving it? How am I feeling?" Journaling – either free journaling or a thought journal; any technique to get our thoughts down on paper can improve our awareness of patterns and help us become more in tune with ourselves. 2. Challenge It: As we get better at recognising our negative thinking patterns, we can begin to dive deeper and develop a new pattern of thinking. Oftentimes our negative thoughts are connected to irrational beliefs… challenge these thoughts and bring them back to reality. Using concrete, positive affirmations is a great place to start. Instead of "I am never going to get this right," challenge with "I am doing my best and my best is enough." Retraining our minds and shifting our lens takes time and practice, so let's start NOW. We deserve it. 3. Gratitude: Focusing on our blessings, big or small, is another simple yet powerful way to break the cycle of negativity. You have all heard the saying "an attitude of gratitude"; well, now is the time to shift our attitudes and our thought processes to focus on all we have to be grateful for. Whether it's setting aside a minute or two before bed to reflect on the day, identifying five things that we are thankful for, or keeping a gratitude journal, practising gratitude is not only a coping skill but an overall mindset.
You can find out more about gratitude lists in our previous article here. 4. Step Outside Of Yourself: Sometimes when we are stuck in a negative thought cycle, it can be helpful to shift perspectives. Ask yourself: "What would my best friend say?" or "Would I talk to my best friend like this?" Developing self-talk that has a foundation of self-love and compassion is powerful and can really combat the cycle of negativity. We can begin this process by talking to ourselves the way that we would speak to a loved one, taking a stance of empathy and encouragement. 5. Talk It Out: There are times when we may need to lean on our support systems to get out of our heads and challenge negativity. Talking to someone in our network, a loved one or a therapist can help us with this process. 6. Put It On The Shelf: At times, our negative thoughts may feel so overwhelming that we need to take a break and step away. Visualising taking the negative thought or irrational belief and putting it on a shelf, or in a box – whatever works for you – can be super effective in giving us a moment of clarity. Maybe you are at work, in a meeting or at the grocery store and all of a sudden find yourself stuck in a negative thought cycle; the reality is we don't always have the time or space available to explore and challenge these patterns. Put it on the shelf, do what you need to do, and revisit it at a time that better serves you – maybe later that night when you are writing in your journal, or maybe later that week when you are at a support group or with your therapist. Visualisation is an effective skill to manage our thinking and increase our sense of control over our thoughts. 7. Focus On The Now: Mindfulness is a tool that not only combats negative thinking, but also provides us with a sense of relief, giving us the ability to stop and refocus. Wherever our minds wander, we have the power to bring them back to this moment and focus on the hope within the present. Breathing exercises, grounding, meditation, etc. are all ways to focus on the now and break free from the grip of our negative thoughts. You can learn more about mindfulness in our previous article on that very subject here. According to Buddha, "You can't live a positive life with a negative mind." Now is the time to give yourself the life you deserve, one that is built on a foundation of love and kindness. And it all starts with our thoughts… empower yourself to make a change. 20 Positive Affirmations To Say Each Day: Here are 20 affirmations that can benefit not just those in recovery but anyone on the journey to better health: - I approve of myself. You approve of yourself. - I love myself. You love yourself. - I support myself. You support yourself. - I trust myself. You trust yourself. - I am my best friend. You are your best friend. - I become more lovable every day. You become more lovable every day. - My body is beautiful. Your body is beautiful. - It is easy for me to forgive. It is easy for you to forgive. - I forgive everyone. You forgive everyone. - I forgive myself. You forgive yourself. - I forgive the past. You forgive the past. - I am free. You are free. - I know life is for me. You know life is for you. - I know what to do. You know what to do. - I am capable. You are capable. - I easily solve any problems. You easily solve any problems. - I can handle anything that comes my way. You can handle anything that comes your way. - I am full of praise and gratitude. You are full of praise and gratitude. - I awaken each morning with joy. You awaken each morning with joy.
- I end each day with gratitude. You end each day with gratitude. These affirmations are meant to be spoken aloud while looking in the mirror and can be recalled throughout the day. Take the time to incorporate them into your daily routine, and they will help you build greater reserves of self-love, gratitude, self-confidence and forgiveness. Is There Science Behind Them? Science, yes. Magic, no. Positive affirmations require regular practice if you want to make lasting, long-term changes to the ways that you think and feel. The good news is that the practice and popularity of positive affirmations are based on widely accepted and well-established psychological theory. The Psychological Theory Behind Positive Affirmations: One of the key psychological theories behind positive affirmations is self-affirmation theory (Steele, 1988). So, yes, there are empirical studies based on the idea that we can maintain our sense of self-integrity by telling ourselves (or affirming) what we believe in positive ways. Very briefly, self-integrity relates to our global self-efficacy—our perceived ability to control moral outcomes and respond flexibly when our self-concept is threatened (Cohen & Sherman, 2014). So, we as humans are motivated to protect ourselves from these threats by maintaining our self-integrity. Self-Identity & Self-Affirmation: Self-affirmation theory has three key ideas underpinning it. They are worth having in mind if we are to understand how positive affirmations work according to the theory. First, through self-affirmation, we keep up a global narrative about ourselves. In this narrative, we are flexible, moral, and capable of adapting to different circumstances. This makes up our self-identity (Cohen & Sherman, 2014). Self-identity (which we're seeking to maintain, as mentioned before) is not the same as having a rigid and strictly defined self-concept. Instead of viewing ourselves in one "fixed" way, say as a "student" or a "son", our self-identity can be flexible. We can see ourselves as adopting a range of different identities and roles. This means we can define success in different ways, too. Why is this a good thing? Because it means we can view different aspects of ourselves as being positive and can adapt to different situations much better (Aronson, 1969). Secondly, self-affirmation theory argues that maintaining self-identity is not about being exceptional, perfect, or excellent (Cohen & Sherman, 2014). Rather, we just need to be competent and adequate in the different areas that we personally value in order to be moral, flexible, and good (Steele, 1988). Lastly, we maintain self-integrity by acting in ways that authentically merit acknowledgment and praise. In terms of positive affirmations, we don't say something like "I am a responsible godmother" because we want to receive that praise. We say it because we want to deserve that praise for acting in ways that are consistent with that particular personal value. A Look At The Research: The development of self-affirmation theory has led to neuroscientific research aimed at investigating whether we can see any changes in the brain when we self-affirm in positive ways. There is MRI evidence suggesting that certain neural pathways show increased activity when people practice self-affirmation tasks (Cascio et al., 2016).
If you want to be super specific, the ventromedial prefrontal cortex—involved in positive valuation and self-related information processing—becomes more active when we consider our personal values (Falk et al., 2015; Cascio et al., 2016). The results of a study by Falk and colleagues suggest that when we choose to practice positive affirmations, we're better able to view "otherwise-threatening information as more self-relevant and valuable" (2015: 1979). As we'll see in a moment, this can have several benefits because it relates to how we process information about ourselves. Benefits Of Daily Affirmations: Now that we know more about the theories supporting positive affirmations, here are six examples of evidence from empirical studies that suggest that positive self-affirmation practices can be beneficial: - Self-affirmations have been shown to decrease health-deteriorating stress (Sherman et al., 2009; Critcher & Dunning, 2015); - Self-affirmations have been used effectively in interventions that led people to increase their physical activity (Cooke et al., 2014); - They may help us to perceive otherwise "threatening" messages, including interventions, with less resistance (Logel & Cohen, 2012); - They can make us less likely to dismiss harmful health messages, responding instead with the intention to change for the better (Harris et al., 2007) and to eat more fruit and vegetables (Epton & Harris, 2008); - They have been linked positively to academic achievement by mitigating GPA decline in students who feel left out at college (Layous et al., 2017); - Self-affirmation has been demonstrated to lower stress and rumination (Koole et al., 1999; Wiesenfeld et al., 2001). What Are The Health Benefits? As the studies above suggest, positive affirmations can help us to respond in a less defensive and resistant way when we're presented with threats. One study mentioned above showed that smokers reacted less dismissively to graphic cigarette packet warnings and reported an intention to change their behavior (Harris et al., 2007). More generally, an adaptive, broad sense of self makes us more resilient to difficulties when they arise. Whether it's social pressures, health information that makes us feel uncomfortable, or feelings of exclusion, a broader self-concept can be an extremely helpful thing to have. Can They Help One's Outlook on Life? As inherently positive statements, affirmations are designed to encourage an optimistic mindset. And optimism in itself is a powerful thing. In terms of reducing negative thoughts, affirmations have been shown to help with the tendency to linger on negative experiences (Wiesenfeld et al., 2001). When we are able to deal with negative messages and replace them with positive statements, we can construct more adaptive, hopeful narratives about who we are and what we can accomplish. What Are Healing Affirmations? This kind of affirmation is a positive statement about your physical well-being. Popularised by author and speaker Louise Hay, these affirmations are based on the idea that your thoughts can influence your health for the better. You don't have to be unwell to practice healing affirmations; the idea can be just as helpful for healing emotional pain if it rings true for you. Examples include "My happy thoughts help create my healthy body," and "Wellness is the natural state of my body.
I am in perfect health." Answers To Common Questions About Affirmations: If you haven't practiced positive affirmations before, you might have a lot of questions at this point. Here, we'll address some of the most common questions asked about the topic. Are Self-Affirmations Best Said Every Day? There are no hard and fast rules about timing or frequency when it comes to practicing self-affirmations. According to psychotherapist Ronald Alexander of the Open Mind Training Institute, affirmations can be repeated up to three to five times daily to reinforce the positive belief. He suggests that writing your affirmations down in a journal and practicing them in the mirror is a good method for making them more powerful and effective (Alexander, 2011). Can They Help with Anxiety and Depression? Positive affirmations are not designed to be cures for anxiety or depression, nor are they a substitute for clinical treatment of those conditions. But that's not to say that they won't help. The idea of affirmations as a means of introducing new and adaptive cognitive processes is very much the underlying premise of cognitive restructuring. This is supported by a study of cancer patients which suggests that spontaneous self-affirmation had a significantly positive correlation with feelings of hopefulness (Taber et al., 2016). Will They Boost Self-Esteem? Affirmations can sometimes be very useful for boosting your self-esteem—but there's a caveat. The most important thing, according to self-affirmation theory, is that your affirmations reflect your core personal values (Cohen & Sherman, 2014). There is little point in repeating something arbitrary to yourself if it doesn't gel with your own sense of what you believe to be good, moral, and worthwhile. To have any kind of impact on your self-esteem, your self-affirmations should be positively focused and targeted at actions you can take to reinforce your sense of self-identity. Use your real strengths, or strengths that you consider important, to guide your affirmations. Can You Improve Sleep With Affirmations? A large number of anxiety sufferers experience disturbed sleep (Staner, 2003). In the sense that affirmations can sometimes help to relieve anxiety, they may have some beneficial effects in promoting better sleep. In addition, incorporating your affirmations into meditation can be relaxing and soothing. Meditation has been found to have numerous benefits in terms of sleep quality, so positive affirmation meditation could very well be a good way to improve your sleep (Nagendra et al., 2012). If you are interested in trying this, you'll find some audio and video below that may be helpful. Are They Just Positive Mantras? If you start digging into the academic literature, you'll find that the terms "affirmation" and "mantra" are regularly used interchangeably. The same goes for more colloquial uses of the terms. There is a difference, though. Technically, mantras are sacred words, sounds, or verses that carry more spiritual meaning than affirmations (Encyclopedia Britannica, 2019). Frequently said aloud or mentally, they are believed to have deep significance, and they feature prominently in meditation.
More specifically, according to Encyclopedia Britannica (2019): "Most mantras are without any apparent verbal meaning, but they are thought to have a profound underlying significance and are in effect distillations of spiritual wisdom." Positive affirmations, in contrast, are described by the Psychology Dictionary as brief phrases, repeated frequently, which are designed to encourage positive, happy feelings, thoughts, and attitudes. They hold no spiritual or religious meaning in the traditional sense and can be used for many purposes. Positive Affirmation Examples: Based on this definition, here are some examples of positive affirmations: - I believe in myself, and trust my own wisdom; - I am a successful person; - I am confident and capable at what I do. 9 Positive Affirmations To Help Relieve Anxiety: Most people who have suffered from anxiety will likely know how important it can be to cut off negative thought patterns before they begin to spiral. These affirmations can be used at any time, and even those who don't typically feel anxious may find them useful during stressful moments. - I am liberating myself from fear, judgment, and doubt; - I choose only to think good thoughts; - My anxiety does not control my life. I do. Here are some that draw inspiration from the list: - I breathe, I am collected, and I am calm; - I am safe, and everything is good in my world; - Inside me, I feel calm, and nobody can disturb this peacefulness. - I recognise that my negative thoughts are irrational, and I am now going to stop these fears; - This is just one moment in time; - I'm not going to be scared by a feeling. While practicing these affirmations, try to take deep, slow, soothing breaths. As you become more attuned to the flow of your breath in and out, try not to let your feelings distract you. Focus on the affirmation that you've put time into creating for yourself, and each time you practice, it will feel more natural. 5 Daily Affirmations For Depression: As with anxiety, depression is often linked closely to—if not underpinned significantly by—thought processes such as overgeneralisation and cognitive distortions (Beck, 1964). Selective abstraction is a common distortion associated with depression; it describes the tendency to exaggerate negative things while underemphasising the positive. Affirmations can help us to try and correct this balance by acknowledging and focusing on more positive aspects of both ourselves and our lives. Here are 5 daily affirmations you can adapt, as we have: - I am not afraid to keep going, and I believe in myself; - I have come this far, and I am proud of myself; - This is just one moment in my life, and it does not define who I am; - This is one isolated moment, not my entire life. Things will get better; - These are just thoughts. Only I determine the way I choose to feel. 5 Positive Affirmations to Help Build Self-Esteem: Here are five positive affirmations that are designed to help you increase your self-esteem: - I release negative feelings and thoughts about myself; - I always see the best in others; - I believe in who I am; - I am on a journey, ever growing and developing; - I am consistent in the things that I say and do. Further Help & Support: If you want further help, support and guidance on this issue or any others, you can find a list of groups, charities and organisations who can help on our help and support page here. If you found this article helpful, you can check out other similar articles below or on our blog.
You may also like to read our article on challenging your thinking and thoughts here.
Friedrich August von Hayek (8 May 1899 – 23 March 1992), frequently referred to as F. A. Hayek, was an Austrian-British economist and philosopher best known for his defense of classical liberalism and free-market capitalism against socialist and collectivist thought. He is considered by some to be one of the most important economists and political philosophers of the twentieth century. In 1974 he shared the Nobel Memorial Prize in Economic Sciences with his ideological opponent Gunnar Myrdal for pioneering work in the theory of money and economic fluctuations and for penetrating analysis of the interdependence of economic, social and institutional phenomena, and in 1991 he received the U.S. Medal of Freedom. If any twentieth-century economist was a Renaissance man, it was Hayek: he made fundamental contributions in political theory, psychology, and economics.

Hayek was born in Vienna into a family steeped in academic life and scientific research. His father, August von Hayek, was a physician and professor of botany; his mother, Felicitas von Juraschek, came from a wealthy family; and his maternal grandfather, Franz von Juraschek, a professor of constitutional law and a leading economist in Austria, was a close friend of Eugen von Böhm-Bawerk and an acquaintance of Friedrich von Wieser, two of the founders of the Austrian School of economics. As a teenager, at his father's suggestion, Hayek read the genetic and evolutionary works of Hugo de Vries and the philosophical works of Ludwig Feuerbach. He earned doctorates from the University of Vienna in 1921 and 1923, spent the early 1920s in New York City, and on returning to Vienna took part in the private seminar of Ludwig von Mises. A student of Friedrich von Wieser and a protégé and colleague of Mises, he became the foremost representative of an outstanding generation of Austrian School theorists and was more successful than anyone else in spreading Austrian ideas throughout the English-speaking world; he was also a founding board member of the Mises Institute.

Hayek directed the Austrian Institute for Business Cycle Research in Vienna from 1927 to 1931 and became a lecturer in economics at the University of Vienna in 1929. He was a professor at the London School of Economics from 1931 to 1950, at the University of Chicago from 1950 to 1962, and then at the University of Freiburg (and later Salzburg), retiring in 1967.

In the 1920s Hayek conducted important work on business cycles, but from the 1930s he developed broader social analyses and highlighted the problems of central economic planning. His conclusion was that the knowledge and information held by various actors can be utilized fully only in a decentralized market system with free competition and pricing. His most famous book, The Road to Serfdom, appeared in 1944; on 28 June 1944 John Maynard Keynes, who had read it during his Atlantic crossing to one of the many meetings that preceded the Bretton Woods agreement later that autumn, signed a letter to Hayek about it. Hayek's later works include Individualism and Economic Order (1948) and the three-volume Law, Legislation and Liberty, whose third volume, The Political Order of a Free People, contains his "model constitution." In 1978 he discussed the economic theories developed in Law, Legislation and Liberty in a recorded conversation with Robert Bork. His papers, spanning 1906 to 2005, are held at the Hoover Institution Archives.

Hayek's influence reached well beyond academic economics: he was an intellectual forebear of libertarians and of advocates of free-market economics from Ronald Reagan to Margaret Thatcher. He is widely quoted, for example on equality ("From the fact that people are very different it follows that, if we treat them equally, the result must be inequality in their actual position, and that the only way to place them in an equal position would be to treat them differently. Equality before the law and material equality are therefore not only different but are in conflict with each other.") and on money ("With the exception only of the period of the gold standard, practically all governments of history have used their exclusive power to issue money to defraud and plunder the people."). He has also drawn criticism: J. Bradford DeLong (2014), for instance, took Hayek to task for chiding critics of the Chilean junta of Augusto Pinochet for denouncing Pinochet without simultaneously condemning dictatorial regimes such as the USSR and North Korea. Numerous institutions bear his name, among them the Friedrich A. von Hayek-Gesellschaft in Berlin and the Colegio Pre-Universitario Friedrich von Hayek in Quetzaltenango, Guatemala.
<urn:uuid:9f0d104b-f400-4b37-97a6-96ca52bb2b07>
CC-MAIN-2022-33
https://radstratit.com/wiki/Friedrich_von_Hayek1zm-4840ema
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00296.warc.gz
en
0.682972
5,095
2.671875
3
By Jennifer Hushaw & Si Balch In Part I, we discussed the recent rise in U.S. wildfire, the evidence suggesting climate is a major driver of that increase, and the reality that future increases in temperature and drought frequency (in some regions) will lead to greater fire potential, especially in moisture-limited ecosystems. There is no question that wildfire risk has changed (and will continue to change) as a result of on-going climate change. Importantly, the anticipated shifts in fire will have big implications for commercial forests and conservation lands alike, as well as implications for the climate system itself because wildfire acts as a positive feedback that accelerates terrestrial carbon emissions. Severe disturbance caused by novel fire regimes may also hasten the species shifts expected with climate change, making fire an important driver of ecosystem change in both the near- and long-term. In Part II, we describe the latest research on future changes in fire frequency, extent, and/or severity, as well as discussing management strategies and outlining some useful information sources. Overview of Changing Fire Risk FIRE POTENTIAL & SEASONALITY Fire potential and the length of the fire season are projected to increase in many regions. These changes will be driven by earlier snowmelt, warmer temperatures (particularly summer), drought stress, and changes in soil water content (Keane et al. 2015; Waring and Coops 2016; Young et al. 2016) associated with climate change. In the continental U.S., the potential for very large fires (>12,355 acres) is strongly linked to meteorological and climatological conditions. Recent research indicates that the potential for very large fires will increase in historically fire-prone regions as a result of climate change, while other regions will experience an earlier start or an overall extension of the fire season as atmospheric conditions become conducive earlier in the year and persist later (Barbero et al. 2015) – see details in The North American Outlook (below). Drier fuels will also increase fire potential because fuel moisture is highly sensitive to temperature. For example, a recent analysis in Canada found that for each additional degree of warming, a 5 to 15% increase in precipitation is required to maintain fuel moisture, depending on the type of fuel in question (i.e. fine surface fuels, duff layers, or deep organic soils) (Flannigan et al. 2016). In the absence of a sufficient precipitation increase, these fuels begin to dry out and move closer to critical thresholds for fire ignition and spread. In fact, Canada is expected to have more days of extreme fire weather because future precipitation will be insufficient to compensate for the drying associated with warmer temperatures – this is true even for future scenarios with the greatest precipitation increase (i.e. 40%) (Flannigan et al. 2016). Some of these changes will be non-linear, leading to dramatic increases in fire frequency or severity in some regions once critical thresholds are crossed. A great example can be found in the boreal forest and tundra ecosystems of Alaska where there are distinct temperature and moisture thresholds for fire occurrence that will likely be crossed by the end of this century, significantly increasing the probability of wildfire and potentially leading to novel fire regimes in those areas (Young et al. 2016). A CHANGING LANDSCAPE DRIVES FIRE ACTIVITY Climate-induced changes in vegetation (including type, density, large scale die-off, etc.) 
and forest pests will also influence fire risk by affecting fuel loads. As we discussed in a previous bulletin, climate change will affect the population dynamics and spread of many forest pests and diseases, including mountain pine beetle. Mountain pine beetle outbreaks can, in turn, alter the quantity and characteristics of both live and dead fuels by changing the amount of fuel in the forest canopy, the base height of the canopy, the amount of surface fuel, and other aspects of forest biomass. In this way, they can influence fire probability, severity, and rate of spread, as well as the potential for crown fire (Hicke et al. 2012). Climate change will also have direct effects on vegetation and forest biomass through long-term shifts in species distribution and forest composition, as well as small- and large-scale mortality events brought on by drought and other extremes. In Part I, we detailed how drought and beetle-induced mortality in western U.S. conifers is already contributing to an increase in fire. These mortality events and longer-term vegetation changes can flip an ecosystem from being fuel- to moisture-limited (or vice-versa), changing what controls fire activity in a given region (as discussed in Part I). In some cases, the changing fire regime itself will cause vegetation communities to shift or flip from a moisture- to fuel-limited ecosystem. For example, the fire return interval in the greater Yellowstone ecosystem is predicted to decrease (i.e. more frequent fire) to the point that some forested areas will no longer be able to regenerate by mid-century and will instead convert to a new dominant vegetation type that shifts the region into a fuel-limited fire regime (Westerling et al. 2011). |“The bottom line is that we expect more fire in a warmer world.” (Flannigan et al. 2016)| Modelling Future Wildfire Globally, fire probability is expected to increase in the mid- to high-latitudes and decrease in the tropics, with these changes becoming more pronounced later in the century. In the near term (i.e. 2010-2039), the most consistent increases will occur in places with an already somewhat warm climate, but there are also major uncertainties in the next few decades. There is more confidence in projections for the end of the century (i.e. 2070-2099) when climate models have a higher level of agreement in their projections because the magnitude of climate change will be even greater, with some locations experiencing an average change in fire probability up to +0.25 (Figure 1; Moritz et al. 2012). Flannigan et al. (2009) also suggest that a general increase in area burned and fire occurrence is likely, based on their review of close to 50 studies conducted between 1991 and 2009 on future fire activity around the world. Although these studies focused on different fire activity metrics, time frames, and locations, more than three-quarters of the analyses pointed to an increase in fire activity. In particular, they noted that fire seasons are lengthening in temperate and boreal regions and this trend should continue in a warming world. THE NORTH AMERICAN OUTLOOK Most of the research conducted to date in North America points toward a future increase in wildfire, with longer fire seasons and greater fire potential due to more conducive atmospheric conditions in a number of regions (Barbero et al. 2015; Wang et al. 2015; Liu et al. 2013; Young et al. 2016). 
For example, in a study mentioned above, researchers from the University of Idaho, the US Forest Service, and the Canadian Forest Service modelled future potential for “very large fires” in different ecoregions due to climate change and they found the potential for very large fires will increase in the continental U.S. The largest absolute increases were predicted for the intermountain West and Northern California, while the largest relative changes were predicted in the northern tier of the country where the potential for very large fires has historically been quite low (e.g., see Barbero et al. 2015, Figure 1). In addition, their analysis suggests the southern U.S. will have an earlier fire season in the future, while the northern regions will experience an overall lengthening of the fire season, with an extension of potential burn days at both ends of the season. These changes are driven by anticipated increases in fire danger and temperature, as well as decreases in precipitation and relative humidity during the fire season (Barbero et al. 2015). Another study by Liu et al. (2013) used results from a downscaled climate model to evaluate how fire potential will change by mid-century (2041–2070), as measured by the Keetch–Byram Drought Index (a commonly used index designed specifically for fire potential assessment). They predict an increase in fire potential in the Southwest, Rocky Mountains, northern Great Plains, Southeast, and Pacific coast due to warming trends, in addition to longer fire seasons in many regions. Looking farther north, the research also suggests increases in fire potential across high-latitude regions. Specifically, the annual number of fire spread days in Canada is expected to increase anywhere from 35–400% by 2050, with large absolute increases in the Boreal Plains of Alberta and Saskatchewan and the greatest relative change in coastal and temperate forests (Wang et al. 2015). Similarly dramatic increases in fire activity are predicted for areas of Alaska with historically low flammability in the tundra and tundra-forest boundary areas, with “up to a fourfold increase in the 30-yr probability of fire occurrence by 2100” (Young et al. 2016). Fire potential is not the only metric we might be concerned about, however. There are also questions about how fire severity may change as a result of climate change. Although fire severity will increase in some cases, as we have seen in the western U.S. with high fuel loads and exceptional drought conditions, future conditions may also decrease fire severity. When some researchers incorporated climate-induced changes in vegetation type, fuel load, and fire frequency, rather than climatic changes alone, they found that a widespread reduction in fire severity was likely for large portions of the western U.S. (Figure 2; Parks et al. 2016). This is because future increases in fire frequency and water deficits will reduce vegetation productivity, the amount of regeneration, and the amount of biomass accumulation on the landscape—all of which contribute to decreased fuel loads that will no longer support high-severity fires (Parks et al. 2016). 
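The quantitative rules quoted in this overview lend themselves to a simple illustration. The sketch below is not drawn from any of the cited studies' code; it merely encodes the Flannigan et al. (2016) rule of thumb that each additional degree of warming requires roughly a 5 to 15% increase in precipitation to hold fuel moisture steady, together with the Young et al. (2016) fire-occurrence thresholds for Alaskan boreal and tundra systems (mean July temperature above about 13.4 degrees C and annual moisture availability, precipitation minus evapotranspiration, below about 150 mm) that are quoted later in this article. The function names and the example scenario are hypothetical.

# Illustrative sketch only -- not from the cited papers' code.
def precip_compensates_warming(delta_temp_c, precip_change_pct,
                               required_pct_per_degree=(5.0, 15.0)):
    """Return (low_end, high_end) booleans: does the projected precipitation
    change offset warming-driven fuel drying under the low and high ends of
    the Flannigan et al. (2016) rule of thumb?"""
    low, high = required_pct_per_degree
    return (precip_change_pct >= delta_temp_c * low,
            precip_change_pct >= delta_temp_c * high)

def crosses_fire_thresholds(mean_july_temp_c, annual_p_minus_pet_mm,
                            temp_threshold_c=13.4, moisture_threshold_mm=150.0):
    """True if both empirical thresholds for fire occurrence are crossed."""
    return (mean_july_temp_c > temp_threshold_c
            and annual_p_minus_pet_mm < moisture_threshold_mm)

if __name__ == "__main__":
    # Hypothetical mid-century scenario: +3 C warming, +10% precipitation.
    print(precip_compensates_warming(3.0, 10.0))   # (False, False) -- fuels dry out
    # Hypothetical site: 14.1 C July mean, 120 mm annual P minus PET.
    print(crosses_fire_thresholds(14.1, 120.0))    # True -- elevated fire probability

Under either end of the rule of thumb, a 10% precipitation increase does not compensate for 3 degrees of warming, which is the same qualitative conclusion the Canadian analysis reaches even for its wettest future scenarios.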
MANAGING FOR EXTREMES When considering how to address these wildfire regime shifts, one approach is to “manage for the extremes,” rather than the average fire event or return interval in a given region, because it is the extremes that determine the necessary capacity of fire management organizations and, although these extremes cannot be as easily predicted, they can have serious consequences (Wang et al. 2015; Irland 2013). ACTIVE FUELS REDUCTION In terms of forest management, fuels reduction via pre-commercial or commercial thinning operations and prescribed fire is an obvious strategy for dealing with increased fire potential. That said, fuels reduction is better suited for some forest types than others, namely fuel-limited forest communities (Steel et al. 2015), e.g. yellow pine and mixed conifer forests in California or piñon-juniper woodland and lower montane forests (dominated by ponderosa pine) in the Rocky Mountain region. In systems where the fire regime is primarily moisture- or climate-limited, a reduction in fuels will not be as effective at reducing fire hazard because fuel is not the limiting factor. THE PASSIVE APPROACH An argument can also be made for taking a more passive approach that “lets nature take its course,” where the fire regime is allowed to change and it ultimately shifts the dominant vegetation type to something new (as discussed above). In this case, the natural disturbance regime eventually transitions plant communities into a state of equilibrium with the new climate (Parks et al. 2016). This approach may ultimately be more appropriate and cost-effective in locations where conditions are expected to become more arid and fire frequency is projected to increase dramatically, compared with resisting change through on-going, active fire suppression efforts. Although not appropriate for most commercial operations, the passive approach may be a consideration for lands with a management focus on maintaining resilient transitional habitat for wildlife in a changing climate. Of course, it is worth noting that these anticipated changes in wildfire are happening in the larger context of land use change (more development in the wildland urban interface, greater forest fragmentation), fuel accumulation (due to historic fire suppression efforts, landowner reluctance to harvest, and/or insufficient budgets for fuel treatments), and infrastructure/industry changes (lack of “fire wise” development in some regions, loss of institutional firefighting operations with ownership change). Things to Do A number of common practices can help land managers prepare for fire risk, which will be important to emphasize (or implement) in the face of increased fire potential. These include: - Put all foresters and other field personnel through the state forestry department’s basic fire training school. - Have the state forest service phone numbers and radio contact on everyone’s cell phone. - Equip everyone’s truck with Indian tanks and fire rakes. - Know who has bulldozers, where they are, and how to reach their owners. - Identify water sources for pumping. - Identify water sources for water bombers. - Identify landing zones for helicopters. - Think about how to communicate with abutting home owners about fire risk. This interface of houses and trees is an increasingly dangerous situation for both the forest owner and home owner. - Utilize the resources below to find up-to-date information on potential fire risk.
- A program that produces landscape-scale geospatial products for planning, management, and operations, including maps and databases that describe vegetation, fuel, and fire regimes. Website provides data, reports, tools, maps, etc. - Source: USDA Forest Service & US Department of Interior National Interagency Coordination Center - NICC coordinates interagency wildland firefighting resources. They also dispatch Incident Management Teams and resources as necessary when fires exceed the capacity of local or regional firefighting agencies. Website provides Incident Information with daily updates on large fires and Predictive Services, such as weather, fire fuels danger, outlooks, etc., as well as other resources for wildland fire and incident management decision-making. - Source: Multi-agency organization, including: BIA, BLM, USFS, USFWS, NASF, & NPS FRAMES (Fire Research and Management Exchange System) - A searchable online portal for fire-related information, including documents, tools, data, online trainings, discussion forums, announcements, and research, as well as links to numerous other fire-related websites and portals for regionally-specific sites and resources. - University of Idaho; USFS Rocky Mountain Research Station Joint Fire Science Program (JFSP) - Source for fire science information, resources and funding announcements for scientists, fire practitioners and decision makers. They also produce weekly newsletters. - JFSP is also home to the Fire Science Exchange Network, which includes the following:· Northwest Fire Science Consortium· California Fire Science Consortium· Great Basin Fire Science Exchange· Northern Rockies Fire Science Network· Southern Rockies Fire Science Network· Southwest Fire Science Consortium· Great Plains Fire Science Exchange - Source: Multi-agency, including USFS, BLM, BIA, NPS, USFWS, USGS Geographic Area Coordination Centers - Web-portal for incident information, logistics, predictive services (e.g. information about weather, fuels and fire danger), and administrative resources for wildland fire agencies. - Source: Geographic Area Coordinating Group (GACG) – interagency; made up of Fire Directors from each of the area Federal and State land management agencies - Portal to numerous websites and resources related to wildfire risk and detection, including many listed in this table and others. - Source: Multi-agency, including: FEMA, EPA, Dept. of Interior, Dept. of Commerce, NOAA, DOE, USDA, and Army Corps of Engineers - Educational resources on wildfire prevention and “firewise” practices for homeowners and professionals, including state-specific information for New England and adjacent Canadian provinces. - Source: Northeast Forest Fire Protection Commission’s Prevention and Education Working Team Wildfire Risk Assessment Portals - Several states/regions have these websites, which include web-mapping applications showing fire risk and assessment, as well as information on historic fire occurrence and information for developing community wildfire protection plans. - Source(s): State forest service, state forester, universities, etc Active Fire Mapping Program - Provides near real-time, satellite-based detection and characterization of wildland fire conditions in a geospatial context for the continental United States, Alaska, Hawaii and Canada. 
- http://activefiremaps.fs.fed.us/index.php *soon moving to new URL: https://fsapps.nwcg.gov/afm - Source: USDA Forest Service Remote Sensing Applications Center Historic Wildfire Occurrence Data - This data product contains a spatial database of wildfires that occurred in the United States from 1992 to 2013, generated for the national Fire Program Analysis (FPA) system. The wildfire records were acquired from the reporting systems of federal, state, and local fire organizations. - USDA Forest Service Global Maps of Fire Risk - This on-line map viewer of satellite-derived global vegetation health products, includes one for fire risk that is updated continuously. You can also go back and see archived maps from previous dates. - www.star.nesdis.noaa.gov/smcd/emb/vci/VH/vh_browse.php (choose “Fire Risk” under the Data Type dropdown menu) - Source: Center for Satellite Applications and Research (STAR) – the science arm of the NOAA Satellite and Information Service National Fire Protection Association - Source for national fire codes and standards, public education, and research on fire risk and prevention. Young et al (2016) identified two thresholds for fire occurrence, based on historic data: average July temperatures above 13.4⁰C and annual moisture availability (i.e. precipitation minus evapotranspiration) below 150mm. ~ ~ ~ ~ ~ Abatzoglou, J.T. and Williams, A.P. 2016. Impact of anthropogenic climate change on wildfire across western US forests. PNAS. 113(42): 11770–11775. Barbero, R., Abatzoglou, J.T., Larkin, N.K., Kolden, C.A., Stocks, B. 2015. Climate change presents increased potential for very large fires in contiguous United States. International Journal of Wildland Fire. 24(7) 892-899. Flannigan, M.D., Krawchuk, M.A., de Groot, W.J., Wotton, B.M., Gowman, L.M. 2009. Implications of changing climate for global wildland fire. International Journal of Wildland Fire. 18: 483-507. Flannigan, M.D., Wotton, B.M., Marshall, G.A., de Groot, W.J., Johnston, J., Jurko, N., Cantin, A.S. 2016. Fuel moisture sensitivity to temperature and precipitation: climate change implications. Climatic Change. 134:59-71. Hicke, J.A., Johnson, M.C., Hayes, J.L., Preisler, H.K. 2012. Effects of bark beetle-caused tree mortality on wildfire. Forest Ecology and Management. 271: 81-90. Irland, L.C. 2013. Extreme value analysis of forest fires from New York to Nova Scotia, 1950-2010. Forest Ecology and Management. 294: 150-157. Keane, R.E., Loehman, R., Clark, J., Smithwick, E.A.H., Miller, C. 2015. Chapter 8: Exploring Interactions Among Multiple Disturbance Agents in Forest Landscapes: Simulating Effects of Fire, Beetles, and Disease Under Climate Change in Simulation Modeling of Forest Landscape Disturbances. A.H. Perera et al. (eds.) Springer International Publishing. Switzerland. Liu, Y., Goodrick, S.L., Stanturf, J.A. 2013. Future U.S. wildfire potential trends projected using a dynamically downscaled climate change scenario. Forest Ecology and Management. 294: 120-135. Moritz, M.A., Parisien, M-A., Batllori, E., Krawchuk, M.A., Van Dorn, J., Ganz, D.J., Hayhoe, K. 2012. Climate change and disruptions to global fire activity. Ecosphere. 3(6):49. Parks, S.A., Miller, C., Abatzoglou, J.T., Holsinger, L.M., Parisien, M., Dobrowski, S.Z. 2016. How will climate change affect wildland fire severity in the western US? Environmental Research Letters. 11: 035002. Steel, Z.L., Safford, H.D., Viers, J.H. 2015. The fire frequency-severity relationship and the legacy of fire suppression in California forests. 
Ecosphere. 6(1): Article 8, 23pp. Wang, X., Thompson, D.K., Marshall, G.A., Tymstra, C., Carr, R., Flannigan, M.D. 2015. Increasing frequency of extreme fire weather in Canada with climate change. Climatic Change. 130: 573-586. Waring, R.H. and Coops, N.C. 2016. Predicting large wildfires across western North America by modeling seasonal variation in soil water balance. Climatic Change. 135: 325-339. Westerling, A.L., Turner, M.G., Smithwick, E.A.H., Romme, W.H., Ryan, M.G. 2011. Continued warming could transform Greater Yellowstone fire regimes by mid-21st century. PNAS. 108(32): 13165-13170. Young, A.M., Higuera, P.E., Duffy, P.A., Hu, F.S. 2016. Climatic thresholds shape northern high-latitude fire regimes and imply vulnerability to future climate change. Ecography. 39: 001-012.
<urn:uuid:01d40f56-7b35-4c96-a2e2-a1f6cdca8f1d>
CC-MAIN-2022-33
http://climatesmartnetwork.org/2016/11/wildfire-in-a-warming-world-part-2/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00098.warc.gz
en
0.898931
4,798
3.171875
3
Money is called “a medium of exchange.” It’s used to pay for things. Money comes in all shapes and sizes. Paper bills are usually made from different types of linen. Coins are made from metal. Each bill or coin has a different value. Many years ago, before there was money, people used to barter. Bartering, or trading, means that you give a person something that you own and, in exchange, they give you something that they own. For example, if you had a cow but you needed a wagon, you would try to find a person who had a wagon but needed a cow. You would give them your cow and they would give you their wagon and everyone would be happy. People still trade things, but most transactions are done using money. These days there are many other ways to buy things. People use credit cards, debit cards, Venmo, PayPal, Zelle, checks, electronic transfers, etc. But it’s all still based on money!

Try to collect a quarter from each of the fifty states. Make sure you keep them in a safe place. You can buy a Fifty State quarter collection kit or make your own with the help of a family member. Another fun activity to try is to take a bunch of different coins. Separate them into pennies, nickels, dimes, etc. Look at the dates on each of the coins. Can you find the oldest one? Can you find the newest one? Can you find one with the year you were born?

The first coins were made over 2,000 years ago. The first paper money was made over 1,000 years ago. Paper money isn’t really made from paper. It’s made from cotton and linen. Shells, ivory, clay, live animals, salt and grain were all once used as money. The world’s largest coin is almost 30 inches wide and is worth more than one million dollars. Guitar-shaped and motorcycle-shaped coins were once issued in Somalia.

For Kids: Let’s look at some different ways to get money.

A. Find it in unexpected places. Sometimes you can be walking down the street, just minding your own business, and there, lying on the ground right in front of you, is some money. Lucky you! It’s a nice idea, except that things are a little more complicated than they might seem. Sure, it’s good luck for you because you found the money. But what about the person who lost the money? Maybe they don’t feel so lucky. So, whenever you find something that someone else may have lost, a good thing to do is to first try to find out who lost it so that you can return it to them.

B. Get a gift of money from your family or friends. One way to get money is when someone just gives it to you. Maybe it’s your birthday. Maybe it’s a holiday. Or maybe your grandma came to visit and she is so happy to see you that she just handed you some money. Getting a gift is very nice, so always remember to say “Thank You!”

C. You can earn money by doing work or chores. There are many jobs that need to be done around the house. These jobs are called chores. There are also jobs that can be done outside the house. These jobs are called work. Everyone in the family should help out by doing chores. But ask your parents if you can earn extra money by doing extra chores. You also can get money by doing work for other people, such as lawn care, pet-sitting, babysitting and shoveling snow.

D. You can get an allowance. An allowance is money that you can receive from your family on a weekly basis. Maybe you get $1 per week. Maybe you get $5 per week. Maybe you don’t get any allowance. Maybe you get money whenever you ask for it. If you do get an allowance, keep track of how much you receive, how often you get it, and make sure you keep the money in a safe place.

E. You can sell something that you made or you can sell something that you no longer want or need. Do you have games that you no longer play? You can sell them to one of your friends. Or maybe you can trade, or barter, a game you no longer want for a game that they have. You can earn money by selling something that you no longer want at a family garage sale. Are you good at arts and crafts? Maybe you can create beautiful paintings or picture frames to sell. Are you very good at using cell phones or computers? Maybe you can charge money to create your own apps or teach others how to use their own cell phones or computers.

Are there ways that I can make more money? Can you help me set up a lemonade stand or other business? Can I get an allowance? Can I make extra money by doing extra chores around the house? Can we have a family garage sale so that I can sell my old, unwanted stuff?

There are millions of things to buy: video games, movies, music, apps, dolls, toys, food, clothing, popcorn, candy, basketballs, hockey sticks, lipstick… The list is unlimited; however, your supply of money is limited. What is a youngster to do? The answer is that you must become a wise spender of money. You can have almost anything you want. You just can’t have everything you want. It’s important to decide whether the thing you are going to buy is something that you want or something that you need. A NEED is something like shelter, food, water and clothing. A WANT is something that you desire to have. For example, you NEED to wear clothes to school but you WANT the latest fashionable sneakers to wear to school. Most of the time, try to buy just the things you need.

Here’s how to make a monthly budget. You write down all the things that you think you will need money for this month, and a guess of how much each item costs. Let’s call this: “Total money going out.” Then you write down how much money you think you will have coming in this month. Let’s call this: “Total money coming in.” If you plan to have more money coming in than going out, that’s great! You will have extra money left over for extra savings. If more money is going out, you have to either get more money coming in or have less money going out. If you don’t have enough money, you have to either remove things from the list or figure out a way to earn more money. Don’t forget to leave money left over for savings. As the month goes by, you keep track of the actual amount of money coming in and the actual money going out. At the end of the month, you compare the actual amounts of money coming in and going out to the amounts that you guessed at the beginning of the month.

If there are things on your budget that you want to buy but can’t afford right now, maybe you can find other ways to get them for a lower price. You can take out books and movies from the library for free. Just make sure you return them on time or you will have to pay a fine. If there is a toy or game that you want but cannot afford to buy, maybe a friend of yours has the same game. Perhaps you can barter with them. You can offer them a game of yours for a game of theirs. This way, no one has to spend any money. You can go to a neighborhood garage sale. Many items are on sale for very low prices. Look online and in weekly newspapers for coupons that can help you purchase items for a lower price. If you see a new item that you want, sometimes if you wait a while, the price goes down. Many times, new items are very expensive when they first come out. Also, if you wait, you will have extra time to think about whether you really need it.

Goal setting is similar to budgeting but a little different. It is usually for things that take a little longer to save up for. You write down all the things that you need and also all the things you want in the future. You write down how much each item costs. Then you write down how much money you have. You then try to figure out a way to earn enough money to buy the items on your list. It may take a long time to save up enough money to buy some of the items on your list, but that’s okay. And, once again, don’t forget to leave money left over for savings.

You may find that you are spending a lot of money and you can’t figure out where it went. A good idea is to keep track every time you spend money. Write down what you bought, how much it cost, the date that you bought it and why you bought it. At the end of the week and month you can look back to see if you think you spent your money wisely.

Kids, ask your parents: Can you help me make a budget? What is goal setting and can you help me with it? Where can I find coupons for the things I want? Why can’t I have everything that I want?

With so many exciting things to buy, how can a youngster possibly save any money? Here are a few ways to become a wise saver of money:

A. Save your money! The best way to save your money is to not spend it. That means not buying everything that you see in stores, on TV or online. It may be something that you really want but, once you spend the money in a store, that money isn’t yours anymore. So, either don’t spend your money or, if you do spend it, spend only a little.

B. Keep your money in a safe place. Once you have some money saved, make sure you keep it in a safe place. If you just have a few dollars you can keep it in your house. Once you have saved more than a few dollars, ask your parents to help you open up a savings account at a local bank.

C. Divide your money into three categories. Divide any money you receive into three categories: money to save, money to spend and money to donate. Any time you receive money, make sure you put a little bit into each of these categories.

D. Add to your savings on a regular basis. Every time you get money, you should always put a portion of it into savings. Never spend it all. And make sure you put money into savings FIRST. Then you can decide what to do with the amount that’s left over.

Here’s a simple chart to help you set a savings goal. You can save up for a new bike or new app or maybe you want to save money so that you will have more money in the bank. Fill in the answers to the first four questions. Then, as you get money, you keep track of it on this chart. 1. I’m saving up for? 2. The amount of money I want to save is? 3. The date that I will have the money is? 4. The way that I will get the money is? Money that I saved (each time) – Total amount saved so far.

It’s great having toys, games, money and savings. But not everyone is so fortunate. If we are able to, it’s important to help others. Here are a few ways that we can help others who are less fortunate than we are:

A. Give of your time. You don’t have to give money to help someone else. You can donate your time. This is called volunteering. And you don’t have to wait until you have a certain amount of money. You can start today.

B. Give of your possessions.
Collect coats and give to a homeless shelter in winter. Sort through old toys that you don’t play with anymore and donate them to Goodwill or other local charity. C. Give your money. Hold a lemonade stand or bake sale and donate the profits to a local charity. Set aside a portion of your holiday gifts or allowance money and donate to a charity. Learning to give to others transcends the financial aspect. It helps to reinforce to your child that they are part of a larger society that is bound together. Talk to your child about how it’s good to succeed as an individual but it’s also important to be connected to a larger community. They will discover the fulfilling aspect of volunteering. Help your child pick charitable organizations of interest. With your child, do research to determine the beneficiaries of each charity. Donate as a family. Volunteer as a family. Shop with your child to buy canned food and deliver it to the local shelter. Volunteer with your child at a local animal shelter. Have your child set aside a portion of any money they get for their birthdays or allowances that will be donated. Go with your child and visit a local nursing home in the neighborhood. The two of you can play games, read books and do crafts together with the residents. Volunteer at a local food shelter to set tables, serve beverages and clean up. Collect old clothing, blankets and toys and donate to the local homeless shelter. Join a Walk to raise awareness and money to fight a disease. Children need to learn how to handle money wisely. It is a life skill that will be useful to them both short-term and long-term. As parents, we are quick to teach children to brush their teeth, say “please” and “thank you,” and look both ways before they cross the street. But, very often, they neglect one vital lesson – how to handle money wisely. There are great temptations out in the world for children and adults alike. A few minutes spent on the Internet will present a child with endless temptation to buy toys, games, clothing, and much more. When kids are old enough to actually buy these items, online shopping makes it a little too easy. It’s more important than ever to teach children the proper balance between shopping, saving and giving. And, while it’s not necessary to discuss your salary with your kids, you certainly can involve them in general financial discussions such as vacation planning, saving for college, major purchase decisions, etc. Engaging your kids in a conversation about money-related subjects will go a long way toward setting them on the right path to a healthy relationship with money. In order for children to learn how to handle money wisely, they must acquire money. Learning how to acquire money is a vital first step towards financial independence. Be sure to discuss the subject of honesty and integrity with your child when they find something that someone else lost. Explain to them why the right thing to do is return it, whenever possible. After receiving gifts from others, be sure to teach your child to express genuine appreciation and gratitude. Help them write a “Thank You” note, when appropriate. You can encourage your child to earn additional money by performing simple jobs in and out of the house. Discuss the many options to earn money that are available. Talk to your child to help them determine an appropriate price to charge for each job. Regarding allowance, some people recommend giving your child a weekly allowance while others suggest that it’s better to not give a weekly allowance. 
Some say that children should get an allowance only for doing chores because it’s important to learn that you can’t get something for nothing. Others say that, when children get paid for chores, they don’t learn to be an integral part of the family. They say that the child learns to help only if they are being paid to help. If you do decide to pay your child for doing chores, agree with your child on a reasonable written list of chores with an associated payment for each chore performed. Some recommend providing allowance without any strings attached. And others advocate a little of both. Our recommendation is to do your own research, talk to your child and try to find what works best for you and your family’s own individual circumstances. Involve your child in having a garage sale. Gather unneeded items from your basement, attic and garage. Have your child suggest items that can be put up for sale. They can help create the advertising posters, help to set the selling price of each item, help with the inevitable negotiating with the customers, and help count the money at the end of the day. It is important to have your child get in the habit of saving their money, not just spending it. A general rule of thumb for young kids is to save 50%, spend 40% and donate 10% of their money. As they get older, children may need to increase their savings rate to purchase a car or save for college tuition. Goal setting is one of the most valuable lessons that a child can learn. Discuss long-term strategies with your child for accomplishing their money goals including eliminating items from their list, postponing purchases, increasing income, etc. Assist your child in creating a money goal chart. Discuss other areas in your child’s life where long-term goal setting would be useful. Here’s a common dilemma that many parents have faced: You and your child are strolling down the aisle of the local store when your child sees the latest must-have toy that appeared in a recent online or TV ad. Your child says, “Mom, I want that?” You reply, “No, you can’t have it.” Your child screams, “But I want it!” You scream, “No, you can’t have it!” Your child screams, “But I want it!” You scream, “No, you can’t have it!” … repeat ad infinitum… Many parents have experienced this interaction. Here’s a suggestion to turn this into a teachable moment. Your child says, “Mom, I want that?” You reply, “Wow, that’s a cool toy!” Your child replies, “It is. And I want it.” You reply, “Is it something that you can afford?” Your child replies, “I don’t know.” You reply, “Let’s look at the budget we made.” (You take out the budget that the two of you prepared and review it with your child.) Your child replies, “It’s not on the budget. I didn’t know about it when we made the budget. It’s a new toy that just came out.” You reply, “No problem. Is it something that you really need or just something that you want?” Your child replies, “I really need it.” You reply, “Are you sure you need it? Maybe a friend of yours has one and can lend it to you?” Your child replies, “No, I need my own new one.” You reply, “Okay, do you want to take something off the budget and put this on instead or should we add it onto the budget? Your child replies, “Let’s add it onto the budget.” You reply, “No problem, now let’s figure out how you can afford it. 
We can’t buy it today but maybe, with hard work and good planning, we can figure out a way to buy it sometime in the future.” When you return home, you can discuss with your child that the goal of advertising is to make you want to buy things that you may not actually need. Discuss the concept of Opportunity Cost with your child. This is just another way of saying, “If you buy this, you won’t be able to buy that.” Discuss with your child how turning off the lights when you leave a room is part of being a wise spender. Discuss with your child that earning $10 and spending $9 is worse from a savings perspective than earning only $5 and spending $2. Remember to set a good example by recycling and reusing items. Help your child open up a savings account at a local bank. You can review the monthly statements with them, showing the increase in value. This can lead into a discussion of compound interest. Encourage your child to develop a regular savings plan. Decide how much of each allowance or chore money will be set aside. Reinforce the concept of “paying yourself first” by setting aside money for savings first. Suggest a goal of setting aside 50% of any income towards savings. Encourage your child to save as much as possible and set a savings goal. Set a target amount to save by a particular date. If your child meets that goal, you can add a certain amount such as, for every $100 they save, you will add $20. You can make a savings goal chart. Once your child knows what they want to save for, the two of you can make a chart. Use stickers to indicate that the current amount was put aside. Set a good example. Let your child see you save money. Be a good financial role model. Remember, your kids learn by watching you.
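For parents who like to make these ideas concrete, here is a small illustrative sketch of the arithmetic described above: the monthly money-in versus money-out comparison, the save/spend/donate split, and a savings-goal countdown. It is a hypothetical example, not part of any real budgeting tool, and the 50/40/10 percentages are simply the rule of thumb for young kids suggested earlier in this article.

# A small sketch of the budgeting arithmetic described above -- hypothetical helper.
def monthly_budget(money_coming_in, money_going_out):
    """A positive result means money left over for extra savings."""
    return money_coming_in - money_going_out

def split_money(amount, save_pct=50, spend_pct=40, donate_pct=10):
    """Divide money into save / spend / donate using the article's rule-of-thumb percentages."""
    assert save_pct + spend_pct + donate_pct == 100
    return {
        "save": amount * save_pct / 100,
        "spend": amount * spend_pct / 100,
        "donate": amount * donate_pct / 100,
    }

def weeks_to_goal(goal_amount, saved_so_far, saved_per_week):
    """How many more weeks of saving are needed to reach a savings goal."""
    remaining = max(goal_amount - saved_so_far, 0)
    if remaining == 0:
        return 0
    # Round up: a partial week of saving still counts as a week.
    return -(-remaining // saved_per_week)

if __name__ == "__main__":
    print(monthly_budget(20, 14))    # 6 dollars left over this month
    print(split_money(10))           # {'save': 5.0, 'spend': 4.0, 'donate': 1.0}
    print(weeks_to_goal(60, 15, 5))  # 9 more weeks of saving $5 per week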
<urn:uuid:485c1240-3eb6-44c5-b2dc-cb11f41ceb80>
CC-MAIN-2022-33
https://walterthevault.com/lets-learn-about-money/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00097.warc.gz
en
0.960584
4,585
3.3125
3
The Cassini-Huygens space-research mission (/ k ə ˈ s iː n i ˈ h ɔɪ ɡ ən z / kə-SEE-nee HOY-gənz), commonly called Cassini, involved a collaboration among NASA, the European Space Agency (ESA), and the Italian Space Agency (ASI) to send a space probe to study the planet Saturn and its system, including its rings and natural satellites.The Flagship-class robotic spacecraft comprised. A thrilling epoch in the exploration of our solar system came to a close today, as NASA's Cassini spacecraft made a fateful plunge into the atmosphere of Saturn, ending its 13-year tour of the ringed planet. This is the final chapter of an amazing mission, but it's also a new beginning, said Thomas Zurbuchen, associate administrator for NASA's Science Mission Directorate at NASA Headquarters. Arrival of Cassini Spacecraft at Saturn How long did it take Cassini to reach Saturn? What moon did NASA want to map? Date: _____ First Untethered Spacewalk Name of the first astronaut to be untethered in space: What did he wear? Date: _____ Hubble Space TeloscopeDeployed How heavy is the Hubble Teloscope? (1 ton = 2000 pounds!) What flaw did. The NASA-ESA Cassini spacecraft is en route to Saturn now, due to arrive in July. Cassini will orbit for four years, studying Saturn's rings, weather and magnetic field. Cassini will also drop a probe named Huygens through the thick orange clouds of Titan to discover what lies beneath. Titan is one of the most mysterious worlds in the solar system A joint endeavor of NASA, the European Space Agency, or ESA, and the Italian Space Agency, Cassini launched in 1997 along with ESA's Huygens probe. The spacecraft contributed to studies of Jupiter for six months in 2000 before reaching its destination, Saturn, in 2004 and starting a string of flybys of Saturn's moons Took 7 years for Cassini to reach Saturn What moon did NASA want to map? NASA wanted to map the moon Titan February 7, 1984 Date: _____ First Untethered Spacewalk April 25, 1990 Date: _____ Hubble Space Teloscope Deployed Name of the first astronaut to be untethered in space: Bruce McCandeles January 3, 2004 Spirit Rover Landing on Mars How. The Moon as Seen from Cassini. Image Credit: NASA/JPL/Space Science Institute. Published: October 4, 2017. This image was taken on Aug. 17, 1999 by NASA's Cassini spacecraft as it passed by the Moon during an Earth flyby while Cassini was en route to Saturn. ENLARGE The planet Saturn is seen in the first color composite made of images taken by NASA's Cassini spacecraft on its approach to the ringed planet, October 21, 2002. The probe's arrival is still 20. The spacecraft was launched in 1997 but did not arrive at the ringed planet until 12 years ago. Describing the mission, Julie Webster, NASA's operations chief of Cassini said: To keep the. In the 12 years since, Cassini has made 251 one orbits of Saturn, sweeping over its cloudtops, barnstorming its moons, even firing a probe into the atmosphere of the largest of those moons, Titan. Nasa scientists are preparing to kill off the Cassini space probe with a spectacular suicidal dive into Saturn's atmosphere on Friday. The 22ft robot craft will break into fragments and burn up as it ploughs into the ringed planet's cloud tops, ending a 20-year mission that cost £2.9 billion. Cassini was launched in 1997 and took seven. All were taken when Cassini was about 394,000 miles (634,000 kilometers) from Saturn, NASA officials said. The spacecraft burned up in a patch of Saturn sky at 9.4 degrees north latitude and 53. 
, which was not well known before the spacecraft arrived at the planet in 2004 Saturn's icy moon Dione, with giant Saturn and its rings in the background, was captured in this mosaic of images just prior the Cassini spacecraft's final close approach to the moon on August 17. . After launch in 1997, Cassini-Huygens performed gravity assist maneuvers at Venus in April 1998 and June 1999, Earth in August 1999, and then Jupiter in December 2000. Each gravity-assist gave the spacecraft an extra kick of velocity, so it could reach Saturn in just under 7 years of flight time NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this video showing just over four Saturn days. Starting in November Cassini will begin its. On September 15, 2017, the Cassini spacecraft will dive into Saturn, ending a 13-year tour of the ringed planet and its strange moons. Cassini arrived at Saturn in 2004, after a seven-year journey. The US$3.2-billion US-European spacecraft is set to arrive at Saturn on 1 July to begin a four-year, 74-orbit tour of the giant gas planet, its rings and its moons Cassini was launched in 1997 and arrived at Saturn in 2004. Last year, the spacecraft received a seven-year mission extension that will keep it operational through 2017. Follow SPACE.com for the latest in space science and exploration news on Twitter @Spacedotcom and on Facebook. The Rings and Moons of Saturn Cassini didn't stop at Saturn, either: The spacecraft pierced the thick smog of Saturn's largest moon, Titan, to discover lakes of methane and ethane, the only liquid known to exist on a planet. When NASA's Cassini spacecraft arrived at Saturn in 2004, it did in fact show that Titan is speckled with lakes and seas — although the liquid is ethane and methane, rather than water August 12, 2004. C assini bagged its first major discovery just one month into the start of the spacecraft's 4-year-long exploration of Saturn and its moons. NASA scientists announced last week. The spacecraft arrived to Saturn in 2004, marking the start of its historic 13-year mission studying the planet and its moons. In March 2013 Cassini made the last flyby of Saturn's moon Rhea. Cassini captured the ultraviolet glow from Saturn's aurora one day before the spacecraft crashed into the planet. The north pole lies at the center of this image, while the bottom faces the Sun Cassini departed Earth in 1997 and arrived at the solar system's second-largest planet in 2004. The European Huygens landed on Saturn's big moon Titan in 2005. Nothing from Earth has landed farther. NASA's Cassini spacecraft disintegrated in the skies above Saturn early on September 15 in a final, fateful blaze of cosmic glory, following Copy link. After nearly 13 years in orbit around Saturn and almost two decades in space NASA's Cassini spacecraft has ended its mission with a spectacular dive into the planet's atmosphere. NASA launched Cassini toward Saturn in 1997. The probe arrived in 2004 and has studied the planet, its rings of ice and dust, and its collection of mysterious moons ever since She joined the Cassini project in 1990, and since the spacecraft's arrival at Saturn in 2004 she's become the earthbound caretaker of Saturn's moon Titan Image copyright NASA/JPL-Caltech/SSI Image caption Cassini has been trying to weigh the rings . The spectacular rings of Saturn may be relatively young, perhaps just 100 million years or so old. 
This is the early interpretation of data gathered by the Cassini spacecraft on its final orbits of the giant world Despite two Titan flybys by NASA's Voyager probes in 1980 and 1981, we knew little about the moon's surface until the Cassini spacecraft arrived to orbit Saturn between 2005 and 2017. This is because sunlight causes the molecules of methane high in Titan's atmosphere to link into larger molecules , making a high-altitude smog that the. In the past spacecraft have taken greatly different amounts of time to make it to Saturn. Pioneer 11 took six and a half years to arrive. Voyager 1 took three years and two months, Voyager 2 took. The actual last image Cassini took of Saturn before its final plunge. This was taken on September 14, 2017 when the spacecraft was 634,000 kilometers above the cloud tops. Image Credit: NASA/JPL-Caltech/Space Science Institute. That monochrome shot was taken on September 14, 2017, less than a day before Cassini burned up NASA is set to end the Cassini spacecraft program by crashing it into Saturn on Friday, Sept. 15. one of Saturn's many moons, while Cassini continued a 13-year orbit mission around Saturn. Until Cassini, only three spacecraft had ventured into Saturn's neighborhood: NASA's Pioneer 11 in 1979 and Voyager 1 and 2 in the early 1980s. Those were just flybys, though, and offered. On July 1, 2004, the spacecraft arrived at Saturn, where Cassini shifted most of its focus to the moon Titan. Scientists hope that Titan will provide them with a window into the Earth's past Phoebe was the first target encountered upon the arrival of the Cassini spacecraft in the Saturn system in 2004, and is thus unusually well-studied for an irregular satellite of its size. Cassini's trajectory to Saturn and time of arrival were specifically chosen to permit this flyby Saturn Educator Guide. Successfully launched at 4:43 EDT on the morning of October 15, 1997, NASA's Cassini Mission to Saturn, is the most ambitious deep space mission ever. The Saturn Educator Guide enables this extraordinary mission to become a real-world motivational context for learning standards-based science in grades 5-8 On September 15, 2017, NASA's Cassini spacecraft will intentionally plunge into Saturn, preventing a future, accidental collision with the potentially habitable moons Enceladus and Titan. Cassini has been at Saturn for 13 years. After it is gone, Juno, which is orbiting Jupiter, will be the lone spacecraft exploring any of the outer four planets NASA's Cassini spacecraft, which will arrive at Saturn in July, could provide more definitive answers with radar that will better map the moon's surface and instruments that can detect what it is. Some 130 close encounters with Titan, Saturn's largest moon, were staged in such a way that Titan's gravity deflected the spacecraft towards close encounters with other moons. The most awe-inspiring legacy of the Cassini mission is its collection of almost 400,000 photos Image: Saturn's active, ocean-bearing moon Enceladus sinks behind the giant planet in a farewell portrait from NASA's Cassini spacecraft. This view of Enceladus was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back. Credit: NASA/JPL-Caltech/Space Science Institute This Sept. 13 image of Saturn's outer A ring, captured by NASA's Cassini spacecraft, shows the small moon Daphnis and the waves it raises in the edges of the Keeler Gap. 
(Image credits: NASA/JPL-Caltech/Space Science Institute)

Until Cassini, only three spacecraft had ventured into Saturn's neighborhood: NASA's Pioneer 11 in 1979 and Voyager 1 and 2 in the early 1980s. Those were flybys, though, offering only fleeting glances, so Cassini and its traveling companion, the Huygens (HOY'-gens) lander, provided the first sustained, close-up look at Saturn, its rings, and its moons.

Those earlier visitors set the stage. "Only by going there could the danger be properly assessed - and Pioneer was first," as NASA's 1980 account Pioneer, First to Jupiter, Saturn and Beyond put it. The twin Voyager spacecraft were launched in separate months in the summer of 1977 from Cape Canaveral, Florida, taking advantage of a rare alignment of Jupiter, Saturn, Uranus and Neptune that occurs once every 175 years to send probes on a "Grand Tour" of the outer planets; as originally designed, they were to conduct close-up studies of Jupiter and Saturn, Saturn's rings, and the larger moons of the two planets. Carl Sagan later persuaded NASA administrators to turn one of the Voyager probes around to take a last image of the solar system.

The Cassini-Huygens mission (kə-SEE-nee HOY-gənz), commonly called Cassini, was a collaboration between NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI) to study Saturn and its system, including its rings and natural satellites. The Flagship-class robotic spacecraft comprised two elements: the Cassini orbiter, which would circle Saturn for years studying the planet, its rings and its moons, and Huygens, an atmospheric probe that successfully landed on Saturn's largest moon, Titan. Cassini-Huygens launched on October 15, 1997, took seven years to reach Saturn, and arrived at the planet on July 1, 2004, flying into orbit from below the famous rings; on the way in it carefully crossed a large gap between two of the huge rings at speeds close to 87,000 kilometers an hour. The spacecraft then spent 13 years exploring the system of rings and moons, transmitting data back to scientists at NASA and the European Space Agency. When the Equinox mission ended in 2010, NASA extended the mission a further seven years, long enough for Cassini to see Saturn through its solstice.

Highlights of the tour include:
- Titan. When the Huygens probe descended to Titan's surface in 2005, the atmospheric profile measured by its instruments did not match the profile derived in 2003, and the probe showed that the theory of a global ocean was incorrect; the lakes of liquid hydrocarbon Cassini has seen are mostly confined to limited regions of the moon. Cassini later detected long-standing methane lakes in Titan's tropical areas, one of them about half the size of Utah's Great Salt Lake and at least one meter deep. Titan itself measures 3,200 miles (5,150 kilometers) across, larger than the planet Mercury, and Cassini captured natural-color views of the giant moon passing in front of the planet and its rings.
- The rings. Examining Saturn's contorted F ring, which had baffled scientists since its discovery, Cassini found one small body, possibly two, orbiting in the F ring region, along with a ring of material associated with Saturn's moon Atlas.
- The north polar hurricane. Cassini provided the first close-up, visible-light views of a behemoth hurricane swirling around Saturn's north pole. In high-resolution pictures and video, the hurricane's eye appears about 1,250 miles (2,000 kilometers) wide, 20 times larger than the average hurricane eye on Earth. The spinning vortex of the north polar storm was imaged on November 27, 2012, from a distance of approximately 261,000 miles (420,038 kilometers).
- Other moons. Cassini sent back images and data on Hyperion, perhaps the weirdest moon in the solar system, just 270 kilometers across and quite porous, and it detected a faint oxygen atmosphere around the icy moon Dione that is about 5 trillion times less dense than the air at Earth's surface.
- Saturn itself. NASA stitched together 141 Cassini snapshots to create a dramatic mosaic of Saturn, with Earth, Venus, and Mars appearing as tiny lights in the background, and over the course of the mission the spacecraft mapped more than 620,000 square miles (1.6 million square kilometers).

The 20-year journey ended on September 15, 2017, with a deliberate plunge into Saturn. A final, distant flyby of Titan on September 11, with closest approach at 12:04 p.m. PDT (3:04 p.m. EDT) at an altitude of 73,974 miles (119,049 kilometers), set the spacecraft on its terminal trajectory. A day before the end, from about 394,000 miles (634,000 kilometers) away, Cassini's visual and infrared mapping spectrometer captured an image showing where the probe would enter Saturn's atmosphere. Contact was lost at 12:55 BST (04:55 PDT) as the roughly $3.3 billion spacecraft plunged into the atmosphere and broke apart; the planned disposal prevented it from ever crashing into, and contaminating, the moons Titan or Enceladus.
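The mission figures above are quoted in both miles and kilometers. As a quick sanity check on those conversions (a throwaway sketch, not part of any mission software), the arithmetic can be reproduced in a few lines of Python:

```python
# Illustrative cross-check of the mile/kilometre figures quoted above.
KM_PER_MILE = 1.609344
SQKM_PER_SQMILE = KM_PER_MILE ** 2

figures = {
    "Final Titan flyby altitude (mi)": (73_974, 119_049),   # quoted km value
    "Distance of last IR image (mi)":  (394_000, 634_000),
    "North polar hurricane eye (mi)":  (1_250, 2_000),
    "Titan diameter (mi)":             (3_200, 5_150),
}

for label, (miles, quoted_km) in figures.items():
    km = miles * KM_PER_MILE
    print(f"{label}: {miles:,} mi ~ {km:,.0f} km (quoted: {quoted_km:,} km)")

# Area figure quoted above: more than 620,000 square miles (~1.6 million sq km).
print(f"620,000 sq mi ~ {620_000 * SQKM_PER_SQMILE:,.0f} sq km")
```

Each computed value lands within rounding distance of the kilometer figure quoted in the text, which is all the check is meant to show.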
<urn:uuid:b11b5346-ffc8-4fc0-92bd-d8b440130170>
CC-MAIN-2022-33
https://detruitgenieten.com/news/live/science-environment-41249243/page/2vswqu44842g-f
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00098.warc.gz
en
0.939832
4,339
3.640625
4
United States Olympic Committee
- Headquarters: Colorado Springs, Colorado
- President: Lawrence F. Probst III
- Secretary General: Scott Blackmun
- Also serves as: National Paralympic Committee (headquartered in Colorado Springs, Colorado)

Founded in 1894 and headquartered in Colorado Springs, the United States Olympic Committee (USOC) is the National Olympic Committee for the United States. In addition, the USOC is one of only four NOCs in the world that also serve as the National Paralympic Committee for their country. The USOC is responsible for supporting, entering and overseeing U.S. teams for the Olympic Games, Paralympic Games, Youth Olympic Games, Pan American Games, and Parapan American Games and serves as the steward of the Olympic and Paralympic Movements in the United States. The Olympic Movement is overseen by the International Olympic Committee. The IOC is supported by 35 International Federations that govern each sport on a global level, National Olympic Committees that oversee Olympic sport as a whole in each nation, and National Federations that administer each sport at the national level (called National Governing Bodies in the United States). Similarly, the National Paralympic Committee is the sole governing body responsible for the selection and training of all athletes participating in the Paralympic Games. The USOC is one of 204 NOCs and 174 NPCs within the international Olympic and Paralympic Movements. Forty-seven NGBs are members of the USOC. Fifteen of the NGBs also manage sports on the Paralympic program, while the USOC governs four Paralympic sports (cycling, skiing, swimming and track & field), and five additional Paralympic sports are governed by U.S. members of International Paralympic Federations (wheelchair basketball, boccia, goalball, powerlifting and wheelchair rugby). Unlike most other nations, the United States does not have a sports ministry. The USOC was reorganized by the Ted Stevens Olympic and Amateur Sports Act, originally enacted in 1978. It is a federally chartered nonprofit corporation and does not receive federal financial support (other than for select Paralympic military programs). Pursuant to the Act, the USOC has the exclusive right to use and authorize the use of Olympic-related marks, images and terminology in the United States. The USOC licenses that right to sponsors as a means of generating revenue in support of its mission.

Upon the founding of the International Olympic Committee in 1894, the two American IOC members – James Edward Sullivan and William Milligan Sloane – formed a committee to organize the participation of American athletes in the 1896 Summer Olympics, in Athens, Greece. In 1921, the committee adopted a constitution and bylaws to formally organize the American Olympic Association. From 1928 to 1953, its president was Avery Brundage, who later went on to become the president of the IOC, the only American to do so. In 1940, the AOA changed its name to the United States of America Sports Federation and, in 1945, changed it again to the United States Olympic Association. In 1950, federal mandate allowed the USOA to solicit tax-deductible contributions as a private, non-profit corporation. After several constitutional revisions were made to the federal charter in 1961, the name was changed to the United States Olympic Committee.
The Amateur Sports Act of 1978 established the USOC as the coordinating body for all Olympic-related athletic activity in the United States, specifically relating to international competition. The USOC was also given the responsibility of promoting and supporting physical fitness and public participation in athletic activities by encouraging developmental programs in its member organizations. The provisions protect individual athletes, and provide the USOC’s counsel and authority to oversee Olympic and Paralympic business in the United States. The public law not only protects the trademarks of the IOC and USOC, but also gives the USOC exclusive rights to the words "Olympic," "Olympiad" and "Citius, Altius, Fortius," as well as commercial use of Olympic and Paralympic marks and terminology in the United States, excluding American Samoa, Guam, Puerto Rico and the U.S. Virgin Islands, which fall under the authority of separate NOCs and NPCs. One of the many revolutionary elements contained within the legislation was the Paralympic Amendment – an initiative that fully integrated the Paralympic Movement into the USOC by Congressional mandate in 1998. U.S. Paralympics, a division of the USOC, was founded in 2001. In addition to selecting and managing the teams which compete for the United States in the Paralympic Games, U.S. Paralympics is also responsible for supporting Paralympic community and military sports programs around the country. In 2006, the USOC created the Paralympic Military Program with the goal of providing Paralympic sports as a part of the rehabilitation process for injured soldiers. Through the U.S. Olympic Committee Paralympic Military Program, USOC hosted the Warrior Games for wounded service personnel from 2010 to 2014, until the organization of the event was taken on by the Department of Defense in 2015. The USOC moved its headquarters from New York City to Colorado Springs on July 1, 1978. Thanks to the generous support of the City of Colorado Springs and its residents, the USOC headquarters moved to its present location in downtown Colorado Springs in April 2010, while the previous site (located just two miles away) remains a U.S. Olympic Training Center. After convening in 2010 the Working Group for Safe Training Environments, USOC formed the Safe Sport program to address child sexual abuse, bullying, hazing and harassment and emotional, physical and sexual misconduct within its domain. Several national law firms were enlisted "to aid ... National Governing Bodies ..., free of charge, in responding to claims of misconduct in sport". In February 2011 the USOC launched an anti-steroid campaign in conjunction with the Ad Council called "Play Asterisk Free" aimed at teens. The campaign first launched in 2008 under the name "Don't Be An Asterisk". The USOC is governed by a 16-member board of directors and a professional staff headed by a CEO. The USOC also has three constituent councils to serve as sources of opinion and advice to the board and USOC staff, including the Athletes’ Advisory Council, National Governing Bodies Council and Multi-Sport Organizations Council. The AAC and NGBC have three representatives on the board, while six members of the board are independent. USOC CEO Scott Blackmun and all American members of the IOC (Anita DeFrantz, James Easton and Angela Ruggiero) are ex officio members of the board. The USOC named Blackmun CEO on Jan. 6, 2010. 
Blackmun held a previous stint at the USOC, serving as acting chief executive officer (2001), senior managing director of sport (2000) and general counsel (1999). He also serves on the IOC's Marketing Commission and on the board of the National Foundation for Fitness, Sports and Nutrition. On Oct. 2, 2008, Lawrence F. Probst III was elected chairman of the USOC board of directors. Probst also serves on the IOC's International Relations Commission, a post he assumed by IOC appointment on March 10, 2011.

Presidents:
- David R. Francis (1904–1906)
- Frederic B. Pratt (1910–1912)
- Robert M. Thompson (1912–1920)
- Gustavus T. Kirby (1920–1924)
- Robert M. Thompson (1924–1926)
- William C. Prout (1926–1927)
- Henry G. Lapham (interim, 1927)
- Clifford H. Buck (interim, 1970)
- Clifford H. Buck (1970–1973)
- William E. Simon (1981–1985)
- John B. Kelly Jr. (1985)
- Robert H. Helmick (interim, 1985)
- Robert H. Helmick (1985–1991)
- Bill Hybl (interim, 1991–1992)
- LeRoy T. Walker (1992–1996)
- Marty Mankamyer (interim, 2002)
- William C. Martin (interim, 2003–2004)

Executive directors:
- J. Lyman Bingham (1950–1965)
- Arthur G. Lentz (1965–1973)
- F. Don Miller (1973–1985)
- George D. Miller (1985–1987)
- Baaron Pittenger (acting, 1987–1988)
- Jim Scherr (acting, 2003–2005)

National Governing Body Members
National Governing Bodies are organizations that look after all aspects of their individual sports. The NGBs are responsible for training, competition and development for their sports, as well as nominating athletes to the U.S. Olympic, Paralympic, Youth Olympic, Pan American and Parapan American Teams. There are currently 31 Olympic summer sport NGBs in the United States, as well as eight Olympic winter sport NGBs. The United States Olympic Committee is a 501(c)(3) not-for-profit corporation supported by American individuals and corporate sponsors. Unlike most other nations, the USOC does not receive direct government funding for Olympic programs (except for select Paralympic military programs). The USOC's main sources of revenue are television broadcast rights, sponsorships and philanthropy in the form of major gifts and direct mail income. Additional funding comes from the government for Paralympic programming, as well as other sources such as the city of Colorado Springs and the U.S. Olympic Foundation. The USOC asks for contributions from time to time using public service announcements and other direct solicitations. Also, some proceeds from sales in its online store benefit the committee. There has been some financial conflict between the USOC and the International Olympic Committee (IOC). Critics have pointed to the USOC's frequent leadership changes and to its attempt to broadcast the Olympics on its own television network, a plan the IOC discouraged. USOC president Peter Ueberroth allegedly stonewalled negotiations with the IOC over sharing revenue from U.S. broadcasts. The failure of the 2012 and 2016 US Olympic bids was partly blamed by some on the USOC. For instance, NBC television executive Dick Ebersol said after the failed 2016 bid, "This was the IOC membership saying to the USOC there will be no more domestic Olympics until you join the Olympic movement". The USOC has also been criticized for not providing equal funding to Paralympic athletes, compared to Olympic athletes. In 2003, a lawsuit was filed by American Paralympic athletes Tony Iniguez, Scot Hollonbeck and Jacob Heilveil. They alleged that the USOC was underfunding American Paralympic athletes.
Iniguez cited the fact that the USOC made health care benefits available to a smaller percentage of Paralympians, provided smaller quarterly training stipends and paid smaller financial awards for medals won at the Paralympics. American Paralympians saw this as a disadvantage for Paralympic athletes, as nations such as Canada and the United Kingdom support Paralympians and Olympians virtually equally. The USOC did not deny the discrepancy in funding, but contended that this was due to the fact that it did not receive any government financial support. As a result, it had to rely on revenue generated by the media exposure of its athletes. Olympic athletic success resulted in greater exposure for the USOC than Paralympic athletic achievements. The case was heard by lower courts, which ruled that the USOC has the right to allocate its finances to athletes at different rates. The case was appealed to the Supreme Court, which on September 6, 2008 announced that it would not hear the appeal. However, during the time the lawsuit lasted (from 2003 to 2008), the funding of Paralympic athletes more than tripled. In 2008, $11.4 million was earmarked for Paralympic athletes, up from $3 million in 2004. In the run-up to the 2012 Summer Olympics, it was discovered that the American uniforms for the Games' opening and closing ceremonies, designed by Ralph Lauren, were manufactured in China. This sparked criticism of the USOC from media pundits, the public and members of Congress.

The USOC operates Olympic Training Centers at which aspiring Olympians prepare for international competition:
- The main facility in Colorado Springs, Colorado offers both summer and winter sports training in a variety of sports. It houses the USOC headquarters and many permanent athletic venues.
- The ARCO Training Center in Chula Vista, California offers training in various summer sports. The largest facility there is a lake for canoeing and rowing.
- The U.S. Olympic Center in Lake Placid, New York is a facility for winter sports athletes. Permanent facilities include an ice hockey/figure skating arena, a bobsled run, and a luge run.

Although geared toward elite athlete training, these complexes are also open to the public and offer a variety of services, including tours and regular camps and competitions for various domestic and international sport programs. Additionally, the USOC partners with 16 elite training sites across the country to provide U.S. athletes with Olympic-caliber facilities that positively impact performance. Facilities with U.S. Olympic training site designation have invested millions of dollars in operating, staffing, equipment and training costs. These sites are often selected to host U.S. Olympic Team Trials and to help Team USA athletes prepare for the Olympic Games.

The USOC administers a number of awards and honors for individuals and teams who have significant achievements in Olympic and Paralympic sports, or who have made contributions to the Olympic and Paralympic movement in the U.S.
- USOC Athlete of the Year: Awards are given annually to the top overall male athlete, female athlete, Paralympic athlete, and team, from among the USOC's member organizations.
- USOC Coach of the Year: Awards are given annually to the top national, developmental, Paralympic, and volunteer coaches, and for achievement in sports science.
- U.S. Olympic Hall of Fame: The Hall of Fame honors Olympic and Paralympic athletes, teams, coaches, and others who have demonstrated extraordinary service to the U.S.
Olympic movement.
- U.S. Olympic Spirit Award: This award is given biennially to athletes demonstrating spirit, courage, and achievement at the Olympic and Paralympic Games.
- Jack Kelly Fair Play Award: Presented annually to an athlete, coach or official in recognition of an outstanding act of fair play and sportsmanship displayed during the past year.
- Rings of Gold Award: Awards are presented annually in honor of an individual and a program dedicated to helping children develop their Olympic or Paralympic dreams and reach their highest athletic and personal potential.
- Olympic Torch Award: Presented annually to an individual who has positively impacted the Olympic Movement and has contributed to promoting the Olympic Ideals throughout the U.S.

The USOC generates support from two principal types of Olympic sponsorship: worldwide and domestic. Each level of sponsorship grants companies different marketing rights and offers exclusive use of designated Olympic and Team USA images and marks. Under the domestic sponsorship program, the USOC also has special partnerships with various licensees, suppliers and outfitters that provide vital services and products to support Team USA. Across all levels of sponsorship, the USOC is committed to preserving the values of the Olympic properties and protecting the exclusive rights of Olympic sponsors. Created by the International Olympic Committee in 1985, the Olympic Partners (TOP) program is the highest level of Olympic sponsorship, granting exclusive worldwide marketing rights to the Olympic Games and Winter Olympic Games. Managed by the IOC, the TOP program supports the OCOGs, NOCs and the IOC. Operating on a four-year term in line with each Olympic quadrennium, the TOP program features approximately 10 worldwide Olympic Partners, with each receiving exclusive global marketing rights within a designated product or service category. The Olympic Games domestic sponsorship program grants marketing rights within the host country or territory only. Under the direction of the IOC, the USOC manages the domestic program within the United States. Like the worldwide TOP program, the domestic sponsorship program operates on the principle of product-category exclusivity. Approximately 20 corporations currently participate in the U.S. domestic sponsorship program, which enables the USOC to deliver increased funding and equitable distribution to National Governing Bodies. The establishment of these long-term domestic partnerships helps generate independent financial stability for American athletes while ensuring the viability of Team USA on the international stage. The USOC has granted licensing rights to nearly three dozen companies to manufacture and distribute official licensed products, which convey the rich history of American culture and commemorate the Olympic Movement. These companies are referred to as licensees and pay a royalty for each item sold bearing any related Olympic, USOC or Team USA marks. NBC Universal has held the American broadcasting rights of the Olympic Games since 1988, and the broadcasting rights to the Olympic Winter Games since 2002. In 2011, NBC agreed to a $4.38 billion contract with the IOC to broadcast the 2014, 2016, 2018, and 2020 Games. On May 7, 2014, NBC agreed to a $7.75 billion contract with the IOC to broadcast the 2022, 2024, 2026, 2028, 2030, and 2032 Games.
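The two NBC rights deals quoted above lend themselves to a rough back-of-the-envelope comparison. The sketch below simply divides each contract's total by the number of Games it covers; this even split is an illustration only, since the article does not give the actual per-Games fees.

```python
# Rough per-Games cost implied by the two NBC broadcast-rights deals above.
# The even split across Games is an assumption for illustration only.
contracts = {
    "2011 deal (2014-2020, 4 Games)": (4.38e9, 4),
    "2014 deal (2022-2032, 6 Games)": (7.75e9, 6),
}

for name, (total_usd, games) in contracts.items():
    per_games = total_usd / games
    print(f"{name}: about ${per_games / 1e9:.2f} billion per Games")
```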
The IOC distributes Olympic broadcast revenue through Olympic Solidarity – the body responsible for managing and administering the share of the television rights of the Olympic Games. Under the current format, the revenue is allocated to the NOCs – including the USOC – the local organizing committee and International Federations. In 2009, the USOC and Comcast announced plans for The U.S. Olympic Network, which would have aired Olympic-sports events, news, and classic footage. However, the USOC met opposition from the International Olympic Committee, which preferred to deal with NBCU (and its then-new Universal Sports joint venture). Since then, Comcast has purchased a majority share of NBCU. Meanwhile, there has been no news about this network since mid-2009 and the status of the concept is uncertain; however, it may be merged with Universal Sports now that the two are co-owned.

Relationship between IOC and USOC
In May 2012, USOC leaders negotiated a resolution with the IOC, addressing a decades-long revenue sharing debate and paving the way for a peaceful future between the two bodies. The new agreement elevates the USOC's global perception and restructures how worldwide Olympic sponsorship and U.S. TV revenues are shared, while providing for USOC contributions to Olympic Games costs. The agreement, revising 27-year-old terms governing the USOC's shares of worldwide Olympic sponsorship and U.S. broadcast rights revenue, preserves the USOC's future revenue at current levels and includes an escalator for inflation. Under the terms of the new agreement, the USOC is guaranteed seven percent of the U.S. broadcast revenue and 10 percent of the IOC's global sponsorship revenue. The agreement guarantees the USOC approximately $410 million per quadrennium, plus inflation and a percentage of revenue from new growth areas, beginning in 2020.

See also:
- United States at the Olympics
- United States at the Paralympics
- United States at the Pan American Games
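To make the revenue-sharing terms described above concrete, here is a minimal sketch that applies the quoted percentages (7 percent of U.S. broadcast revenue, 10 percent of IOC global sponsorship revenue) to hypothetical revenue totals. The input figures are assumptions for illustration only, since the article does not state the underlying revenue amounts.

```python
# Hypothetical inputs -- the article states the percentages but not the totals.
us_broadcast_revenue = 4.5e9    # assumed U.S. broadcast revenue per quadrennium, USD
ioc_global_sponsorship = 1.0e9  # assumed IOC global sponsorship revenue per quadrennium, USD

usoc_share = 0.07 * us_broadcast_revenue + 0.10 * ioc_global_sponsorship
guaranteed_floor = 410e6        # roughly $410 million per quadrennium, per the 2012 agreement

print(f"USOC share under these assumptions: ${usoc_share / 1e6:,.0f} million per quadrennium")
print(f"Guaranteed floor from the 2012 agreement: ${guaranteed_floor / 1e6:,.0f} million")
```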
<urn:uuid:0c9c2664-0d6f-4b2e-982e-7a70ad75208a>
CC-MAIN-2022-33
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/United_States_Olympic_Committee.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00097.warc.gz
en
0.929795
5,451
2.6875
3
Who gets constipated? Constipation is one of the most common gastrointestinal complaints in the United States. More than 4 million Americans have frequent constipation, accounting for 2.5 million physician visits a year. Those reporting constipation most often are women and adults ages 65 and older. Pregnant women may have constipation, and it is a common problem following childbirth or surgery. Self-treatment of constipation with over-the-counter (OTC) laxatives is by far the most common aid. Around $725 million is spent on laxative products each year in America. What causes constipation? To understand constipation, it helps to know how the colon, or large intestine, works. As food moves through the colon, the colon absorbs water from the food while it forms waste products, or stool. Muscle contractions in the colon then push the stool toward the rectum. By the time stool reaches the rectum it is solid, because most of the water has been absorbed. Constipation occurs when the colon absorbs too much water or if the colon’s muscle contractions are slow or sluggish, causing the stool to move through the colon too slowly. As a result, stools can become hard and dry. Common causes of constipation are - not enough fiber in the diet - lack of physical activity (especially in the elderly) - irritable bowel syndrome - changes in life or routine such as pregnancy, aging, and travel - abuse of laxatives - ignoring the urge to have a bowel movement - specific diseases or conditions, such as stroke (most common) - problems with the colon and rectum - problems with intestinal function (chronic idiopathic constipation) Not Enough Fiber in the Diet People who eat a high-fiber diet are less likely to become constipated. The most common causes of constipation are a diet low in fiber or a diet high in fats, such as cheese, eggs, and meats. Fiber—both soluble and insoluble—is the part of fruits, vegetables, and grains that the body cannot digest. Soluble fiber dissolves easily in water and takes on a soft, gel-like texture in the intestines. Insoluble fiber passes through the intestines almost unchanged. The bulk and soft texture of fiber help prevent hard, dry stools that are difficult to pass. Americans eat an average of 5 to 14 grams of fiber daily, which is short of the 20 to 35 grams recommended by the American Dietetic Association. Both children and adults often eat too many refined and processed foods from which the natural fiber has been removed. A low-fiber diet also plays a key role in constipation among older adults, who may lose interest in eating and choose foods that are quick to make or buy, such as fast foods, or prepared foods, both of which are usually low in fiber. Also, difficulties with chewing or swallowing may cause older people to eat soft foods that are processed and low in fiber. National Center for Health Statistics. Dietary Intake of Macronutrients, Micronutrients, and Other Dietary Constituents: United States, 1988–94. Vital and Health Statistics, Series 11, Number 245. July 2002. Not Enough Liquids Research shows that although increased fluid intake does not necessarily help relieve constipation, many people report some relief from their constipation if they drink fluids such as water and juice and avoid dehydration. Liquids add fluid to the colon and bulk to stools, making bowel movements softer and easier to pass. People who have problems with constipation should try to drink liquids every day. 
However, liquids that contain caffeine, such as coffee and cola drinks will worsen one’s symptoms by causing dehydration. Alcohol is another beverage that causes dehydration. It is important to drink fluids that hydrate the body, especially when consuming caffeine containing drinks or alcoholic beverages. Lack of Physical Activity A lack of physical activity can lead to constipation, although doctors do not know precisely why. For example, constipation often occurs after an accident or during an illness when one must stay in bed and cannot exercise. Lack of physical activity is thought to be one of the reasons constipation is common in older people. Some medications can cause constipation, including - pain medications (especially narcotics) - antacids that contain aluminum and calcium - blood pressure medications (calcium channel blockers) - antiparkinson drugs - iron supplements Changes in Life or Routine During pregnancy, women may be constipated because of hormonal changes or because the uterus compresses the intestine. Aging may also affect bowel regularity, because a slower metabolism results in less intestinal activity and muscle tone. In addition, people often become constipated when traveling, because their normal diet and daily routine are disrupted. Abuse of Laxatives The common belief that people must have a daily bowel movement has led to self-medicating with OTC laxative products. Although people may feel relief when they use laxatives, typically they must increase the dose over time because the body grows reliant on laxatives in order to have a bowel movement. As a result, laxatives may become habit-forming. Ignoring the Urge to Have a Bowel Movement People who ignore the urge to have a bowel movement may eventually stop feeling the need to have one, which can lead to constipation. Some people delay having a bowel movement because they do not want to use toilets outside the home. Others ignore the urge because of emotional stress or because they are too busy. Children may postpone having a bowel movement because of stressful toilet training or because they do not want to interrupt their play. Diseases that cause constipation include neurological disorders, metabolic and endocrine disorders, and systemic conditions that affect organ systems. These disorders can slow the movement of stool through the colon, rectum, or anus. Conditions that can cause constipation are found below. - Neurological disorders - multiple sclerosis - Parkinson’s disease - chronic idiopathic intestinal pseudo-obstruction - spinal cord injuries - Metabolic and endocrine conditions - poor glycemic control - Systemic disorders Problems with the Colon and Rectum Intestinal obstruction, scar tissue—also called adhesions—diverticulosis, tumors, colorectal stricture, Hirschsprung’s disease, or cancer can compress, squeeze, or narrow the intestine and rectum and cause constipation. Problems with Intestinal Function The two types of constipation are idiopathic constipation and functional constipation. Irritable bowel syndrome (IBS) with predominant symptoms of constipation is categorized separately. Idiopathic—of unknown origin—constipation does not respond to standard treatment. Functional constipation means that the bowel is healthy but not working properly. Functional constipation is often the result of poor dietary habits and lifestyle. It occurs in both children and adults and is most common in women. Colonic inertia, delayed transit, and pelvic floor dysfunction are three types of functional constipation. 
Colonic inertia and delayed transit are caused by a decrease in muscle activity in the colon. These syndromes may affect the entire colon or may be confined to the lower, or sigmoid, colon. Pelvic floor dysfunction is caused by a weakness of the muscles in the pelvis surrounding the anus and rectum. However, because this group of muscles is voluntarily controlled to some extent, biofeedback training is somewhat successful in retraining the muscles to function normally and improving the ability to have a bowel movement. Functional constipation that stems from problems in the structure of the anus and rectum is known as anorectal dysfunction, or anismus. These abnormalities result in an inability to relax the rectal and anal muscles that allow stool to exit. People with IBS having predominantly constipation also have pain and bloating as part of their symptoms. How is the cause of constipation identified? The tests our doctor performs depend on the duration and severity of the constipation, the person’s age, and whether blood in stools, recent changes in bowel habits, or weight loss have occurred. Most people with constipation do not need extensive testing and can be treated with changes in diet and exercise. For example, in young people with mild symptoms, a medical history and physical exam may be all that is needed for diagnosis and treatment. The doctor may ask a patient to describe his or her constipation, including duration of symptoms, frequency of bowel movements, consistency of stools, presence of blood in the stool, and toilet habits—how often and where one has bowel movements. A record of eating habits, medication, and level of physical activity will also help our doctor determine the cause of constipation. The clinical definition of constipation is having any two of the following symptoms for at least 12 weeks—not always consecutive—in the previous 12 months: - straining during bowel movements - lumpy or hard stool - sensation of incomplete evacuation - sensation of anorectal blockage/obstruction - fewer than three bowel movements per week A physical exam may include a rectal exam with a gloved, lubricated finger to evaluate the tone of the muscle that closes off the anus—also called anal sphincter—and to detect tenderness, obstruction, or blood. In some cases, blood and thyroid tests may be necessary to look for thyroid disease and serum calcium or to rule out inflammatory, metabolic, and other disorders. Extensive testing usually is reserved for people with severe symptoms, for those with sudden changes in the number and consistency of bowel movements or blood in the stool, and older adults. Additional tests that may be used to evaluate constipation include - a colorectal transit study - anorectal function tests - a defecography Because of an increased risk of colorectal cancer in older adults, our doctor may use tests to rule out a diagnosis of cancer, including a - barium enema x ray - sigmoidoscopy or colonoscopy Colorectal transit study. This test shows how well food moves through the colon. The patient swallows capsules containing small markers that are visible on an x ray. The movement of the markers through the colon is monitored by abdominal x rays taken several times 3 to 7 days after the capsule is swallowed. The patient eats a high-fiber diet during the course of this test. Anorectal function tests. These tests diagnose constipation caused by abnormal functioning of the anus or rectum—also called anorectal function. 
- Anorectal manometry evaluates anal sphincter muscle function. For this test, a catheter or air-filled balloon is inserted into the anus and slowly pulled back through the sphincter muscle to measure muscle tone and contractions. - Balloon expulsion tests consist of filling a balloon with varying amounts of water after it has been rectally inserted. Then the patient is asked to expel the balloon. The inability to expel a balloon filled with less than 150 mL of water may indicate a decrease in bowel function. Defecography is an x ray of the anorectal area that evaluates completeness of stool elimination, identifies anorectal abnormalities, and evaluates rectal muscle contractions and relaxation. During the exam, our doctor fills the rectum with a soft paste that is the same consistency as stool. The patient sits on a toilet positioned inside an x-ray machine, then relaxes and squeezes the anus to expel the paste. The doctor studies the x rays for anorectal problems that occurred as the paste was expelled. Barium enema x ray. This exam involves viewing the rectum, colon, and lower part of the small intestine to locate problems. This part of the digestive tract is known as the bowel. This test may show intestinal obstruction and Hirschsprung’s disease, which is a lack of nerves within the colon. The night before the test, bowel cleansing, also called bowel prep, is necessary to clear the lower digestive tract. The patient drinks a special liquid to flush out the bowel. A clean bowel is important, because even a small amount of stool in the colon can hide details and result in an incomplete exam. Because the colon does not show up well on x rays, our doctor fills it with barium, a chalky liquid that makes the area visible. Once the mixture coats the inside of the colon and rectum, x rays are taken that show their shape and condition. The patient may feel some abdominal cramping when the barium fills the colon but usually feels little discomfort after the procedure. Stools may be white in color for a few days after the exam. Sigmoidoscopy or colonoscopy. An examination of the rectum and lower, or sigmoid, colon is called a sigmoidoscopy. An examination of the rectum and entire colon is called a colonoscopy. The person usually has a liquid dinner the night before a colonoscopy or sigmoidoscopy and takes an enema early the next morning. An enema an hour before the test may also be necessary. To perform a sigmoidoscopy, our doctor uses a long, flexible tube with a light on the end, called a sigmoidoscope, to view the rectum and lower colon. The patient is lightly sedated before the exam. First, our doctor examines the rectum with a gloved, lubricated finger. Then, the sigmoidoscope is inserted through the anus into the rectum and lower colon. The procedure may cause abdominal pressure and a mild sensation of wanting to move the bowels. The doctor may fill the colon with air to get a better view. The air can cause mild cramping. To perform a colonoscopy, our doctor uses a flexible tube with a light on the end, called a colonoscope, to view the entire colon. This tube is longer than a sigmoidoscope. During the exam, the patient lies on his or her side, and our doctor inserts the tube through the anus and rectum into the colon. If an abnormality is seen, our doctor can use the colonoscope to remove a small piece of tissue for examination (biopsy). The patient may feel gassy and bloated after the procedure. How is constipation treated? 
Although treatment depends on the cause, severity, and duration of the constipation, in most cases dietary and lifestyle changes will help relieve symptoms and help prevent them from recurring. A diet with enough fiber (20 to 35 grams each day) helps the body form soft, bulky stool. A doctor or dietitian can help plan an appropriate diet. High-fiber foods include beans, whole grains and bran cereals, fresh fruits, and vegetables such as asparagus, brussels sprouts, cabbage, and carrots. For people prone to constipation, limiting foods that have little or no fiber, such as ice cream, cheese, meat, and processed foods, is also important. Other changes that may help treat and prevent constipation include drinking enough water and other liquids, such as fruit and vegetable juices and clear soups, so as not to become dehydrated, engaging in daily exercise, and reserving enough time to have a bowel movement. In addition, the urge to have a bowel movement should not be ignored. Most people who are mildly constipated do not need laxatives. However, for those who have made diet and lifestyle changes and are still constipated, a doctor may recommend laxatives or enemas for a limited time. These treatments can help retrain a chronically sluggish bowel. For children, short-term treatment with laxatives, along with retraining to establish regular bowel habits, helps prevent constipation. A doctor should determine when a patient needs a laxative and which form is best. Laxatives taken by mouth are available in liquid, tablet, gum powder, and granule forms. They work in various ways: - Bulk-forming laxatives generally are considered the safest, but they can interfere with absorption of some medicines. These laxatives, also known as fiber supplements, are taken with water. They absorb water in the intestine and make the stool softer. Brand names include Metamucil, Fiberall, Citrucel, Konsyl, and Serutan. These agents must be taken with water or they can cause obstruction. Many people also report no relief after taking bulking agents and suffer from a worsening in bloating and abdominal pain. - Stimulants cause rhythmic muscle contractions in the intestines. Brand names include Correctol, Dulcolax, Purge, and Senokot. Studies suggest that phenolphthalein, an ingredient in some stimulant laxatives, might increase a person’s risk for cancer. The Food and Drug Administration has proposed a ban on all over-the-counter products containing phenolphthalein. Most laxative makers have replaced, or plan to replace, phenolphthalein with a safer ingredient. - Osmotics cause fluids to flow in a special way through the colon, resulting in bowel distention. This class of drugs is useful for people with idiopathic constipation. Brand names include Cephulac, Sorbitol, and Miralax. People with diabetes should be monitored for electrolyte imbalances. - Stool softeners moisten the stool and prevent dehydration. These laxatives are often recommended after childbirth or surgery. Brand names include Colace and Surfak. These products are suggested for people who should avoid straining in order to pass a bowel movement. The prolonged use of this class of drugs may result in an electrolyte imbalance. - Lubricants grease the stool, enabling it to move through the intestine more easily. Mineral oil is the most common example. Brand names include Fleet and Zymenol. Lubricants typically stimulate a bowel movement within 8 hours. - Saline laxatives act like a sponge to draw water into the colon for easier passage of stool. 
Brand names include Milk of Magnesia and Haley’s M-O. Saline laxatives are used to treat acute constipation if there is no indication of bowel obstruction. Electrolyte imbalances have been reported with extended use, especially in small children and people with renal deficiency. - Chloride channel activators increase intestinal fluid and motility to help stool pass, thereby reducing the symptoms of constipation. One such agent is Amitiza, which has been shown to be safely used for up to 6 to 12 months. Thereafter a doctor should assess the need for continued use. People who are dependent on laxatives need to slowly stop using them. A doctor can assist in this process. For most people, stopping laxatives restores the colon’s natural ability to contract. Treatment for constipation may be directed at a specific cause. For example, our doctor may recommend discontinuing medication or performing surgery to correct an anorectal problem such as rectal prolapse, a condition in which the lower portion of the colon turns inside out. People with chronic constipation caused by anorectal dysfunction can use biofeedback to retrain the muscles that control bowel movements. Biofeedback involves using a sensor to monitor muscle activity, which is displayed on a computer screen, allowing for an accurate assessment of body functions. A health care professional uses this information to help the patient learn how to retrain these muscles. Surgical removal of the colon may be an option for people with severe symptoms caused by colonic inertia. However, the benefits of this surgery must be weighed against possible complications, which include abdominal pain and diarrhea. Can constipation be serious? Sometimes constipation can lead to complications. These complications include hemorrhoids, caused by straining to have a bowel movement, or anal fissures—tears in the skin around the anus—caused when hard stool stretches the sphincter muscle. As a result, rectal bleeding may occur, appearing as bright red streaks on the surface of the stool. Treatment for hemorrhoids may include warm tub baths, ice packs, and application of a special cream to the affected area. Treatment for anal fissures may include stretching the sphincter muscle or surgically removing the tissue or skin in the affected area. Sometimes straining causes a small amount of intestinal lining to push out from the anal opening. This condition, known as rectal prolapse, may lead to secretion of mucus from the anus. Usually eliminating the cause of the prolapse, such as straining or coughing, is the only treatment necessary. Severe or chronic prolapse requires surgery to strengthen and tighten the anal sphincter muscle or to repair the prolapsed lining. Constipation may also cause hard stool to pack the intestine and rectum so tightly that the normal pushing action of the colon is not enough to expel the stool. This condition, called fecal impaction, occurs most often in children and older adults. An impaction can be softened with mineral oil taken by mouth and by an enema. After softening the impaction, the doctor may break up and remove part of the hardened stool by inserting one or two fingers into the anus.
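As a recap of the clinical definition given in the diagnosis section above (any two of the listed symptoms for at least 12, not necessarily consecutive, weeks in the previous 12 months), the rule can be written down as a short check. This is an illustrative sketch only, with made-up field names, and is not a diagnostic tool.

```python
# Illustrative sketch of the "any two symptoms for at least 12 weeks" rule
# described in the diagnosis section. Not a clinical tool.
DEFINING_SYMPTOMS = {
    "straining during bowel movements",
    "lumpy or hard stool",
    "sensation of incomplete evacuation",
    "sensation of anorectal blockage/obstruction",
    "fewer than three bowel movements per week",
}

def meets_clinical_definition(reported_symptoms: set, symptom_weeks: int) -> bool:
    """True if at least two listed symptoms were present for at least 12
    (not necessarily consecutive) weeks in the previous 12 months."""
    return symptom_weeks >= 12 and len(reported_symptoms & DEFINING_SYMPTOMS) >= 2

# Example: two qualifying symptoms reported over 14 weeks in the past year.
print(meets_clinical_definition(
    {"straining during bowel movements", "lumpy or hard stool"},
    symptom_weeks=14,
))  # prints True
```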
<urn:uuid:ba4206c7-f1fa-45e8-b15b-333d10b446f6>
CC-MAIN-2022-33
https://yourgicenter.com/constipation/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00098.warc.gz
en
0.935907
4,519
3.421875
3
IN THIS GUIDE - How can we help butterflies? - 1) Don’t despair - 2) Make your garden butterfly-friendly - 3) Create “nectar bars” for passing pollinators - 4) Tell your neighbours! - 5) Record the butterflies that visit - 6) Volunteer for butterfly counts - 7) Teach people about butterflies - 8) Rear caterpillars - 9) Support landscape-scale conservation projects - 10) Support reintroductions - 11) Support the implementation of government policy - 12) Donate to charities and organisations supporting butterflies - What types of butterfly do we have in Britain? - What is the difference between butterflies and moths? - Why do we need butterflies? - What are the risks to British butterflies? - Falling numbers - Destruction of habitats - Climate change Butterflies are one of nature’s most enchanting creatures. They are colourful, delicate, and graceful. But, sadly, almost all butterfly populations are currently in decline. There are various factors impacting them all at once, and they are struggling to adapt fast enough. [source] This guide introduces the butterflies we have in Britain, as well as the threats they are currently facing. There are also twelve ways that you can get involved in helping butterflies to flourish. After reading you’ll have a solid understanding of the issues facing butterflies, and the motivation and knowledge to help! How can we help butterflies? There are glimmers of hope, and these are what we must hold on to. There is widespread – and increasing – awareness of the issue, with several organisations leading the charge. Some populations have bounced back after conservation efforts were implemented. [source] Things are moving – and can continue to move – in the right direction. Here are 12 steps you can take to help that happen: 1) Don’t despair The constant barrage of things to be concerned about can be daunting. But it is vital to hold onto the fact that things are changing. People are becoming more aware, and are demanding action as a result. The next generation have already demanded action from governments and grown-ups. By redirecting panic and despair into constructive action – however small – contributes to the solution rather than wallowing in the problem. It is important to bear this quote in mind, from the Butterfly Conservation charity: “The declines of several threatened species appear to have been halted, and a range of […] species have become more abundant and widespread.” 2) Make your garden butterfly-friendly Butterflies will stop off at any garden where they can get a supply of tasty nectar, and gardens are especially important pit-stops between habitats and other green areas. By planting the right plants you can attract butterflies and other pollinators to your garden where they can recharge, hang out, and maybe even breed. [source] Here are some pointers for making your garden butterfly-friendly, taken in part from a guide by Butterfly Conservation: - Plant nectar plants in sunny, sheltered spots. Butterflies like warmth! - Plant a variety of flowers to appeal to a wider selection of butterflies. You’ve got a big list to choose from, including names like balkan clary, cosmos, French marigold, gayfeather, giant hyssop, calendula, lambs ears, catmint, coneflower, dahlia, shasta daisy, wild marjoram, and many more. - Choose plants that flower throughout the year, to attract butterflies in all seasons. - Plant nasturtiums near brassicas if you grow vegetables: This will lure caterpillars away from your crops, and give them somewhere safe to grow. 
- Choose other plants that caterpillars like, to give them extra protection. Honeysuckle, jasmine, hop, and clematis are popular contenders. - Plant stinging nettles to attract certain butterfly types, and plant them in sunken containers to prevent them spreading across your entire garden. - Use organic compost and keep plants watered, so that they thrive for as long as possible. (Avoid peat compost: This is taken from ecosystems where butterflies are in decline, and which should be left alone!) More information can be found in the Gardening for Butterflies leaflet, which has some fantastic ideas. Wildflower also offer a specially selected seed selection for attracting pollinators. 3) Create “nectar bars” for passing pollinators You don’t have to have a garden to be able to help butterflies and other pollinators. A window box will do (or a balcony, roof terrace, front porch, or similar). Putting a nectar bar somewhere on your property is the butterfly equivalent of having a service station on a motorway: It gives them the opportunity to rest and refuel on their journey to somewhere more suited to their needs. All you need to do is to set up a window box or similar sized flowerbed, choose the right flowers, and plant them in the right configuration. Here are some pointers: - Plant low-growing plants at the front. This includes things like scabious, cranesbill, and thyme. - Plant medium height plants in the middle. Lavender, phlox, and wallflower are ideal candidates. - Larger plants should go at the back. Hebe, sunflower, and purpletop will do the trick. This configuration will make it as easy as possible for passing butterflies to identify flowers they like, and will increase the likelihood of them visiting. You can use those guidelines when building butterfly-friendly flowerbeds in your garden, too. 4) Tell your neighbours! Helping butterflies and other insects to get between areas with no plants, pollen, or nectar is important in helping them to flourish. If your nectar bar is one of many on your street, butterflies will have more places to stop and refuel on their journeys. This has the combined benefit of making your street more attractive and colourful, and sparking conversation amongst your neighbours. 5) Record the butterflies that visit Identifying the butterflies that come to your garden and recording their visit is a great way to contribute to the data being collected about their populations which, as we’ve seen, is used to plan and inform all sorts of conservation efforts. iRecord automatically submits sighting information once you have confirmed one, and attaches your location via GPS to make your contribution as useful as possible. 6) Volunteer for butterfly counts According to the State of the Union butterfly report, tens of thousands of volunteers across 2,000 locations have contributed almost 3 million butterfly distribution records (as of 2015). This data is vital in understanding changing butterfly populations, and volunteers are always required. Butterfly habits and numbers are an early indicator of how other animals will respond to changing conditions, so a clear idea of their numbers is super important. On counts, you’ll either be counting butterflies, egg numbers, or larval nests in an area. 7) Teach people about butterflies This may sound simple, but a lot of people aren’t aware of the enormous variety of butterflies we have in the UK, or the role they play. Show kids butterflies, caterpillars, and chrysalises. 
Try to demonstrate the wonder of seeing the transformation between the stages of life. Part of the problem that leads to declining animal populations is a disconnect between humans and nature, which leads to disinterest (also called the Nature Deficit). [source] Some weird butterfly facts that are sure to pique the interest of even the most stubbornly anti-nature child (or adult!): - They transform from caterpillars to butterflies: Two creatures that look completely different. - During their transformation, they turn completely into goo and reassemble themselves from scratch. - The ‘powder’ on butterflies’ wings is actually tiny scales, which are made from waste products of their bodily processes. These scales regenerate over time. [source] - They taste through their feet! [source] Tapping into this is important for the success of future conservation efforts. 8) Rear caterpillars The best way to convey the wonder of butterflies’ transformation is to see it in real time. By rearing caterpillars in your home, you can do just that. Munching Caterpillars have created a guide to rearing caterpillars, and it gives step-by-step instructions to properly house and feed them. All you need is a plastic pot with a mesh lid, some peat-free compost, and a cool, dark place to keep their temporary home. When your caterpillars have fed they will turn into a chrysalis, and after a while they will hatch into butterflies. 9) Support landscape-scale conservation projects Such projects, managed by organisations like Butterfly Conservation, are vital to countering declining butterfly numbers. Sites must be chosen and managed carefully by a project manager with experience and an understanding of the situation. The entire process is informed by data, partly provided by butterfly counts and similar. 10) Support reintroductions The large blue butterfly went extinct in the UK, but was successfully reintroduced from a Scandinavian population. [source] They are now considered critically endangered, but make a strong case for the reintroduction of animal populations. 11) Support the implementation of government policy Individual action is great, but for butterflies to truly be protected, the government must implement environmental policies that prioritise their wellbeing. Butterfly Conservation [source] identify several policy points that they feel must be implemented for meaningful change to occur: - Maintain and restore high quality, resilient habitats through landscape-scale projects. - Restore the species-focussed approach that has proved effective in reversing the decline of threatened species. While an integrated ‘ecosystem services’ view of biodiversity is important, it alone will not save threatened butterflies. - Enhance funding for agri-environment and woodland management schemes targeted at species and habitats of conservation priority. - Restore the wider landscape for biodiversity in both rural and urban areas, to strengthen ecosystems and benefit the economy and human welfare. - Encourage public engagement through citizen science schemes such as the BNM, UKBMS and Big Butterfly Count. - Increase the use (and monitoring) of landscape-scale projects for threatened wildlife and ensure that funding mechanisms are in place to support them (e.g. landfill tax credits) Supporting organisations with such efforts lends your voice to the cause, and financial contributions give them the power to work harder. 
12) Donate to charities and organisations supporting butterflies
If you want to donate financially, here are a few ways to do it:
- Donate to Butterfly Conservation, whose latest appeal is to save the Duke of Burgundy.
- Donate to the World Wildlife Fund (WWF), who are running a campaign to save the monarch butterfly.
- Donate to the Bee and Butterfly Fund, and support the restoration of pollinator habitats.
This is a challenge without an easy solution, and it won't be easy to reverse the damage already done, but all of the steps above will help. Spreading the word will help. Raising awareness, planting plants, donating money, and sharing the wonder of butterflies will all help. We can do this!
What types of butterfly do we have in Britain?
There are a whopping 17,500 species of butterfly in the world, according to the Smithsonian Institution. The largest is the Queen Alexandra's birdwing, whose wingspan can be up to 25 cm: about the size of the average dinner plate. [source] At the other end of the size spectrum is the Western pygmy blue, whose wingspan is about 1.25 cm. That's about the width of your pinky nail. [source]
Butterflies eat nectar from flowers, and use the energy it contains to fuel all of their day-to-day activities, from hibernating, to mating, to flying. The last one is no small job, either: some butterflies fly all the way to other continents as part of their migrations! [source]
Here are all the classifications of butterfly, into which all species fall [source]:
- Hesperiidae (skippers) are known and named for their quick and darting movements. They have crochet-hook-like antennae clubs, and stocky bodies that make them look more similar to moths than some other butterfly types.
- Papilionidae (swallowtails) are large and colourful butterflies with forked tails that bring to mind swallows (hence the name).
- Pieridae (whites and yellows) are often white, yellow, and orange, with black accents. The males "exhibit gregarious mud-puddling behavior when they may imbibe salts from moist soils". We're not quite sure what that means, but it sounds intriguing.
- Lycaenidae (hairstreaks, coppers, and blues) are interesting because their caterpillars have markings at their tail that look like eyes, which confuse potential predators and give the caterpillar more time to escape.
- Riodinidae (metalmarks) are so named because of the metallic-looking markings found on their wings.
- Nymphalidae (fritillaries, nymphalids, and browns) are the largest butterfly family, and often hold their wings flat when resting rather than keeping them closed together.
- Hedylidae, who so closely resemble moths that they are often considered to require their own taxonomic family, rather than be classified with butterflies.
While there is debate about the suitability of these classifications, they provide enough guidance for spotters to make a good guess at what they're looking at. The question of which features and variations require different classifications is one that is always present in taxonomy (scientific categorisation of animals), but this isn't a debate that should concern the casual spotter!
In the UK we have all of the types mentioned above, except for Hedylidae. We have several types of butterfly population:
- Residents are native species traditionally found in the UK and whose presence is consistent and expected.
- Migratory species that visit the UK reliably and regularly. Clouded yellow, red admiral, and painted lady are some common migratory butterflies.
- Vagrants are species not native to the UK and whose migratory habits do not usually bring them here, but who are sometimes spotted. - Exotics are species included in UK population lists, but which aren’t thought to occur or have occurred naturally in the wild. Within these categories, there are two types: - Habitat specialist butterflies rely heavily on specific habitats, and are quickly susceptible to decline if their homes are disturbed or destroyed. - Wider countryside types are less dependent on individual habitats, and can make themselves comfortable in a wider range of settings. They are less prone to decline through habitat destruction, but they are still at risk. There are about 60 types of butterfly in the UK, which is too many to list here. [source] Here are some common types to keep a lookout for: - The Duke of Burgundy, a name whose origins are unknown, and a butterfly that, according to Countryfile, breeds “in shady situations” and “squabbles like mad”. - The Pearl-bordered fritillary, a butterfly who enjoys the nectar of the bugle flower, and who lives mainly in woodland clearings. - The large blue, a species who went extinct in the UK in 1979, but that was successfully reintroduced from Swedish populations. - The swallowtail, a large and colourful butterfly with tropical-looking markings. They enjoy hot weather. - The white admiral, a butterfly that thrives on woodland bramble flowers. They are named for their distinctive white band markings. - The high brown fritillary, a large and orange butterfly that is, sadly, also declining most quickly. - The purple emperor, whose markings make it one of the most impressive butterflies we have. This type of butterfly does not visit flowers, instead taking their moisture from “unsavoury substances” (on whose nature no more information is given). - The mountain ringlet, which is the only mountain butterfly in the UK. They get blown about by the wind but they don’t seem to mind. - The Adonis blue, a special butterfly whose distinct blue markings are particularly striking. - The brown hairstreak, who live high up in ash trees and are difficult to see as a result. What is the difference between butterflies and moths? Moths and butterflies are similar in a lot of ways, but their separate categorisation makes sense. Here are some of the differences between them: - Butterflies are usually larger and more colourful, where moths coloration is more subdued. - Moths have a ‘frenulum’, which joins their top and bottom wings together during flight. Butterflies do not have this. - Butterflies are primarily active in the daytime; moths in the nighttime. There are butterflies and moths that buck these trends, but for the most part they are accurate. - Moths make cocoons, which are wrapped in silk coverings; butterflies make chrysalises, which are hard. We’ve not included moths in this guide: Not because we don’t think they should be protected, but because it would be enormous! Keep your eyes peeled for moth-related content in the future. Why do we need butterflies? They’re beautiful, and this has been proven to positively impact humans; just being near butterflies is enough to lift our moods. They are pervasive in our culture, too. They are associated with metamorphosis, and with changing form to something beautiful and proud: Imagery which is seen frequently in various art forms. 
According to the Butterfly Conservation report mentioned previously, “many people believe that butterflies have an intrinsic value, a right to exist that is not dependent on their value to other species (including humans), and that we have a moral or religious responsibility to prevent their extinction.” Especially when their peril is caused (or made worse) by human activity. As with bees, butterfly populations are seen as a bellwether to gauge the wider condition of the environment. They are the best-studied insect in the UK, and their attractive and non-threatening appearance gives them a widespread public appeal. Their ability to respond quickly to environmental change is especially helpful, because their responses are often indicative of how other species will respond. Because of this, butterfly population trends are used as Government biodiversity indicators, and contribute to the development and assessment of government policy. Butterflies are also pollinators, and have a large role to play in the pollination of flowers and crops. [source] In our content piece about protecting British bees we go into more detail about what pollination is and why it is important. In short, natural pollinators (water, wind, birds, and insects including butterflies and bees) are responsible for at least a third of human food production. Without them, the fruits and vegetables we eat would be much harder to grow, and more expensive as a result. Foods we grow to feed to animals used in farming (for meat and milk) would suffer the same fate. Protecting pollinators and allowing them to flourish is vital if we want to enjoy the same food choices we do today. What are the risks to British butterflies? Butterflies are one of the most quickly declining populations in the natural world, according to a report published in the prestigious journal Science. There are seven levels of threat for animals. The definitions below are taken from the International Union for Conservation of Nature (IUCN) Red List Guidelines: - Extinct: “The last individual has died”; they are all gone. - Extinct in the wild: “It is known only to survive in cultivation, in captivity, or as a naturalized population”. - Critically endangered: “Facing an extremely high risk of extinction in the wild”. - Endangered: Faces a “very high” risk. - Vulnerable: Faces a “high” risk. - Near threatened: “Is close to qualifying for or is likely to qualify for a threatened category in the near future”. - Least concern: “Widespread and abundant”. Of the ten butterflies we introduced earlier: - Two are critically endangered. - Two are endangered. - Two are vulnerable. - Four are near threatened. - Zero are in the least concern category. This means that every type is under some level of threat. None are widespread and abundant. This is according to the Butterfly Conservation red list. Another report – the State of the Nation published by the Butterfly Conservation charity – draws on several professional datasets, to give “a comprehensive and statistically robust evidence-base” on which to make assessments. They have found similarly distressing statistics about the decline of the majority of UK butterfly populations. Some of the datasets they use: - The UK Butterfly Monitoring Scheme (UKBMS) - The Butterflies for the New Millennium (BNM) recording scheme - Wider Countryside Butterfly Survey - Big Butterfly Count What is causing this decline? Amongst British butterflies there has been a 70% decline in occurrence and a 57% decline in abundance since 1976. 
[source] This means butterflies are found much less frequently than they used to be, and when they are found, they are present in lower numbers. The occurrence and abundance of some butterflies increased in the same period: 47% of species increased in one or both measures. [source] Some species have seen incredibly drastic drops in numbers: The instantly recognisable Monarch butterfly saw a 97% decline in populations since the 1980s [source]. Destruction of habitats Intensive agriculture practices and increased demand for land for housing, roads, and other human activities means that natural habitats are at risk. [source] There is very little resistance when places like peat bogs and downland – which aren’t inherently interesting to most humans – are used for development. Changes in woodland management styles are thought to be responsible for declines in other butterfly populations. [source] When decisions are made that prioritise certain uses for woodland, animal populations suffer. These two issues are particularly damaging for habitat specialist butterflies, because when their habitats are disrupted or destroyed, they don’t really have anywhere else to go. Wider countryside types are declining too, and this decline is less easily attributed to specific causes. When you consider that 97% of wildflower meadows have been destroyed since the 1940s [source], it’s not so hard to see why butterflies – even those that can adapt to new habitats – are struggling to readjust. The spectre present on almost every conservationist’s radar rears its head when talking about butterflies, too. [source] Some species adapt to climate change, with warmer summers attributed to increasing populations of some butterfly types. Others, however, see their migration patterns and habits disrupted by changing temperatures. [source] Climate change also disrupts the times of year when flowers bloom, which can have knock-on effects for butterflies that fly when their preferred plants are in season (like the pearl-bordered fritillary, who we met earlier, that favours bugle plants). Essentially, unpredictable climate with larger variations in temperature presents various ongoing threats to butterfly populations. [source]
New brain imaging research debunks a controversial theory about dyslexia that can impact how it is sometimes treated. The cerebellum, a brain structure traditionally considered to be involved in motor function, has been implicated in the reading disability developmental dyslexia; however, this 'cerebellar deficit hypothesis' has always been controversial. The new research shows that the cerebellum is not engaged during reading in typical readers and does not differ in children who have dyslexia. In the long run, these researchers believe the findings can be used to refine models of dyslexia and to assist parents of struggling readers to make informed decisions about which treatment programs to pursue.
Cerebellar function in children with and without dyslexia during single word processing
Sikoya M. Ashburn, D. Lynn Flowers, Eileen M. Napoliello, Guinevere F. Eden. Cerebellar function in children with and without dyslexia during single word processing. Human Brain Mapping, October 9, 2019. DOI: 10.1002/hbm.24792
The Effects of Special Education on the Academic Performance of Students with Learning Disabilities
Schwartz, Amy Ellen, Bryant Gregory Hopkins, Leanna Stiefel. (2019). The Effects of Special Education on the Academic Performance of Students with Learning Disabilities. (EdWorkingPaper: 19-86). Retrieved from Annenberg Institute at Brown University: http://www.edworkingpapers.com/ai19-86
Does special education improve academic outcomes for students with disabilities? There is surprisingly little evidence to guide policy and answer this question. This paper provides an answer for the largest disability group, students with learning disabilities. The researchers used data from the New York City schools to track the academic performance of more than 44,000 students with learning disabilities over seven years. Test scores for students with learning disabilities improve after they are classified into special education, and the gains are greatest for students who entered special education before they reached middle school. Overall, students who began special education services in grades 4 and 5 "were more likely to be placed, and remain, in less restrictive service settings" than students who began later, the researchers found. The findings suggest that support services that help students remain in the general education classrooms may be particularly effective for students with learning disabilities.
Rapid and widespread white matter plasticity during an intensive reading intervention
Elizabeth Huber, Patrick M. Donnelly, Ariel Rokem & Jason D. Yeatman. Rapid and widespread white matter plasticity during an intensive reading intervention, Nature Communications volume 9:2260 (2018).
Using MRI measurements of the brain's neural connections, or "white matter," researchers have shown that, in struggling readers, the neural circuitry strengthened — and their reading performance improved — after just eight weeks of a specialized tutoring program. The study focused on three areas of white matter — regions rich with neuronal connections — that link regions of the brain involved in language and vision. The study is the first to measure white matter during an intensive educational intervention and link children's learning with their brains' flexibility.
After eight weeks of intensive instruction among study participants who struggled with reading or had been diagnosed with dyslexia, two of those three areas showed evidence of structural changes — a greater density of white matter and more organized “wiring.” These findings demonstrate that targeted, intensive reading programs not only lead to substantial improvements in reading skills, but also change the underlying wiring of the brain’s reading circuitry. How Myths About Learning Disabilities Rob Many of Their Potential to Succeed and Contribute in School and in the Workplace How Myths About Learning Disabilities Rob Many of Their Potential to Succeed and Contribute in School and in the Workplace (2018). White paper by the International Dyslexia Association and the Learning Disabilities Association of America. Myths about learning disabilities rob many of their potential to succeed and contribute in school and in the workplace. This white paper states that with appropriate intervention and support, all children, including those with learning disabilities, can have the tools and resources they need to live their best possible lives. This will result in many more individuals with learning disabilities acquiring the adaptive skills needed to seamlessly integrate their use of assistive technology and other supports into the performance of their jobs. Structured Literacy and Typical Literacy Practices: Understanding Differences to Create Instructional Opportunities Swerling, Louise Spear. Structured Literacy and Typical Literacy Practices: Understanding Differences to Create Instructional Opportunities (January 23, 2018). Teaching Exceptional Children: Volume: 51 issue: 3, page(s): 201-211. https://doi.org/10.1177/0040059917750160 A key feature of structured literacy (SL) includes, “explicit, systematic, and sequential teaching of literacy at multiple levels — phonemes, letter–sound relationships, syllable patterns, morphemes, vocabulary, sentence structure, paragraph structure, and text structure. SL is especially well suited to students with dyslexia because it directly addresses their core weaknesses in phonological skills, decoding, and spelling. If implemented in Tier 1 instruction and tiered interventions, SL practices may also prevent or ameliorate a wide range of other reading difficulties. Assistive Technology for Students with Learning Disabilities: A Glimpse of the Livescribe Pen and Its Impact on Homework Completion Harper, Kelly A; Kurtzworth-Keen, Kristin; Marable, Michele A. Assistive Technology for Students with Learning Disabilities: A Glimpse of the Livescribe Pen and Its Impact on Homework Completion. Education and Information Technologies, 2017, Vol. 22(5), p.2471-2483. This research article looked the effectiveness of an assistive technology tool, the Livescribe Pen (LSP), with an elementary student identified with dyslexia over a one-year study with teachers, parent, and child. While the LSP was primarily utilized for curriculum accessibility and an audio tool to promote academic independence, the study's findings reveal its impact as an assistive technology on both academic successes for children with disabilities as well as non-academic gains. These included an increase in independence, more time for social activities, and the ability to develop strategies for homework success. Most importantly, the academic team and the parent reported a sense of higher aspirations for this student; ones they had not thought possible previously. 
Finally, the study revealed two elements critically important for students with disabilities. Those are the importance of fostering communities of support and the importance of self-determination. Designing an Assistive Learning Aid for Writing Acquisition: A Challenge for Children with Dyslexia Latif, Seemab; Tariq, Rabbia; Latif, Rabia. Designing an Assistive Learning Aid for Writing Acquisition: A Challenge for Children with Dyslexia. Studies in health technology and informatics, 2015 Vol. 217, pp. 180-8. This article highlights the benefits of using the modern mobile technology features in providing a learning platform for young dyslexic writers. An android-based application is designed and implemented to encourage the learning process and to help dyslexic children improve their fundamental handwriting skill. In addition, a handwriting learning algorithm based on concepts of machine learning is designed and implemented to decide the learning content, evaluate the learning performance, display the performance results, and record the learning growth to show the strengths and weaknesses of a dyslexic child. The results of the evaluation provided by the participants revealed that application has potential benefits to foster the learning process and help children with dyslexia by improving their foundational writing skills. The State of Learning Disabilities: Understanding the 1 in 5 Horowitz, S. H., Rawe, J., & Whittaker, M. C. (2017). The State of Learning Disabilities: Understanding the 1 in 5. New York: National Center for Learning Disabilities. This report summarizes the latest facts, figures, and information about individuals with learning disabilities in the U.S. The report focuses on six key areas: understanding learning and attention issues; identifying struggling students; supporting academic success; social, emotional, and behavioral challenges; transitioning to life after high school; and recommended policy changes. The report also includes state snapshots that highlight key data points and comparisons to national averages in areas such as inclusion in general education classrooms, disciplinary incidents and dropout rates for students with learning and attention issues. The role of part-time special education supporting students with reading and spelling difficulties from grade 1 to grade 2 in Finland Leena K. Holopainen et al, The role of part-time special education supporting students with reading and spelling difficulties from grade 1 to grade 2 in Finland (April 25, 2017). European Journal of Special Needs Education (2017). DOI: 10.1080/08856257.2017.1312798 The reading skills of children with reading and spelling difficulties (RSD) lag far behind the age level in the first two school years, despite special education received from special education teachers. Furthermore, the spelling skills of children who in addition to RSD had other learning difficulties also lagged behind their peers in the first two school years. Small group education and a moderate amount of part-time special education (approximately 38 hours per year) predicted faster skill development, whereas individual and a large amount of special education (more than 48 hours per year) were related to slower skill development and broader difficulties. Dysfunction of Rapid Neural Adaptation in Dyslexia This study suggests that people with the reading disability dyslexia may have brain differences that are surprisingly wide-ranging. 
Using specialized brain imaging, scientists found that adults and children with dyslexia showed less ability to "adapt" to sensory information compared to people without the disorder. And the differences were seen not only in the brain's response to written words, which would be expected. People with dyslexia also showed less adaptability in response to pictures of faces and objects. That suggests they have "deficits" that are more general, across the whole brain, said study lead author Tyler Perrachione. He's an assistant professor of speech, hearing and language sciences at Boston University. The findings offer clues to the root causes of dyslexia. Identifying and supporting English learner students with learning disabilities: Key issues in the literature and state practice Burr, E., Haas, E., and Ferriere, K. (July 2015). Identifying and supporting English learner students with learning disabilities: Key issues in the literature and state practice, U.S. Department of Education, Institute of Education Sciences, Regional Educational Laboratory at WestEd. This review of research and policy literature — aimed at district and state policymakers — distills several key elements of processes that can help identify and support English learner students with learning disabilities. It also describes current guidelines and protocols used by the 20 states with the largest populations of English learner students. The report informs education leaders who are setting up processes to determine which English learner students may need placement in special education programs as opposed to other assistance. The report acknowledges that the research base in this area is thin. Achievement Gap in Reading Is Present as Early as First Grade and Persists through Adolescence Ferrer, E., Shatwitz, B.A., Holahan, J.M., Marchione, K.E., Michaels, R., and Shaywitz, S.E. (2015) Achievement Gap in Reading Is Present as Early as First Grade and Persists through Adolescence, Journal of Pediatrics, November 2015,167 (5):1121-1125. The subjects were the 414 participants comprising the Connecticut Longitudinal Study, a sample survey cohort, assessed yearly from 1st to 12th grade on measures of reading and IQ. Statistical analysis employed longitudinal models based on growth curves and multiple groups. Results from the study indicated that as early as first grade, compared with typical readers, dyslexic readers had lower reading scores and verbal IQ, and their trajectories over time never converge with those of typical readers. Researchers concluded that the achievement gap between typical and dyslexic readers is evident as early as first grade, and this gap persists into adolescence. These findings provide strong evidence and impetus for early identification of and intervention for young children at risk for dyslexia. Implementing effective reading programs as early as kindergarten or even preschool offers the potential to close the achievement gap. Improving Reading Outcomes for Students with or at Risk for Reading Disabilities Connor, C., Alberto, P.A., Compton, D.L., and O'Connor, R.E. (February 2014) Improving Reading Outcomes for Students with or at Risk for Reading Disabilities: A Synthesis of the Contributions from the Institute of Education Sciences Research Centers, U.S. Department of Education, Institute of Education Sciences, National Center for Special Education Research. 
This report describes what has been learned about the improvement of reading outcomes for children with or at risk for reading disabilities through published research funded by the Institute of Education Science (IES). The report describes contributions to the knowledge base across four focal areas: assessment, basic cognitive and linguistic processes that support successful reading, intervention, and professional development. On the Importance of Listening Comprehension Hogan TP1, Adlof SM, Alonzo CN. (2014) On the importance of listening comprehension, International Journal of Speech-Language Pathology June 16 (3):199-207. The simple view of reading highlights the importance of two primary components which account for individual differences in reading comprehension across development: word recognition (i.e., decoding) and listening comprehension. This paper reviews evidence showing that listening comprehension becomes the dominating influence on reading comprehension starting even in the elementary grades. It also highlights a growing number of children who fail to develop adequate reading comprehension skills, primarily due to deficient listening comprehension skills (i.e., poor comprehenders). Finally we discuss key language influences on listening comprehension for consideration during assessment and treatment of reading disabilities. Intact but Less Accessible Phonetic Representations in Adults with Dyslexia Bart Boets et al. (2013) Intact But Less Accessible Phonetic Representations in Adults with Dyslexia. Science 6 December 2013: 342 (6163), 1251-1254. [DOI:10.1126/science.1244333] People with dyslexia seem to have difficulty identifying and manipulating the speech sounds to be linked to written symbols. Researchers have long debated whether the underlying representations of these sounds are disrupted in the dyslexic brain, or whether they are intact but language-processing centers are simply unable to access them properly. This study indicates that dyslexia may be caused by impaired connections between auditory and speech centers of the brain. The researchers analyzed whether for adult readers with dyslexia the internal references for word sounds are poorly constructed or whether accessing those references is abnormally difficult. Brain imaging during phonetic discrimination tasks suggested that the internal dictionary for word sounds was correct, but accessing the dictionary was more difficult than normal. Don’t DYS Our Kids: Dyslexia and the Quest for Grade-Level Reading Proficiency Fiester, L. (2012). Don't DYS Our Kids: Dyslexia and the Quest for Grade-Level Reading Proficiency. Commissioned by the Emily Hall Tremaine Foundation in partnership with the Campaign for Grade-Level Reading. About 2.4 million children across the nation have been diagnosed with learning disabilities — but how successful is the U.S. education system in teaching these students to read? This new report provides a far-reaching overview of the history and progress in understanding and meeting the needs of children with dyslexia, as well as the persisting challenges that must be overcome, to ensure that all students can read proficiently by the third grade. The report also highlights best practices and examples of solutions that are already working in communities. Based on interviews with nearly 30 experts, the report includes a collection of recommended actions for advancing this movement. Human Voice Recognition Depends on Language Ability Perrachione, T., Stephanie Del Tufo, S., Gabrieli, J. 
Human Voice Recognition Depends on Language Ability. Science 29 July 2011: 595. The ability to recognize people by their voice is an important social behavior. Individuals differ in how they pronounce words, and listeners may take advantage of language-specific knowledge of speech phonology to facilitate recognizing voices. Impaired phonological processing is characteristic of dyslexia and thought to be a basis for difficulty in learning to read. The researchers tested voice-recognition abilities of dyslexic and control listeners for voices speaking listeners’ native language or an unfamiliar language. Individuals with dyslexia exhibited impaired voice-recognition abilities compared with controls only for voices speaking their native language. These results demonstrate the importance of linguistic representations for voice recognition. Humans appear to identify voices by making comparisons between talkers' pronunciations of words and listeners' stored abstract representations of the sounds in those words. Related article: Study Sheds Light on Auditory Role in Dyslexia. Learning Disabilities, Dyslexia, and Vision American Academy of Pediatrics, Section on Ophthalmology, Council on Children with Disabilities et al. (2009). Pediatrics 2009;124;837-844; originally published online Jul 27, 2009. Retrieved January 7, 2010 from http://pediatrics.aappublications.org/cgi/reprint/124/2/837. This joint statement of pediatric ophthalmologists and pediatricians concerned with learning disabilities states: most experts believe that dyslexia is a language based disorder. Vision problems can interfere with the process of learning; however, vision problems are not the cause of primary dyslexia or learning disabilities. Scientific evidence does not support the efficacy of eye exercises, behavioral vision therapy, or special tinted filters or lenses for improving the long-term educational performance in these complex pediatric neurocognitive conditions. Diagnostic and treatment approaches that lack scientific evidence of efficacy, including eye exercises, behavioral vision therapy, or special tinted filters or lenses, are not endorsed and should not be recommended. May, T.S. (2006). Dissecting Dyslexia. BrainWork, the Neuroscience Newsletter from the Dana Foundation. Genetic differences in the brain make learning to read a struggle for children with dyslexia. Luckily, most of our brain development occurs after we're born, when we interact with our environment. This means that the right teaching techniques can actually re-train the brain, especially when they happen early. Remediation Training Improves Reading Ability of Dyslexic Children Progress in Understanding Reading: Scientific Foundations and New Frontiers Stanovich, Keith E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press. From a nationally known expert, this volume summarizes the gains that have been made in key areas of reading research and provides authoritative insights on current controversies and debates. Each section begins with up-to-date findings followed by one or more classic papers from the author's research program. Significant issues covered include phonological processes and context effects in reading, the "reading wars" and how they should be resolved, the meaning of the term "dyslexia," and the cognitive effects and benefits of reading. Repeated Reading and Reading Fluency in Learning Disabled Children Rashotte, C. & Torgesen, J. (1985). 
Repeated reading and reading fluency in learning disabled children. Reading Research Quarterly, 20, 180-188.
This study investigated whether improved fluency and comprehension across different stories in repeated reading depend on the degree of word overlap among passages and whether repeated reading is more effective than an equivalent amount of nonrepetitive reading. Non-fluent, learning disabled students read passages presented and timed by a computer under three different conditions. Results suggest that over short periods of time, increases in reading speed with the repeated reading method depend on the amount of shared words among stories, and that if stories have few shared words, repeated reading is not more effective for improving speed than an equivalent amount of nonrepetitive reading.
Steelmaking technology has greatly changed during the last two decades under the pressure of increased demand, new specifications and the need to reduce energy and material consumption. Production efficiency has been improved by increasing the melt capacity of furnaces, implementing on-line computer control modules, and introducing new technologies, such as the combined blowing process for LD (Linz-Donawitz) converters, the Ultra High Power (UHP) electric furnace, the ladle steelmaking processes and continuous casting.
Steel is produced by two process routes (Figure 1):
- The Blast Furnace-Basic Oxygen Converter (BOF)
- The Electric Arc Furnace (EAF)
In both routes the process consists of producing refined iron to which is added the required alloying elements to produce the finished steel specification. Their respective shares in crude steel production are 70% (BOF) and 30% (EAF). High production rates and low impurity steel production give a dominant role to the first process route. Low energy costs and an ample supply of recycled scrap ensure a competitive market share for the second process route, especially when using the UHP furnace.
Before casting, the steel can be refined in the ladle by various processes according to the specification with respect to its deoxidation state, inclusion content and level of phosphorus, sulphur, nitrogen and hydrogen. At the same time, its content of carbon, manganese and microalloying elements such as niobium, vanadium and titanium can be adjusted. This process step is generally referred to as Secondary or Ladle steelmaking.
During the last step of steelmaking, the steel is cast either into slabs, blooms or billets on a continuous casting machine or into ingots, depending on the final product. Flat products and light shapes are normally produced from continuous cast feedstock, whereas heavy beams and plates are more likely to follow the ingot route.
A. Steel Production
A.1. The blast furnace-basic oxygen converter route
Sintered iron ores are reduced to raw iron in the blast furnace. The raw iron is then transformed into crude steel in the oxygen converter. As this operation yields energy, additional scrap is introduced in order to control temperature.
The iron feedstock of the blast furnace is the sinter, which is produced in the sinter plant. In the sinter process, a mix of iron ore fines, lime and coke (almost pure carbon) is charged in a 45 cm thick layer onto a moving conveyor (Dwight-Lloyd process) and partially melted to form a porous mixture of iron oxides and gangue. Coke consumption is about 50 kg/t of sinter product.
Blast furnace process (Figure 2)
The blast furnace is a shaft-type furnace operating by the counterflow technique: the descending burden of sinter and coke, charged from the top of the furnace, is heated and reduced by the combustion gases ascending from the tuyere zone, where a hot air blast is injected to burn C to CO. The air blast is compressed by a blower and heated in special stoves to 1100°C by combustion of the cleaned furnace exhaust gases. The iron oxides (FeO, Fe2O3) and some of the elements present in the gangue of the sinter are reduced by CO gases to produce hot metal. The blast furnace flue dust, containing about 40% Fe, is recycled to the sinter process. The high permeability of the sinter and the even distribution of the charge produced by revolving chutes help to improve productivity of the blast furnace. Coke consumption can be reduced to 470 kg/t of hot metal.
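To put the quoted coke rate in perspective, a rough stoichiometric estimate of the carbon needed for indirect reduction can be sketched as below. The figures and the small Python calculation are illustrative assumptions rather than plant data; a real furnace also consumes carbon as fuel, for direct reduction and for carburizing the hot metal, which is why practical fuel rates are higher than this floor.

    # Idealized stoichiometric estimate of the carbon needed to reduce Fe2O3 to Fe
    # via CO (Fe2O3 + 3 CO -> 2 Fe + 3 CO2). Illustrative only.
    M_FE, M_C = 55.85, 12.01   # molar masses, g/mol

    def carbon_for_indirect_reduction(kg_fe=1000.0):
        mol_fe = kg_fe * 1000.0 / M_FE      # mol of iron to produce
        mol_co = 1.5 * mol_fe               # 3 mol CO per 2 mol Fe
        return mol_co * M_C / 1000.0        # kg of carbon burnt to CO

    print(f"~{carbon_for_indirect_reduction():.0f} kg C per tonne of iron")   # ~323 kg

Run as written, the sketch gives a little over 300 kg of carbon per tonne of iron, which suggests that a coke rate of 470 kg/t (coke being largely, but not entirely, carbon) already sits fairly close to the practical minimum.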
The use of tuyere injectants such as powdered fuel (120 kg/t) or oil (60 kg/t) further reduces the coke consumption of the furnace and so the cost. Below the tuyere zone, where the temperature is highest, the molten material collects on the furnace hearth, where the liquid iron (pig iron) separates from the slag by difference in density. The slag and liquid pig iron are tapped from separate tapholes. The tapped slag is granulated by water jets and removed for use in other products including road construction materials, fertilizers, etc. The liquid pig iron (hot metal) is tapped into ladles or torpedo cars (capacity: 300 - 400 t) and conveyed to the steel plant for refinement and conversion into steel. A typical analysis of the hot metal produced at a temperature of 1400°C is: 4.7% carbon (C); 0.5% manganese (Mn); 0.4% silicon (Si); 0.1% phosphorus (P) and 0.04% sulphur (S), the remainder being iron (Fe).
Sulphur removal from the melt needs low oxygen activities. Desulphurization is therefore achieved in the hot metal by injection of calcium carbide fluxes to form calcium sulphide (CaS) or fluxes containing metallic magnesium to form MgS and CaS.
The oxygen steelmaking process (Figure 3)
The basic oxygen furnace or LD converter (originating from the Linz-Donawitz process started in 1956) is based on oxygen injection by a lance into the melt of hot metal. Scrap and lime are charged into the converter to cool the melt and remove phosphorus, silicon and manganese. The converter is lined with dolomite or magnesite refractory, which best resists erosion by slag and heat during oxygen blowing. The life of a converter lining is about 800 to 1400 heats.
The oxygen burns out the carbon as carbon monoxide (CO) and carbon dioxide (CO2) gas, which is collected in the chimney stack and cleaned of its dust (Fe2O3 and lime particles, etc.). The elements Mn, Si and P are oxidized and combine with lime (CaO) and FeO formed by the oxidation of Fe to form a molten slag. As these oxidation reactions are highly exothermic, the process needs cooling in order to control the temperature of the melt. This cooling is done by charging scrap (recycled plant and mill scrap) and by adding iron ore during the blowing process.
The oxygen blowing takes 15 to 20 minutes, regardless of the size of the converter (70 to 400 t), because the oxygen flow rate of the lance is adjusted to the melt weight. The charging and discharging of steel and slag, including sampling for temperature and analysis of the melt, extends the tap-to-tap time of a converter to 40 - 60 minutes. The process is characterized by high productivity and steel of low impurity content.
The steel is tapped to the ladle through a taphole by tilting the furnace. During this operation ferro-alloys for control of the steel composition are added to the ladle. The oxidized slag, containing 12 to 16% Fe, is poured into a cast iron slag pot after tapping and is disposed of in a slag yard.
A major development in the oxygen lance blowing technique, known as Lance Bubbling Equilibrium (LBE), was developed in the mid-seventies and has been widely adopted. Neutral gas, typically argon, is injected through permeable elements in the bottom of the converter, stirring the melt and slag. This significantly increases metallurgical efficiency (lower Fe losses and lower P content), productivity, and the heat and mass balance of the process (cost reduction).
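The statement that blowing time stays roughly constant because the lance flow is scaled to the melt weight can be illustrated with a simple oxygen balance. The numbers below are assumptions for illustration (all carbon leaving as CO, a molar volume of 22.4 Nl/mol, the hot metal analysis quoted above), not measured converter data.

    # Rough oxygen balance for one tonne of BOF charge (illustrative figures):
    # decarburization from 4.7% C to 0.05% C, plus oxidation of 0.4% Si.
    M_C, M_SI, V_M = 12.01, 28.09, 0.0224   # g/mol, g/mol, Nm3/mol

    dC_kg = (4.7 - 0.05) / 100 * 1000                   # ~46.5 kg C removed per tonne
    o2_for_C = 0.5 * dC_kg * 1000 / M_C * V_M           # C + 1/2 O2 -> CO
    o2_for_Si = 0.4 / 100 * 1000 * 1000 / M_SI * V_M    # Si + O2 -> SiO2

    o2_total = o2_for_C + o2_for_Si                     # Mn, P and Fe add a few more Nm3
    print(f"~{o2_total:.0f} Nm3 of O2 per tonne")                          # ~47 Nm3/t
    print(f"implied lance rate for a 17 min blow: ~{o2_total / 17:.1f} Nm3/(t.min)")

Because the specific oxygen demand per tonne is roughly fixed by the chemistry, a lance rated in Nm3 per tonne per minute gives the same blowing time whether the heat is 70 t or 400 t, which is consistent with the 15 to 20 minute figure above.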
A.2. The electric arc furnace route (Figure 4)
In the electric arc furnace process, the cold metallic charge, mainly scrap, is melted by the energy of electric arcs generated between the tips of graphite electrodes and the conductive metallic charge. The three electrodes and the furnace roof are raised and swung away from the furnace shell to allow the charging of scrap. The electrodes maintain the arc in accordance with the voltage and current level selected to produce the desired power input at the desired arc length for melting and refining. As the noise generated by the arcs is high during the melt-in period, with levels up to 120 dBA, special protection is provided to the operator's cabin and the furnace has a special enclosure.
The three-phase alternating current is supplied by the low-voltage side (300 - 700 V) of a high-power transformer. The nominal transformer rating, expressed as kVA/t, extends from 300 to 500 kVA/t for high-power furnaces and from 500 kVA/t upwards for Ultra High Power (UHP) furnaces. These furnaces have an inner diameter of 6 to 9 metres with a capacity of 100 to 200 t of steel. The tap-to-tap time for these furnaces is 90 to 110 minutes.
The traditional role of the EAF process is producing alloy, tool and carbon steels, and it has been extended by the UHP furnace to mass steel production. Thus, the concept of the mini-mill was born. As the size and productivity of the furnace increased, the operation of continuous casting for billet and bloom production became possible. Flat product specifications, however, require low residual impurity levels and even higher production rates, which cannot be satisfied by the UHP furnace.
The share of steel production produced by the electric arc furnace is about 30%, at which level it seems to have stabilized as scrap of acceptable quality becomes more scarce. Pellets and sponge iron of higher price have to be used for critical steel grades to control the level of injurious elements, i.e. copper, nickel, tin, etc.
The traditional high-power furnace produces high-quality carbon and alloy steels by the two-slag technique. After melt-down of the scrap charge, a first oxidizing slag removes the elements P and Si and reduces carbon to the required level. After deslagging, a second basic reducing slag is formed to lower the sulphur and oxygen contents, and the steel composition is adjusted by ferro-alloy additions.
The UHP furnace operates with only a lime-based oxidizing slag. The melt-down of the scrap charge is accelerated by the use of oxy-fuel burners positioned to reach the cold spots of the large hearth furnace. Oxygen lancing and carbon additions are used to make a foaming slag, which yields better energy input from the arcs and improves dephosphorization. After this period, the melt is discharged by a taphole. Deoxidation and refining under reducing slag take place in the steel ladle (secondary steelmaking).
The 100% scrap charge makes the process more vulnerable to injurious "tramp elements", such as copper, nickel and tin, which cannot be removed by the process, their stability being higher than that of iron. To control these "tramp elements", it is of great importance to identify the sources of the incoming scrap and to make provision to keep the different qualities separate.
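A back-of-envelope check links the specific transformer ratings quoted above to the tap-to-tap times. The melting energy, power factor and utilisation used below are assumed, typical-order values for illustration, not figures from the text.

    # Rough power-on time for a UHP furnace (all inputs are illustrative assumptions).
    capacity_t = 150      # heat size, t (within the 100-200 t range above)
    kva_per_t  = 600      # specific transformer rating, UHP range
    kwh_per_t  = 400      # energy to melt and superheat scrap (assumed)
    pf, util   = 0.8, 0.8 # power factor and mean utilisation of rated power (assumed)

    mva   = capacity_t * kva_per_t / 1000        # 90 MVA transformer
    mw    = mva * pf * util                      # ~58 MW average active power
    hours = capacity_t * kwh_per_t / 1000 / mw   # ~1 h of power-on time
    print(f"{mva:.0f} MVA transformer, ~{hours * 60:.0f} min power-on")

With oxy-fuel burners and oxygen lancing supplying part of the energy chemically, the power-on time shortens further, which is consistent with tap-to-tap times of 90 to 110 minutes once charging, refining and tapping are added.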
B. Secondary or Ladle Steelmaking
B.1. General Aspects
Achieving the required properties of steel often requires a high degree of control over carbon, phosphorus, sulphur, nitrogen, hydrogen and oxygen contents. Individually or in combination, these elements mainly determine material properties such as formability, strength, toughness, weldability, and corrosion behaviour.
There are limits to the metallurgical treatments that can be given to molten metal in high-performance melting units, such as converters or electric arc furnaces. The nitrogen and phosphorus content can be reduced to low levels in the converter, but very low carbon, sulphur, oxygen and hydrogen contents (< 2 ppm) can only be obtained by subsequent ladle treatment. To ensure appropriate conditioning of steel before the casting process, the alloying of steel to target analysis and special refining treatments are carried out at the ladle metallurgy stand.
The objectives of ladle steelmaking can be summarized as follows:
- refining and deoxidation
- removal of deoxidation products (MnO, SiO2, Al2O3)
- desulphurization to very low levels (< 0.008%)
- homogenisation of steel composition
- temperature adjustment for casting, if necessary by reheating (ladle furnace)
- hydrogen removal to very low levels by vacuum treatment.
B.2. Ladle Steelmaking Process: Deoxidation and Refining (Figure 5)
The high oxygen content of the converter steel would result in large blow-hole formation during solidification. Removal of the excess oxygen ("killing") is therefore vital before subsequent casting of the steel. Steels treated in this way are described as killed steels. All secondary steelmaking processes allow deoxidising agents to be added to the ladle, so that deoxidation in the converter vessel is not necessary.
Deoxidation can be performed by the following elements, classified by increasing deoxidation capacity: carbon, manganese, silicon, aluminium and titanium. The most popular are silicon and aluminium. After addition, time must be allowed for the reaction to occur and for homogeneity to be achieved before determination of the final oxygen content using EMF probes (electro-chemical probes for soluble oxygen content).
As most of these deoxidation agents form insoluble oxides, which would result in detrimental inclusions in the solid steel, they have to be removed by one of the following processes during the subsequent refining stage:
- Argon stirring and/or injection of reactants (CaSi and/or lime-based fluxes) achieves:
  - homogeneous steel composition and temperature
  - removal of deoxidation products
  - desulphurization of aluminium-killed steel grades
  - sulphide inclusion shape control
- Ladle furnace. Stirring of the melt by argon or by inductive stirring equipment, combined with arc heating of the melt (low electric power, typically 200 kVA/t), allows:
  - long treatment times
  - high ferro-alloy additions
  - a high degree of removal of deoxidation products due to long treatment under optimized conditions
  - homogeneous steel composition and temperature
  - desulphurization, if there is vigorous stirring by argon
- Vacuum treatment: RH process (Ruhrstahl-Heraeus) and tank degassing unit. In the RH process the steel is sucked from the ladle by gas injection into one leg of the vacuum chamber and the treated steel flows back to the ladle through the second leg. In the tank degasser process, the steel ladle is placed in a vacuum tank and the steel melt is vigorously stirred by argon injected through porous plugs in the bottom of the ladle.
Vacuum treatment achieves:
- reduction of the hydrogen content to less than 2 ppm
- considerable decarburization of steel, to less than 30 ppm when oxygen is blown by a lance (RH-OB)
- alloying under vacuum
- homogeneous steel composition and a high degree of cleanness from deoxidation products.
High temperature losses (50 - 100°C) are a disadvantage, therefore a high superheat of the melt prior to this process is essential.
For most secondary steelmaking techniques it is either desirable or essential to stir the liquid steel. Gentle stirring is sufficient for inclusion removal; non-metallic inclusions are brought into contact with liquid slag on top of the melt, where they can be fixed. For degassing and desulphurization, however, violent stirring is necessary to increase the surface of steel exposed to vacuum (H-removal) or to mix the steel and slag for good desulphurization efficiency.
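Two of the ladle-treatment targets mentioned above lend themselves to quick order-of-magnitude estimates: the aluminium needed to kill a heat, and the vacuum level implied by a 2 ppm hydrogen target. Both calculations below rest on assumed, illustrative values (600 ppm dissolved oxygen at tap, a Sieverts' law constant of roughly 26 ppm per atm^0.5 for liquid iron near 1600°C) and are sketches rather than process recipes.

    # Two order-of-magnitude ladle estimates (assumed input values, not plant data).
    M_AL, M_O = 26.98, 16.00          # molar masses, g/mol

    # (1) Stoichiometric aluminium to kill one tonne of steel: 2 Al + 3 O -> Al2O3,
    #     assuming 600 ppm dissolved oxygen at tap and a 5 ppm target.
    o_removed_kg = (600 - 5) * 1e-6 * 1000
    al_kg = (2 / 3) * (o_removed_kg * 1000 / M_O) * M_AL / 1000
    print(f"~{al_kg:.2f} kg Al per tonne (stoichiometric minimum)")   # ~0.67 kg/t

    # (2) Sieverts' law for hydrogen: [H] = K * sqrt(p_H2), with K assumed to be
    #     about 26 ppm/atm^0.5 for liquid iron near 1600 C.
    K, target_ppm = 26.0, 2.0
    p_h2_mbar = (target_ppm / K) ** 2 * 1013
    print(f"H2 partial pressure for 2 ppm: ~{p_h2_mbar:.0f} mbar")    # ~6 mbar

In practice the aluminium addition is larger, to leave residual dissolved aluminium and to cover losses to slag and air, and degassers are operated well below the computed pressure so that there is a driving force for removal rather than mere equilibrium.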
C. Casting and Solidification
C.1. General Aspects
For solidification, steel is cast into moulds, either of cast iron for the ingot casting route or of copper for the continuous casting process. The heat of liquid steel is extracted by the cold mould surface so that crystals can form and grow. A solid shell is formed and solidification progresses by maintaining the cooling. During solidification, the density of metals rises and causes shrinkage. This favours the stripping of the cast from the mould. However, this contraction also causes internal shrinkage, which tends to leave a hollow core in the cast product. In continuous casting this is prevented by the continuous flow of molten metal to the mould. For ingot casting, an adequate liquid metal pool has to be maintained at the head of the mould by the provision of exothermic material (hot top).
A second concern during the solidification process is segregation, due to the fact that some solute elements have a much lower solubility in the solid than in the liquid phase. The segregation tendency is most pronounced for sulphur, phosphorus, oxygen and hydrogen. As has been described, these elements can be controlled to sufficiently low levels by the metallurgical process steps. The manganese content of steel also combines with sulphur to form manganese sulphide inclusions, which are elongated during rolling and become detrimental to steel properties if significant stresses are applied perpendicular to the rolling direction. For such applications, the shape and content of the sulphide inclusions have to be controlled closely during the refining stage.
C.2. Casting Technologies
Ingot casting (Figure 6)
The casting of ingots is a discontinuous process in which the ingot moulds are filled individually by top pouring or in batches by a central feeder through runners in the base plate. This up-hill teeming technique is characterized by a low rising speed of the steel in the mould, which reduces cracks and surface defects when casting critical steel grades. The teeming operation is done directly from the steel ladle through a sliding gate valve at the bottom that regulates the steel flow, and a nozzle that gives a concentric steel jet. The ingot weights and sections are fixed by the capacity of the primary rolling mill. The ingot size may vary from 4 to 30 t, or even higher for forging. The ingot remains in the mould until solidification is complete. Then the mould is stripped off by crane and left to cool in the mould yard. The ingot is charged into the soaking pit furnace to equalize and raise the temperature for the rolling process (approximately 1300°C).
The solidification of an ingot progresses from the bottom (cooled by the base plate and the mould) to the top of the ingot. In the case of a fully killed (Si + Al) steel melt, with a low free oxygen content, the solidification shrinkage is concentrated at the upper centre of the ingot. To minimise the development of shrinkage porosity in this region, the top of the ingot is insulated (hot top) to provide a reservoir of liquid metal to fill up the hollow core. The hot top is subsequently cropped. This scrap amounts to approximately 12% of the ingot weight.
By deoxidation with silicon alone, the free oxygen content of the melt can be set to a well-defined level so that towards the end of solidification it will react with the carbon of the melt to form CO gas. The formation of these small gas bubbles, or blow-holes, compensates for the shrinkage of steel, and top crop losses are small (approximately 2%). The blow-holes are eliminated during primary rolling. Such steels are referred to as 'balanced' steels.
Ingot casting is very flexible as regards product specifications and the production of small orders on relatively short delivery terms. It is also indispensable for the forming of heavy shaped profiles like beams, heavy plate or heavy forging pieces.
C.3. Continuous casting (Figure 7)
The continuous casting process has become the major casting technology for steel plants. The reasons are:
- yield improvement
- energy conservation (direct production of semi-finished products)
- savings in manpower.
The ratio of continuously cast steel has reached 80 - 90% of total raw steel production in the Western World. The advent and rapid growth of mini-mills could not have occurred without continuous billet casting technology.
The essential feature of the continuous casting process is the oscillating water-cooled copper mould. The main function of this mould is to form a solidified steel shell having sufficient strength to prevent breakouts below the mould. This is achieved by the high heat extraction in the mould system. The mould walls are tapered to accommodate the strand shrinkage over the mould length of 700 mm and to maintain a high heat flux. The oscillation creates a relative movement between strand and mould, and prevents metal sticking to the mould surface. Stripping is facilitated by providing an adequate lubricant (casting powders or oil) at the steel meniscus. This lubricant is also essential to maintain a high heat extraction and prevent breakouts.
On leaving the mould, the strand is cooled by water sprays and is supported by rolls to prevent bulging until solidification is complete. Strand sections cover the range of semi-finished products, such as billets, blooms or slabs, for the hot finishing mills. Depending on the section to be cast, a continuous caster is laid out with two (slab), four (bloom or round caster) or six strands (for billets below 180 mm square in size). Modern casters are curved-type machines, which are cheaper and easier to accommodate in the plant than the original vertical machines. The curved strand is straightened by rollers after complete solidification and cut to the required length for further processing in the rolling mills.
Continuous casting technology makes the process continuous, so that a number of molten steel batches are cast in sequence.
To achieve a continuous supply of steel to the mould, the steel in the ladle is first cast into a tundish, which acts as a reservoir during ladle changing and distributes the steel to the different moulds of the machine. Tundishes are equipped with stoppers or sliding gates to regulate the flow rate to the casting speed of the strand. To prevent oxidation by air exposure, the ladle and tundish streams are shrouded by refractory tubes.
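The need to support the strand with rolls until solidification is complete, and the considerable length of modern curved machines, can be estimated with the familiar square-root law of shell growth. The solidification constant and casting parameters below are assumed, typical-order values, not figures from the text.

    # Square-root law sketch of strand solidification: shell thickness s = K*sqrt(t).
    # K ~ 27 mm/min^0.5 and the slab dimensions below are assumed, typical-order values.
    K_shell   = 27.0     # mm / min**0.5
    thickness = 220.0    # slab thickness, mm
    speed     = 1.2      # casting speed, m/min

    t_full = (thickness / 2 / K_shell) ** 2   # minutes until the two shells meet
    length = t_full * speed                   # metallurgical length, m
    print(f"full solidification after ~{t_full:.0f} min, ~{length:.0f} m below the meniscus")

For the assumed 220 mm slab cast at 1.2 m/min this gives a metallurgical length of roughly 20 m, which is why containment rolls and spray cooling must extend far below the mould before the strand can be straightened and cut.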
This is the fourth in a series of articles that I have written for this journal about Covid-19 [1-3]. Last month, the Chinese started vaccinating military volunteers with a non-replicating, genetically modified adenovirus, type 5 (Ad5). It produces the SARS-CoV-2 spike glycoprotein (S) as an antigen [3]. However, some people will have pre-existing immunity to this common human virus.

In addition, Dr. Ernesto Burgio wrote two very informative articles about this pandemic, as well as others in history [4, 5]. He pointed out that viruses (including SARS-CoV-2) mutate and that the current pandemic was predicted – in vain [5]:

“The entire community of scientists in this field has been asking for years to prepare for the worst. Yet, as we will see, only the Asian countries -- which in the last two decades had faced the pre-pandemic alarms related to avian flu and SARS -- adequately prepared themselves to face the emergency, while the Western countries generally found themselves completely unprepared for the foretold pandemic. Conspiracy theorists and negationists have dominated the airwaves for a long time: even the painful American drama did not convince them to shut up.” [5]

This month, the Russian government announced that it has started vaccinating its military with a vaccine developed by the Gamaleya Institute. It is called Gam-Covid-Vac, or Sputnik V. It uses two different genetically engineered, non-replicating adenovirus vectors, Ad5 and Ad26, which express the viral S protein. Hopefully, this will reduce the chances of pre-existing immunity undermining the vaccine's effectiveness. Unlike most vaccines in development, recipients will receive a second, booster shot. Frontline healthcare workers, doctors and teachers will begin receiving the first of the two shots in October. Johnson & Johnson's vaccine candidate also uses the Ad26 vector, while AstraZeneca and the University of Oxford are developing a vaccine based on a chimpanzee adenovirus called ChAdOx1 [3].

There are about 24 vaccine candidates in clinical trials, and many more are in preclinical development [6]. China made a $1 billion loan to countries in Latin America and the Caribbean and will make any coronavirus vaccine it develops available to these countries.

The following are in Phase 3 trials:
1. An inactivated SARS-CoV-2 virus vaccine from the Wuhan Institute of Biological Products and the China National Pharmaceutical Group (Sinopharm).
2. CoronaVac, a different inactivated SARS-CoV-2 virus vaccine, from Sinovac in China.
3. An mRNA vaccine encapsulated in a lipid nanoparticle (LNP) from Moderna (described briefly in my previous article [3]).
4. Ad5-nCoV, which uses the Ad5 vector, from CanSino Biologics.

The following are in Phase 2/3 trials:
1. Bacillus Calmette-Guerin (BCG), a live, attenuated bovine tuberculosis bacillus, Mycobacterium bovis, from the University of Melbourne and the Murdoch Children's Research Institute, Radboud University Medical Center in Nijmegen, and the Faustman Lab at Massachusetts General Hospital. It has been used since 1922 as a tuberculosis vaccine (described briefly in my previous article) and has also been the standard therapy for bladder cancer since 1977 [3].
2. AZD1222, based on the chimpanzee adenovirus ChAdOx1, from the University of Oxford, AstraZeneca and the Jenner Institute (described briefly in my previous article [3]).
3. Another mRNA vaccine encapsulated in an LNP, from Pfizer and BioNTech (described briefly in my previous article [3]).
The following are in Phase 2 trials:
1. An adjuvanted recombinant protein (the receptor-binding domain, RBD, of the S protein) from Anhui Zhifei Longcom Biopharmaceutical and the Institute of Microbiology of the Chinese Academy of Sciences.
2. ZyCoV-D, a genetically engineered plasmid (a small piece of DNA that encodes a viral antigen) from Zydus Cadila, developed at the Vaccine Technology Centre in Ahmedabad, India.
3. Covaxin, which uses an inactivated form of the SARS-CoV-2 virus, from Bharat Biotech and the National Institute of Virology in India.

The following are in Phase 1/2 trials:
1. BBIBP-CorV, an inactivated SARS-CoV-2 virus vaccine, from the Beijing Institute of Biological Products and Sinopharm.
2. GX-19, which uses a piece of DNA that encodes a viral antigen, from Genexine in Korea.
3. A self-amplifying mRNA vaccine that encodes a viral antigen, from Imperial College London.
4. LUNAR-COV19, a self-amplifying mRNA vaccine from Arcturus Therapeutics and the Duke-NUS Medical School.
5. INO-4800, a DNA vaccine from Inovio Pharmaceuticals, being tested at the Center for Pharmaceutical Research in Kansas City and at the University of Pennsylvania in Philadelphia.

Six more vaccine candidates are in Phase 1 trials, and many more are in preclinical development [6]. One of the more promising candidates, described in one of my previous articles [1], was developed by Sanofi in France using a conventional approach: its Covid-19 vaccine produces SARS-CoV-2 spike proteins in genetically engineered insect cells, the same process Sanofi uses for its commercial influenza vaccine, FluBlok. Sanofi and GlaxoSmithKline (GSK) were given funds to supply the United States government with 100 million doses of a Covid-19 vaccine. The candidate will use GSK's established pandemic adjuvant technology, called AS03, which contains molecules that stimulate the immune system and boost vaccine potency. Clinical trials will start later this year, and the firms hope to have a vaccine available in the second half of 2021. It is the first time these two companies have joined forces to develop a Covid-19 vaccine.

However, one can only make an educated guess about how long immunity might last in people who have been infected with SARS-CoV-2 and were either asymptomatic or recovered from the Covid-19 disease. Fortunately, patients who recovered from Covid-19 mounted a robust immune response [7]. Their immune systems were able to recognize SARS-CoV-2 in many ways, and their memory T cells were able to target several viral antigens, especially the S protein. This should make it easier to develop new vaccines. In addition, some people have a natural, pre-existing immunity due to antibodies and memory cells formed after previous exposure to other coronaviruses that can cause the common cold. A recent study showed that there are such memory T cells in people who had never been exposed to the SARS-CoV-2 virus [8]. This pre-existing immunity affects the models used to predict the future course of the pandemic [9]. When a hypothetical 30% immunity was added to one epidemiological model, the virus faded away in the near future before returning in three or four years [9].

CRISPR, a new technology that could produce better tests for active infections

One of the keys to controlling the Covid-19 pandemic is developing a rapid, reliable test for active infections by the SARS-CoV-2 virus. The popular media often calls this a test for Covid-19; in fact, many people become infected with the virus but never develop the disease.
One of the most important new technologies, CRISPR, may be capable of providing such a test. One of my previous articles described the use of CRISPR to produce potentially safe and effective treatments (and possibly a cure) for Covid-19 [3]. Another described how gene editing using CRISPR technology might be used to produce new foods inexpensively to feed a growing population [10].

One such test was developed by Sherlock Biosciences, which has formed a partnership with Binx Health Limited to mass-produce diagnostic tests that provide results within 20 minutes and can be performed not just in a hospital, but also in a physician's office, a supermarket, or many workplaces. Binx Health already provides convenient tests for sexually transmitted diseases that can be done at home. The CRISPR-based test uses a single-use cartridge system and the SHERLOCK (Specific High-sensitivity Enzymatic Reporter unLOCKing) platform. It will use Binx's electrochemical detection system to analyze nasal swab samples and report a "detected" or "not detected" result. The SHERLOCK method programs a CRISPR molecule to detect the presence of a specific SARS-CoV-2 genetic signature in specimens collected from patients [11]. Chinese scientists have reported a similar CRISPR-based test [10]. We need to learn more.

The global effort

A safe and effective vaccine and accurate tests for viral infections could save countless millions of lives. Still, they will not be sufficient to provide herd immunity in the USA and most other countries. As long as people have the freedom to avoid vaccination and even to deny science, herd immunity cannot be achieved. In addition, SARS-CoV-2 and other viruses mutate continuously.

Scientists continue to work together, disregarding the hateful nonsense that comes from some politicians and their supporters. As described in a previous article, this is NOT a "China virus". It is possible that SARS-CoV-2 began circulating as early as December 2019 in California and New York [12]. It went unnoticed there because, unlike the Asian countries that had dealt with the first SARS-CoV (which caused a relatively brief epidemic in China and other Asian countries), we had no experience with such an outbreak; as a result, they were able to detect it first. Blaming the Chinese is like blaming the messenger, and scientists know that it is almost impossible to prove where this epidemic started. Moreover, we should follow the principles of Total Quality Management (TQM): serious, complex problems (like Covid-19) have many causes (such as human encroachment into previously remote ecosystems and global climate change), and it is far more important to find out who will fix the problem than who might have caused it. Fear, hatred and anger do not help.

The international scientific community is sequencing the genomes of SARS-CoV-2 isolated from patients. As described previously in this journal, a mutation that changed an adenine (A) to a guanine (G) at position 23,403 in the Wuhan reference strain was found months ago [1]. This changed an aspartic acid residue to a glycine at position 614 in the S protein (the coordinate arithmetic is sketched below). This mutated strain (D614G) has become dominant across the world and is more infective than the original strain [13]. Scientists from around the world continue submitting their data to an open-access database, GISAID (the Global Initiative on Sharing All Influenza Data), with headquarters in Munich, Germany [1, 13]. It is the primary sequence database resource for Covid-19 genomic data.
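The D614G bookkeeping above is a small piece of coordinate arithmetic: a genome position is converted into a codon number within the spike (S) gene and hence into an amino-acid residue. The sketch below shows that conversion. The S-gene start coordinate used here (21,563 in the Wuhan reference sequence) and the reference and variant codons in the final comment are assumptions added for illustration; they are not taken from the article.

```python
S_GENE_START = 21563  # assumed 1-based genome coordinate of the first base of the spike ORF

def spike_residue(genome_pos: int) -> tuple[int, int]:
    """Map a 1-based genome coordinate to (residue number, position within codon)
    for the spike protein, assuming the ORF starts at S_GENE_START."""
    offset = genome_pos - S_GENE_START  # 0-based offset into the ORF
    return offset // 3 + 1, offset % 3 + 1

residue, codon_pos = spike_residue(23403)
print(residue, codon_pos)  # -> 614 2: position 23,403 is the 2nd base of codon 614

# An A-to-G change at that 2nd base would turn an assumed reference codon
# GAT (aspartic acid, D) into GGT (glycine, G) -- the D614G substitution.
```

The same arithmetic, run against the full reference sequence, is essentially what lets sequence-analysis pipelines report mutations in protein coordinates (D614G) rather than as raw genome positions.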
Their “intent is to complement what they provide with visualizations and summary data specifically intended to support the immunology and vaccine communities, and to alert the broader community to changes in frequency of mutations that might signal positive selection and a change in either viral phenotype or antigenicity” [14]. GISAID has also been the primary sequence database for influenza. As of 19 August, about 83,000 genomic sequences of the SARS-CoV-2 virus (also known as hCoV-19) had been deposited and shared in the GISAID database.

In addition, a new form of swine flu (influenza) has entered the watch list for the next possible pandemic [13]. It is a reassortant virus; that is, its genome has parts from two or more different viruses. As stated on the GISAID website, it is very important to look for new viruses that can cross from animals into humans (zoonotic viruses). In recent years, virus hunters have been searching for previously unknown zoonotic viruses. While searching caves in Yunnan, China, they found several coronaviruses in bats that are very similar to SARS-CoV-2 [15]. As described by Dr. Ernesto Burgio in this journal [5], plans were made to genetically modify some of these viruses to study their infectious and pathogenic potential. However, harsh criticisms and requests for a moratorium against this line of research followed [16]. This could be problematic because such a moratorium could be applied only to research conducted in the main laboratories (with safety and international controls), but not to any genetic manipulation done in far less safe and uncontrolled laboratories [5]. It should be emphasized, however, that the SARS-CoV-2 virus was NOT made by purposeful manipulation in a lab [1, 17].

Another key point in the recent article by Dr. Ernesto Burgio is that new serotypes of viruses emerge [5]. Some can be much deadlier than the original virus, and others can be deadlier in people who received a vaccine that was safe and effective against the original serotype [5]. This was the case with a vaccine for Dengue fever [1, 18].

So, it is important for the international community of scientists to increase their efforts to find new viruses. There have been calls to fund a BIOSCAN, the Earth BioGenome Project (EBP) and the Global Virome Project (GVP) [19-21]. A pilot project, called PREDICT, was initiated in the USA when George W. Bush and Barack Obama were Presidents [22]. It studied and identified animal viruses that might infect humans, so that new pandemics could be predicted. Investigators collected over 140,000 biological samples from animals and found over 1,000 new viruses, including a new strain of Ebola. The federal government stopped funding this program rather quietly on 25 October 2019, while the nation was distracted by the attempt to impeach President Trump. Just two months later, SARS-CoV-2 and Covid-19 emerged.

As described recently, a synergistic, holistic, ecological and evolutionary process perspective is needed to understand how viruses spill over into human populations [23]. Viruses are not static; they are dynamic and ever-changing. We must identify potentially pathogenic viruses, track them and monitor their genomes. There is a Global Influenza Surveillance and Response System that monitors the quickly evolving and recombining influenza virus in a timely manner. It has served for over half a century as a way to issue warnings when influenza viruses with pandemic potential emerge.
In the recent past, China and the USA have led response activities for several epidemics and pandemics using a One Health perspective, which is based on systems thinking: multidisciplinary teams study interacting processes that occur at both macro- and micro-scales, including pathogen, host and environment [23]. China-US partnerships have focused on improving early-warning capabilities [23]. These include the US CDC Global Disease Detection Program (CDC-GDDP); the US Agency for International Development's (USAID) Emerging Pandemic Threats (EPT) Program; the collaborative WHO, OIE and FAO global early-warning system for animal diseases transmissible to humans (GLEWS); the China National Global Virome Initiative (CNGVI, part of the Global Virome Project); the PREDICT program; and joint research supported by the US National Institutes of Health (NIH) to define the origin of SARS- and MERS-like coronaviruses and to identify other SARS coronavirus infections of mammals in China [23]. International collaborations exist between healthcare workers and people working for governments, research institutes, corporations and universities. We continue our efforts to learn more about virology in general and the SARS-CoV-2 virus in particular.

One of our biggest challenges is communication. Many eligible voters in the USA deny science and think this pandemic is either a hoax or something that cannot hurt them. This endangers them, their families and their neighbors. But the danger goes far beyond this pandemic. Denial of the science of global climate change and rising sea levels threatens our children, grandchildren and great-grandchildren. This denial of science and humanity is echoed by modern-day slave masters who run the prison-industrial complex. The white male privilege that I and other men like me enjoy is a huge part of the problem. All too many men use this privilege to treat women and the environment as commodities to be consumed and then disposed of. We must give up our privilege and gain some humanity. We must recognize the pain and pure evil that racist, misogynistic and xenophobic attitudes cause. Education is a powerful tool for doing this. That is one reason why I wrote an article that debunked the myth of gender differences in intelligence [24] and have been writing these articles about Covid-19 and science.

In my opinion, there has been a struggle between slavery and abolition in the USA for centuries. Too many people believe that slavery in the USA was abolished on 6 December 1865, when the 13th Amendment to the Constitution was ratified. It was not. There is an escape clause: "except as a punishment for crime whereof the party shall have been duly convicted". So, African-Americans could be arrested and convicted for failing to pay rent as sharecroppers, for looking at or touching a white woman, for observing a white man commit a crime and then testifying in court, for trying to vote, for being a former judge who had sentenced a white man for a crime, or simply for being black. Today, African-Americans are being arrested for protesting peacefully, running from the police, driving while black, or just being "uppity" or "resisting arrest". This has expanded into a prison-industrial complex in which prisoners work under inhumane conditions, subject to punishment without cause. They earn no money while the corporate stockholders and upper executives get richer – just like the plantation owners in the Old South.
So, next month I will propose that there have been five wars for slavery in the USA: the Revolutionary War, the War of 1812, the Mexican-American War, the Civil War and the current war system that started with the end of Reconstruction. There will be a crucial battle in this cold war on 3 November of this year, when national elections are held. In my opinion, the course of the Covid-19 pandemic and the lives of millions of American citizens will depend on the outcome of the election. We must emphasize the importance of science and the lives of our children as we try to communicate our message better.

1 Smith, R.E. Developing vaccines and treatments for Covid-19. Humans are not the enemy. Wall Street International, 24 May 2020.
2 Smith, R.E. Developing vaccines and treatments for Covid-19. Progress report. Wall Street International, 24 June 2020.
3 Smith, R.E. China starts to vaccinate its military personnel. Developing vaccines and treatments for Covid-19. Progress report. Wall Street International, 24 July 2020.
4 Burgio, E. Covid-19: the Italian drama – Four avoidable risk factors. Wall Street International, 21 April 2020.
5 Burgio, E. A pandemic foretold (in vain). A last report. Wall Street International, 4 August 2020.
6 Craven, J. Covid-19 vaccine tracker. Regulatory Focus, 13 August 2020.
7 Grifoni, A. et al. Targets of T cell responses to SARS-CoV-2 coronavirus in humans with Covid-19 disease and unexposed individuals. Cell, 2020.
8 Mateus, J. et al. Selective and cross-reactive SARS-CoV-2 epitopes in unexposed people. Science, published online in advance, 4 August 2020.
9 Kissler, S.M. et al. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science, Volume 368, pp. 860-868, 22 May 2020.
10 Smith, R.E. Using CRISPR gene editing to create new foods. An important part of the Fourth Industrial Revolution. Wall Street International, 24 May 2019.
11 Eckert, A. First point-of-care test for Covid-19 leveraging CRISPR technology. Genetic Engineering & Biotechnology News, 1 July 2020.
12 Davis, J.T. et al. Estimating the establishment of local transmission and the cryptic phase of the Covid-19 pandemic in the USA. Preprint at medRxiv, 2020.
13 GISAID – Initiative. Accessed 19 August 2020.
14 Korber, B. et al. Spike mutation pipeline reveals the emergence of a more transmissible form of SARS-CoV-2. bioRxiv preprint, 30 April 2020.
15 Qiu, J. How China's 'Bat Woman' hunted down viruses from SARS to the new coronavirus. Scientific American, Volume 322, pp. 24-32, June 2020.
16 Butler, D. A SARS-like cluster of circulating bat coronaviruses shows potential for human emergence. Nature Medicine, Volume 21, pp. 1508-1513, 2015.
17 Andersen, K.G. et al. The proximal origin of SARS-CoV-2. Nature Medicine, Volume 26, pp. 450-455, 2020.
18 Peeples, L. Avoiding pitfalls in the pursuit of a Covid-19 vaccine. Proceedings of the National Academy of Sciences, Volume 117, pp. 8218-8221, 2020.
19 Kress, W.J. et al. Intercepting pandemics through genomics. Proceedings of the National Academy of Sciences, published online, 23 June 2020.
20 Carroll, D. et al. The Global Virome Project. Science, Volume 359, pp. 872-874, 2020.
21 Carlson, C.J. From PREDICT to prevent, one pandemic later. The Lancet, published online, 31 March 2020.
22 McNeil Jr., D. Scientists were hunting for the next Ebola. Now the U.S. has cut off their funding. New York Times, 25 October 2019.
23 Evans, T.S. et al. Synergistic China-US ecological research is essential for global emerging infectious disease preparedness. EcoHealth, Volume 17, pp. 160-173, 2020.
U.S. Equal Employment Opportunity Commission

Small businesses are an ever-increasing source of jobs, many of which can be filled by individuals with disabilities who are able and want to work. The approximately 25 million small businesses in the nation represent 99.7 percent of all employers, employ more than 50 percent of the private work force, and generate more than half of the nation's gross domestic product.(1) Small businesses also provide 67 percent of all first jobs. Unfortunately, the unemployment rate of individuals with disabilities remains high. By some estimates, more than 70% of individuals with severe disabilities are not working, even though many of them are willing and able to do so. President Bush's New Freedom Initiative seeks to partner with small businesses to increase the percentage of individuals with disabilities in the workplace.

While the Americans with Disabilities Act (ADA) applies to all businesses with 15 or more employees, this handbook is intended primarily for businesses with 15 to 100 employees and smaller businesses expecting to expand to have at least 15 employees in the near future. It will provide you with an easy-to-read overview of the basic employment provisions of the ADA as they relate to employees and job applicants. The ADA is a federal civil rights law designed to prevent discrimination and enable individuals with disabilities to participate fully in all aspects of society.

Practice tip: The Equal Employment Opportunity Commission (EEOC) enforces the employment provisions of the ADA. The EEOC is headquartered in Washington, DC and has offices throughout the United States, including Puerto Rico. If you have any questions concerning the EEOC or the ADA, please contact the EEOC.

The ADA applies to a person who has a physical or mental impairment that substantially limits one or more major life activities (like sitting, standing, or sleeping). The ADA also protects a person with a record of a substantially limiting impairment.

Example: A person with a history of cancer that is now in remission may be covered.

And the ADA protects a person who is regarded (or treated by an employer) as if s/he has a substantially limiting impairment.

Example: An employer may not deny a job to someone who has a history of cancer because of a fear that the condition will recur and cause the employee to miss a lot of work.

The ADA only protects a person who is qualified for the job s/he has or wants.

Practice tip: Employers do not have to hire someone with a disability over a more qualified person without a disability. The goal of the ADA is to provide equal access and opportunities to individuals with disabilities, not to give them an unfair advantage.

Employers covered by the ADA have to make sure that people with disabilities enjoy equal employment opportunities.

Practice tip: Harassing someone because of a disability is just as serious as harassing someone because of race, sex, religion, or national origin. If an employee complains to you that s/he is being harassed because of a disability, respond to the complaint right away by conducting an appropriate investigation and, if necessary, taking action to correct the situation.

As discussed in the sections that follow, the ADA also limits the kinds of medical information that you can get from a job applicant or employee and requires you to provide reasonable accommodations to the known limitations of qualified individuals with disabilities.(2)

Practice tip: Focus application and interview questions on non-medical job qualifications.
An employer may ask a wide range of questions designed to determine an applicant's qualifications for a job. Where it seems likely that an applicant has a disability that will require a reasonable accommodation, you may ask whether s/he will need one. This is an exception to the usual rule that questions regarding disability and reasonable accommodation should come after making a conditional job offer.

Example: During a job interview, you may ask a blind applicant interviewing for a position that requires working with a computer whether s/he will need a reasonable accommodation, such as special software that will read information on the screen.

Practice tip: You may withdraw an offer from an applicant with a disability only if it becomes clear that s/he cannot do the essential functions of the job or would pose a direct threat (i.e., a significant risk of substantial harm) to the health or safety of him/herself or others. Be sure to consider whether any reasonable accommodation(s) would enable the individual to perform the job's essential functions and/or would reduce any safety risk the individual might pose. Once a person with a disability has started working, actual performance, and not the employee's disability, is the best indication of the employee's ability to do the job.

Basic rule: The ADA strictly limits the circumstances under which you may ask questions about disability or require medical examinations of employees. Such questions and exams are only permitted where you have a reasonable belief, based on objective evidence, that a particular employee will be unable to perform essential job functions or will pose a direct threat because of a medical condition. Sometimes you may have observed the employee's job performance or you may have received reports from others who have seen the employee's behavior. These observations or reports may give you a reasonable belief that the employee's ability to perform essential job functions is impaired by a medical condition or that the employee poses a direct threat because of a medical condition.

Practice tip: If an employee with a disability is having trouble performing essential job functions, or doing so safely, do not immediately assume that the disability is the reason. Poor job performance is often unrelated to a medical condition and, when this is the case, it should be handled in accordance with your existing policies concerning performance (e.g., informal discussions with the employee, verbal or written warnings, or termination where necessary). On the other hand, if you have information that reasonably causes you to conclude that the problem is related to the employee's disability, then medical questions, and perhaps even a medical examination, may be appropriate.

Example: A normally reliable employee who is making frequent mistakes tells you that the medication she has started taking for her lupus makes her lethargic and unable to concentrate. Under these circumstances, you may ask her some questions relating to her medical condition, such as how long the medication can be expected to affect job performance.

Basic rule: With limited exceptions, you must keep confidential any medical information you learn about an applicant or employee. Information can be confidential even if it contains no medical diagnosis or treatment course and even if it is not generated by a health care professional.
Example: An employee's request for a reasonable accommodation would be considered medical information subject to the ADA's confidentiality requirements.

Practice tip: Do not place medical information in regular personnel files. Rather, keep medical information in a separate medical file that is accessible only to designated officials. Medical information stored electronically must be similarly protected (e.g., by storing it on a separate database).

The ADA recognizes that employers may sometimes have to disclose medical information about applicants or employees. Therefore, the law contains certain exceptions to the general rule requiring confidentiality. Information that is otherwise confidential under the ADA may be disclosed, for example, to supervisors and managers who need to know about necessary work restrictions or accommodations, to first aid and safety personnel if an employee's disability might require emergency treatment, and to government officials investigating compliance with the ADA.

Practice tip: If providing a particular accommodation would result in undue hardship, consider whether another accommodation exists that would not.

Practice tip: To offset the cost of accommodations, you may be able to take advantage of tax credits, such as the Small Business Tax Credit (see Appendix A), and other sources, such as vocational rehabilitation funding. Regardless of cost, you do not need to provide an accommodation that would pose significant difficulty in terms of the operation of your business.

Example: A store clerk with a disability asks to work part-time as a reasonable accommodation, which would leave part of one shift staffed by one clerk instead of two. This arrangement poses an undue hardship if it causes untimely customer service.

Example: An employee with a disability asks to change her scheduled arrival time from 9:00 a.m. to 10:00 a.m. to attend physical therapy appointments and to stay an hour later. If this accommodation would not affect her ability to complete work in a timely manner or disrupt service to clients or the performance of other workers, it does not pose an undue hardship.

In addition to actions that would result in undue hardship, you do not have to lower production or performance standards, eliminate essential job functions, or excuse violations of conduct rules.

Example: A grocery store bagger develops a disability that makes her unable to lift any item weighing more than five pounds. The store does not have to grant an accommodation removing its fifteen-pound lifting requirement if doing so would remove the main job duty of placing items into bags and handing filled bags to customers or placing them in grocery carts.

Example: A hotel that requires its housekeepers to clean 16 rooms per day does not have to lower this standard for an employee with a disability.

Example: You do not have to tolerate violence, threats of violence, theft, or destruction of property, even if the employee claims that a disability caused the misconduct.

Example: A doctor's note indicating that an employee can work "with restrictions" is a request for a reasonable accommodation.

Practice tip: Even though you do not have to initiate discussions about the need for a reasonable accommodation, if you believe that a medical condition is causing a performance or conduct problem, you certainly may ask the employee how you can help to solve the problem and even may ask if the employee needs a reasonable accommodation.

Practice tip: You also may make an accommodation without requesting any documentation at all. You are free to rely instead on an individual's own description of his or her limitations and needs. Consider putting procedures for providing reasonable accommodations in writing (though this may not be necessary, particularly if you are a very small employer and have one person designated to receive and process accommodation requests).
As an alternative to written procedures, you might include a short statement in an employee handbook indicating that you will provide reasonable accommodations for qualified individuals with disabilities, along with the name and telephone number of the person designated to handle requests. You also may want to indicate on written job applications that you will provide reasonable accommodations for the application process and during employment. And bear in mind that these obligations apply whether you have written procedures or not.

Basic rule: There are many accommodations that enable individuals with disabilities to apply for jobs, be productive workers, and enjoy equal employment opportunities. In general, though, they can be grouped into the following categories.

Practice tip: There are tax incentives available to many small businesses for providing some of the reasonable accommodations described below. (See Appendix A.)

Example: A medical clinic could purchase amplified stethoscopes for use by hearing-impaired nurses, physicians, and other members of the health care staff.

Example: A small retail store could lower a paper cup dispenser near the water fountain and reconfigure store displays so that an employee in a wheelchair can get water and have access to all parts of the store.

Example: If moving boxes of files into a storage room is a function that a secretary performs only from time to time, this function could likely be reallocated to other employees if the secretary's severe back impairment makes him unable to perform it.

But: You do not have to remove the essential functions (i.e., fundamental duties) of the job.

Example: Where an employee has to spend a significant amount of time retrieving heavy boxes of merchandise and loading them into customers' cars as part of his job, he probably cannot be relieved of this duty as an accommodation. Where your workforce is small and all workers must be able to perform a number of different tasks, job restructuring may not be possible.

Example: A telemarketer, proofreader, researcher, or writer may have the type of job that can be performed at least partly at home.

But: Where the work involves use of materials that cannot be replicated at home, direct customer and co-worker access is necessary, or immediate access to documents in the workplace is necessary and cannot be anticipated in advance, working at home likely would present an undue hardship.

Example: An accountant for a small employer whose medication for depression causes extreme grogginess in the morning may not be able to begin work at 9:00 a.m., but could work from 10:00 until 6:30 without affecting her ability to complete tasks in a timely manner.

Example: It may be an undue hardship to adjust the arrival time for someone on a construction crew if it would affect the ability of others to begin work.

But: Not all requests for leave as a reasonable accommodation must be granted. For example, where a job is highly specialized, so that it will be difficult to find someone to perform it on a temporary basis, and where the employee cannot provide a date of return, granting leave and holding the position open may constitute undue hardship.

Example: If the Executive Chef at a top restaurant requests leave for treatment of her disability but cannot provide a fixed date of return, the restaurant can show undue hardship because of the difficulty of replacing, even temporarily, a chef of this caliber.
Moreover, it leaves the restaurant unable to determine how long it must hold open the position or to plan for the chef's absence.

Example: A restaurant food server requests 10 to 14 weeks off for disability-related surgery, with the date of return depending on the speed of recuperation. The employer must decide whether granting this amount of leave, and doing so without a fixed date of return, would cause an undue hardship.

Example: A retail store that does not allow its cashiers to drink beverages at the checkout and limits them to two 15-minute breaks per day may need to modify one rule or the other to accommodate an employee with a psychiatric disability who needs to drink a beverage once an hour due to dry mouth, a side effect of some psychiatric medications.

Example: A custodian with mental retardation might have a job coach, paid for by an outside agency, who initially helps the worker learn the required tasks on a full-time basis and then returns periodically to help ensure that he is performing the job properly.

Example: After being injured, a construction worker can no longer perform his job duties, even with accommodation, due to a resulting disability. He asks you to reassign him as an accommodation to a vacant, higher-paid construction foreman position for which he is qualified. You do not have to offer this reassignment because it would be a promotion.

Example: The host responsible for escorting diners to their seats at one of three restaurants operated by your business can no longer perform the essential functions of her position because a disability requires her to remain mostly sedentary. However, she is qualified to perform the duties of a vacant cashier position, which has the same salary, at one of your other restaurants. You must offer her a reassignment to the cashier position at the other restaurant as a reasonable accommodation.

But: Reassignment is not available to applicants; therefore, you would not have to look for a job for a person with a disability who is not qualified to do the job for which he or she applied, unless you do this for all applicants for other available jobs.

Basic rule: The ADA allows you to ask questions related to disability and even require a medical examination of an employee whose medical condition appears to be causing performance or safety problems.

Direct threat: You also may reject a job applicant with a disability or terminate an employee with a disability for safety reasons if the person poses a direct threat (i.e., a significant risk of substantial harm to self or others). Employers have legitimate concerns about maintaining a safe workplace for all employees and members of the public and, in some instances, the nature of a particular person's disability may cause an unacceptable risk of harm.

Practice tip: You must be careful not to exclude a qualified person with a disability based on myths, unsubstantiated fears, or stereotypes about that person's ability to safely perform the job.

Example: You cannot automatically prohibit someone with epilepsy from working around machinery. Some forms of epilepsy are more severe than others or are not well controlled. On the other hand, some people with epilepsy know when a seizure will occur in time to move away from potentially hazardous situations. Sometimes seizures occur only at night, making the possibility of a seizure on the job remote.
Example: A restaurant could not deny someone with HIV infection a job handling food based on customers' fears that the condition could be transmitted, since there is no real risk of transmitting HIV through food handling.

Food safety – a special rule: Under the ADA, the Department of Health and Human Services annually issues a list of the infectious or communicable diseases transmitted through the handling of food. (Copies of the list may be obtained from the Center for Infectious Diseases, Centers for Disease Control & Prevention, 1600 Clifton Road, N.E., Mailstop C09, Atlanta, GA 30333, (404) 639-2213.)

Example: An employer may not reject an applicant who had been treated for major depression but had worked successfully in stressful jobs for several years based on speculation that the stress of the job might trigger a future relapse.

Example: A deaf mechanic cannot be denied employment based on the fear that he has a high probability of being injured by vehicles moving in and out of the garage if an accommodation would enable him to perform the job duties with little or no risk, such as allowing him to work in a corner of the garage facing outward so that he can see any moving vehicles.

Example: An employer may fire an employee who is drinking alcohol while on the job if it has a uniformly applied rule prohibiting such conduct.

But: There may be times when you may have to accommodate an employee with alcoholism. For example, an employer may have to modify a rule prohibiting personal phone calls at work for an employee with alcoholism who periodically has to contact his "AA sponsor," if the employee has a need to do so during work hours.

Basic rule: A charge means only that someone has alleged that your business discriminated against him/her on a basis that is protected under federal equal employment opportunity law: race, color, national origin, religion, sex, age, or disability. A charge does not constitute a finding that you did, in fact, discriminate.

Practice tip: EEOC's mediation program is free. The program is voluntary and all parties must agree to take part. The mediation process also is confidential. Neutral mediators provide employers and charging parties the opportunity to reach mutually agreeable solutions. If the charge filed against your company is eligible for mediation, you will be notified by the EEOC of your opportunity to take part in the mediation process. In the event that mediation does not succeed, the charge is referred for investigation.

Example: An employee filed a charge against her supervisor alleging disability discrimination, which the employer believed to be without merit. After receiving the charge, the employer told the employee that she would be fired if she filed another meritless charge against it. The employee filed another charge against the employer and she was fired. Even assuming the charges of discrimination were without merit, the employee has a strong claim that the employer unlawfully retaliated against her.

Practice tip: Even if you believe that the charge is frivolous, submit a response to the EEOC and provide the information requested. If the charge was not dismissed by the EEOC when it was received, that means there is some basis for proceeding with further investigation. There are many cases where it is unclear whether discrimination may have occurred and an investigation is necessary.
You are encouraged to present any facts that you believe show the allegations are incorrect or do not amount to an ADA violation.(3)

The Internal Revenue Code includes several provisions aimed at making businesses more accessible to people with disabilities. The following is designed to give you general information about three of the most significant tax incentives. It is not legal advice. You should check with your accountant or tax advisor to find out whether you are eligible to take advantage of these incentives, or visit the Internal Revenue Service's website, www.irs.gov, for more information. Additionally, consult your accountant or tax advisor about whether there are similar state and local tax incentives.

Below are a few of the most frequently consulted resources for accommodating qualified individuals with disabilities. Many other resources exist both nationally and locally, such as organizations of and for individuals with particular types of disabilities. Finding one of these organizations in your area may be as simple as consulting your local phone book. Additionally, the federal government has a web site, www.disabilitydirect.gov, which provides links to many federal resources.

Job Accommodation Network (JAN) – provides lists based on specific disabilities as well as links to various other accommodation providers.
P.O. Box 6080, Morgantown, WV 26506-6080
(800) 526-7234 or (304) 293-7184

U.S. Department of Labor
For written materials: (800) 959-3652 (voice); (800) 326-2577 (TTY)
To ask questions: (202) 219-8412

ADA Disability and Business Technical Assistance Centers (DBTACs) – 10 federally funded regional centers that provide assistance on all aspects of the ADA.

RESNA Technical Assistance Project – can refer individuals to projects offering technical assistance on technology-related services for individuals with disabilities.
(703) 524-6686 (voice); (703) 524-6639 (TTY)

Access for All Program on Employment and Disability
School of Industrial and Labor Relations
106 ILR Extension, Ithaca, NY 14853-3901
(607) 255-7727 (voice); (607) 255-2891 (TTY)

Many businesses say that they would like to hire qualified individuals with disabilities, but do not know where to find them. The following resources may be able to help. In addition, you may contact organizations of and for individuals with specific disabilities in your area and consult www.disabilitydirect.gov.

RISKON – executive recruitment firm committed to helping people with disabilities find jobs:
15 Central Avenue, Tenafly, NJ 07670
(201) 568-5830 (fax)

National Business & Disability Council – provides a full range of services to assist businesses in successfully integrating people with disabilities into the workplace.

Job Accommodation Network (JAN) – provides a variety of resources for employers with employees with disabilities and those seeking to hire employees with disabilities:
P.O. Box 6080, Morgantown, WV 26506-6080
(800) 526-7234 or (304) 293-7184

Employer Assistance Referral Network (EARN) – a national toll-free telephone and electronic information referral service to assist employers in locating and recruiting qualified workers with disabilities. EARN is a service of the U.S. Department of Labor, Office of Disability Employment Policy, with additional support provided by the Social Security Administration's Office of Employment Support Programs:
1-866-EARN NOW (327-6669)

2. If you are a federal contractor, you will also have obligations under Section 503 of the Rehabilitation Act.
This law prohibits discrimination and requires contractors and subcontractors to take affirmative steps to hire and to promote qualified individuals with disabilities. For further information on the requirements of Section 503, contact the Office of Federal Contract Compliance Programs (OFCCP) of the U.S. Department of Labor at (202) 693-0100 (voice) or (800) 326-2577 (TDD), or at www.dol.gov/esa/ofcp_org.htm.

3. The Small Business Regulatory Enforcement Fairness Act allows small businesses to comment about federal agency enforcement actions to an SBA Ombudsman. For information about this process and how to submit a comment, see the Small Business and Agriculture Regulatory Enforcement National Ombudsman. It is EEOC policy to ensure that employers are not targeted for enforcement actions as a result of their comments to the SBA Ombudsman.
“Israel maintains entrenched discriminatory systems that treat Palestinians unequally. Its 50-year occupation of the West Bank and Gaza involves systematic rights abuses, including collective punishment, routine use of excessive lethal force, and prolonged administrative detention without charge or trial for hundreds. It builds and supports illegal settlements in the occupied West Bank, expropriating Palestinian land and imposing burdens on Palestinians but not on settlers, restricting their access to basic services and making it nearly impossible for them to build in much of the West Bank without risking demolition. Israel’s decade-long closure of Gaza, supported by Egypt, severely restricts the movement of people and goods, with devastating humanitarian impact. The Palestinian Authority in the West Bank and Hamas in Gaza both sharply restrict dissent, arbitrarily arresting critics and abusing those in their custody.” – Human Rights Watch, 2017

The Israel/Palestine conflict has been splashed all over the news for years. But what are they really fighting about? To fully understand such a complicated conflict, one must first go back a hundred or so years and understand the history of the region. The conflict raises the age-old question humans have been struggling with since the dawn of time: who can claim the land on which people live?

Zionism is a movement for (originally) the re-establishment and (now) the development and protection of a Jewish nation in what is now Israel. Zionists believe that Judaism is not just a religion but a nationality, and that after years of persecution Jews should be able to create their own nation-state in their homeland of Jerusalem and Israel. However, there are a few problems with this. Not only is Jerusalem a holy site for all three Abrahamic religions, but Arabs had been living on the land of present-day Israel for years before Zionists decided to claim that land back. And to make things even more complicated, after the Ottoman Empire (the ruling empire of the region) collapsed, Imperial Britain decided to take over the land. This all happened in the early 1900s, and it has only gotten more complicated since then. Below is a timeline created to help understand the history and current events of the Israel/Palestine conflict.

1918 – After WW1, the Ottoman Empire collapsed and the British took control of the area, renaming it "The British Mandate for Palestine".

1930s – Nazi persecution, and later the Holocaust, caused many Jews to flee from Europe to the British Mandate for Palestine, so much so that the British limited Jewish immigration.

1947 – The UN proposed to split the British Mandate for Palestine into Israel and Palestine (Jerusalem was to be an international zone).

1948-1949 – Arab states declared war on Israel to establish a unified Arab Palestine where the British Mandate had been. Israel won the war but pushed well past the borders proposed under the UN plan. It also expelled huge numbers of Palestinians (roughly 700,000). After the war, Israel controlled everywhere except Gaza (which Egypt controlled) and the West Bank (which Jordan controlled). In the following decades, many refugees (both Arabs and Jews) expelled from their countries came to Israel.

1967 – A war known as the Six-Day War took place.
After this war, Israel seized the Golan Heights from Syria, the West Bank from Jordan, and Gaza and Sinai from Egypt. Once Israel took over all of the former Mandate territory, it was left responsible for governing the Palestinians. The Palestine Liberation Organization (PLO), formed in the 1960s, fought against Israel (including through acts of terrorism).

1978 – Israel and Egypt signed the US-brokered Camp David Accords, and Israel gave Sinai back to Egypt.

1982 – The Israeli military invaded Lebanon to drive the PLO out of Beirut. Meanwhile, Israeli settlers moved into the heavily Palestinian West Bank and Gaza whether the Palestinians wanted them there or not. Soldiers came with the settlers to protect them and forced Palestinians off their land. The international community considers these settlements to be illegal.

1987-1993 – The First Intifada (uprising) began with protests and boycotts but escalated. A couple of hundred Israelis and over 1,000 Palestinians died. Around the time of the First Intifada, Hamas, a violent extremist group dedicated to the destruction of Israel, was created in Gaza.

1993 – The Oslo Accords were signed. These were meant to be the first big step toward Israel someday withdrawing from the West Bank and Gaza and allowing an independent Palestine. The Oslo Accords also created the Palestinian Authority (PA), which allowed Palestinians some self-rule in certain places. Members of Hamas launched suicide bombings to try to sabotage the process, and the Israeli right protested the peace talks. Tensions were so high that, after the signing of the second Oslo Accords, a member of the Israeli far right shot the Israeli Prime Minister.

2000 – A second Camp David summit was held and came up empty.

2000-2005 – The Second Intifada occurred, this time much more violent. By the end of the Second Intifada, 1,000 Israelis and 3,200 Palestinians had died.

2005 – Israeli politics shifted right, and the country built walls and checkpoints to control Palestinians' movements. Israel withdrew its troops and settlers from Gaza in 2005 but, citing security concerns, to this day maintains tight control of Gaza's land and sea borders, reducing its economy to a state of collapse.

2006 – Israel's Strategic Affairs Ministry was created, and since then it has dedicated significant resources to monitoring critics of Israeli policy.

2006-2007 – Hamas gained power in Gaza but split from the Palestinian Authority. Israel put Gaza under a blockade, and unemployment rose to around 40%.

2012 – Under intense Egyptian and American pressure, Israel and the Palestinian militant group Hamas halted eight days of conflict. In one week, violence had killed more than 150 Palestinians and five Israelis. The deal called for a 24-hour cooling-off period to be followed by talks aimed at resolving at least some of the longstanding grievances between the two sides. According to the New York Times, the deal demonstrated the pragmatism of Egypt's new Islamist president, Mohamed Morsi. However, the cease-fire was reached only through a final American diplomatic push by Secretary of State Hillary Rodham Clinton.

2013 – U.S. Secretary of State John Kerry attempted to revive the peace process between Israel and the Palestinian Authority (PA) in the West Bank in order to secure a two-state solution. However, peace talks were disrupted when Fatah, the PA's ruling faction, formed a unity government with its rival, Hamas.
2014 – In the summer of 2014, the murders of three Israeli teenagers and one Palestinian teenager ignited clashes in the Palestinian territories and precipitated a military confrontation between the Israeli military and Hamas. In August 2014, in violation of the November 2012 ceasefire, Hamas fired nearly three thousand rockets at Israel. In retaliation, Israel launched air strikes on rocket launchers and other suspected terrorist targets in Gaza. The fighting ended in late August with a cease-fire deal brokered by Egypt, but by then it had killed 71 Israelis and 2,220 Palestinians, according to a report by the United Nations.

2015 – A wave of violence between Israelis and Palestinians emerged after clashes erupted at a Jerusalem holy site in September 2015. Amid calls from the United Nations Security Council to ease tensions, Palestinian President Mahmoud Abbas announced that Palestine could no longer be bound by the Oslo Accords. Fall 2015 witnessed further increases in violence, with near-daily stabbings of civilians and Israeli security force crackdowns. These included the arrest of Hassan Yousef, cofounder and senior official of Hamas. Kerry renewed separate talks with Israeli and Palestinian leaders in October 2015 in order to quell the emerging violence.

2016 – Israeli military authorities have demolished or confiscated Palestinian school buildings or property in the West Bank at least 16 times since 2010, with 12 incidents since 2016, repeatedly targeting some schools, Human Rights Watch found. Israel has repeatedly denied Palestinians permits to build schools in the West Bank and demolished schools built without permits, making it more difficult or impossible for thousands of children to get an education, Human Rights Watch said. The Israeli military refuses to permit most new Palestinian construction in the 60 percent of the West Bank where it has exclusive control over planning and building, even as the military facilitates settler construction. The military has enforced this discriminatory system by razing thousands of Palestinian properties, including schools, creating pressure on Palestinians to leave their communities. When Israeli authorities have demolished schools, they have not taken steps to ensure that children in the area have access to schools of at least the same quality. "Israeli authorities have been getting away for years with demolishing primary schools and preschools in Palestinian communities," said Bill Van Esveld, senior children's rights researcher at Human Rights Watch. "The Israeli military's refusal to issue building permits and then knocking down schools without permits is discriminatory and violates children's right to education."

According to Human Rights Watch, an independent, international, nongovernmental organization that promotes respect for human rights and international law, "The Israeli government continued to enforce severe and discriminatory restrictions on Palestinians' human rights; restrict the movement of people and goods into and out of the Gaza Strip; and facilitate the unlawful transfer of Israeli citizens to settlements in the occupied West Bank. Punitive measures taken by the Palestinian Authority (PA) exacerbated the humanitarian crisis in Gaza caused by the closure enforced by Israel.
The PA in the West Bank and Hamas in Gaza escalated crackdowns on dissent, arbitrarily arresting critics, and abusing those in their custody.” Between January 1 and November 6, 2017, Israeli security forces killed 62 Palestinians, including 14 children, and injured at least 3,494 Palestinians in the West Bank, Gaza and Israel, including protesters, suspected assailants or members of armed groups, and bystanders. Palestinians killed at least 15 Israelis during this same period, including 10 security officers, and injured 129 in conflict-related incidents in the West Bank and Israel. In April and May, hundreds of Palestinian prisoners spent 40 days on hunger strike seeking better conditions. As of November 1, Israeli authorities incarcerated 6,154 inmates on what they consider security grounds, the overwhelming majority Palestinian, including 3,454 convicted prisoners, 2,247 pretrial detainees and 453 administrative detainees held without charge or trial, according to the Israel Prison Service. On May 7, 2018, Israeli authorities revoked the work permit of Omar Shakir, the Human Rights Watch Israel and Palestine director, and ordered him to leave Israel within 14 days. Authorities based the decision on a dossier a government ministry had compiled on Shakir’s activities spanning over a decade, almost all of which predated his Human Rights Watch employment. “This is not about Shakir, but rather about muzzling Human Rights Watch and shutting down criticism of Israel’s rights record,” said Iain Levine, deputy executive director for program at Human Rights Watch. “Compiling dossiers on and deporting human rights defenders is a page out of the Russian or Egyptian security services’ playbook.” In July-August 2017, tensions around the Al-Aqsa/Temple Mount compound triggered an escalation in violence. Israeli security forces used lethal force against demonstrators and against suspected attackers in the West Bank and at the Gaza border. Palestinian assailants, most of them apparently acting without the formal sponsorship of any armed group, carried out stabbings and occasional shootings against Israelis. Israel operates a two-tiered system in the West Bank that provides preferential treatment to Israeli settlers while imposing harsh conditions on Palestinians. While settlements expanded in 2017, Israeli authorities destroyed 381 homes and other property, forcibly displacing 588 people as of November 6, 2017, in the West Bank, including East Jerusalem, as part of discriminatory practices that reject almost all building permit applications submitted by Palestinians. Israeli restrictions on the delivery of construction materials to Gaza and a lack of funding have impeded reconstruction of the 17,800 housing units severely damaged or destroyed during Israel’s 2014 military operation in Gaza. About 29,000 people who lost their homes remain displaced. Israel has also maintained onerous restrictions on the movement of Palestinians in the West Bank, including checkpoints and the separation barrier, a combination of wall and fence that Israel says it built for security reasons. Israeli-imposed restrictions designed to keep Palestinians far from settlements forced them to take time-consuming detours and restricted their access to agricultural land. According to a 2017 Human Rights Watch report, Israeli military authorities detained Palestinian protesters, including those who advocated nonviolent protest against Israeli settlements and the route of the separation barrier.
Israeli authorities try the majority of Palestinian children incarcerated in the occupied territory in military courts, which have a near-100 percent conviction rate. HRW added that Israeli security forces arrested Palestinian children suspected of criminal offenses, usually stone-throwing, often using unnecessary force, questioned them without a family member present, and made them sign confessions in Hebrew, which most did not understand. The Israeli military detained Palestinian children separately from adults during remand hearings and military court trials, but often detained children with adults immediately after arrest. As of June 30, Israeli authorities held 315 Palestinian children in military detention. As of October 2017, Israel held 453 Palestinian administrative detainees without charge or trial, based on secret evidence, many for prolonged periods. Israel jails many Palestinian detainees and prisoners inside Israel, violating international humanitarian law, which requires that they not be transferred outside the occupied territory, and restricting the ability of family members to visit them. In December 2017, President Donald J. Trump recognized Jerusalem as the capital of Israel and announced his intention to relocate the U.S. embassy there, reversing longstanding U.S. policy. Israel considers the “complete and united Jerusalem” its capital, but Palestinians claim East Jerusalem as the capital of their future state, according to the Council on Foreign Relations’ website. On Monday, May 14th, the United States Embassy was formally relocated to Jerusalem from Tel Aviv, on the 70th anniversary of the formation of Israel. The shift of the United States Embassy to Jerusalem reflects the close alliance that has developed between Mr. Trump and Mr. Netanyahu, which Palestinian leaders say has worsened prospects for peace. “Today is a day of sadness,” Sabri Saidam, the Palestinian minister of education, said on the 14th. “It’s a manifestation of the power of America and President Trump in upsetting the Palestinian people and the people who have been awaiting the independence of Palestine for 70 years.” Many Palestinians protested the opening of the Embassy. Outside the Qalandiya refugee camp north of Jerusalem, youths released bunches of black balloons that carried aloft black Palestinian flags, showing their disdain for the American move. Clashes pitted demonstrators throwing rocks and Molotov cocktails against Israeli security forces firing tear gas and rubber bullets. A mass attempt by Palestinians to cross the border fence separating Israel from Gaza turned violent, as Israeli soldiers responded with rifle fire. More than 2,700 Palestinian demonstrators were injured, at least 1,350 by gunfire, along the border fence with Gaza, the Health Ministry reported. Israeli soldiers and snipers used barrages of tear gas as well as live gunfire to keep protesters from entering Israeli territory. The Israeli military said that some in the crowds were planting or hurling explosives, and that many were flying flaming kites into Israel; at least one kite outside the Nahal Oz kibbutz, near Gaza City, ignited a wildfire. A spokesman for the Israeli Defense Forces, Lt. Colonel Jonathan Conricus, cast doubt on the casualty numbers from the Hamas-controlled Health Ministry; he said a large number of those listed as injured had suffered only tear-gas inhalation (although one infant died from tear-gas inhalation).
Israel said its soldiers had exercised restraint and that many more protesters would die if they tried to cross into Israeli territory. But Doctors Without Borders, the international medical charity, said that it had treated more Palestinians at its Gaza clinics in the past month than during the 2014 conflict and that some of the exit wounds from Israeli ammunition were “fist-size”. B’Tselem, a leading Israeli human rights organization, criticized the military’s use of lethal force, saying that the demonstrations were no surprise and that Israel had “plenty of time to come up with alternate approaches.” “The fact that live gunfire is once again the sole measure that the Israeli military is using in the field evinces appalling indifference towards human life on the part of senior Israeli government and military officials,” the group said. Though Colonel Conricus said the Palestinian fighters were carrying firearms, he acknowledged that there had been no reports of Israeli troops coming under gunfire. An Israeli soldier was wounded by shrapnel from what was believed to be an explosive device, he added. Israel has made clear throughout the protests that it holds Hamas responsible for any violence emanating from Gaza, and Colonel Conricus made no apologies for the one-sided body count. On May 30, 2018, Hamas said that armed groups in the Gaza Strip had agreed to a ceasefire with Israel, following a night of air attacks targeting several Hamas and Islamic Jihad positions, so long as the “occupier” (Israel) observed it as well. However, on June 2 Israeli warplanes hit at least 15 targets belonging to Hamas’ armed wing, the Al Qassam Brigades. The attacks targeted at least three Hamas compounds in the northern part of the strip and came a day after Razan al-Najjar, a 21-year-old volunteer paramedic, was killed by Israeli live fire during a protest. According to witnesses, al-Najjar was shot in her white uniform while running towards the fortified fence to help a casualty. To conclude, that is where the conflict stands as of now. It is a bloody and terrible dispute with casualties on both sides, although the casualties have been much higher on one side ever since Israel won the war against Arabs fighting for a unified Arab Palestine in 1949. Israel has more money, more land, more fighters (most Israeli citizens must serve in the IDF in some capacity because of conscription at age 18), and the backing of powerful world powers such as the United States of America. In contrast, Palestinians are not allowed to form their own government to lead the people, so terror groups such as Hamas have taken the stage to represent Palestine in the fight for freedom. However, Hamas does not represent Palestinians as a whole. Many Palestinians who protest are not affiliated with any particular group; they are average people who want the freedom and human rights they deserve, as well as to live and thrive on the land they see as rightfully theirs.
“Global Conflict Tracker: Israeli-Palestinian Conflict.” Council on Foreign Relations, www.cfr.org/interactives/global-conflict-tracker#!/conflict/israeli-palestinian-conflict.
Halbfinger, David M., et al. “Israel Kills Dozens at Gaza Border as U.S. Embassy Opens in Jerusalem.” The New York Times, 14 May 2018, www.nytimes.com/2018/05/14/world/middleeast/gaza-protests-palestinians-us-embassy.html.
“Hamas-Israel Ceasefire Holds after Night of Violence.” Al Jazeera, 30 May 2018, www.aljazeera.com/news/2018/05/hamas-agrees-gaza-ceasefire-israel-reciprocates-180530053108954.html.
Holpuch, Amanda, et al. “Israeli Forces Kill Dozens of Palestinians in Protests as US Embassy Opens in Jerusalem – as It Happened.” The Guardian, 14 May 2018, www.theguardian.com/world/live/2018/may/14/israeli-troops-kill-palestinians-protesting-against-us-embassy-move-to-jerusalem-live-updates.
“Israel and Hamas Agree to a Cease-Fire, After a U.S.-Egypt Push.” The New York Times, 21 Nov. 2012, www.nytimes.com/2012/11/22/world/middleeast/israel-gaza-conflict.html?pagewanted=all&_r=0.
“Israel and Palestine: Events of 2017.” Human Rights Watch, 18 Jan. 2018, www.hrw.org/world-report/2018/country-chapters/israel/palestine.
“Israel: Army Demolishing West Bank Schools.” Human Rights Watch, 25 Apr. 2018, www.hrw.org/news/2018/04/25/israel-army-demolishing-west-bank-schools.
“Israel/Palestine.” Human Rights Watch, www.hrw.org/middle-east/n-africa/israel/palestine.
“Israel Strikes Hamas in Gaza after ‘Projectiles’ Fired.” Al Jazeera, 3 June 2018, www.aljazeera.com/news/2018/06/israeli-strikes-hamas-gaza-projectiles-fired-180603064958298.html.
“Israel Orders Human Rights Watch Official Deported.” Human Rights Watch, 9 May 2018, www.hrw.org/news/2018/05/08/israel-orders-human-rights-watch-official-deported.
Vox. “The Israel-Palestine Conflict: A Brief, Simple History.” YouTube, 20 Jan. 2016, www.youtube.com/watch?v=iRYZjOuUnlU.
The Prophet (saw) received the revelation in Mecca, but its people (Quraysh) humiliated and tortured him and his followers. While he first told his followers to seek protection in Abyssinia, the tribes of Medina (al-Aws and al-Khazraj) accepted Islam and invited the Prophet to lead their city. A while later, the Prophet and his companions migrated, and Medina became the centre of Islam. Medina contained three groups: local Muslim citizens, Jewish settlements (Banu Qaynuqa', Banu an-Nadeer and Banu Qurayzah), and Muslim migrants. In dealing with this situation, the Prophet (saw) strengthened the ties of brotherhood between Muslims and agreed on a treaty between its groups. The main articles of this treaty comprised: ولَا يُنْصَرُ كَافِرٌ عَلَى مُؤْمِنٍ ... وَإِنَّهُ مَنْ تَبِعَنَا مِنْ يَهُودَ، فَإِنَّ لَهُ النَّصْرَ وَالْأُسْوَةَ ... وَإِنَّهُ لَا يُجِيرُ مُشْرِكٌ مَالًا لقريش وَلَا نفسا، ... (ولا يحل) أَنْ يَنْصُرَ مُحْدِثًا وَلَا يُؤْوِيهِ... وَإِنّ الْيَهُودَ يُنْفِقُونَ مَعَ الْمُؤْمِنِينَ مَا دَامُوا مُحَارَبِينَ ... وَإِنَّ عَلَى الْيَهُودِ نَفَقَتَهُمْ وَعَلَى الْمُسْلِمِينَ نَفَقَتَهُمْ، إِلَّا مَنْ ظَلَمَ وَأَثِمَ فَإِنَّهُ لَا يَوْتَغُ إِلَّا نَفْسَهُ وَأَهْلَ بَيْتِهِ. ... وَإِنَّ بَيْنَهُمُ النَّصْرَ عَلَى مَنْ حَارَبَ أهل هَذِهِ الصَّحِيفَةِ، ... وَإِنَّهُ لَا تُجار قُرَيْشٌ وَلَا مَنْ نَصَرَهَا وَإِنَّ بَيْنَهُمْ النَّصْرَ عَلَى مَنْ دَهَمَ يَثْرِبَ "There is no support given to a disbeliever over a believer … Whoever follows us from the Jews, we will provide him with support and victory … It is not allowed to financially or personally co-operate with Quraysh … It is not allowed to give support or shelter to enemies … The Jews shall spend with the believers as long as they are at war … Whoever oppresses (betrays) or does wrong will be destroyed along with his family … The Jews shall cover their own expenses and the Muslims shall cover their own expenses ... They are to give support to one another against anyone who fights the allies of this treaty … No protection is to be given to Quraysh nor to any of its allies. They (Jews and Muslims) are to fight together against anyone who attacks Yathrib (Medina)." (Ibn Kathir, al-Bidayah wa an-Nihayah, Vol. 4, pp. 554-558; as-Sunan al-Kubra; Sirat Ibn Ishaq) While the Jewish communities signed the treaty and agreed to follow its articles, they then violated it. They tried many times to kill the Prophet (saw) and his companions; they attacked Islam and rebelled against its legislation; and they became increasingly hostile and belligerent, resulting in a struggle that ended with their expulsion from, and disappearance in, Arabia. Five years after the treaty was signed, the leaders of Banu Qurayza began secretly gathering allies from the disbelieving tribes throughout Arabia to fight the Muslims and promised them help. The disbelievers' numbers reached over ten thousand soldiers, a large force for the time, outnumbering the inhabitants of Medina. There were only two entrances to Medina: the northern entrance, where the Prophet dug a trench, and a southern entrance, where Banu Qurayza (their allies) lived.
The Violation of the Treaty
Banu Qurayza, at this crucial moment, violated the treaty and publicly announced that they would fight beside the outside invaders. The Prophet (saw) and his companions were thus besieged from every direction: the disbelievers outside, the hypocrites inside, and the Jews at the rear.
The Qur'an describes the situation, saying: إِذْ جَاؤُوكُم مِّن فَوْقِكُمْ وَمِنْ أَسْفَلَ مِنكُمْ وَإِذْ زَاغَتْ الْأَبْصَارُ وَبَلَغَتِ الْقُلُوبُ الْحَنَاجِرَ "When they came at you from above you and from below you, and when eyes shifted (in fear), and hearts reached the throats." (Qur'an 33:10) The Prophet sent envoys to them asking them to keep the treaty, but they refused and insulted the Prophet (saw). The enemies from outside and from inside had decided to destroy Islam; all the Muslim children, women, and elders would have been killed had they succeeded, but the attack on Medina at the Battle of the Trench, though serious, failed, and the invaders left.
Banu Qurayza's Contribution
Do you still think they are not guilty? Banu Qurayza violated all the articles of the treaty above and were then besieged by the Prophet for twenty-five days. The Prophet promised that whoever had not taken part and came out peacefully would be released, and a number of people did so, such as Amr ibn Sa'd. They later surrendered and asked the Prophet (saw) to be subjected to the arbitration of Sa'd ibn Mu'adh, a member of the Aws tribe, who was their old ally. It was narrated on the authority of Abu Said al-Khudari: لَمَّا نَزَلَتْ بَنُو قُرَيْظَةَ عَلَى حُكْمِ سَعْدٍ ـ هُوَ ابْنُ مُعَاذٍ ـ بَعَثَ رَسُولُ اللَّهِ صلى الله عليه وسلم، وَكَانَ قَرِيبًا مِنْهُ، … فَجَاءَ فَجَلَسَ إِلَى رَسُولِ اللَّهِ صلى الله عليه وسلم فَقَالَ لَهُ " إِنَّ هَؤُلاَءِ نَزَلُوا عَلَى حُكْمِكَ ". قَالَ فَإِنِّي أَحْكُمُ أَنْ تُقْتَلَ الْمُقَاتِلَةُ، وَأَنْ تُسْبَى الذُّرِّيَّةُ. قَالَ " لَقَدْ حَكَمْتَ فِيهِمْ بِحُكْمِ الْمَلِكِ " "When the tribe of Bani Quraiza was ready to accept Sa`d's judgment, Allah's Messenger (saw) sent for Sa`d, who was near to him. … Then Sa`d came and sat beside Allah's Messenger (saw), who said to him, 'These people are ready to accept your judgment.' Sa`d said, 'I give the judgment that their warriors should be killed and their children and women should be taken as prisoners.' The Prophet (saw) then remarked, 'O Sa`d! You have judged amongst them with (or similar to) the judgment of the King (Allah).'" (Sahih al-Bukhari 3043)
The Execution of Warriors
The judgement of Sa'd was the execution, in a market place, of the fighters or warriors: those who had taken part in the war and prepared for it (the narration about the killing of teenagers should be understood in light of Sa'd's statement, i.e. only the warriors). Some of them were released, such as Zubayr ibn Batta and others, through the intercession of the companions. This judgment accords with the treaty signed above: إِلَّا مَنْ ظَلَمَ وَأَثِمَ فَإِنَّهُ لَا يَوْتَغُ إِلَّا نَفْسَهُ وَأَهْلَ بَيْتِهِ. "Except the transgressors (who violate the treaty) and do wrong; they and their family will be destroyed." This judgement is also the judgement of the Hebrew Bible, in which Jews believe: "When you march up to attack a city, make its people an offer of peace. If they accept and open their gates, all the people in it shall be subject to forced labor and shall work for you. If they refuse to make peace and they engage you in battle, lay siege to that city. When the LORD your God delivers it into your hand, put to the sword all the men in it. As for the women, the children, the livestock and everything else in the city, you may take these as plunder for yourselves. And you may use the plunder the LORD your God gives you from your enemies. This is how you are to treat all the cities that are at a distance from you and do not belong to the nations nearby."
(Deut 20:10-15) It is also the punishment for high treason. For example, in the United States, according to the U.S. Code, Title 18, Part I, Chapter 115, § 2381: "Whoever, owing allegiance to the United States, levies war against them or adheres to their enemies, giving them aid and comfort within the United States or elsewhere, is guilty of treason and shall suffer death, or shall be imprisoned not less than five years and fined under this title but not less than $10,000; and shall be incapable of holding any office under the United States." So it depends on the kind of offence a person commits against his country, and you would be hard pressed to find a more serious case than what Banu Qurayza did. In other cases of betrayal that did not threaten the mass destruction of the Muslims, such as the story of Hatib ibn Abi Balta'ah, who only sought protection for his family, the Prophet (saw) did not kill the person who committed it. So the judgement was not unjust; rather, it was compatible with the treaty they had approved, the scripture they believe in, and modern law.
The Number of Executed People or Soldiers
The number matters less than whether those executed were criminals or innocents. If even one innocent person is killed, that is unjust; if all of them were guilty, then they got what they deserved. The killing of all the men in the city would not match the teachings of the Qur'an, because not all of them took part in the war, so killing all the men would most probably have included innocent people. The Qur'an clearly condemns the killing of innocent people: من قَتَلَ نَفْساً بِغَيْرِ نَفْسٍ أَوْ فَسَادٍ فِي الأَرْضِ فَكَأَنَّمَا قَتَلَ النَّاسَ جَمِيعاً "Whoever kills a soul unless for a soul or for corruption (done) in the land - it is as if he had slain mankind entirely." (Qur'an 5:32) Only the leaders and the warriors participated in this, and Allah also condemns punishing some for the sins of others: وَلاَ تَزِرُ وَازِرَةٌ وِزْرَ أُخْرَى "And no bearer of burdens will bear the burden of another." (Qur'an 6:164) Arafat (1976), in his research paper "New Light on the Story of Banu Qurayza and the Jews of Medina", raised a number of objections to such narrations. There is a heated debate over the number of executed soldiers, with figures varying from 40 to 960; the number must fall between 40 and 400 at most. None were innocent, as they were prepared to wipe out the Muslims with whom they had a treaty. When the story is put in its historical context, we see that their treatment was not unjust. It was compatible with the treaty signed by the Jews, which they reneged on, the scriptures they believe in, and modern law.
Ibn Hajar, Tahdhib at-Tahdhib
Ibn Kathir, al-Bidayah wa an-Nihayah
Sirat Ibn Ishaq
Al-Ghazali, Fiqh as-Sirah
Ibn al-Qayyim, Zad al-Ma'ad
W. N. Arafat, New Light on the Story of Banu Qurayza and the Jews of Medina
I have been reading into this topic and have discovered the following: The Banu Qurayza were a Jewish tribe who lived in Medina and entered into an alliance with the prophet of Islam, Muhammad (pbuh). He had the largest following of any leader in the city, hence he was elected as the city's chief. Each tribe agreed to his nomination and every tribe, including the Banu Qurayza, was granted the right to practice its own faith in peace. The point to be noted is that each tribe would be judged according to its own laws - specifically the laws of the faith it followed. So Muslims would be judged according to the Quran and the Jews according to the Torah.
In 627 AD, the enemies of Islam united and marched on the city of Medina to wipe out the Muslims. It is recorded that at first the Jews of the tribe remained loyal; however, after being informed that the Muslims were heavily outnumbered, and due to the persistence of the enemy, they decided to abandon the Muslims. Not only this, but they agreed to attack the Muslims from the rear while the Meccans engaged the Muslim army at the ditch. Fortunately, the Muslims became aware of this secret plot and placed 500 soldiers in their way. When the Holy Prophet Muhammad (pbuh) became aware of their treachery he did not immediately accept the rumours. He sent parties to their tribe in order to investigate the claims; the tribe denied having any agreement with the Muslims and so confirmed their betrayal. Following on from this, the Muslims laid siege to the Jews’ fortress. When Banu Qurayza could not hold out any longer, they sent a message to the Holy Prophet (pbuh) that they would surrender, but would like their fate to be decided by one of their allies. Sa’d bin Muadh, the chief of the tribe of Aus, was appointed the arbiter. Sa’d passed judgment on the Banu Qurayza according to the law of the Torah, which states: ‘When thou comest nigh unto a city to fight against it, then proclaim peace unto it. And it shall be, if it make thee answer of peace, and open unto thee, then it shall be, that all the people that is found therein shall be tributaries unto thee, and they shall serve thee. And if it will make no peace with thee, but will make war against thee, then thou shalt besiege it: And when the Lord thy God hath delivered it into thine hands, thou shalt smite every male thereof with the edge of the sword: But the women, and the little ones, and the cattle, and all that is in the city, even all the spoil thereof, shalt thou take unto thyself.’ (Bible, Deuteronomy 20:10-14) According to Jewish law the punishment for treason was death. In passing the death sentence on Banu Qurayza, Sa’d reminded the Jews of the fact that had they succeeded in carrying out their plan, they would have put all the Muslims to death. As a result of Sa’d bin Muadh’s judgment, all the male members of the Banu Qurayza tribe who were of fighting age were executed, and their women, children and elders were expelled and went to Syria. To read more about this event and the reasons behind it, visit the following page: https://www.alislam.org/question/did-prophet-muhammad-kill-700-jews/ Even to this day many US states and other countries uphold the death penalty for the crime of treason. Where capital punishment has been abolished, as in the UK, life imprisonment applies. Many historians claim that the Jews made a mistake by asking one of their allies to decide their fate. Instead, they say that if the tribe had entrusted Prophet Muhammad (pbuh) to make the decision he would most likely have shown leniency and simply banished them from Medina. Ibn Hisham reports that two men of the tribe were set free and pardoned by the Prophet Muhammad (pbuh). He showed this kindness despite promising not to intervene in Sa’d’s judgement on a general level.
Bangladesh and Turkey have a very long history of deep association with each other, with close cultural and spiritual influences acting as a bonding glue. The Turks first came to Bengal at the beginning of the 13th century, when Ikhtiyar Al-Din Muhammad bin Bakhtiyar Khilji, a Turko-Afghan military general, led the conquest of the eastern Indian regions of Bengal and Bihar between 1197 and 1206. He first conquered Bihar, then advanced into Bengal through Nabadwip (in Nadia, present-day West Bengal), from where he went on to establish Muslim suzerainty over Gaur, the ancient capital of Bengal, which he renamed "Lakhnauti". He set himself up as ruler of this newly conquered territory after defeating the ageing Hindu ruler Lakshman Sena, and then expanded his authority to encompass adjacent territories. His advent effectively ended the long-held Buddhist sway over the region. For better and more efficient administration, Bakhtiyar divided the areas he occupied into several "iqtas", or revenue collection zones, which were undoubtedly the precursors to the revenue-collecting system put in place by the Mughals and later supplanted by the severely extractive "collectorate" order of administration installed by the British rulers. Bakhtiyar Khilji placed these "iqtas" under the administrative control of his three lieutenants - Ali Mardan Khilji, Muhammad Shiran Khilji and Husamuddin Iwaz Khilji. Additionally, he established mosques for prayers for his Muslim followers, madrassas for mass education, and khanqahs for Sufi 'tariqahs' (literally, pathways toward direct knowledge of the Divine Reality), which became centres of Islamic learning and of propagation of the new faith he introduced to the region. Bakhtiyar died in 1206 and was buried at Pirpal Dargah at Narayanpur in West Dinajpur. Following his death, an internecine power struggle ensued between his contending lieutenants. In 1212 CE, Malik Iwaz Khilji finally prevailed over all other rivals and assumed sole power, taking the title Ghiyasuddin Iwaz Khilji. Ghiyasuddin undertook numerous welfare activities, constructed highways, and put in place flood control measures. For strategic reasons, he built a fort and organized a flotilla of war ships, reflecting his strategic military acumen in consolidating firm control over the conquered territories. During Khilji rule over Bengal, the Delhi Sultanate was ruled by the Ilbari Turks, who were rivals of the Khilji Turks ruling Bengal. The Ilbaris aspired to possess the Khilji realms of greater Bengal, already famed for its myriad bounties. Sultan Iltutmish of the Ilbari line overthrew Ghiyasuddin Khilji in 1227 and immediately declared Lakhnauti a province of the Delhi Sultanate. Bengal was ruled by the Ilbari Turks from Delhi for the next sixty years. Following Iltutmish's death in 1236, the Turkish nobility held sway over power and played the role of kingmakers, and the governors of Bengal sent by Delhi belonged to this nobility. Following the death of Sultan Ghiyasuddin Balban in 1287, Jalaluddin Firuz Khilji overthrew Ilbari rule and re-established Khilji ascendancy in Delhi in 1290. In Bengal, however, Balban's son Bugra Khan assumed power under the title Sultan Nasiruddin Mahmud. The rivalry between these two opposing Turkish dynastic branches continued unabated and marked relations between Bengal and Delhi. During 1290-1324, the Muslim kingdom of Bengal witnessed expansion into Satgaon, Sonargaon, Mymensingh and Sylhet.
Notably, with the Turkish expansion to Sylhet another significant landmark event took place - the arrival in Sylhet in 1303 of the Sufi saint Shaykh al-Mashāʾikh Makhdūm Shaykh Jalāl Mujarrad bin Muḥammad Kunyāī, known in reverence by Bangladeshis as the Sufi saint Hazrat Shah Jalal. Hazrat Shah Jalal is said to have hailed from Konya (Quniah), now in modern Turkey, where his mother Syeda Haseenah Fatimah lies buried. His father, who died when Hazrat Shah Jalal was five years old, was a contemporary of the Sufi saint Jalaluddin Rumi of the Rum sultanate (modern Turkey). Recently, Konya and Sylhet have been declared sister cities on the basis of this historical link between Bangladesh and Turkey. The squabbling and tussle for overlordship between the Khilji and Ilbari Turkish rivals effectively ended in 1324, when they were replaced by yet another Turkish line. Sultan Ghiyasuddin Tughlaq of Delhi invaded and captured Lakhnauti and established Tughlaq (Qarhaunah Turk) rule over Bengal, which lasted until 1338, when the arrangement set up in Bengal started slowly coming apart. In 1338, Fakhruddin Mubarak Shah declared independence at Sonargaon. Lakhnauti followed suit and declared independence under Sultan Ali Mubarak, but he was replaced soon after by Haji Iliyas, another Turko-Afghan adventurer, from Sijistan, who took over the Lakhnauti throne and assumed the title Sultan Shamsuddin Iliyas Shah. It is worth noting here that most of the Turko-Afghans described above were descended from the Central Asian Turkic tribes who had entered Afghanistan in earlier times but came into prominence in the middle of the 10th century, when a former Turkish slave, Alptegin, seized Ghazna. Alptegin was succeeded by Subuktegin, who extended his conquests to Kabul and the Indus. Subuktegin's son, Mahmud of Ghazna, who came to the throne in 998, conquered Punjab and Multan and carried raids into the heart of India (Britannica). The Turkish generals who entered Bengal were essentially adventurers seeking further conquests and expansion. With Shamsuddin Iliyas Shah's coming to power, Turkish rule in Bengal entered a new phase of expansion. Iliyas Shah united the whole of Bengal and transformed the kingdom of Lakhnauti into the Kingdom of Bangalah. He and his descendants ruled over Bangalah until 1487, except for a brief interregnum between 1415 and 1435, when the house of Raja Ganesh held power (Raja Ganesh was a Hindu landlord of Bhaturia and a Hakim (governor) of Dinajpur in northern Bengal, who took advantage of the weakness of the Ilyas Shahi dynasty and seized power). Raja Ganesh mixed freely with Muslims, and his son, who converted to Islam, succeeded him as Sultan Jalaluddin Muhammad Shah. Sultan Jalaluddin Fath Shah was the last ruler of the Ilyas Shahi line, and with his assassination by his Abyssinian slave, Turkish rule over Bengal ended in 1487. The Turks were followed briefly by Abyssinian rule, but Bengal remained independent until 1538, when Sher Shah ousted the Husain Shahi dynasty and established Afghan rule, ending Bengal's independence. The Afghans ruled over Bengal until 1576, followed by the Mughals, who held sway until the end of Muslim rule.
Turkish legacy in Bengal - of lasting nature, still extant and evident even today
Notably, the Turks who remained behind after the end of the Turkish dynasties merged with local society, while some may have joined successive rulers (Afghans/Mughals).
Particularly under the Mughals, people of Turkish blood played a very prominent role in the field of arts, in statecraft, and in the growth and expansion of Sufi influence on societal development. In fact, Turkish rule over Bengal proved to be a boon for Bengal. It played a significant role in the establishment of a Muslim society in Bengal, giving the development of Muslim culture a definite shape. Turkish rulers built numerous mosques and madrassahs and established khanqahs that acted as a fount for the expansion of Sufi traditions and cultures, which established deep societal roots and flourished. They patronized Muslim ulama and Sufis in their religious pursuits. It would appear that the Turkish rulers and their followers, though they may have come to Bengal as adventurers, came not as marauders but to stay and make a home in these new lands. They had burnt their boats behind them to make Bengal their new home. They neither transferred their wealth out of Bengal nor thought of leaving the country. They enriched architectural growth and development and calligraphic skills among local artists and artisans. They also invested their energy and resources, and transferred skills and knowledge, into various infrastructure development activities, including building forts for the protection of their new homeland and subjects. In a region notoriously at the mercy of numerous mighty rivers prone to flooding, which made conventional overland communication difficult, they built bridges, dams, dykes, and embankments that served as flood control measures while also augmenting the irrigation of fallow land. They also, notably, developed water-borne transportation for goods and people across the realm. They appear, by and large, to have had a liberal disposition, for they appointed non-Muslims to senior or authoritative posts in administration and governance, as well as in the army. Recorded history of the period shows that Turkish sultans also patronized both Persian and Bangla arts and literature, and even Hindu poets, conferring honorific titles on them. All these actions, and this overall liberal behaviour towards the local population, influenced the Muslims of Bengal to support the Turkish National Struggle against colonial powers' interference or attempts at domination over Turkey. This was reflected most poignantly by Kazi Nazrul Islam penning his epic poem "Kamal Pasha" - incorporated in the curriculum of Bangladesh schools and influential on Bengali poetry, literature, and political outlook. Apart from Kazi Nazrul Islam, the rebellious poets and writers Kazi Abdul Wadud, Ismail Hossain Siraji, Kazi Motahar Husain and Abul Fazal also supported Ataturk's reformist revolution. A high school named "Atatürk Model High School" was set up in Feni on January 6, 1939, less than two months after Ataturk's death on November 10, 1938. In more recent times, a Turkish language centre, the "Mustafa Kemal Turkish Language Center", was established in Dhaka Cantonment. The Bangladeshi national artist Sheikh Afzal painted a beautiful Ataturk portrait, which is on display in the National Museum. The principal of Dhaka College, Ibrahim Khan, wrote a theatrical play named "Kemal Pasha" in 1925 that gained huge success and contributed to the love and respect of the Bengali people toward the Turkish nation and its independence movement led by Ataturk. This support of the Bengali people during the Turkish National Struggle is still remembered with gratitude by the Turkish people. Both peoples may be said to share the same ethos of warmth and hospitality towards each other.
Bilateral relations post-independence
Because of Turkey's close relations with Pakistan as fellow members of the RCD and CENTO, Turkish recognition of Bangladesh as an independent, sovereign state was hostage to the political dynamics of those ties. In the context of the Cold War division of the globe into two camps, Turkey and erstwhile Pakistan, both by virtue of their close ties with the US-NATO alliance, also naturally fell on the opposing side of the divide. However, Turkey was also a member of the OIC, to membership of which Bangladesh, on its independence, also aspired, having the third largest Muslim population in the world at the time. Turkey recognized Bangladesh on 22 February 1974, during the Organisation of Islamic Cooperation (OIC) (formerly the Organization of the Islamic Conference) Summit held in Lahore. The Turkish Embassy in Dhaka was opened in 1976 and the Embassy of Bangladesh in Ankara in 1981. Numerous state and high-level visits were exchanged between the leaders of the two countries, which helped to steadily develop the foundations of bilateral relations between independent Bangladesh and Turkey. These visits included, inter alia:
• Former Bangladeshi president Ziaur Rahman became the first Bangladeshi head of state to visit Ankara.
• In 1986, the then Turkish prime minister, Turgut Ozal, paid a visit to Bangladesh.
• Turkish President Suleyman Demirel joined Nelson Mandela and Yasser Arafat at the silver jubilee celebrations of Bangladesh's independence in 1997. In 1998, the two countries co-founded the Developing 8 Countries group.
• Turkish President Abdullah Gul paid an official visit to Dhaka in 2010.
• Turkish Prime Minister Recep Tayyip Erdoğan paid a visit to Dhaka in 2010.
• Bangladesh Prime Minister Sheikh Hasina, accompanied by the then Minister of Foreign Affairs, Ms Dipu Moni, visited Turkey to participate in the Fourth United Nations Conference on the Least Developed Countries, which took place in İstanbul on 9-13 May 2011.
• Prime Minister Hasina also paid an official visit to Turkey on 10-13 April 2012 upon the invitation of the then Prime Minister Erdoğan.
In 2016, the diplomatic relationship between the two countries became complicated when Bangladesh rejected Turkey's repeated requests to free several Bangladeshi Jamaat-e-Islami leaders who had been convicted by the International Crimes Tribunal in Bangladesh of war crimes committed during the Bangladesh Liberation War, and eventually executed. Following the execution of the Jamaat leader Motiur Rahman Nizami, Turkey withdrew its ambassador to Bangladesh. However, after Bangladesh condemned the coup d'état attempt to overthrow the Erdogan government, relations began to improve and Ankara sent a new ambassador to Dhaka. After arriving, the new Turkish ambassador remarked that "Bangladesh had helped Turkey by expressing its support to Erdogan's government after the failed coup attempt." The ambassador commented that relations between the two countries had returned to normal and also expressed Turkey's willingness to help Bangladesh control militancy in the country. During the Rohingya crisis, in which Muslim Rohingyas were being expelled from Myanmar, Turkey donated millions of dollars to the government of Bangladesh in order to aid the Rohingya refugees in Bangladesh. In September 2017, First Lady Emine Erdogan visited and helped provide relief in the shelters of the Rohingyas and promised more co-operation and aid to Bangladesh. Former Prime Minister Binali Yıldırım visited Bangladesh on 18-20 December 2017.
The Prime Minister held meetings with President Abdul Hamid and Prime Minister Sheikh Hasina. He visited the Rohingya camps in Cox's Bazar and demonstrated support for Bangladesh's stand and actions in respect of the Rohingya crisis. The most recent presidential-level visit from Bangladesh to Turkey was that of President Abdul Hamid on 13 December 2017, in the context of the OIC Extraordinary Summit in Istanbul. During the summit, the two Presidents also held a bilateral meeting.
Current relations: extent and depth
Turkey responded notably with medical assistance during the pandemic, almost immediately after its onslaught in Bangladesh. As alluded to earlier, the Turkish government has been aiding the Rohingyas with two medical units stationed in the refugee camps since 2017. Bangladesh and Turkey signed a joint protocol on trade and investment in 2012. The Bangladesh-Turkey Joint Economic Commission has been holding biennial meetings to discuss ways of increasing bilateral trade and investment. Since 2012, Bangladesh and Turkey have been in talks to sign a free trade agreement, but signing was put on hold due to complications relating to Turkey's bid for accession to the European Union. Bangladeshi exports to Turkey have been dominated by apparel products, while Turkey sells cotton, machinery, and chemicals for the garment industry and buys $300 million worth of jute from Bangladesh. The foreign ministers of the two countries visited each other's capitals in 2020 and 2021 respectively to inaugurate new embassy complexes in Ankara and Dhaka. They also committed to doubling the current trade level of $1.2 billion within the next five years. Turkey is set to become a significant investor in Bangladesh. Investment in the electronics sector by Arcelik of Turkey, which acquired a majority holding in Singer Bangladesh, includes technology transfer. Turkey seeks to identify a special economic zone in Bangladesh to bring in more Turkish investment, and the shipbuilding industry of Bangladesh has been identified as a potential sector for Turkish investment. However, a business dispute resolution mechanism is needed urgently. Turkey provides 50 scholarships every year for Bangladeshi students; presently there are 680 Bangladeshi students in Turkey (2019-20). Over 1,000 Bangladeshi nationals are officially documented as resident in Turkey, and tourism is increasing significantly. Additionally, Turkey has a very strong and competent healthcare infrastructure, ranking among the top five countries globally, and is therefore an increasingly attractive medical tourism destination. Notably, it has developed its own Covid vaccine (TurkoVac), which is currently in phase 3 trials. President Erdogan has announced that he will make the vaccine available globally and offer joint production to other countries. It would be in Bangladesh's interest to enter into joint development and production of TurkoVac in Bangladesh. Bangladesh is set to become one of Turkey's top defence equipment clients in the next few years. Bangladesh signed an agreement with Turkey on military training, education, and joint cooperation between the forces of both nations on 10 March 1981 in Dhaka. In recent years, there has been growing defence and military cooperation between the two countries, with Turkey establishing its credentials as a good and reliable supplier. In 2013, Turkey supplied Otokar Cobra light armored vehicles to the Bangladesh Army.
Under the agreement, Turkey's naval special forces trained the Bangladesh Navy's special force, Special Warfare Diving and Salvage. The Chief of Army Staff of Bangladesh, General S M Shafiuddin Ahmed, recently visited Turkey. He spoke with top military officials, including the Turkish defence minister, about possible defence cooperation, training, and other issues. The Bangladesh Chiefs of Air Staff and Navy, respectively, had earlier visited Turkey in late 2020. Bangladesh has agreed to buy military equipment from Turkey to strengthen the country's security system. The two governments have reportedly been engaged in talks on joint production of military equipment in the country and extensive training in the security sector between the two countries. On 29 June 2021, a government-to-government (G2G) defence memorandum of understanding (MoU) was signed between Bangladesh and Turkey. The Turkish manufacturer Roketsan had already delivered the TRG-300 Tiger MLRS to the Bangladesh Army in June 2021 under a separate deal. Also in June 2021, a turnkey agreement to establish a 105 mm and 155 mm artillery shell production line was signed between Bangladesh and the Turkish company REPKON. With modern Free Flowforming (REPKON-patented) technology and computerized machinery from REPKON, the Bangladesh Ordnance Factory will produce high-quality 105 mm and 155 mm artillery shells.
Outlook for the future
Turkey is seeking to regain and further expand its historical leadership role in the Muslim world. Increasingly under pressure from the US and Europe on the so-called human rights and democracy quotient, and frustrated by European stonewalling of its efforts to join the EU, it is naturally pivoting to Asia. Bangladesh in recent years has demonstrated regionally and globally that it has transformed into a pivotal country to focus on, with a large Muslim population, deep-rooted historical, religious, and cultural ties, and a geo-strategic location at the epicentre of the Indian Ocean region. Widely acknowledged globally now as a rising economic tiger in the Asian region, Bangladesh has also palpably demonstrated that it is a stable and moderate country with a vibrant young population and dynamic entrepreneurship. Like Turkey, Bangladesh in recent times has also been coming under increasing pressure on human rights and democracy counts. It would be natural for it to strengthen relations with Turkey as it seeks to balance its relations, build up resilient defence capacity and capability, and diversify its sources of defence purchases. Bangladesh also seeks unambiguous support from friends abroad in resolving the Rohingya problem, support that Turkey has demonstrably been extending. To strengthen ties, Turkish First Lady Emine Erdogan came to Bangladesh to visit the Rohingya camp in Cox's Bazar. Turkey has taken a strong stand in support of the Rohingya on the world stage, including at the UN and the OIC, and has taken several steps, including providing humanitarian assistance. As these are all very helpful to and supportive of Bangladesh, Dhaka benefits in other areas, including diplomacy. There is no gainsaying that Turkey has a strong position in international fora, the United Nations and the OIC, as well as by virtue of its membership of the NATO alliance. Turkey's unconditional support to Bangladesh on the Rohingya issue has significantly deepened the ties between the two countries. Turkey rallied behind Bangladesh on the Rohingya issue at various multilateral fora such as the UN, the G20, and the OIC.
Ankara, through its state institutions such as the Turkish Cooperation and Coordination Agency (TIKA) and the Directorate of Religious Affairs (Diyanet), as well as through Turkish NGOs, has built various facilities such as camps, hospitals, schools, and orphanages for the refugees in Bangladesh. By all accounts, therefore, Bangladesh and Turkey appear set to become increasingly close partners and friends, their relations underpinned by an extensively shared, deep-rooted, and strong historical and cultural legacy and by Sufi Islamic traditions and ties.
Ambassador (Retd.) Tariq A. Karim is the Director of the Centre for Bay of Bengal Studies at Independent University, Bangladesh. He was a Distinguished International Executive in Residence at the University of Maryland. He is now also Honorary Advisor Emeritus, Cosmos Foundation.
This frog is a member of the mountain yellow-legged frog complex which is comprised of two species: Rana muscosa and Rana sierrae. Both species are highly aquatic and are always found within a meter or two from the edge of water. Rana sierrae is yellowish or reddish brown from above, with black or brown spots or lichen-like markings. Toe tips are usually dusky. Underside of hind legs and sometimes entire belly is yellow or slightly orange, usually more opaque than in foothill yellow-legged frog, Rana boylii. Yellow often extends forward to level of forelimbs. Dorsolateral folds present but frequently indistinct. The tadpoles are black or dark brown and are large (total length often exceeds 10 cm) and metamorphose in 1-4 years depending on the elevation. Rana sierrae differs from Rana muscosa in having relatively shorter legs. When a leg is folded against the body the tibio-tarsal joint typically falls short of the external nares. The mating call of R. sierrae is significantly different from that of R. muscosa in having transitions between pulsed and noted sounds. Both species call underwater. Males can be heard above water but only from a short distance away (<2 meters). The two species also differ in mitochondrial DNA. The mitochondrial DNA, male advertisement calls, and morphology datasets are geographically concordant (Vredenburg et al. 2007).
Distribution and Habitat
Country distribution from AmphibiaWeb's database: United States
U.S. state distribution from AmphibiaWeb's database: California, Nevada
This montane species once occurred in California and Nevada, USA, but is now extinct in the state of Nevada. Rana sierrae ranges from the Diamond Mountains northeast of the Sierra Nevada in Plumas County, California, south through the Sierra Nevada to the type locality, the southern-most locality at Matlock Lake just east of Kearsarge Pass (Inyo County, California). In the extreme northwest region of the Sierra Nevada, several populations occur just north of the Feather River, and to the east, there was a population on Mt. Rose, northeast of Lake Tahoe in Washoe County, Nevada, but, as mentioned above, it is now extinct. West of the Sierra Nevada crest, the southern part of the R. sierrae range is bordered by ridges that divide the Middle and South Fork of the Kings River, ranging from Mather Pass on the John Muir Trail east to the Monarch Divide. East of the Sierra Nevada crest, R. sierrae occurs in the Glass Mountains just south of Mono Lake (Mono County, CA) and along the east slope of the Sierra Nevada south to the type locality at Matlock Lake (Inyo County, CA).
Life History, Abundance, Activity, and Special Behaviors
Similar to R. muscosa, breeding begins soon after ice-melt or early in spring and can range from April at lower elevations to June and July in higher elevations (Wright and Wright 1949; Stebbins 1951; Zweifel 1955). Eggs are deposited underwater in clusters attached to rocks, gravel, and under banks, or to vegetation in streams or lakes (Wright and Wright 1949; Stebbins 1951; Zweifel 1955). Livezey and Wright (1945) report an average of 233 eggs per mass (n=6, range 100-350). Eggs contain a vitelline capsule, and three gelatinous envelopes, all clear and transparent (see illustrations in: Stebbins 2003). In laboratory breeding experiments egg hatching times ranged from 18-21+ days at temperatures ranging from 5-13.5 °C (Zweifel 1955). The length of the larval stage depends upon the elevation.
At lower elevations where the summers are longer, tadpoles are able to grow to metamorphosis in a single season (Storer 1925). At higher elevations where the growing season can be as short as three months, tadpoles must overwinter at least once and may take 2 to 4 years of growth before they are large enough to transform (Wright and Wright 1949; Zweifel 1955).
Trends and Threats
Rana sierrae is critically endangered, along with its sister species Rana muscosa. These frogs have declined dramatically despite the fact that most of the habitat is protected in National Parks and National Forest lands. A study that compares recent surveys (1995-2005) to historical localities (1899-1994; specimens from the Museum of Vertebrate Zoology and the California Academy of Sciences) found that 92.5% of populations have gone extinct (11 remaining out of 146 sites; Vredenburg et al. 2007). The two most important factors leading to declines in R. sierrae and R. muscosa are disease and introduced predators. Introduced trout prey on R. sierrae (Needham and Vestal 1938; Mullally and Cunningham 1956) and have been implicated in a number of studies as one of the sources of decline (Bradford 1989; Bradford et al. 1993; Jennings 1994; Knapp 1996; Drost and Fellers 1996; Knapp and Matthews 2000). In fact, as early as 1915 Joseph Grinnell and his field crews (Grinnell and Storer 1924) noticed that Rana sierrae rarely survived in lakes where trout were planted. Whole-lake field experiments have shown that when non-native trout are removed, both Rana sierrae and Rana muscosa populations rebound (Vredenburg 2004; Knapp et al. 2007). While it is clear that introduced trout negatively affect R. sierrae and R. muscosa mainly through predation on tadpoles, trout also compete for resources with adult frogs. A food web study that used stable isotopes to trace energy through the Sierran lake food webs concluded that introduced trout are superior competitors and suppress the availability of large aquatic insects that make up a major portion of the diets of adult frogs (Finlay and Vredenburg 2007). A lethal disease, chytridiomycosis, caused by the aquatic fungal pathogen Batrachochytrium dendrobatidis, or Bd (Berger et al. 1998), has caused population extinctions in R. muscosa and R. sierrae in the Sierra Nevada (Rachowicz et al. 2006). Long-term studies reveal that infection intensity is key; once a critical threshold of Bd fungal infection is reached, death ensues (Vredenburg et al. 2010). Population extirpation is the most common outcome, but a few mountain yellow-legged frog (Rana sierrae and Rana muscosa) populations have survived in low numbers. Modeling shows that chytridiomycosis outcome at the population level (extirpation vs. persistence) can result solely from density-dependent host-pathogen dynamics, which may hold for other wildlife diseases as well (Briggs et al. 2010). In an effort to rescue the last surviving frogs, the Vredenburg lab is treating adult Rana sierrae in the field with anti-fungal medication; frogs are bathed for five minutes daily over the course of a week (Lubick 2010). Other possible causes for decline in R. sierrae include air pollution (pesticide drift; Davidson et al. 2002; Davidson 2004), UV-B radiation, and long-term changes in weather patterns, especially concerning the severity and duration of droughts. Acidification from atmospheric deposition has been suggested as another cause, but Bradford et al. (1994) found no evidence to support this hypothesis.
Relation to Humans
Mountain yellow-legged frogs (the amphibian species complex including both Rana muscosa and Rana sierrae) were once the most common vertebrates in the high elevation Sierra Nevada. Documented historical accounts go back to the turn of the last century (1915), from surveys conducted by Joseph Grinnell and Tracy Storer (published in 1924) of the University of California's Museum of Vertebrate Zoology. Joseph Grinnell was instrumental in the foundation of Yosemite National Park, one of the jewels of the American National Park Service.
Possible reasons for amphibian decline
Predators (natural or introduced)
This species was featured as News of the Week on 17 October 2016: Nearly all of the reports on global patterns of amphibian extinction and decline have been bad news, with hundreds of species lost and thousands in jeopardy. A new report in PNAS (Knapp et al. 2016) shows a regional pattern of recovery across hundreds of populations in Yosemite National Park for a charismatic species, the Sierra Nevada Yellow-legged frog. The study is based on >7,000 frog surveys over a 20-year period and showed recovery despite ongoing stressors such as disease and introduced predatory fish. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis, but the cessation of fish stocking also contributed to the recovery. Continuing studies will determine if local extinction sites become repopulated (Written by Vance Vredenburg).
This species was featured as News of the Week on 24 October 2016: Major habitat restoration moves ahead for two endangered montane frogs in California. After years of review and planning, the National Park Service (USA) is officially moving forward with major plans to restore high elevation aquatic ecosystems in the Sequoia and Kings Canyon National Parks in the Sierra Nevada of California. These actions will help recover two endangered montane frogs, the Sierra Nevada yellow-legged frog (Rana sierrae) and the Sierra Madre or Southern Mountain yellow-legged frog (Rana muscosa). These significant conservation actions, based in part on results from a 2004 field experiment (Vredenburg 2004) showing rapid recovery of endangered frogs after removal of introduced non-native fish (trout) from habitats, will help the frogs as they face new threats such as disease, drought and climate change (Written by Vance Vredenburg).
This species was featured as News of the Week on 11 February 2019: A study by Ellison et al. (2019) investigates the interaction between Batrachochytrium dendrobatidis (Bd), the pathogen that causes the disease chytridiomycosis, and the bacterial skin microbiome of the endangered Sierra Nevada Yellow-legged frog, Rana sierrae, using both culture-dependent and culture-independent methods. The study found that the skin microbiome of highly infected juvenile frogs is characterized by significantly reduced species richness and evenness, and by strikingly lower variation between individuals, compared to juveniles and adults with lower infection levels. In a culture-dependent Bd inhibition assay, the bacterial metabolites we evaluated all inhibited the growth of Bd. Together, these results illustrate the disruptive effects of Bd infection on host skin microbial community structure and dynamics, and suggest possible avenues for the development of anti-Bd probiotic treatments (Written by Vance Vredenburg).
This species was featured as News of the Week on 5 October 2020: The endangered Sierra Nevada Yellow-legged frog (Rana sierrae) has generally been viewed as a lake species, but it also occurs in streams, yet in those habitats there is little knowledge of its basic ecological requirements. Brown et al. (2020) investigated the demography, habitat use, and movements of 12 stream populations of these frogs using multiple techniques (e.g., capture-mark-recapture and radio-tracking of adults, quantitative description of stream channel and riparian vegetation, frog habitat use, and egg mass counts). Stream populations varied in size (<15 to 547 adults) and were found in diverse headwater streams. Frogs moved little over four-day survey periods, but were capable of moving longer distances of up to 1.2 km over the summer. This study provides important basic ecological requirements from overlooked populations of a species and reminds us that understanding a species' complete natural history is critical to conservation and management efforts (Written by Vance Vredenburg).

References

Berger, L., Speare, R., Daszak, P., Green, D. E., Cunningham, A. A., Goggin, C. L., Slocombe, R., Ragan, M. A., Hyatt, A. D., McDonald, K. R., Hines, H. B., Lips, K. R., Marantelli, G., and Parkes, H. (1998). "Chytridiomycosis causes amphibian mortality associated with population declines in the rain forests of Australia and Central America." Proceedings of the National Academy of Sciences of the United States of America, 95(15), 9031-9036.

Bradford, D. F. (1989). "Allotopic distribution of native frogs and introduced fishes in high Sierra Nevada lakes of California: implication of the negative effect of fish introductions." Copeia, 1989(3), 775-778.

Bradford, D. F., Tabatabai, F., and Graber, D. M. (1993). "Isolation of remaining populations of the native frog, Rana muscosa, by introduced fishes in Sequoia and Kings Canyon National Parks, California." Conservation Biology, 7, 882-888.

Briggs, C. J., Knapp, R. A., and Vredenburg, V. T. (2010). "Enzootic and epizootic dynamics of the chytrid fungal pathogen of amphibians." Proceedings of the National Academy of Sciences, 107(21), 9695-9700.

Davidson, C. (2004). "Declining downwind: Amphibian population declines in California and historical pesticide use." Ecological Applications, 14, 1892-1902.

Davidson, C., Shaffer, H. B., and Jennings, M. R. (2002). "Spatial tests of the pesticide drift, habitat destruction, UV-B, and climate-change hypotheses for California amphibian declines." Conservation Biology, 16, 1588-1601.

Drost, C. A., and Fellers, G. M. (1996). "Collapse of a regional frog fauna in the Yosemite area of the California Sierra Nevada, USA." Conservation Biology, 10(2), 414-425.

Finlay, J., and Vredenburg, V. T. (2007). "Introduced trout sever trophic connections between lakes and watersheds: consequences for a declining montane frog." Ecology, 88(9), 2187-2198.

Grinnell, J., and Storer, T. I. (1924). Animal Life in the Yosemite. University of California Press, Berkeley, California.

Jennings, M. R., and Hayes, M. P. (1994). "Amphibian and reptile species of special concern in California." Final Report #8023 submitted to the California Department of Fish and Game. California Department of Fish and Game, Sacramento, California.
Knapp, R. A., and Matthews, K. R. (2000). "Non-native fish introductions and the decline of the mountain yellow-legged frog from within protected areas." Conservation Biology, 14(2), 428-439.

Knapp, R. A., Boiano, D. M., and Vredenburg, V. T. (2007). "Recovery of a declining amphibian (mountain yellow-legged frog, Rana muscosa) following removal of non-native fish." Biological Conservation, 135, 11-20.

Knapp, R. A. (1996). "Non-native trout in the natural lakes of the Sierra Nevada: an analysis of their distribution and impacts on native aquatic biota." Sierra Nevada Ecosystem Project, Final Report to Congress, Center for Water and Wildland Resources, University of California, Davis, California, 363-390.

Livezey, R. L., and Wright, A. H. (1945). "Descriptions of four salientian eggs." American Midland Naturalist, 34, 701-706.

Rachowicz, L. J., Knapp, R. A., Morgan, J. A. T., Stice, M. J., Vredenburg, V. T., Parker, J. M., and Briggs, C. J. (2006). "Emerging infectious disease as a proximate cause of amphibian mass mortality." Ecology, 87, 1671-1683.

Stebbins, R. C. (2003). Western Reptiles and Amphibians, Third Edition. Houghton Mifflin, Boston.

Stebbins, R. C. (1951). Amphibians of Western North America. University of California Press, Berkeley.

Storer, T. I. (1925). "A synopsis of the amphibia of California." University of California Publications in Zoology, 27, 1-342.

Vredenburg, V. T., McNally, S. V. G., Sulaeman, H., Butler, H. M., Yap, T., Koo, M. S., Schmeller, D., Dodge, C., Cheng, T., Lau, G., and Briggs, C. J. (2019). "Pathogen invasion history elucidates contemporary host pathogen dynamics." PLoS ONE, 14(9): e0219981. https://doi.org/10.1371/journal.pone.0219981

Vredenburg, V. T. (2004). "Reversing introduced species effects: Experimental removal of introduced fish leads to rapid recovery of a declining frog." Proceedings of the National Academy of Sciences of the United States of America, 101, 7646-7650.

Vredenburg, V. T. (2007). "Concordant molecular and phenotypic data delineate new taxonomy and conservation priorities for the endangered mountain yellow-legged frog (Ranidae: Rana muscosa)." Journal of Zoology, 271, 361-374.

Vredenburg, V. T., Fellers, G., and Davidson, C. (2005). "The mountain yellow-legged frog Rana muscosa (Camp 1917)." Status and Conservation of U.S. Amphibians. M. Lannoo, ed., University of California Press, Berkeley, 563-566.

Vredenburg, V. T., Knapp, R. A., Tunstall, T. S., and Briggs, C. J. (2010). "Dynamics of an emerging disease drive large-scale amphibian population extinctions." Proceedings of the National Academy of Sciences, 107(21), 9689-9694.

Wright, A. H., and Wright, A. A. (1949). Handbook of Frogs and Toads of the United States and Canada. Comstock Publishing Company, Inc., Ithaca, New York.

Zweifel, R. G. (1955). "Ecology, distribution, and systematics of frogs of the Rana boylei group." University of California Publications in Zoology, 54, 207-292.

Originally submitted by: Vance T. Vredenburg (first posted 2007-04-02)
Edited by: Kellie Whittaker, Ann T. Chang, Vance T. Vredenburg (2021-04-13)

Species Account Citation: AmphibiaWeb 2021 Rana sierrae: Sierra Nevada Yellow-legged Frog <https://amphibiaweb.org/species/6901> University of California, Berkeley, CA, USA. Accessed Aug 17, 2022.
Ibn Arabi (1165-1240)

How can the heart travel to God, when it is chained by its desires?

Ibn ʿArabi, full name Abu ʿAbd Allah Muḥammad ibn ʿAli ibn Muḥammad ibn ʿArabi al-Ḥatimi at-Ṭaʾi al-Andalusi al-Mursi al-Dimashqi, nicknamed al-Qushayri and Sultan al-'Arifin, was an Arab Andalusian Muslim scholar, mystic, poet, and philosopher whose works have grown to be very influential beyond the Muslim world. Out of the 850 works attributed to him, some 700 are authentic while over 400 are still extant. His cosmological teachings became the dominant worldview in many parts of the Islamic world. He is renowned among practitioners of Sufism as al-Shaykh al-Akbar ("the Greatest Shaykh"), from which the Akbariyya or Akbarian school derives its name, and as Muḥyiddin ibn Arabi, and he was considered a saint. He was also known as Shaikh-e-Akbar Mohi-ud-Din Ibn-e-Arabi throughout the Middle East.

Abu 'Abdullah Muḥammad ibn 'Ali ibn Muḥammad ibn 'Arabi al-Hatimi at-Ṭaʾi was a Sufi mystic, poet, and philosopher born in Murcia, Spain. Ibn Arabi was Sunni, although his writings on the Twelve Imams were also popularly received among Shia. It is debated whether or not he subscribed to the Zahiri madhhab, which was later merged with the Hanbali school. After his death, Ibn Arabi's teachings quickly spread throughout the Islamic world. His writings were not limited to the Muslim elites, but made their way into other ranks of society through the widespread reach of the Sufi orders. His work also spread through writings in Persian, Turkish, and Urdu, and many popular poets trained in the Sufi orders were inspired by his concepts. Later scholars such as al-Munawi, Ibn 'Imad al-Hanbali and al-Fayruzabadi all praised Ibn Arabi as "a righteous friend of Allah and faithful scholar of knowledge", "the absolute mujtahid without doubt" and "the imam of the people of shari'a both in knowledge and in legacy, the educator of the people of the way in practice and in knowledge, and the shaykh of the shaykhs of the people of truth through spiritual experience (dhawq) and understanding".

Ibn Arabi's paternal ancestry was from the Arabian tribe of Tayy, and his maternal ancestry was North African Berber. Ibn Arabi writes of a deceased maternal uncle, Yahya ibn Yughan al-Sanhaji, a prince of Tlemcen, who abandoned wealth for an ascetic life after encountering a Sufi mystic. His father, 'Ali ibn Muḥammad, served in the army of Muhammad ibn Sa'id ibn Mardanish, the ruler of Murcia. When Ibn Mardanish died in 1172 AD, his father shifted allegiance to the Almohad sultan, Abū Ya'qub Yusuf I, and returned to government service. His family then relocated from Murcia to Seville. Ibn Arabi grew up at the ruling court and received military training. As a young man he became secretary to the governor of Seville. He married Maryam, a woman from an influential family.

Ibn Arabi writes that as a child he preferred playing with his friends to spending time on religious education. He had his first vision of God in his teens and later wrote of the experience as "the differentiation of the universal reality comprised by that look". Later he had several more visions of Jesus and called him his "first guide to the path of God". His father, on noticing a change in him, mentioned this to the philosopher and judge Ibn Rushd (Averroes), who asked to meet Ibn Arabi.
Ibn Arabi said that from this first meeting, he had learned to perceive a distinction between formal knowledge of rational thought and the unveiling insights into the nature of things. He then adopted Sufism and dedicated his life to the spiritual path. He later moved to Fez, in Morocco, where Mohammed ibn Qasim al-Tamimi became his spiritual mentor. In 1200 he took final leave from his master Yūsuf al-Kūmī, then living in the town of Salé.

Pilgrimage to Mecca

Ibn Arabi left Spain for the first time at age 36 and arrived at Tunis in 1193. After a year in Tunisia, he returned to Andalusia in 1194. His father died soon after Ibn Arabi arrived at Seville. When his mother died some months later, he left Spain for the second time and traveled with his two sisters to Fez, Morocco, in 1195. He returned to Córdoba, Spain, in 1198, and left Spain, crossing from Gibraltar, for the last time in 1200. In that year he received a vision instructing him to journey east. After visiting some places in the Maghreb, he left Tunisia in 1201 and arrived for the Hajj in 1202. He lived in Mecca for three years, and there began writing his work Al-Futuhat al-Makkiyya ("The Meccan Illuminations").

After spending time in Mecca, he traveled throughout Syria, Palestine, Iraq and Anatolia. In 1204, Ibn Arabi met Shaykh Majduddīn Isḥāq ibn Yusuf, a native of Malatya and a man of great standing at the Seljuk court. This time Ibn Arabi was traveling north; first they visited Medina, and in 1205 they entered Baghdad. This visit offered him a chance to meet the direct disciples of Shaykh 'Abd al-Qadir Jīlani. Ibn Arabi stayed there only 12 days because he wanted to visit Mosul to see his friend 'Ali ibn 'Abdallah ibn Jami', a disciple of the mystic Qaḍib al-Ban (471-573 AH / 1079-1177 AD). There he spent the month of Ramadan and composed Tanazzulat al-Mawsiliyya, Kitab al-Jalal wa'l-Jamal ("The Book of Majesty and Beauty") and Kunh ma la Budda lil-Murid Minhu.

In the year 1206 Ibn Arabi visited Jerusalem, Mecca and Egypt. It was the first time he passed through Syria, visiting Aleppo and Damascus. In 1207 he returned to Mecca, where he continued to study and write, spending his time with his friend Abu Shuja bin Rustem and family, including Nizam. The next four to five years of Ibn Arabi's life were spent in these lands, and he also kept traveling and holding reading sessions of his works in his own presence. On 22 Rabi' al-Thani 638 AH (8 November 1240), at the age of seventy-five, Ibn Arabi died in Damascus.

Although Ibn Arabi stated on more than one occasion that he did not blindly follow any one of the schools of Islamic jurisprudence, he was responsible for copying and preserving books of the Zahirite or literalist school, and whether or not he actually followed that school remains fiercely debated. Ignaz Goldziher held that Ibn Arabi did in fact belong to the Zahirite or Hanbali school of Islamic jurisprudence. Hamza Dudgeon claims that Addas, Chodkiewicz, Gril, Winkel and Al-Gorab mistakenly attribute non-madhhabism to Ibn ʿArabi. In an extant manuscript of Ibn Ḥazm, as transmitted by Ibn ʿArabī, Ibn ʿArabī gives an introduction to the work where he describes a vision he had: I saw myself in the village of Sharaf near Seville; there I saw a plain on which rose an elevation. On this elevation the Prophet stood, and a man whom I did not know approached him; they embraced each other so violently that they seemed to interpenetrate and become one person.
Great brightness concealed them from the eyes of the people. 'I would like to know,' I thought, 'who is this strange man.' Then I heard someone say: 'This is the traditionalist ʿAlī Ibn Ḥazm.' I had never heard Ibn Ḥazm's name before. One of my shaykhs, whom I questioned, informed me that this man is an authority in the field of the science of Hadith.

Goldziher says, "The period between the sixth and seventh hijri centuries seems also to have been the prime of the Ẓāhirite school in Andalusia." Ibn Arabi did delve into specific details at times and was known for his view that religiously binding consensus could only serve as a source of sacred law if it was the consensus of the first generation of Muslims who had witnessed revelation directly. Ibn Arabi also expounded on Sufi allegories of the Sharia, building upon previous work by Al-Ghazali and al-Hakim al-Tirmidhi.

The doctrine of the perfect man

Al-Insān al-Kāmil, the doctrine of the perfect man, is popularly considered an honorific title attributed to Muhammad, having its origins in Islamic mysticism, although the concept's origin is controversial and disputed. Ibn Arabi may have first coined this term in referring to Adam, as found in his work Fusus al-hikam, where it is explained as an individual who binds himself with the Divine and creation. Taking an idea already common within Sufi culture, Ibn Arabi applied deep analysis and reflection to the concept of the perfect human and one's pursuit of fulfilling this goal. In developing his explanation of the perfect being, Ibn Arabi first discusses the issue of oneness through the metaphor of the mirror. In this philosophical metaphor, Ibn Arabi compares an object being reflected in countless mirrors to the relationship between God and his creatures. God's essence is seen in the existent human being, as God is the object and human beings the mirrors. This means two things: first, since humans are mere reflections of God, there can be no distinction or separation between the two; and second, without God the creatures would be non-existent. When an individual understands that there is no separation between humans and God, they begin on the path of ultimate oneness. The one who decides to walk in this oneness pursues the true reality and responds to God's longing to be known. The search within for this reality of oneness causes one to be reunited with God, as well as to improve self-consciousness.

The perfect human, through this developed self-consciousness and self-realization, prompts divine self-manifestation. This causes the perfect human to be of both divine and earthly origin; Ibn Arabi metaphorically calls him an isthmus. Being an isthmus between heaven and Earth, the perfect human fulfills God's desire to be known, and God's presence can be realized through him by others. Ibn Arabi expressed that through self-manifestation one acquires divine knowledge, which he called the primordial spirit of Muhammad and all its perfection. Ibn Arabi details that the perfect human relates the cosmos to the divine and conveys the divine spirit to the cosmos. Ibn Arabi further explained the perfect man concept using at least twenty-two different descriptions and various aspects when considering the Logos. He contemplated the Logos, or "Universal Man", as a mediation between the individual human and the divine essence. Ibn Arabi believed Muhammad to be the primary perfect man who exemplifies the morality of God.
Ibn Arabi regarded the first entity brought into existence as the reality or essence of Muhammad (al-ḥaqīqa al-Muhammadiyya), master of all creatures and a primary role model for human beings to emulate. Ibn Arabi believed that God's attributes and names are manifested in this world, with the most complete and perfect display of these divine attributes and names seen in Muhammad. Ibn Arabi believed that one may see God in the mirror of Muhammad. He maintained that Muhammad was the best proof of God and that, by knowing Muhammad, one knows God. Ibn Arabi also described Adam, Noah, Abraham, Moses, Jesus, and all other prophets, as well as various Awliya Allah (Muslim saints), as perfect men, but never tires of attributing lordship, inspirational source, and highest rank to Muhammad. Ibn Arabi makes extraordinary assertions regarding his own spiritual rank, but qualifies this rather audacious correlation by asserting that his "inherited" perfection is only a single dimension of the comprehensive perfection of Muhammad.

The reaction of Ibn 'Abd as-Salam, a Muslim scholar respected by both Ibn Arabi's supporters and detractors, has been of note due to disputes over whether he himself was a supporter or detractor. All parties have claimed to have transmitted Ibn 'Abd as-Salam's comments from his student Ibn Sayyid al-Nas, yet the two sides have transmitted very different accounts. Ibn Taymiyyah, Al-Dhahabi and Ibn Kathir all transmitted Ibn 'Abd as-Salam's comments as a criticism, while Fairuzabadi, Al-Suyuti, Ahmed Mohammed al-Maqqari and Yusuf an-Nabhani have all transmitted the comments as praise.

Some 800 works are attributed to Ibn Arabi, although only some have been authenticated. Recent research suggests that over 100 of his works have survived in manuscript form, although most printed versions have not yet been critically edited and include many errors. A specialist of Ibn 'Arabi, William Chittick, referring to Osman Yahya's definitive bibliography of the Andalusian's works, says that, out of the 850 works attributed to him, some 700 are authentic while over 400 are still extant.

- The Meccan Illuminations (Al-Futuhat al-Makkiyya), his largest work, originally in 37 volumes and published in 4 or 8 volumes in modern times, discussing a wide range of topics from mystical philosophy to Sufi practices and records of his dreams and visions. It totals 560 chapters; in modern editions it amounts to some 15,000 pages.
- The Ringstones of Wisdom (also translated as The Bezels of Wisdom, or Fusus al-Hikam). Composed during the later period of Ibn 'Arabi's life, the work is sometimes considered his most important and can be characterized as a summary of his teachings and mystical beliefs. It deals with the role played by various prophets in divine revelation. The attribution of this work to Ibn Arabi is debated; at least one source describes it as a forgery falsely attributed to him, reasoning that there are 74 books in total attributed to Sheikh Ibn Arabi, of which 56 are mentioned in Al-Futuhat al-Makkiyya and the rest are mentioned in the other books cited therein. However, many other scholars accept the work as genuine.
- The Diwan, his collection of poetry spanning five volumes, mostly unedited. The printed versions available are based on only one volume of the original work.
- The Holy Spirit in the Counselling of the Soul (Ruh al-quds), a treatise on the soul which includes a summary of his experience with different spiritual masters in the Maghrib. Part of this has been translated as Sufis of Andalusia, reminiscences and spiritual anecdotes about many interesting people whom he met in al-Andalus.
- Contemplation of the Holy Mysteries (Mashahid al-Asrar), probably his first major work, consisting of fourteen visions and dialogues with God.
- Divine Sayings (Mishkat al-Anwar), an important collection made by Ibn 'Arabi of 101 hadith qudsi.
- The Book of Annihilation in Contemplation (K. al-Fana' fi'l-Mushahada), a short treatise on the meaning of mystical annihilation (fana).
- Devotional Prayers (Awrād), a widely read collection of fourteen prayers for each day and night of the week.
- Journey to the Lord of Power (Risalat al-Anwar), a detailed technical manual and roadmap for the "journey without distance".
- The Book of God's Days (Ayyam al-Sha'n), a work on the nature of time and the different kinds of days experienced by gnostics.
- The Fabulous Gryphon of the West ('Unqa' Mughrib), a book on the meaning of sainthood and its culmination in Jesus and the Mahdī.
- The Universal Tree and the Four Birds (al-Ittihad al-Kawni), a poetic book on the Complete Human and the four principles of existence.
- Prayer for Spiritual Elevation and Protection (al-Dawr al-A'la), a short prayer which is still widely used in the Muslim world.
- The Interpreter of Desires (Tarjumān al-Ashwaq), a collection of nasībs which, in response to critics, Ibn Arabi republished with a commentary explaining the meaning of the poetic symbols.
- Divine Governance of the Human Kingdom (At-Tadbirat al-ilahiyyah fi islah al-mamlakat al-insaniyyah).
- The Four Pillars of Spiritual Transformation (Hilyat al-abdāl), a short work on the essentials of the spiritual path.

The Meccan Illuminations (Futūḥāt al-Makkiyya)

According to Claude Addas, Ibn Arabi began writing the Futūḥāt al-Makkiyya after he arrived in Mecca in 1202. After almost thirty years, the first draft of the Futūḥāt was completed in December 1231 (629 AH), and Ibn Arabi bequeathed it to his son. Two years before his death, Ibn 'Arabī embarked on a second draft of the Futūḥāt in 1238 (636 AH), which included a number of additions and deletions compared with the previous draft and contains 560 chapters. The second draft, which is the most widely circulated and used, was bequeathed to his disciple, Sadr al-Din al-Qunawi. Many scholars have attempted to translate this book from Arabic into other languages, but there is no complete translation of the Futūḥāt al-Makkiyya to this day.

- Diagram of the "Plain of Assembly" (Ard al-Hashr) on the Day of Judgment, from an autograph manuscript of the Futuhat al-Makkiyya, ca. 1238 (photo: after Futuhat al-Makkiyya, Cairo edition, 1911).
- Diagram of Jannat, Futuhat al-Makkiyya, c. 1238 (photo: after Futuhat al-Makkiyya, Cairo edition, 1911).
- Diagram showing world, heaven, hell and barzakh, Futuhat al-Makkiyya, c. 1238 (photo: after Futuhat al-Makkiyya, Cairo edition, 1911).

The Bezels of Wisdom (Fuṣūṣ al-Ḥikam)

Ibn al-Arabi's work Fusus al-hikam interprets the teachings of twenty-eight prophets from Adam to Muhammad. There have been many commentaries on Ibn 'Arabī's Fuṣūṣ al-Ḥikam: Osman Yahya named more than 100, while Michel Chodkiewicz notes that "this list is far from exhaustive."
The first commentary was the Kitab al-Fukūk, written by Ṣadr al-Dīn al-Qunawī, who had studied the book with Ibn 'Arabī; the second, by Qunawī's student Mu'ayyad al-Dīn al-Jandi, was the first line-by-line commentary; the third, by Jandī's student Dawūd al-Qaysarī, became very influential in the Persian-speaking world. A recent English translation of Ibn 'Arabī's own summary of the Fuṣūṣ, Naqsh al-Fuṣūṣ (The Imprint or Pattern of the Fusus), as well as a commentary on this work by 'Abd al-Raḥmān Jāmī, Naqd al-Nuṣūṣ fī Sharḥ Naqsh al-Fuṣūṣ (1459), by William Chittick, was published in Volume 1 of the Journal of the Muhyiddin Ibn 'Arabi Society (1982).

Critical editions and translations of Fusus al-Hikam

The Fuṣūṣ was first critically edited in Arabic by 'Afīfī (1946); this edition became the standard in scholarly works. Later, in 2015, the Ibn al-Arabi Foundation in Pakistan published the Urdu translation, including a new critical Arabic edition. The first English translation was done in partial form by Angela Culme-Seymour from the French translation of Titus Burckhardt as Wisdom of the Prophets (1975), and the first full translation was by Ralph Austin as Bezels of Wisdom (1980). There is also a complete French translation by Charles-Andre Gilis, entitled Le livre des chatons des sagesses (1997). The only major commentary to have been translated into English so far is entitled Ismail Hakki Bursevi's translation and commentary on Fusus al-hikam by Muhyiddin Ibn 'Arabi, translated from Ottoman Turkish by Bulent Rauf in 4 volumes (1985-1991).

In Urdu, the most widespread and authentic translation was made by Shams Ul Mufasireen Bahr-ul-uloom Hazrat Muhammad Abdul Qadeer Siddiqi Qadri (Hasrat), the former Dean and Professor of Theology of Osmania University, Hyderabad. For this reason his translation is in the curriculum of Punjab University. Maulvi Abdul Qadeer Siddiqui made an interpretive translation and explained the terms and grammar while clarifying the Shaikh's opinions. A new edition of the translation was published in 2014 with brief annotations throughout the book for the benefit of the contemporary Urdu reader.

In the Turkish television series Dirilis: Ertugrul, Ibn Arabi was portrayed by Ozman Sirgood.
Opinion (Creative Commons, CC-BY)

A SARS-CoV-2 Prophylactic and Treatment: A Counter-Argument Against the Sole Use of Chloroquine

*Corresponding author: Markus Depfenhart, Faculty of Medicine, Venlo University B.V., Venlo, Netherlands.
Received: April 01, 2020; Published: April 09, 2020

A better knowledge of the SARS-CoV-2 virus and its underlying pathobiology is accumulating every day. Of huge importance now is to provide a fast, cost-effective, safe, and immediately available pharmaceutical solution to curb the rapid global spread of SARS-CoV-2. This Opinion discusses the demands for such an ideal drug, taking into account an aspect of viral mechanisms of infection. An effective prophylactic medication to prevent viral entry has to contain, at least, either a TMPRSS2 inhibitor or a competitive virus ACE2 binding inhibitor. Using bromhexine at a dosage that selectively inhibits TMPRSS2 and, in so doing, inhibits TMPRSS2-specific viral entry is likely to be effective against SARS-CoV-2. We propose the use of bromhexine as a prophylactic and treatment. We encourage the scientific community to assess bromhexine clinically as a prophylactic and curative treatment. If proven to be effective, this would allow a rapid, accessible and cost-effective application worldwide.

Keywords: SARS-CoV-2; COVID-19; Coronavirus; Prophylactic; Treatment; Anti-viral drugs; Drug combinations; Bromhexine

As the world witnesses the alarming levels of spread and severity of the atypical pneumonia COVID-19, strategies to combat this outbreak are in dire need. The first sequence of SARS-CoV-2 was published online one day after its confirmation on behalf of Zhang and colleagues. SARS-CoV-2 sequences isolated from all over the world have now been deposited in gene banks [2,3]. Sharing more genome sequences of the newly emerging SARS-CoV-2 allows analysis of this new coronavirus (CoV), improving phylogenetic analysis and, most importantly, allowing recognition of mutations between differing strains. Identifying the closest viral relatives of SARS-CoV-2 is greatly assisting studies of viral function. Ultimately, this gives rise to an understanding of what is unique and what is conserved in this new SARS-CoV-2 virus, be it structure, host cell attachment and entry, or replication, making it possible to identify treatment targets.

Currently, the treatment is mainly symptomatic and supportive care. Tremendous efforts have been undertaken and large amounts of money have been invested in vaccine development against influenza-type viruses. There are approximately 40 companies in advanced stages of vaccine development. Disadvantages with cutting-edge vaccines are that they take months to years to develop and to approve, and they become obsolete if the virus evolves. There are already a number of reviews on potential treatment strategies against COVID-19 [5-7]. Drug repurposing is an attractive alternative drug discovery strategy in this time of urgency.

Proposed Treatment Strategy Against COVID-19

The first step in CoV infection is the interaction of host cells with the viral envelope spike (S) glycoprotein. SARS-CoV-2 employs two routes for host cell entry, which are dependent on the localization of the proteases required for activation of the S protein. Binding of SARS-CoV-2 to the cellular receptor, angiotensin converting enzyme 2 (ACE2), can result in uptake of virions into endosomes, where the S protein is activated by the pH-dependent cysteine proteases cathepsin B and L (cathepsin B/L) [9-11].
Alternatively, the spike protein can be activated by the serine protease TMPRSS2, resulting in fusion of the viral membrane with the plasma membrane. Seeing as the S protein has pivotal roles in viral infection, we propose interfering with S protein activation and hence viral pathogenesis. Recently, publications on COVID-19 have brought attention to the possible benefit of repurposing the drug chloroquine in the treatment of patients infected by SARS-CoV-2 [14,15]. Chloroquine (N4-(7-chloro-4-quinolinyl)-N1,N1-diethyl-1,4-pentanediamine), an FDA-approved drug, has been used to treat malaria and amebiasis for many years, as well as autoimmune diseases.

Viral fusion and release of the genetic components is highly dependent on the endosomal pathway, and particularly on pH. Chloroquine can affect virus infection in many ways. Of particular importance is that chloroquine is known to block virus infection by increasing the endosomal pH required for virus/endosome fusion and release of viral RNA into the cytosol. Past research on chloroquine has shown in vitro activity against many different viruses, but no benefit in animal models. In almost all animal models of different viral infections, chloroquine worked only partially or not at all [20-22]. Treatment with chloroquine did not prevent influenza infection in a randomized, double-blind, placebo-controlled clinical trial [23]. Conversely, it worked very well in vitro [24-26]. This could indicate that the main mechanism of action of chloroquine, in vivo, is via interference with the unspecific endosomal pathway. The extracellular concentration of orally applied chloroquine, especially in lung tissue in vivo, may not be high enough to inhibit virus binding via glycosylation of the binding pocket. After the viral infection has spread in the body, and due to the incredibly high viral loads, the unspecific pathway is mainly used for further virus replication. This may explain the recently reported success of chloroquine in assisting the treatment of the infection. Whether chloroquine alone can treat COVID-19 and also work as a prophylactic is doubtful. This needs to be further investigated before masses of people start to take this relatively toxic drug as a preventive measure.

Inhibition of the serine protease TMPRSS2 is an excellent target for antiviral intervention. Hoffmann et al. suggested that TMPRSS2 could be a potential therapeutic target for COVID-19, since entry of the virus into cells was reduced by camostat mesilate, a non-selective TMPRSS2 inhibitor. Non-selective inhibitors have greater, more severe side effects than selective inhibitors, and currently camostat mesilate is only approved for treatment of chronic pancreatitis [28,29] in Japan. Unfortunately, the drug is costly and won't be available to treat large-scale patient numbers.

TMPRSS2 is expressed highly in localized high-grade prostate cancers and in the majority of human prostate cancer metastases. Lucas et al. showed a decrease in the frequency of metastases and a slowdown of the spread of metastases in mice with prostate cancer by using TMPRSS2 inhibitors. In particular, they identified bromhexine, an FDA-approved ingredient in mucolytic cough suppressants, as a potential TMPRSS2 inhibitor for their application. Bromhexine is readily bioavailable orally. Endonasal application is also a good alternative option. Bromhexine is an over-the-counter (OTC) drug that is affordable with proven safety.
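The core of the argument so far, that blocking only one of the two entry routes leaves the other open, can be illustrated with a deliberately schematic calculation. The Python sketch below is not a pharmacological model from this article; the route split and inhibition levels are invented numbers used only to show why the authors argue for covering both the TMPRSS2 route and the endosomal route.

```python
# Schematic illustration only: all fractions below are invented for the example.

def residual_entry(frac_tmprss2, inhib_tmprss2, inhib_endosomal):
    """Fraction of viral entry remaining, assuming two independent, additive routes:
    TMPRSS2 priming at the cell surface and the endosomal cathepsin B/L route."""
    frac_endosomal = 1.0 - frac_tmprss2
    return (frac_tmprss2 * (1.0 - inhib_tmprss2)
            + frac_endosomal * (1.0 - inhib_endosomal))

if __name__ == "__main__":
    # Assume, purely for illustration, that 70% of entry uses the TMPRSS2 route
    # and that each inhibitor blocks 90% of its own route.
    scenarios = {
        "no drug":                    (0.0, 0.0),
        "endosomal inhibitor only":   (0.0, 0.9),
        "TMPRSS2 inhibitor only":     (0.9, 0.0),
        "both inhibitors combined":   (0.9, 0.9),
    }
    for name, (i_tmprss2, i_endo) in scenarios.items():
        remaining = residual_entry(0.7, i_tmprss2, i_endo)
        print(f"{name:>28}: {remaining:.2f} of entry remains")
```

With these made-up numbers, either single inhibitor still leaves a substantial fraction of entry available through the untouched route, while the combination suppresses most of it, which is the qualitative point the Opinion goes on to make.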
Typically, bromide compounds, especially aromatic bromide compounds, show a relatively high binding affinity for serine-containing peptide sequences, proteins and enzymes [30,32]. Lucas et al. show that this effect is due to a selective inhibition of TMPRSS2 by bromhexine. The available data further suggest that ambroxol, a metabolite of bromhexine, is a potent inducer of surfactant synthesis in AT2 cells [33-35]. Its lung-protective properties have been discussed in infants and severely ill adult patients, as well as its potential as an adjuvant in anti-infective therapy. Thus, bromhexine also provides indirect protective effects. Laporte and Naesens reported that bromhexine did not show any significant cell entry or replication inhibition effect in vitro against influenza viruses. However, the authors showed that influenza viruses utilize, contrary to SARS-CoV-2, a different extracellular host protease for priming. Thus, these results are not representative of SARS-CoV-2.

In already infected individuals, we believe it is essential to combine the less toxic chloroquine derivative, hydroxychloroquine, with a TMPRSS2 inhibitor, like bromhexine, to block complete entry of the virus into host cells. In the case of prophylaxis, the inhibition of TMPRSS2 is essential and the non-specific endosomal entry is negligible. An effective prophylactic medication to prevent viral entry has to contain, at least, either a TMPRSS2 inhibitor or a competitive virus ACE2 binding inhibitor. This will prevent further spreading of the virus through the host's body.

A prophylaxis strategy and a suitable treatment for the emerging SARS-CoV-2 are crucial for reducing the mortality and morbidity of this disease, but developing and obtaining regulatory approval for new drugs can take years and is discordant with the urgent need for a therapy. Drug repurposing is an attractive alternative drug discovery strategy because it offers ease of access, decreased cost of development (as such drugs have established manufacturing arrangements), and the possibility of providing a wide array of options for combination studies. The background pharmacological knowledge available for such compounds may also reduce concerns regarding adverse effects in patients, as they have gone through rigorous safety and risk testing and are already approved as safe for human use.

Using bromhexine at a dosage that selectively inhibits TMPRSS2 and, in so doing, inhibits TMPRSS2-specific viral entry is likely to be effective against SARS-CoV-2. We propose the use of bromhexine as a prophylactic and treatment. Furthermore, a combination with hydroxychloroquine, which (amongst other functions) is an effective endosomal protease inhibitor, inhibiting cathepsin B/L, could be a favorable combination for the treatment of moderate to severe COVID-19 cases. This combination would block virus-host cell entry completely by blocking the specific receptor-mediated entry (via bromhexine) and endocytotic virus entry (via hydroxychloroquine sulfate). We can only encourage the scientific community to test bromhexine and the combination with hydroxychloroquine, and to follow our recommended approach in order to also identify further ideal repurposing candidates according to the criteria proposed herein.

Conflict of Interests

The authors have declared that no conflicts of interest exist.

References

- (2020) Novel 2019 Coronavirus Genome.
- (2020) GISAID Database. Coronavirus.
- SARS-CoV-2 (Severe acute respiratory syndrome coronavirus 2) Sequences. GenBank.
- (2020) Covid-19 Vaccine Tracker. Regulatory Affairs Professionals Society. - Dyall J, Gross R, Kindrachuk J, Johnson RF, Olinger GG Jr, et al. (2017) Middle East Respiratory Syndrome and Severe Acute Respiratory Syndrome: Current Therapeutic Options and Potential Targets for Novel Therapies. Drugs 77(18): 1935-1966. - Zumla A, Hui DS, Azhar EI, Memish ZA, Maeurer M (2020) Reducing mortality from 2019-nCoV: host-directed therapies should be an option. The Lancet 395(10224): e35-e36. - Stebbing J, Phelan A, Griffin I, Catherine Tucker, Olly Oechsle, et al. (2020) COVID-19: combining antiviral and anti-inflammatory treatments. The Lancet Infectious Diseases 20(4): 400-402. - Simmons G, Zmora P, Gierer S, Heurich A, Pöhlmann S. Proteolytic activation of the SARS-coronavirus spike protein: Cutting enzymes at the cutting edge of antiviral research. Antiviral Research 100(3): 605-614. - Simmons G, Gosalia DN, Rennekamp AJ, Reeves JD, Diamond SL, et al. (2005) Inhibitors of cathepsin L prevent severe acute respiratory syndrome coronavirus entry. Proceedings of the National Academy of Sciences of the United States of America 102(33): 11876-1181. - Rubio Aliaga I, Frey I, Boll M, David A Groneberg, Hans M Eichinger, et al. (2003) Targeted disruption of the peptide transporter Pept2 gene in mice defines its physiological role in the kidney. Molecular and cellular biology 23(9): 3247-3252. - Ding N, Zhao K, Lan Y, Zi Li, Xioling Lv, et al. (2017) Induction of Atypical Autophagy by Porcine Hemagglutinating Encephalomyelitis Virus Contributes to Viral Replication. Frontiers in cellular and infection microbiology 7: 56. - Belouzard S, Millet JK, Licitra BN, Whittaker GR (2012) Mechanisms of coronavirus cell entry mediated by the viral spike protein. Viruses 4(6): 1011-1133. - Hofmann H, Hattermann K, Marzi A, Gramberg T, Geier M, et al. (2004) S protein of severe acute respiratory syndrome-associated coronavirus mediates entry into hepatoma cell lines and is targeted by neutralizing antibodies in infected patients. J Virol 78(12): 6134-6142. - Colson P, Rolain J M, Raoult D (2020) Chloroquine for the 2019 novel coronavirus SARS-CoV-2. International Journal of Antimicrobial Agents 55(3): 105923. - Gao J, Tian Z, Yang X (2020) Breakthrough: Chloroquine phosphate has shown apparent efficacy in treatment of COVID-19 associated pneumonia in clinical studies. Biosci Trends 14(1): 72-73. - Chloroquine Phosphate, Usp. - Aguiar ACC, Murce E, Cortopassi WA, Andre S Pimentel, Maria MFS Almeida, et al. (2018) Chloroquine analogs as antimalarial candidates with potent in vitro and in vivo activity. International journal for parasitology Drugs and drug resistance 8(3): 459-464. - Vincent MJ, Bergeron E, Benjannet S, Erickson BR, Rollin PE, et al. (2005) Chloroquine is a potent inhibitor of SARS coronavirus infection and spread. Virol J 2: 69. - Touret F, de Lamballerie X (2020) Of chloroquine and COVID-19. Antiviral Research 177: 104762. - Keyaerts E, Li S, Vijgen L, Evelien Rysman, Jannick Verbeeck, et al. (2009) Antiviral Activity of Chloroquine against Human Coronavirus OC43 Infection in Newborn Mice. Antimicrobial Agents and Chemotherapy 53: 3416-3421. - Tan YW, Yam WK, Sun J, Chu JJH (2018) An evaluation of Chloroquine as a broad-acting antiviral against Hand, Foot and Mouth Disease. Antiviral Res 149: 143-149. - Yan Y, Zou Z, Sun Y, Xiao Li, Kai Feng Xu, et al. (2013) Anti-malaria drug chloroquine is highly effective in treating avian influenza A H5N1 virus infection in an animal model. 
Cell Research 23(2): 300-302. - Paton NI, Lee L, Xu Y, Ooi EE, Cheung YB, et al. (2011) Chloroquine for influenza prevention: a randomised, double-blind, placebo controlled trial. The Lancet Infectious Diseases 11(9): 677-683. - Shimizu Y, Yamamoto S, Homma M, Ishida N (1972) Effect of chloroquine on the growth of animal viruses. Archiv für die gesamte Virusforschung 36(1): 93-104. - Keyaerts E, Vijgen L, Maes P, Neyts J, Van Ranst M (2004) In vitro inhibition of severe acute respiratory syndrome coronavirus by chloroquine. Biochemical and biophysical research communications 323(1): 264-268. - Inglot AD (1969) Comparison of the Antiviral Activity in vitro of some Non-steroidal Anti-inflammatory Drugs. Journal of General Virology 4(2): 203-214. - Hoffmann M, Kleine-Weber H, Schroeder S, Krüger N, Herrler T, et al. (2020) SARS-CoV-2 Cell Entry Depends on ACE2 and TMPRSS2 and Is Blocked by a Clinically Proven Protease Inhibitor. Cell S0092-8674(20): 30229-30234. - (2009) FOIPAN (Camostat mesilate) Commodity Classification of Japan. - (2020) Camostat. Drugs. - Lucas JM, Heinlein C, Kim T, Hernandez SA, Malik MS, et al. (2014) The androgen-regulated protease TMPRSS2 activates a proteolytic cascade involving components of the tumor microenvironment and promotes prostate cancer metastasis. Cancer discovery 4(11): 1310-1325. - Chang CC, Cheng AC, Chang AB (2014) Over-the-counter (OTC) medications to reduce cough as an adjunct to antibiotics for acute pneumonia in children and adults. The Cochrane database of systematic reviews 2007:Cd006088. - Danelius E, Andersson H, Jarvoll P, Lood K, Gräfenstein J, Erdélyi M (2017) Halogen Bonding: A Powerful Tool for Modulation of Peptide Conformation. Biochemistry 56: 3265-3272. - Han S, Mallampalli RK (2015) The Role of Surfactant in Lung Disease and Host Defense against Pulmonary Infections. Ann Am Thorac Soc 12(5): 765-774. - Plomer M, de Zeeuw J (2017) [More than expectorant: new scientific data on ambroxol in the context of the treatment of bronchopulmonary diseases]. MMW Fortschritte der Medizin 159(Suppl): 22-33. - Gao X, Huang Y, Han Y, Bai CX, Wang G (2011) The protective effects of Ambroxol in Pseudomonas aeruginosa-induced pneumonia in rats. Arch Med Sci 7(3): 405-413. - Laporte M, Naesens L (2017) Airway proteases: an emerging drug target for influenza and other respiratory virus infections. Current opinion in virology 24: 16-24.
Fingerstyle guitar

Fingerstyle guitar is the technique of playing the guitar by plucking the strings directly with the fingertips, fingernails, or picks attached to fingers, as opposed to flatpicking (plucking individual notes with a single plectrum, commonly called a "pick"). The term "fingerstyle" is something of a misnomer, since it is present in several different genres and styles of music, but mostly because it involves a completely different technique, not just a "style" of playing, especially for the guitarist's picking/plucking hand. The term is often used synonymously with fingerpicking, although fingerpicking can also refer to a specific tradition of folk, blues and country guitar playing in the US (see below).

Music arranged for fingerstyle playing can include chords, arpeggios (the notes of a chord played one after the other, as opposed to simultaneously) and other elements such as artificial harmonics, hammering on and pulling off notes with the fretting hand, using the body of the guitar percussively (by tapping rhythms on the body), and many other techniques. Often, the guitarist will play the melody notes, interspersed with the melody's accompanying chords and the deep bassline (or bass notes), simultaneously. Some fingerpicking guitarists also intersperse percussive tapping along with the melody, chords and bassline. This enables a single guitarist to provide all of these important song elements: singer-guitarists can accompany themselves, and smaller groups with only one guitarist can still have that player supply all of these musical parts. Fingerpicking is a standard technique on the classical or nylon string guitar, but is considered more of a specialized technique on steel string guitars. Fingerpicking is less common on electric guitars, except in the virtuoso heavy metal style of lead guitar playing known as shred guitar.

Technique

Because individual digits play notes on the guitar rather than the hand working as a single unit (which is the case when a guitarist is holding a single pick), a guitarist playing fingerstyle can perform several musical elements simultaneously. One definition of the technique has been put forward by the Toronto (Canada) Fingerstyle Guitar Association: Physically, "Fingerstyle" refers to using each of the right hand fingers independently to play the multiple parts of a musical arrangement that would normally be played by several band members. Deep bass notes, harmonic accompaniment (the chord progression), melody, and percussion can all be played simultaneously when playing Fingerstyle. Many fingerstyle guitarists have adopted a combination of acrylic nails and a thumbpick to improve tone and decrease nail wear and the chance of breaking or chipping. Notable guitarists to adopt this hardware are Doyle Dykes and the Canadian guitarist Don Ross.

Advantages and disadvantages

- Players do not have to carry a plectrum; but fingernails may have to be maintained at the right length and in good condition.
- It is possible to play multiple non-adjacent strings at exactly the same time. This enables the guitarist to play a very low bass note and a high treble note at the same time.
This enables the guitarist to play double stops, such as an octave, a fifth, a sixth, or other intervals that suit the harmony.
- It is more suitable for playing polyphonically, with separate, independent musical lines, or separate melody, harmony and bass parts, and therefore more suitable for unaccompanied solo playing, or for very small ensembles, such as duos in which a guitarist accompanies a singer. Fingerstyle players have up to four (or five) surfaces (fingernails or picks) striking the strings and/or other parts of the guitar independently; that does not equate to four plectrums, since plectrums can strike strings on both an upstroke and a downstroke easily, while fingers can only achieve alternation with hard practice. (An exception to this may be found in the flamenco technique of rasgueado.)
- It is easy to play arpeggios; but the techniques for tremolo (rapid repetition of a note) and melody playing are more complex than with plectrum playing.
- It is possible to play chords without any arpeggiation, because up to five strings can be plucked simultaneously.
- There is less need for fretting hand damping (muting) in playing chords, since only the strings that are required can be plucked.
- A greater variation in strokes is possible, allowing greater expressiveness in timbre and dynamics.
- A wide variety of strums and rasgueados are possible.
- Less energy is generally imparted to the strings than with plectrum playing, leading to lower volume when playing acoustically.
- Playing on heavier gauge strings can damage nails: fingerstyle is more suited to nylon strings or lighter gauge steel strings (but this does not apply to fingerpicks, or when the flesh of the fingers is used rather than the nail, as is the case with the lute).

Nylon string guitar styles

Classical guitar fingerstyle

The term "classical guitar music" can refer to any kind of art music played on a nylon string guitar, or more narrowly to music of the classical period, as opposed to baroque or romantic music. The major feature of classical fingerstyle technique is that it enables solo rendition of harmony and polyphonic music in much the same manner as the piano. Careful attention is paid to the physical posture of the player. Thumb, index, middle and ring fingers are all employed for plucking. Chords are often plucked, with strums being reserved for emphasis. The repertoire varies in terms of keys, modes, rhythms and cultural influences. Altered tunings are rarely employed, with the exception of dropped D. Fingerings for both hands are often given in detail in classical guitar music notation, although players are also free to add to or depart from them as part of their own interpretation. Fretting hand fingers are given as numbers, plucking hand fingers are given as letters:

| Finger | Fretting hand | Finger | Plucking hand |
| Thumb | - | Thumb | p |
| Index | 1 | Index | i |
| Middle | 2 | Middle | m |
| Ring | 3 | Ring | a |
| Little | 4 | Little | c or x or e |

In guitar scores, the five fingers of the right hand (which pluck the strings) are designated by the first letter of their Spanish names, namely p = thumb (pulgar), i = index finger (índice), m = major finger (mayor), a = ring finger (anular), c = little finger or pinky (chiquito). The four fingers of the left hand (which stop the strings) are designated 1 = index, 2 = major, 3 = ring finger, 4 = little finger; 0 designates an open string, that is, a string that is not stopped by a finger of the left hand and whose full length thus vibrates when plucked.
On the classical guitar, the thumb of the left hand is never used to stop strings from above (as is done on the electric guitar): the neck of a classical guitar is too wide, and the normal position of the thumb used in classical guitar technique does not make that possible. Scores (contrary to tablatures) do not systematically indicate the string to be plucked (although in most cases the choice is obvious). When an indication of the string is required, the strings are designated 1 to 6 (from the 1st, the high E, to the 6th, the low E) with the figures 1 to 6 inside circles. The positions (that is, where on the fretboard the first finger of the left hand is placed) are also not systematically indicated, but when they are (mostly in the case of the execution of barrés) they are indicated with Roman numerals, from position I (index finger of the left hand placed on the 1st fret: F–B♭–E♭–A♭–C–F) to position XII (the index finger of the left hand placed on the 12th fret: E–A–D–G–B–E; the 12th fret is placed where the body begins) or higher, up to position XIX (the classical guitar most often having 19 frets, with the 19th fret most often split and not usable to fret the 3rd and 4th strings).

To achieve tremolo effects, rapid, fluent scale passages, and varied arpeggios, the player must practice alternation, that is, never plucking a string with the same finger twice. Common alternation patterns include:

- i–m–i–m: Basic melody line on the treble strings. Has the appearance of "walking along the strings".
- i–m–a–i–m–a: Tremolo pattern with a triplet feel (i.e. the same note is repeated three times).
- p–a–m–i–p–a–m–i: Another tremolo pattern.
- p–m–p–m: A way of playing a melody line on the lower strings.

Classical guitarists have a lot of freedom within the mechanics of playing the instrument. These decisions often influence tone and timbre; factors include:

- At what position along the string the finger plucks the string. (This is changed by guitarists throughout a song, since it is an effective way of changing the sound (timbre) from "soft" (dolce), plucking the string near its middle, to "hard" (ponticello), plucking the string near its end.)
- Use of the nail or not: Modern classical guitar playing uses a technique in which both the nail and the fingertip contact the string during normal playing. (Andrés Segovia is often credited with popularizing this technique.) Playing with either fingertips alone (dita punta) or fingernails alone (dita unghis) is considered a special technique for timbral variation. Concert guitarists must keep their fingernails smoothly filed and carefully shaped to employ this technique, which produces a better-controlled sound than either nails or fingertips alone.

Playing parameters include:

- Which finger to use.
- What angle of attack to hold the wrist and fingers at with respect to the strings.
- Rest-stroke (apoyando), in which the finger that plucks a string rests on the next string (traditionally used in single melody lines), versus free-stroke (tirando), plucking the string without coming to rest on the next string.

Flamenco guitar fingerstyle

Flamenco technique is related to classical technique, but with more emphasis on rhythmic drive and volume, and less on dynamic contrast and tone production. Flamenco guitarists prefer keys such as A and E that allow the use of open strings, and typically employ capos where a departure is required. They often strengthen their fingernails artificially.
Some specialized techniques include:

- Picado: Single-line scale passages performed apoyando but with more attack and articulation.
- Rasgueado: Strumming frequently done by bunching all the right hand fingers and then flicking them out in quick succession to get four superimposed strums (although there are a great many variations on this). The rasgueado or "rolling" strum is particularly characteristic of the genre.
- Alzapua: A thumb technique with roots in oud plectrum technique. The right hand thumb is used both for single-line notes and for strumming across a number of strings. Both are combined in quick succession to give it a unique sound.
- Tremolo: Done somewhat differently from the conventional classical guitar tremolo, it is very commonly played with the right hand pattern p–i–a–m–i.

Bossa nova is most commonly performed on the nylon-string classical guitar, played with the fingers rather than with a pick. Its purest form could be considered unaccompanied guitar with vocals, as exemplified by João Gilberto. Even in larger, jazz-like arrangements for groups, there is almost always a guitar that plays the underlying rhythm. Gilberto basically took one of the several rhythmic layers from a samba ensemble, specifically the tamborim, and applied it to the picking hand.

North American tradition

Fingerpicking (also called thumb picking, alternating bass, or pattern picking) is a term that is used to describe both a playing style and a genre of music. It falls under the "fingerstyle" heading because it is plucked by the fingers, but it is generally used to play a specific type of folk, country-jazz and/or blues music. In this technique, the thumb maintains a steady rhythm, usually playing "alternating bass" patterns on the lower three strings, while the index, or index and middle, fingers pick out melody and fill-in notes on the high strings. The style originated in the late 19th and early 20th centuries, as southern blues guitarists tried to imitate the popular ragtime piano music of the day, with the guitarist's thumb functioning as the pianist's left hand and the other fingers functioning as the right hand. The first recorded examples were by players such as Blind Blake, Big Bill Broonzy, Memphis Minnie and Mississippi John Hurt. Some early blues players such as Blind Willie Johnson and Tampa Red added slide guitar techniques.

Fingerpicking was soon taken up by country and western artists such as Sam McGee, Ike Everly (father of The Everly Brothers), Merle Travis and "Thumbs" Carllile. Later Chet Atkins further developed the style, and in modern music musicians such as Jose Gonzalez, Eddie Vedder (on his song "Guaranteed") and David Knowles have utilized the style. Most fingerpickers use acoustic guitars, but some, including Merle Travis, played on hollow-body electric guitars, while some modern rock musicians, such as Derek Trucks and Mark Knopfler, employ traditional North American fingerpicking techniques on solid-body electric guitars such as the Gibson Les Paul or the Fender Stratocaster.

As mentioned above, fingerpicking has similar roots to and may have been inspired by ragtime piano. An early master of ragtime guitar was Blind Blake, a popular recording artist of the late 1920s and early 1930s. In the 1960s, a new generation of guitarists returned to these roots and began to transcribe piano tunes for solo guitar. One of the best known and most talented of these players was Dave Van Ronk, who arranged "St. Louis Tickle" for solo guitar.
In 1971, guitarists David Laibman and Eric Schoenberg arranged and recorded Scott Joplin rags and other complex piano arrangements for the LP The New Ragtime Guitar on Folkways Records. This was followed by a Stefan Grossman method book with the same title. A year later Grossman and ED Denson founded Kicking Mule Records, a company that recorded scores of LPs of solo ragtime guitar by artists including Grossman, Ton van Bergeyk, Leo Wijnkamp, Duck Baker, Peter Finger, Lasse Johansson, Tom Ball and Dale Miller. Meanwhile, Reverend Gary Davis was active in New York City, where he mentored many aspiring finger-pickers. He has subsequently influenced numerous other artists in the United States and internationally.
Carter Family picking
Carter Family picking, also known as the "thumb brush" technique, the "Carter lick", the "church lick" or the "Carter scratch", is a style of fingerstyle guitar named for Maybelle Carter of the Carter Family, whose distinctive style of rhythm guitar placed the melody on the bass strings, usually low E, A, and D, while rhythm strumming continued above on the treble strings, G, B, and high E. This often occurs during the break. This style is commonly played on steel-string acoustic guitars.
Pattern picking is the use of "preset right-hand pattern[s]" while fingerpicking, with the left hand fingering standard chords. The most common pattern, sometimes broadly (and incorrectly) referred to as Travis picking after Merle Travis, and popularized by Chet Atkins, Marcel Dadi, James Taylor and Tommy Emmanuel, is as follows:
Middle | X X - | X X - |
Index  | X X - | X X - |
Thumb  | X X X X - | X X X X - |
The thumb (T) alternates between bass notes, often on two different strings, while the index (I) and middle (M) fingers alternate between two treble notes, usually on two different strings, most often the second and first. The pattern can be applied, for example, to a C major chord. However, Travis' own playing was often much more complicated than this example. He often referred to his style of playing as "thumb picking", possibly because the only pick he used when playing was a banjo thumb pick, or as "Muhlenberg picking", after his native Muhlenberg County, Kentucky, where he learned this approach from Mose Rager and Ike Everly. Travis' style did not involve a defined, alternating bass-string pattern; it was more of an alternating "bass strum" pattern, resulting in an accompanying rhythm reminiscent of ragtime piano.
Clawhammer and frailing
Clawhammer and frailing are primarily banjo techniques that are sometimes applied to the guitar. Jody Stecher and Alec Stone Sweet are exponents of guitar clawhammer. Fingerstyle guitarist Steve Baughman distinguishes between frailing and clawhammer as follows: in frailing, the index fingertip is used for up-picking melody and the middle fingernail for rhythmic downward brushing; in clawhammer, only downstrokes are used, typically played with one fingernail as is the usual technique on the banjo.
American primitive guitar
American primitive guitar, or American Primitivism, is a subset of fingerstyle guitar. It originated with John Fahey, whose recordings from the late 1950s to the mid 1960s inspired many guitarists such as Leo Kottke, who made his debut recording of 6- and 12-String Guitar on Fahey's Takoma label in 1969.
American primitive guitar can be characterized by the use of folk music or folk-like material, driving alternating-bass fingerpicking with a good deal of ostinato patterns, and the use of alternative tunings (scordatura) such as open D, open G, drop D and open C. The application or "cross-contamination" of traditional forms of music within the style of American Primitivism is also very common. Examples of traditions that John Fahey and Robbie Basho would employ in their compositions include, but are not limited to, the extended raga of Indian classical music, the Japanese koto, and the early ragtime-based country blues of Mississippi John Hurt or Blind Blake.
Other acoustic styles
A distinctive style to emerge from Britain in the early 1960s, combining elements of American folk, blues, jazz and ragtime with British traditional music, became known as 'folk baroque'. It was pioneered by musicians of the Second British folk revival, who began their careers in the short-lived skiffle craze of the later 1950s and often drew on American blues, folk and jazz styles, occasionally using open D and G tunings. Performers like Davy Graham and Martin Carthy, however, attempted to apply these styles to the playing of traditional English modal music. They were soon followed by artists such as Bert Jansch and John Renbourn, who further defined the style. The style these artists developed was particularly notable for the adoption of D–A–D–G–A–D tuning (from lowest to highest), which gives a form of suspended-fourth D chord, neither major nor minor, that could be employed as the basis for modal folk songs. This was combined with a fingerstyle based on Travis picking and a focus on melody that made it suitable as an accompaniment. Denselow, who coined the phrase 'folk baroque', singled out Graham's recording of the traditional English folk song 'Seven Gypsys' on Folk, Blues and Beyond (1964) as the beginning of the style. Graham mixed this with Indian, African, Celtic, and modern and traditional American influences, while Carthy in particular used the tuning to replicate the drone common in medieval and folk music, played by the thumb on the two lowest strings. The style was further developed by Jansch, who brought a more forceful style of picking and, indirectly, influences from jazz and ragtime, leading particularly to more complex basslines. Renbourn built on all these trends and was the artist whose repertoire was most influenced by medieval music. In the early 1970s the next generation of British artists added new tunings and techniques, reflected in the work of artists like Nick Drake, Tim Buckley and particularly John Martyn, whose Solid Air (1972) set the bar for subsequent British acoustic guitarists. Perhaps the most prominent exponent of recent years has been Martin Simpson, whose complex mix of traditional English and American material, together with innovative arrangements and techniques like the use of guitar slides, represents a deliberate attempt to create a unique and personal style. Martin Carthy passed on his guitar style to French guitarist Pierre Bensusan. It was taken up in Scotland by Dick Gaughan, and by Irish musicians like Paul Brady, Dónal Lunny and Mick Moloney. Carthy also influenced Paul Simon, most evidently on Scarborough Fair, which Carthy probably taught to him; a recording of Graham's Anji appears on Sounds of Silence and was, as a result, copied by many subsequent folk guitarists.
By the 1970s Americans such as Duck Baker and Eric Schoenberg were arranging solo guitar versions of Celtic dance tunes, slow airs, bagpipe music, and harp pieces by Turlough O'Carolan and earlier harper-composers. Renbourn and Jansch's complex sounds were also highly influential on Mike Oldfield's early music. The style also had an impact within electric folk, where Richard Thompson in particular used the D–A–D–G–A–D tuning, though with a hybrid picking style, to produce a similar but distinctive effect.
"New Age" approach
In 1976, William Ackerman started Windham Hill Records, which carried on the Takoma tradition of original compositions on solo steel-string guitar. However, instead of the folk- and blues-oriented music of Takoma, including Fahey's American primitive guitar, the early Windham Hill artists (and others influenced by them) abandoned the steady alternating or monotonic bass in favor of sweet flowing arpeggios and flamenco-inspired percussive techniques. The label's best-selling artist George Winston and others used a similar approach on piano. This music was generally pacific, accessible and expressionistic. Eventually, it acquired the label of "New Age", given its widespread use as background music at bookstores, spas and other New Age businesses. The designation has stuck, though it wasn't a term coined by the company itself.
"Percussive picking" is a term for a style incorporating sharp attacks on the strings, as well as hitting the strings and guitar top with the hand for percussive effect. Flamenco guitarists have been using these techniques for years, but the greater resistance of steel strings made a similar approach difficult in fingerstyle until the use of pickups on acoustic guitars became common in the early 1970s. Michael Hedges began to use percussive techniques in the early 1980s. "Funky fingerstyle" emerged in the mid-2000s as a style in which the sounds of a full funk or R&B ensemble are emulated on one guitar. Uncommon sounds are being discovered thanks to the technical possibilities of various pickups, microphones and octave-division effects pedals. Adam Rafferty uses a technique of hip-hop vocal percussion called "human beat box", along with body percussion, while playing contrapuntal fingerstyle pieces. Petteri Sariola has several mics on board his guitar and is able to run up to six lines from his guitar to a mixing desk, providing a full "band sound" – bass drum, snare, bass, guitar – as an accompaniment to his vocals.
The six-string guitar was brought to Africa by traders and missionaries (although there are indigenous guitar-like instruments such as the ngoni and the gimbri or sintir of Gnawa music). Its uptake varies considerably between regions, and there is therefore no single African acoustic guitar style. In some cases, the styles and techniques of other instruments have been applied to the guitar; for instance, a technique where the strings are plucked with the thumb and one finger imitates the two-thumbed plucking of the kora and mbira. The pioneer of Congolese fingerstyle acoustic guitar music was Jean Bosco Mwenda, also known as Mwenda wa Bayeke (1930–1990). His song "Masanga" was particularly influential because of its complex and varied guitar part.
His influences included traditional music of Zambia and the eastern Congo, Cuban groups like the Trio Matamoros, and cowboy movies. His style used the thumb and index finger only, to produce bass, melody and accompaniment. Congolese guitarists Losta Abelo and Edouard Masengo played in a similar style. Herbert Misango and George Mukabi were fingerstyle guitarists from Kenya. Ali Farka Toure (d. 2006) was a guitarist from Mali whose music has been called the "DNA of the blues"; he was also often compared to John Lee Hooker. His son Vieux Farka Toure continues to play in the same style. Djelimady Tounkara is another Malian fingerstylist. S. E. Rogie and Koo Nimo play acoustic fingerstyle in the lilting, calypso-influenced palm wine music tradition. Benin-born jazz guitarist Lionel Loueke, now based in the US, uses fingerstyle in an approach that combines jazz harmonies and complex rhythms. Tony Cox (b. 1954) is a Zimbabwean guitarist and composer based in Cape Town, South Africa. A master of the fingerpicking style of guitar playing, he has twice won the SAMA (South African Music Awards) for best instrumental album. His music incorporates many different styles including classical, blues, rock and jazz, while keeping an African flavour. Tinderwet is a versatile guitarist who plays in a three- and sometimes four-finger style (thumb, index, middle and ring); he plays several different African styles, including soukous and West African music, and often flavours his playing with jazzy improvisations, regular fingerpicking patterns and chord-melody sequences.
Slide, steel and slack-key guitar
Even when the guitar is tuned in a manner that helps the guitarist to perform a certain type of chord, it is often undesirable for all six strings to sound. When strumming with a plectrum, a guitarist must "damp" (mute) unwanted strings with the fretting hand; when a slide or steel is employed, this fretting-hand damping is no longer possible, so it becomes necessary to replace plectrum strumming with plucking of individual strings. For this reason, slide guitar and steel guitar playing are very often fingerstyle. Slide guitar or bottleneck guitar is a particular method or technique for playing the guitar. The term slide refers to the motion of the slide against the strings, while bottleneck refers to the original material of choice for such slides: the necks of glass bottles. Instead of altering the pitch of the strings in the normal manner (by pressing the string against frets), a slide is placed on the string to vary its vibrating length and pitch. This slide can then be moved along the string without lifting, creating continuous transitions in pitch. Slide guitar is most often played (assuming a right-handed player and guitar):
- with the guitar in the normal position, using a slide called a bottleneck on one of the fingers of the left hand; this is known as bottleneck guitar; or
- with the guitar held horizontally, with the belly uppermost and the bass strings toward the player, using a slide called a steel held in the left hand; this is known as lap steel guitar.
Slack-key guitar is a fingerpicked style that originated in Hawaii. The English term is a translation of the Hawaiian kī hō‘alu, which means "loosen the [tuning] key". Slack key is nearly always played in open or altered tunings. The most common tuning is G major (D–G–D–G–B–D), called "taropatch", though there is a family of major-seventh tunings called "wahine" (Hawaiian for "woman"), as well as tunings designed to get particular effects.
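The harmonic character attributed to these tunings (taropatch as an open G-major chord, and the earlier D–A–D–G–A–D as a suspended-fourth D chord) can be checked directly from the open-string notes. The short Python sketch below is an illustrative aside rather than part of the article; the tuning spellings come from the text, while the function name and the pitch-class table are assumptions made only for this example.

```python
# Illustrative sketch only (not from the article): the chord implied by the open
# strings of the tunings named above. Tuning spellings come from the text; the
# function name and semitone table are assumptions for this example.

PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                 "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def open_string_intervals(tuning, root):
    """Intervals (semitones above the root, within one octave) sounded by the open strings."""
    root_pc = PITCH_CLASSES[root]
    return sorted({(PITCH_CLASSES[note] - root_pc) % 12 for note in tuning})

taropatch = ["D", "G", "D", "G", "B", "D"]   # slack-key "taropatch" (G major)
dadgad    = ["D", "A", "D", "G", "A", "D"]   # folk-baroque D-A-D-G-A-D
standard  = ["E", "A", "D", "G", "B", "E"]   # standard tuning, for comparison

print(open_string_intervals(taropatch, "G"))  # [0, 4, 7]        -> major triad
print(open_string_intervals(dadgad, "D"))     # [0, 5, 7]        -> root, fourth, fifth (sus4)
print(open_string_intervals(standard, "E"))   # [0, 3, 5, 7, 10] -> no plain triad
```

Running it shows taropatch sounding a plain major triad over G, while D–A–D–G–A–D yields only the root, fourth and fifth of D, which matches the "neither major nor minor" description above.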
Basic slack-key style, like mainland folk-based fingerstyle, establishes an alternating bass pattern with the thumb and plays the melody line with the fingers on the higher strings. The repertory is rooted in traditional, post-Contact Hawaiian song and dance, but since 1946 (when the first commercial slack-key recordings were made) the style has expanded, and some contemporary compositions have a distinctly new-age sound. Slack key's older generation included Gabby Pahinui, Leonard Kwan, Sonny Chillingworth and Raymond Kāne. Prominent contemporary players include Keola Beamer, Moses Kahumoku, Ledward Kaapana, Dennis Kamakahi, John Keawe, Ozzie Kotani, Peter Moon and Cyril Pahinui.
Fingerstyle jazz guitar
The unaccompanied guitar in jazz is often played in chord-melody style, where the guitarist plays a series of chords with the melody line on top. Fingerstyle, plectrum and hybrid picking are all suited to this style, and some players alternate between fingerstyle and plectrum playing, "palming" the plectrum when it is not in use. Early blues and ragtime guitarists often used fingerstyle. True fingerstyle jazz guitar dates back to early swing-era acoustic players like Eddie Lang (1902–1933), Lonnie Johnson (1899–1970), Carl Kress (1907–1965), Dick McDonough (1904–1938) and the Argentinian Oscar Alemán (1909–1980). Django Reinhardt (1910–1953) used a classical/flamenco technique on unaccompanied pieces such as his composition Tears. Fingerstyle jazz on the electric guitar was pioneered by George van Eps (1913–1998), who was respected for his polyphonic approach, sometimes using a seven-string guitar. Wes Montgomery (1925–1968) was known for using the fleshy part of his thumb to provide the bass line while strumming chordal or melodic motives with his fingers. This style, while unorthodox, was widely regarded as an innovative method for enhancing the warm tone associated with jazz guitar, and Montgomery's influence extends to modern polyphonic jazz improvisational methods. Joe Pass (1929–1994) switched to fingerstyle mid-career, making the Virtuoso series of albums. Little known to the general public, Ted Greene (1946–2005) was admired by fellow musicians for his harmonic skills. Lenny Breau (1941–1984) went one better than van Eps by playing virtuosic fingerstyle on an eight-string guitar. Tommy Crook replaced the lower two strings on his Gibson Switchmaster with bass strings, allowing him to create the impression of playing bass and guitar simultaneously. Chet Atkins (1924–2001) sometimes applied his formidable right-hand technique to jazz standards, with Duck Baker (b. 1949), Richard Smith (b. 1971), Woody Mann and Tommy Emmanuel (b. 1955), among others, following in his footsteps. They use the fingerpicking technique of Merle Travis and others to play a wide variety of material, including jazz. This style is distinguished by having a steadier and "busier" (several beats to the bar) bass line than the chord-melody approach of Montgomery and Pass, making it suited to up-tempo material. Fingerstyle has always been predominant in Latin American guitar playing, which Laurindo Almeida (1917–1995) and Charlie Byrd (1925–1999) brought to a wider audience in the 1950s. Fingerstyle jazz guitar has several proponents: the pianistic Jeff Linsky (b. 1952) freely improvises polyphonically while employing a classical guitar technique, and Earl Klugh (b. 1953) and Tuck Andress have also performed fingerstyle jazz on the solo guitar. Briton Martin Taylor (b. 1956), a former Stephane Grappelli sideman, switched to fingerstyle on relaunching his career as a soloist.
His predecessor in Grappelli's band, John Etheridge (b. 1948), is also an occasional fingerstyle player.
Electric blues and rock
The solid-body electric guitar is rarely played fingerstyle, although it presents no major technical challenges. Slide guitarists often employ fingerstyle, which applies equally to the electric guitar, as with Duane Allman and Ry Cooder. Blues guitarists have long used fingerstyle: some exponents include Jorma Kaukonen, Hubert Sumlin, Albert King, Albert Collins, John Lee Hooker, Derek Trucks, Joe Bonamassa, and Buckethead. Exponents of fingerstyle rock guitar include Mark Knopfler, Jeff Beck (after years of pick playing), Stephen Malkmus, Bruce Cockburn (exclusively), Robby Krieger, Lindsey Buckingham, Mike Oldfield, Patrick Simmons, Wilko Johnson, J.J. Cale, Robbie Robertson, Hillel Slovak, Annie Clark, Kurt Vile, David Longstreth and Richie Kotzen.
Abu Bakr was born c. 573 in the city of Mecca to Abu Quhafa and Umm Khayr, and belonged to the Banu Taym clan of the Quraysh tribe. Abu Bakr was a senior companion (Sahabah) and the father-in-law of the prophet, Muhammad (SAW). He became the first Muslim Caliph following Prophet Muhammad's death and ruled the Rashidun Caliphate from 632 to 634 CE. As Caliph, Abu Bakr succeeded to the political and administrative functions previously exercised by the Prophet, since the religious function and authority of prophethood ended with the Prophet's death. He was called Al-Siddiq (The Truthful) by the prophet, Muhammad, after he believed him in the event of Isra and Mi'raj when other people didn't; Ali confirmed that title several times, and Abu Bakr was known by it among later generations of Muslims. He was also sometimes called Ibn Abi Quhafa, meaning the 'son of Abu Quhafa'. Abu Bakr's real name is uncertain, with Abd Allah, Abd al-Ka'ba and Atiq cited by the early sources; his full name is given as Abdullah ibn Abi Quhafa ibn Amir ibn Amr ibn Ka'b ibn Sa'd ibn Taym. He was more commonly known, however, by the kunya (teknonym) Abū Bakr, meaning "Father of Young Camels". He reportedly received the title due to his caring and love for camels in childhood. His father Abu Quhafa was a prominent merchant of the Banu Taym clan of the Quraysh. He initially opposed the prophet Muhammad (SAW) until the Islamic conquest of Mecca in c. 630, when he embraced Islam. Abu Bakr's mother Umm Khayr also hailed from the Banu Taym and converted to Islam in c. 614. Like other children of the rich Meccan merchant families, Abu Bakr was literate, though he never developed a fondness for poetry. He had great knowledge of the genealogy of the Arab tribes, their stories and their politics. It is also recorded that prior to converting to Islam, Abu Bakr practiced as a hanif and never worshipped idols; he also avoided alcohol. During the Age of Ignorance, Abu Bakr was appointed as a representative of the people of Quraysh for cases of ransom and penalty. Since Abu Bakr was the most knowledgeable about the family history of the Arabs, he was called the 'Scholar of Quraysh'. At the age of thirty-eight, Abu Bakr became a chief of the Banu Taym. Abu Bakr's conversion to Islam initially remained a secret. After he announced his faith, he delivered a speech at the Kaaba; this was the first public address inviting people to offer allegiance to the prophet Muhammad (SAW). In a fit of fury, the young men of the Quraysh tribe rushed at Abu Bakr and beat him till he lost consciousness. Four members of the Banu Taym wrapped Abu Bakr in a mantle and took him to his house. Umm Khayr saw her son and washed his face; following this incident, she converted to Islam. His preaching brought many people to Islam as he persuaded his intimate friends to convert. Many Sahabis, prominently including Uthman, Zubayr, Talha, Sa'd ibn Abi Waqqas, Abu Ubayda, Abd al-Rahman ibn Awf, Abu Hudhaifah ibn al-Mughirah and many others, converted to Islam at the invitation of Abu Bakr. Abu Bakr's acceptance proved to be a milestone in prophet Muhammad's mission. As slavery was common in Mecca, many slaves accepted Islam. When an ordinary free man accepted Islam, despite opposition, he would enjoy the protection of his tribe.
For slaves, however, there was no such protection and they commonly experienced persecution. Abu Bakr felt compassion for slaves, so he purchased eight slaves, four men and four women, and then freed them, paying 40,000 dinar for their freedom. The slaves were Bilal ibn Rabah, Abu Fukayha, Ammar ibn Yasir, Lubaynah, Al-Nahdiah, Harithah bint al-Muammil and Umm Ubays. Most of the slaves liberated by Abu Bakr were either women or old and frail men. Almost all of Abu Bakr's family converted to Islam except his father Abu Quhafa, his son Abdul-Rahman, and his wife Qutaylah. Abu Bakr's daughter Aisha was betrothed to prophet Muhammad; however, it was decided that the actual marriage ceremony would be held later. In 621, Abu Bakr was the first person to believe in the prophet's Isra and Mi'raj (Night Journey), and prophet Muhammad bestowed the honorific epithet Siddiq (lit. 'Truthful, Upright or Righteous') upon him. During the Age of Ignorance, he was a monotheist and condemned idol-worshipping. Being a wealthy trader, Abu Bakr used to free slaves. Following his conversion to Islam in 610, Abu Bakr served as a close aide to Prophet Muhammad, who bestowed on him the title al-Siddiq ('the Truthful'). He took part in almost all battles under the prophet. He extensively contributed his wealth in support of the propagation of Islam and also accompanied the prophet, Muhammad, on his migration to Medina. At the invitation of Abu Bakr, many prominent Sahabis became Muslims. He remained the closest advisor to the prophet, being present in almost all his military conflicts. In the absence of the prophet Muhammad (SAW), Abu Bakr led the prayers and expeditions. After the death of Prophet Muhammad (SAW) in 632, Abu Bakr succeeded to the leadership of the Muslim community as the caliph. His election was opposed by a large number of rebellious tribal leaders, who had apostatized from Islam. Abu Bakr's commanders kept the rebels in check and subsequently defeated them in the Ridda Wars, as a result of which he was able to consolidate and expand the rule of the nascent caliphate over the whole of Arabia. Abu Bakr ordered the initial incursions into the neighbouring Byzantine and Sasanian empires, initiating the Muslim conquests of the Levant and Persia respectively. Apart from politics, Abu Bakr is also credited with the compilation of the Quran, of which he had a personal caliphal codex. Abu Bakr nominated his principal adviser Umar (r. 634–644) as his successor before dying in August 634. Along with the prophet Muhammad, Abu Bakr is buried in the Green Dome at the Al-Masjid an-Nabawi in Medina, the second holiest site in Islam. Though the period of his caliphate was short, it included successful invasions of the two most powerful empires of the time, a remarkable achievement in its own right. He set in motion a historical trajectory that in a few decades would lead to one of the largest empires in history. His victory over the local rebel Arab forces is a significant part of Islamic history. Abu Bakr is widely honored among Muslims.
His Migration to Medina
In 622, the newly converted Muslims of Medina invited prophet Muhammad to emigrate to their city. He accepted the request and the migration began in batches. In July 622, Muhammad secretly fled from Mecca along with Abu Bakr and both sought refuge on Mount Thawr. During this time, Abu Bakr's son Abd Allah supplied resources and also informed them about the conspiracies of the polytheists.
Ali was the last to remain in Mecca, entrusted with responsibility for settling any loans the Muslims had taken out, and famously slept in the bed of prophet Muhammad when the Quraysh, led by Ikrima, attempted to murder the prophet as he slept. Meanwhile, Abu Bakr accompanied Muhammad to Medina. Due to the danger posed by the Quraysh, they did not take the road but moved in the opposite direction, taking refuge in a cave in Jabal Thawr, some five miles south of Mecca. Abdullah ibn Abi Bakr, the son of Abu Bakr, would listen to the plans and discussions of the Quraysh, and at night he would carry the news to the fugitives in the cave. Asma bint Abi Bakr, the daughter of Abu Bakr, brought them meals every day. Aamir, a servant of Abu Bakr, would bring a flock of goats to the mouth of the cave every night, where they were milked. The Quraysh sent search parties in all directions. One party came close to the entrance to the cave, but was unable to see them. In reference to this, Quran verse 9:40 was revealed. Aisha, Abu Saʽid al-Khudri and Abdullah ibn Abbas, in interpreting this verse, said that Abu Bakr was the companion who stayed with prophet Muhammad in the cave. Aisha was a wife of Muhammad. After arriving in Medina, prophet Muhammad instituted brotherhood between the Ansar (lit. 'Helpers'), the natives of Medina, and the Muhajirun (lit. 'Emigrants'), the natives of Mecca who migrated to Medina; consequently, Abu Bakr was paired with Kharija ibn Zayd, a chieftain of the Banu Khazraj. The 9th-century historian al-Baladhuri (d. 892) reports that Abu Bakr paid the price of the land on which the Prophet's Mosque was built. Early in 623, Abu Bakr's daughter Aisha, who was already married to prophet Muhammad, was sent on to the prophet's house after a simple marriage ceremony, further strengthening relations between Abu Bakr and prophet Muhammad.
Battle of Badr
In March 624, Abu Bakr guarded prophet Muhammad in the Battle of Badr. Following the Muslim victory, prophet Muhammad accepted Abu Bakr's suggestion to ransom the captives. In Sunni accounts, during one such attack, two discs from Abu Bakr's shield penetrated into Muhammad's cheeks. Abu Bakr went forward with the intention of extracting these discs, but Abu Ubaidah ibn al-Jarrah requested he leave the matter to him, losing his two incisors during the process. In these accounts Abu Bakr, along with other companions, subsequently led Muhammad to a place of safety.
Battle of Uhud
In 625, he participated in the Battle of Uhud, in which the majority of the Muslims were routed and he himself was wounded. Before the battle had begun, his son Abdul-Rahman, at that time still non-Muslim and fighting on the side of the Quraysh, came forward and threw down a challenge for a duel. Abu Bakr accepted the challenge but was stopped by Muhammad. Later, Abdul-Rahman approached his father and said to him, "You were exposed to me as a target, but I turned away from you and did not kill you." To this Abu Bakr replied, "However, if you had been exposed to me as a target I would not have turned away from you." In the second phase of the battle, Khalid ibn al-Walid's cavalry attacked the Muslims from behind, changing a Muslim victory to defeat. Many fled from the battlefield, including Abu Bakr; however, he was "the first to return".
Battle of the Trench
In 627 he participated in the Battle of the Trench and also in the Invasion of Banu Qurayza.
In the Battle of the Trench, Muhammad divided the ditch into a number of sectors and a contingent was posted to guard each sector. One of these contingents was under the command of Abu Bakr. The enemy made frequent assaults in an attempt to cross the ditch, all of which were repulsed. To commemorate this event a mosque, later known as 'Masjid-i-Siddiq', was constructed at the site where Abu Bakr had repulsed the charges of the enemy.
Battle of Khaybar
Abu Bakr took part in the Battle of Khaybar. Khaybar had eight fortresses, the strongest and most well-guarded of which was called Al-Qamus. Muhammad sent Abu Bakr with a group of warriors to attempt to take it, but they were unable to do so. Muhammad also sent Umar with a group of warriors, but Umar could not conquer Al-Qamus either. Some other Muslims also attempted to capture the fort, but they were unsuccessful as well. Finally, Muhammad sent Ali, who defeated the enemy leader, Marhab. In 629 Muhammad sent 'Amr ibn al-'As to Zaat-ul-Sallasal, followed by Abu Ubaidah ibn al-Jarrah in response to a call for reinforcements. Abu Bakr and Umar commanded an army under al-Jarrah, and they attacked and defeated the enemy.
Battles of Hunayn and Ta'if
In 630, the Muslim army was ambushed by archers from the local tribes as it passed through the valley of Hunayn, some eleven miles northeast of Mecca. Taken unaware, the advance guard of the Muslim army fled in panic. There was considerable confusion, and the camels, horses and men ran into one another in an attempt to seek cover. Muhammad, however, stood firm. Only nine companions remained around him, including Abu Bakr. Under Muhammad's instruction, his uncle Abbas shouted at the top of his voice, "O Muslims, come to the Prophet of Allah". The call was heard by the Muslim soldiers and they gathered beside Muhammad. When the Muslims had gathered in sufficient number, Muhammad ordered a charge against the enemy. In the hand-to-hand fight that followed, the tribes were routed and they fled to Autas. Abu Bakr was commissioned by Muhammad to lead the attack against Ta'if. The tribes shut themselves in the fort and refused to come out in the open. The Muslims employed catapults, but without tangible result. They then attempted to use a testudo formation, in which a group of soldiers shielded by a cover of cowhide advanced to set fire to the gate; however, the enemy threw red-hot scraps of iron on the testudo, rendering it ineffective. The siege dragged on for two weeks, and still there was no sign of weakness in the fort. Muhammad held a council of war. Abu Bakr advised that the siege be raised and that God would make arrangements for the fall of the fort. The advice was accepted, and in February 630 the siege of Ta'if was raised and the Muslim army returned to Mecca. A few days later Malik bin Auf, the commander, came to Mecca and became a Muslim. Abu Bakr also led one military expedition, the Expedition of Abu Bakr As-Siddiq, which took place in Najd in July 628 (the third month of 7 AH in the Islamic calendar). On the order of Muhammad, Abu Bakr led a large company in Najd; many were killed and taken prisoner. The Sunni hadith collection Sunan Abu Dawud mentions the event.
After the death of Prophet Muhammad in June 632, a gathering of the Ansar (lit. 'Helpers'), the natives of Medina, took place in the Saida clan's courtyard. They made an abortive attempt to elect the caliph amongst themselves, with the common choice being Sa'd ibn Ubada. The Ansar might have intentionally excluded the Muhajirun (lit. 'Emigrants'), the natives of Mecca who had migrated to Medina.
Upon learning of this meeting, Abu Bakr hastened to the gathering along with two other prominent Muhajirun, Abu Ubayda ibn al-Jarrah and Umar ibn al-Khattab. Abu Bakr addressed the assembled men, warning that an attempt to elect a leader outside of prophet Muhammad's own tribe, the Quraysh, would result in dissension, as only they could command the necessary respect among the community. He presented Abu Ubayda and Umar as two potential candidates for the caliphate. Habab ibn Mundhir suggested that the Ansar and the Muhajirun each choose a leader from among themselves, who would then rule jointly. A heated argument over this proposal broke out between the two groups. In a decisive move, Umar took Abu Bakr's hand and swore his allegiance to him, an example eventually followed by the gathered men. This may indicate that the choice of Abu Bakr was not unanimous, with emotions running high as a result of the disagreement. The orientalist William Muir observes of the situation: "The sovereignty of Islam demanded an undivided Caliphate; and Arabia would acknowledge no master but from amongst Koreish."
Abu Bakr's first address as caliph:
"I have been given the authority over you, and I am not the best of you. If I do well, help me; and if I do wrong, set me right. Sincere regard for truth is loyalty and disregard for truth is treachery. The weak amongst you shall be strong with me until I have secured his rights, if God wills; and the strong amongst you shall be weak with me until I have wrested from him the rights of others, if God wills. Obey me so long as I obey God and His Messenger. But if I disobey God and His Messenger, you owe me no obedience. Arise for your prayer, God have mercy upon you."
— The address was delivered at the Prophet's Mosque.
Abu Bakr was almost universally accepted as head of the Muslim community, under the title of caliph, as a result of Saqifah, though he did face contention because of the rushed nature of the event. Ali and his supporters initially refused to acknowledge Abu Bakr's authority, claiming that Muhammad had earlier designated Ali as the successor. Among Shia Muslims, it is also argued that Ali had previously been appointed as Muhammad's heir, with the election being seen as in contravention of the latter's wishes. Abu Bakr later sent Umar to ask allegiance from Fatimah, which resulted in an altercation that may have involved violence. However, after six months the group made peace with Abu Bakr and Ali pledged him his allegiance. Thereafter, Ali assisted Abu Bakr in government and religious matters.
Battles against Tulayha
A few days after Abu Bakr's election, in July 632, Tulayha ibn Khuwaylid, from the Banu Asad tribe, was preparing to launch an attack on Medina. Abu Bakr raised an army primarily from the Banu Hashim. He appointed Ali ibn Abi Talib, Talha ibn Ubayd Allah and Zubayr ibn al-Awwam each as commander of one-third of the newly organized force. Tulayha's forces were defeated and driven to Zhu Hussa. A few months later, however, Tulayha launched another attack on the Muslim forces. Abu Bakr appointed Khalid ibn al-Walid as the main commander. Khalid had an army of 6,000 men whereas Tulayha had an army of 30,000 men; nevertheless, Tulayha's forces were crushed by Khalid ibn al-Walid and his men. After the battle, Tulayha accepted Islam and asked forgiveness from Abu Bakr.
Though Abu Bakr forgave Tulayha, he refused to allow him to take part in wars on the Muslim side, since Tulayha had killed a Sahabi, Akasha ibn Mihsan, in the battle.
Battle of Yamama
Musaylimah, from the Banu Hanifa tribe, was one of the biggest enemies of Abu Bakr and is denounced in Islamic history as a "false prophet". Musaylimah, along with his wife Sajah of the Banu Taghlib and Banu Tamim, claimed prophethood and gathered an army of 40,000 people to attack Abu Bakr. Abu Bakr appointed Khalid ibn al-Walid as the primary commander and appointed Ikrimah and Shurahbil as commanders of the corps. In the battle, Musaylimah's forces were crushed by Khalid and his men; however, about 360 huffaz (memorizers of the Quran) were killed by Musaylimah's forces. Wahshi ibn Harb killed Musaylimah in the battle. After the battle, Musaylimah's wife Sajah became a devout Muslim.
Preservation of the Quran
Abu Bakr was instrumental in preserving the Quran in written form. After the Battle of Yamama in 632, numerous memorizers of the Quran had been killed. Umar, fearing that the Quran might become lost or corrupted, requested that Abu Bakr authorise the compilation and preservation of the scriptures in written format. The caliph was initially hesitant, being quoted as saying, "How can we do that which the Messenger of Allah, may Allah bless and keep him, did not himself do?" He eventually relented, however, and appointed Zayd ibn Thabit, who had previously served as one of the scribes of Muhammad, to the task of gathering the scattered verses. The fragments were recovered from every quarter, including from the ribs of palm branches, scraps of leather, stone tablets and "from the hearts of men". The collected work was transcribed onto sheets and compiled in the sequence that had been instructed by Muhammad, as opposed to the order in which the verses had been revealed. The complete work was then verified through comparison with Quran memorisers. The finished codex, termed the Mus'haf, was presented to Abu Bakr, who prior to his death bequeathed it to his successor Umar. Upon Umar's own death, the Mus'haf was left to his daughter Hafsa, who had been one of the wives of Muhammad. It was this volume, borrowed from Hafsa, which formed the basis of Uthman's prototype, which became the definitive text of the Quran; all later editions are derived from this original.
Abu Bakr had four wives. His first wife, Qutaylah bint Abd al-Uzza, bore him a daughter, Asma, and a son, Abdullah. Though Asma and Abdullah became Muslims, their mother Qutaylah did not, and Abu Bakr divorced her. Abu Bakr's second wife was Zaynab bint Amir, who bore him Abdul-Rahman and Aisha. Zaynab and her daughter Aisha converted to Islam, whereas Abdul-Rahman did not convert until the Treaty of Hudaybiyyah in 628 CE. Abu Bakr's third wife was Asma bint Umais, who bore Muhammad ibn Abi Bakr. Before her marriage to Abu Bakr, Asma was a wife of Jafar ibn Abi Talib, and after Abu Bakr's death she married Ali ibn Abi Talib. Abu Bakr's fourth wife was Habibah bint Kharijah; she bore Umm Kulthum, who was born after Abu Bakr's death. Abu Bakr's descendants are called Siddiquis, and the Sufi Naqshbandi spiritual order is believed to originate from Abu Bakr. Abu Bakr died of natural causes in 634, having nominated Umar, his most able supporter, as his successor. During the reign of the Umayyad caliph al-Walid I, Al-Masjid an-Nabawi was expanded to include the site of Abu Bakr's tomb.
The Green Dome above the tomb was built by the Mamluk sultan Al Mansur Qalawun in the 13th century, although the green color was added in the 16th century, under the reign of the Ottoman sultan Suleiman the Magnificent. Among the tombs adjacent to that of Abu Bakr are those of Muhammad and Umar, and an empty one reserved for Isa.
Distribution system for downflow reactors
A distributor system for use in multiple bed, downflow reactors which provides improved distribution across the reactor and improved vapor/liquid contact and distribution. The system comprises a collection tray arranged below the first catalyst bed in the reactor, a first, rough distributor tray which is arranged below the collection tray and which is fed from the collection tray by means of spillways in the tray, and a mixing chamber beneath the spillways. The first distributor tray provides for separate vapor and liquid flow by means of apertures in the tray for downward flow of liquid and chimneys for downward flow of vapor. After the first distributor tray, a second, final distributor tray is provided with downcomers for flow of liquid and vapor onto the lower catalyst bed. Each downcomer comprises an open-topped tube with a side aperture for entry of liquid into the tube, vapor entering through the open top of the tube.
This invention relates to a distribution system for downflow reactors which include a number of superimposed reaction beds. Reactors of this type are employed in the chemical and petroleum refining industries for effecting various reactions such as catalytic dewaxing, hydrotreating, hydrofinishing and hydrocracking. The present distributor system is particularly useful for effecting mixed-phase reactions between a liquid and a vapor.
BACKGROUND OF THE INVENTION
Reactors used in the chemical, petroleum refining and other industries for passing liquids or mixed-phase liquid/vapor mixtures over packed beds of particulate solids are employed for a variety of different processes. Typical of such processes in the petroleum refining industry are catalytic dewaxing, hydrotreating, hydrodesulfurisation, hydrofinishing and hydrocracking. In these processes a liquid phase is typically mixed with a gas or vapor phase and the mixture passed over a particulate catalyst maintained in a packed bed in a downflow reactor. Because chemical reactions take place which themselves may produce additional components in the vapor phase, for example hydrogen sulfide and ammonia during hydrotreating processes, and because such reactions may consume some of the vapor phase reactants, it is frequently necessary to add additional vaporous reactants, e.g. hydrogen, at various points along the path of the reactants. Other reactions may use heat exchange media, e.g. hydrogen quench, which are added or withdrawn at different points in the unit. To do this, the contact solid is conventionally arrayed in superimposed beds with a distributor plate above each bed in the sequence to ensure good distribution of the reactant phases at the top of the bed so that flow is uniform across the beds, at least at the top of the bed. By ensuring good reactant distribution, the bed is used most effectively and efficiently and the desired reactions will take place in the most predictable manner with a reduced likelihood of undesirable exotherms or other problem conditions. Many different types of distribution plate are known. Some are simple and comprise little more than a pierced or slotted plate.
Others have various forms of weirs or other devices for promoting the desired uniformity of reactant flow and achieving good liquid/vapor contact. For example, reference is made to U.S. Pat. No. 4,126,539, which shows a distributor plate for use in a catalytic hydrodesulfurisation (CHD) reactor. One type of system involves an inlet deflector cone cooperating with a splash plate and liquid distributor trough to pass liquid into the reactor to two distributor trays which facilitate the uniform spreading of liquid over the upper face of the catalyst bed. The distributor trays contain a series of spaced risers which have dual functions: they permit vapor to pass down through the tray, and they also serve as liquid downflow conduits, the liquid passing through weir slots in the sides of the risers. The nature of liquid flow through weirs, however, makes this type of design very sensitive to tray unevenness introduced during fabrication or installation. Another example of a distributor is the mixed-phase flow distributor for packed beds of U.S. Pat. No. 3,524,731, which was intended primarily to accommodate wide variations in liquid feed rate. The liquid flow is normally through liquid downpipes, but at very high liquid rates some liquid overflows into the vapor chimneys through triangular weirs. However, during normal operation the chimneys do not carry liquid and hence do not contribute to the number of liquid streams entering the bed. Also, during periods when they carry liquid there would be a great variation in the liquid flow through the chimneys compared with that through the tubes. U.S. Pat. No. 3,353,924 shows a somewhat different approach: flow into the liquid tubes is still through a pair of notched weirs, and the disadvantages mentioned above are applicable with respect to this system as well. There is no liquid flow in the vapor chimneys, and the number of uniformly spaced liquid streams which can be placed on the tray is therefore limited. A system of this kind is shown, for example, in U.S. Pat. No. 4,126,539. Liquid flow is through the vapor downcomers only, by a combination of hole and weir flow. The tray area between the downcomers is not used for liquid distribution, and the use of weir flow makes the distribution pattern vulnerable to variations in tray level. Other approaches appear in U.S. Pat. Nos. 4,126,540 and 4,140,625, where liquid flow is through holes in downcomers only. There is no attempt to make use of the tray area between downcomers, and the size of the downcomers, coupled with the need to maintain tray mechanical integrity, prevents maximization of the number of liquid streams entering the catalyst bed. Liquid distribution is also of concern in other environments. For example, U.S. Pat. No. 2,924,441 relates to the design of a liquid distributor for gas/liquid phases, such as in gas absorption or distillation in a packed tower. The distributor described makes no attempt to address the special need for good initial liquid distribution found in concurrent downflow catalytic reactors. Another form of distributor is shown in U.S. Pat. No. 3,541,000. That system employs a plate fitted with liquid downcomers which maintain the desired level of liquid above the plate before overflow into each downcomer, which also has to allow the vapor to pass into the bed beneath. This system has two disadvantages.
First, the configuration of the top of the downcomers permits considerable variations in the liquid flow rate across the plate unless it is fabricated and installed in a completely horizontal position. The liquid flow rate into the downcomer increases exponentially with the liquid height above the lower edge of the weir and so, if the plate is not horizontal, the greater height of the liquid at one edge of the plate will give a greatly increased liquid flow on the low side of the plate at the expense of the high side. The use of the downcomers for both liquid and vapor flow exacerbates this problem, since vapor will not flow down through the liquid to a submerged aperture. Thus, if the weirs on the low side of the plate become submerged, not only will the liquid flow increase greatly but vapor flow may be cut off completely. The desired reactions may therefore be almost completely precluded on at least one side of the reactor bed.
SUMMARY OF THE INVENTION
We have now devised a distributor plate for downflow reactors which provides improved uniformity of distribution across the reactor and improved mixing of liquid and vapor phases. It enables the liquid flow to proceed independently of vapor flow and is relatively insensitive to errors in level. It may therefore be fabricated and installed with greater ease than many other types of distributor. According to the present invention, the distributor system for use between the beds of a multiple bed downflow reactor comprises (i) a collection tray for receiving vapor and liquid; (ii) a mixing chamber below the collection tray; (iii) spillways providing a flow path for vapor and liquid from above the collection tray into the mixing chamber; (iv) a first distributor tray at the bottom of the mixing chamber having apertures in it for downward flow of liquid and chimneys for downward flow of vapor; and (v) a second distributor tray having downcomers for downward flow of liquid and vapor, each downcomer comprising an upstanding, open-topped tube having apertures in its sides for entry of liquid into the tube. If the distributor system is to be used in a reactor where a vapor is to be injected between the beds, an injection point may be provided either above the collection tray or below it, but in order to obtain the best vapor/liquid contact it is preferred to have it above the collection tray. The chimneys in the first distribution tray enable the liquid and vapor flows to be separated at this point so that both proceed at predictable rates. In addition, this tray provides an initial, rough distribution of liquid to the second and final tray, which provides for a high degree of flow uniformity across the bed beneath the distributor. The vapor chimneys in the first distributor tray are preferably in the form of open-topped, imperforate tubes which extend upwardly from the first tray to a height which is above the liquid level which will prevail on the first tray. At the top these tubes may be slotted to provide weirs for liquid flow in case the liquid rises to levels higher than normal. The tubes may also be provided with apertured plates across their bottoms, with vapor outlets around the bottom of the tubes, so as to break up any liquid falling down the chimneys and distribute it across the second distributor tray. The second distributor tray has a large number of combined vapor/liquid downcomers evenly arrayed across the tray to ensure even distribution across the catalyst bed.
These downcomers are in the form of upstanding tubes which extend upwardly from the tray so as to ensure that a pool of liquid is maintained on the tray. Each downcomer tube has an aperture in its side to permit liquid flow into the downcomer. The aperture may be of any convenient configuration (preferably circular) but is preferably dimensioned and positioned so that in normal operation it is below the top of the liquid pool on the tray. This will ensure even, predictable flow into the downcomers. Because the flow into these side apertures is proportional to the square root of the depth of liquid above the aperture, the liquid flow rate into the downcomers is relatively insensitive to variations in liquid height once the apertures are all submerged. For this reason the present distributor system is easier to fabricate and install, since it does not need to be absolutely level. The downcomers preferably have baffles over their open top ends to prevent liquid falling from the first tray directly into the downcomers and so producing unpredictable variations in flow rate. They are also preferably slotted at their upper ends to provide weirs for liquid flow if the liquid level on the tray reaches levels above normal.
The Drawings
In the accompanying drawings:
FIG. 1 is a vertical section of a portion of a multiple bed reactor showing the present distribution system.
FIG. 2 is a plan view taken at line 2--2' of FIG. 1.
DETAILED DESCRIPTION
FIG. 1 shows, in simplified form, a section through the portion of a multiple bed, downflow reactor in the region between the beds. The general configuration of the downflow reactor will be conventional, as will details such as the supports for the grids and distributor plates, which are not shown for purposes of clarity. The walls 10 of the reactor and the catalyst support grid 11 support an upper bed of catalyst or other particulate contact solid over which the liquid is to flow together with any vapor included as a reactant or as a product of the reaction. For clarity, the catalyst is not shown. The support grid may be of conventional type and provides support for the catalyst either directly or by means of support balls which permit the liquid and vapor to flow downwardly out of the upper bed of catalyst and through the grid to the distributor system beneath. A collection tray 12 is disposed beneath the catalyst support grid 11 to collect the liquid leaving the upper catalyst bed. The vapor injection point is provided here by means of a spider 13 which is connected to vapor injection line 14 to provide a uniform initial distribution of the injected vapor. For example, in a hydroprocessing reactor such as a catalytic hydrodesulfurization (CHD) unit, hydrogen may be injected as quench at this point. Other vapor injection devices may also be used and, if desired, vapor takeoff may also be provided at this level. A plurality of spillways 15 are provided in collector tray 12 to permit a pool of liquid to accumulate on tray 12 before passing through the spillways into mixing chamber 16 beneath. The spillways comprise upstanding downcomers which provide a passage 17 for the downflowing liquid as well as for the vapor. The spillways have outlets 18 beneath collector tray 12 which face sideways and tangentially into an annular mixing chamber 16. Mixing chamber 16 comprises a cylindrical, vertical wall portion 19 which is fixed to collection tray 12 and a lower, annular tray 20 with an upstanding rim 21 for providing a pool of liquid in the mixing chamber.
The side-facing outlets 18 of spillways 15 impart a rotary or swirling motion to liquid in mixing chamber 16 which promotes good intermixing and temperature equilibrium of the liquid at this point. The liquid spills over the edge or rim 21 and falls downwards onto the deflector 22 which is disposed directly underneath the central aperture in the annular mixing chamber 16. Deflector 22 is fixed to the first, rough distributor tray 30 which provides an initial, rough distribution of the liquid and the vapor across the reactor.

The first distributor tray 30 is provided with a large number of liquid downflow apertures 31 in the region about central deflector 22 (for clarity, only some are shown in FIG. 2). Generally, a pool of liquid will accumulate on tray 30 and cover these apertures so that flow of vapor through them is precluded. To provide for vapor flow into the lower portion of the reactor, a plurality of vapor chimneys 32 is provided, arranged in a ring around the tray, suitably at a point near the circumference of a circle which divides the reactor flow area equally in two. The number of vapor chimneys will be selected according to the desired flow rates and other conditions, as is conventional. The vapor chimneys each comprise an open-topped, imperforate upstanding tube 33 which extends upwardly from the first distributor tray 30. Around the top of each chimney tube a number of slots are provided to act as weirs for liquid flow if the level of liquid on tray 30 should build up to the point where it is necessary to provide for additional flow through the reactor to prevent flooding. The slots may be of any desired configuration, for example, straight-sided, straight-bottomed slots as shown; alternatively they may be arcuate, or apertures may be formed just below the top of the chimneys in order to provide for controlled liquid overflow down the chimneys. In order to ensure that any liquid flowing down the chimneys is evenly distributed, the chimneys preferably have distributor plates at their lower ends below tray 30, formed by plates 35 with liquid apertures 36 in them. To permit vapor flow out of the chimneys, vapor outlets 37 are provided around the lower end of the chimneys; if large amounts of liquid flow down the chimneys, these outlets will also permit liquid flow through them.

The second and final distributor tray 40 comprises a flat plate 41 with a large number of vapor/liquid downcomers to provide many points of distribution of vapor and liquid over the bed of catalyst below (not shown). Each downcomer comprises an upstanding tube 42 which extends upwardly from plate 41. Each tube has an aperture 43 (or apertures) in its side which is positioned below the top of the pool of liquid which forms on plate 41 during normal operation. The number and size of all the apertures in the downcomer are selected according to the desired flow rate and, generally, it is preferred for the apertures to be totally submerged so that the greatest uniformity of liquid flow is achieved, regardless of variations in the level of the second distributor plate. As pointed out above, the rate of flow of liquid into each aperture varies in proportion to the square root of the height of liquid above the apertures, so that the flow rate into the downcomers is relatively insensitive to variations in the level of the distributor plate 40.
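As a minimal sketch of how such apertures might be sized for a given liquid load, the standard submerged-orifice relation Q = Cd * A * sqrt(2 * g * dh) can be used. The discharge coefficient, hole diameter and submergence below are illustrative assumptions, not values taken from the patent.

```python
# A minimal sizing sketch (not from the patent): liquid flow through one
# circular side aperture, using the standard submerged-orifice relation
# Q = Cd * A * sqrt(2 * g * dh).  All numbers are illustrative assumptions.

import math

G = 9.81   # m/s^2, gravitational acceleration
CD = 0.6   # assumed discharge coefficient for a sharp-edged submerged orifice

def aperture_flow_m3_per_h(diameter_mm: float, submergence_mm: float) -> float:
    """Liquid flow through one circular side aperture submerged by `submergence_mm`."""
    area_m2 = math.pi * (diameter_mm / 1000.0) ** 2 / 4.0
    velocity_m_s = CD * math.sqrt(2.0 * G * submergence_mm / 1000.0)
    return area_m2 * velocity_m_s * 3600.0

# Illustrative numbers only: a 25 mm hole under 50 mm of liquid head
print(f"{aperture_flow_m3_per_h(25.0, 50.0):.2f} m3/h per aperture")
```

A designer would multiply such a per-aperture figure by the number of downcomers to match the expected liquid load on the tray. The insensitivity noted above holds only while the apertures remain fully submerged.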
However, if the liquid level on this tray falls to the point where the apertures are partly uncovered, variations in the horizontal level of the tray will produce relatively greater variations in flow rate across the reactor. For this reason, operation with the apertures completely submerged is preferable. The downcomers are open at the top in order to permit vapor to enter and pass down into the lower catalyst bed but, in order to prevent liquid from the first, rough distributor plate entering the downcomers directly and so producing an unpredictable variation from the design flow rate, baffles 44 are placed over the open tops of the downcomers. In addition, the downcomers have liquid weirs at the top in order to provide for additional liquid flow if the liquid level on the second tray should build up beyond its normal height. As with the vapor chimneys, the weirs may be in any convenient form but are suitably straightforward slotted weirs provided by slots 45. The bottoms of the downcomers are open to permit flow of vapor and liquid into the lower catalyst bed.

The distribution system provides improved injection of quench gas or other vapor, improved mixing of vapor, liquid and injected gas, as well as improved distribution across the reactor. This system may also be used with liquid quench, with an appropriate injection means in place of the spider. In addition, the system is relatively compact in form and takes up relatively little space in the reactor, as compared to other distribution systems which may provide a similar degree of distribution uniformity. The separate vapor and liquid distribution which occurs on the first distributor plate avoids potential problems with two-phase distribution, and only at the end of the distribution process is liquid injected into each vapor stream through the vapor/liquid downcomers on the final distribution tray. Furthermore, the system, as described above, is relatively insensitive to tolerance variations introduced during fabrication and provides superior uniformity of distribution and vapor/liquid contact during operation under varying conditions. This system may also be used without quench injection to provide improved liquid mixing and liquid and vapor redistribution in a long catalyst bed.

1. A distributor system for distributing vapor and liquid across a downflow reactor, which comprises:
- (i) a collection tray for receiving vapor and liquid;
- (ii) a mixing chamber below the collection tray having a first spillway for the downward passage of vapor and liquid;
- (iii) collection tray spillways providing a flow path for vapor and liquid from above the collection tray into the mixing chamber;
- (iv) a first distributor tray below the mixing chamber, said first distributor tray having apertures in it for downward flow of liquid and vapor chimneys for downward flow of vapor, each vapor chimney comprising an open-topped tube extending above the first distributor tray and including an apertured plate at its lower end below the first distributor tray with vapor outlets arranged around the lower end of the chimney; and
- (v) a second distributor tray having tubular downcomers for downward flow of liquid and vapor, each downcomer comprising an upstanding, open-topped tube having apertures in its side for entry of liquid into the tube.

2. A system according to claim 1 which includes means for injecting a gas above the collection tray.
3. A system according to claim 1 in which the collection tray spillways comprise upstanding flow conduits extending above the collection tray and defining inlets for vapor and liquid above the collection tray to pass through the tray to the mixing chamber below the collection tray.

4. A system according to claim 3 in which the mixing chamber comprises an annular mixing chamber and said first spillway is centrally located therein.

5. A system according to claim 4 in which the outlets of the collection tray spillways are arranged to discharge tangentially with respect to the mixing chamber to impart a swirling motion to liquid in the mixing chamber.

6. A system according to claim 1 in which the downcomers of the second distributor tray comprise open-topped tubes having baffles over the open tops to deflect and prevent falling liquid from entering the tubes.

7. A system according to claim 1 in which the apertures in the sides of the open-topped tubes of the downcomers are circular.

8. A system according to claim 7 in which the tops of the circular apertures are below the operating height of the liquid on the second distributor tray.

References Cited:
- 3,524,731 - August 1970 - Effron et al.
- 3,541,000 - November 1970 - Hanson et al.
- 4,126,539 - November 21, 1978 - Derr, Jr. et al.
- 4,126,540 - November 21, 1978 - Grossboll et al.
- 4,140,625 - February 20, 1979 - Jensen
- 4,550,000 - October 29, 1985 - Bentham
- 4,579,647 - April 1, 1986 - Smith

Filed: Jul 2, 1987
Date of Patent: Jun 6, 1989
Assignee: Mobil Oil Corporation (New York, NY)
Inventors: Fouad A. Aly (Newtown, PA), Richard G. Graven (Pennington, NJ), David W. Lewis (Newtown, PA)
Primary Examiner: Barry S. Richman
Assistant Examiner: Amalia L. Santiago
Attorneys: Alexander J. McKillop, Charles J. Speciale, Malcolm D. Keen
Application Number: 7/69,545
International Classification: B01J 10/00
Interesting facts on green topics.

China closes down coal-fired power stations and India commits to solar
- In the past three years from 2006 to 2009 the Chinese have closed down more than 54 gigawatts of inefficient coal-fired generating capacity - equivalent to closing all of Australia's coal-fired power stations twice over.
- India has just decided to increase its solar power capacity from near zero to 20 gigawatts by 2020 - that's more solar power than is currently produced worldwide.
Source: Man of coal, Guy Pearse, 13 December 2009

Giant Nomura jellyfish in Japan may be attributed to climate change
Giant Nomura jellyfish appeared in Japanese waters in 2009. They have:
- sunk a fishing trawler when many were caught in its nets
- caused an estimated loss of $100m to the Japanese fishing industry
- grown to sizes larger than a sumo wrestler
Climate change may be implicated in their presence, as the warmer waters where they spawn near the Chinese coast have been lower in oxygen, allowing the tiny juvenile jellyfish to grow free from fish predators.
Source: Japanese fishing trawler sunk by giant jellyfish, 02 Nov 2009

Extreme weather attributed to climate change was experienced in Melbourne, Australia in November 2009
- It was the city's hottest November on record, with an average maximum temperature of 27.6 degrees to Saturday 28/11 besting the 1862 record of 25.5 degrees for the whole of November.
- 10 consecutive days over 30 degrees at the start of the month set the pattern for the monthly record to fall.
- November rainfall was 90.2 millimetres, well above the monthly average of 59.7 millimetres and the wettest November since 2004.
Senior weather bureau forecaster Terry Ryan said it was unusual to have the combination of the hottest November and above-average rainfall. "This is another statistic that says the Earth appears to be getting warmer," he said.
Source: Record month of heat and rain, David Rood, The Age, November 30, 2009

Climate and clean energy policies save money
- Comprehensive climate and clean energy policies will save United States households $900 per year on average.
- U.S. passenger vehicles (cars and trucks) consume about 380 million gallons of gasoline per day and contribute 20% of America's global warming pollution.
- The fuel for these cars is almost entirely refined from petroleum, nearly 60% of which is imported.
Source: Repower America

Plastic recycling
Some statistics on how much plastic waste countries succeed in recycling:
- United States: 5%
- India: 60%
- Denmark: 90%
Source: Addicted to Plastic, Documentary
See also: green plastic

Australian shopping habits
In Australia during the 2006-07 financial year:
- Retail sales reached more than $214 billion, up $10 billion on the figure from just three years earlier.
- 35m packets of Tim Tam biscuits were sold (nearly 400m biscuits)
- 22m jars of Vegemite were sold
- box-office takings for the Happy Feet movie were $31.8m
- 93.1m letters were mailed
- 5m people watch pay TV between 6pm and midnight
- 1.2m people eat at McDonalds every day.
- The top selling vehicle was the Toyota Corolla (4 cyl) with the Holden Commodore (6 cyl) second.
- A cigarette brand was the top selling grocery brand
- The top selling food was bread, the second was chocolate and the third was ice cream
Source: Shopping habits give food for thought, The Age, 22 May 2008

Logging Australian old growth forests
- As at April 2009, around 8% of pre-European settlement forest remains in original condition in Australia.
- About 5.5% of this is protected.
- The remaining 2.5% (including Brown Mountain old growth forest) is either being logged or is scheduled for logging, and should be protected.
- Queensland and New Zealand have both protected all their remaining old growth forests from logging. Western Australia has protected most of theirs.
- Victoria, South East New South Wales and Tasmania are where the bulk of contentious old growth logging continues.

World CO2 emissions from coal-fired power stations
The amount of CO2 that would have to be buried each and every day from the world's coal-fired power stations:
- Global CO2 emissions from the consumption of coal (2004) = ~10.5 Gt
- Volume of one tonne of CO2 at 25 C and one atmosphere pressure = 556 m3
- 10,500,000,000 tonnes x 556 m3 = 5,838,000,000,000 m3 = 5,838 km3 per year, or 15.99 km3 per day
For mooted carbon capture and sequestration projects to remove these CO2 emissions, about 16 km3 per day of "storable" CO2 must be stored somewhere underground (a short verification sketch of this arithmetic appears a little further below).

South East Australian woodchipping and forest destruction
Between 2,500 and 3,000 trees from South East New South Wales and East Gippsland in Australia are cut down every working day to supply the Eden chipmill, including trees from the magnificent Brown Mountain old growth forest.
Source: http://web.archive.org/web/20110219122948/http://www.chipstop.forests.org.au/

Many mobile phones end up as e-waste in landfills
- Thousands of tons of electronic waste hit landfills each year as users upgrade to new mobile phones and discard their old ones.
- There are already 11,000 tons of unused cellular phones in the United Kingdom that have not yet been disposed of.
- Most of these phones will eventually be discarded, with highly toxic metals and other chemicals in them leaching into the earth.
- An estimated 1 billion mobile phone handsets are sold each year; 1 million per day come from Nokia alone.
- 100 million people upgrade to new phones each year in Europe alone, even though the average handset has a life of 5 years.
- Many mobile phone service providers lure new customers by promising a free new handset for those who sign up.
- While many companies offer to recycle used mobile phones for consumers, the vast majority of such phones are still thrown away.
- The recycled handset market could be worth $3 billion by 2012, with recycled phone shipments numbering above 100 million.

Renewable energy installation in Israel, Spain and Northern Europe
- In Israel 90% of homes have solar water heaters installed, and they must now be installed on new homes by law
- In 2005, Spain became the second country (after Israel) to require solar water heaters.
- Spain was also the first country to require the installation of solar cells for electricity generation in new buildings.
- In many climates, a solar heating system can provide a very high percentage (50% to 75%) of domestic hot water energy.
- In many northern European countries, solar power is used not only to heat water, but also to provide 15 to 25% of home heating energy.
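As a quick check of the CO2 storage-volume arithmetic quoted above, here is a minimal sketch. It assumes CO2 behaves as an ideal gas at 25 C and 1 atm and reuses the article's own figure of 10.5 Gt of CO2 per year; the variable names are purely illustrative.

```python
# Back-of-the-envelope check of the gas-phase CO2 storage volume quoted above.
# Assumptions: ideal gas at 25 C and 1 atm, ~10.5 Gt of CO2 per year from coal.

MOLAR_MASS_CO2_G_MOL = 44.01
MOLAR_VOLUME_25C_L_MOL = 24.465      # ideal-gas molar volume at 25 C and 1 atm

tonnes_per_year = 10.5e9             # ~10.5 Gt CO2 from coal (2004)

m3_per_tonne = (1e6 / MOLAR_MASS_CO2_G_MOL) * MOLAR_VOLUME_25C_L_MOL / 1000.0
annual_km3 = tonnes_per_year * m3_per_tonne / 1e9
daily_km3 = annual_km3 / 365.0

print(f"~{m3_per_tonne:.0f} m3 per tonne, ~{annual_km3:,.0f} km3 per year, ~{daily_km3:.0f} km3 per day")
```

This reproduces the ~556 m3 per tonne and ~16 km3 per day figures. In practice, captured CO2 would be compressed to a dense supercritical state before injection, so the volume actually stored underground would be far smaller; the gas-phase figure is only meant to convey the scale of the task.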
Road transport in Melbourne circa 2007
- Melbourne's traffic-choked road network is slowing down, leaving peak-hour motorists crawling through the city's inner core at just 23 kilometres an hour
- Drivers are travelling 3 million more kilometres a day on freeways than in the previous year, a VicRoads' Traffic System Performance report shows.
- Inner-city arterial roads with trams have sped up, calling into question the Government's new policy on extending clearway times.
- 80% of cars on Melbourne's roads have only one occupant: the driver.
- Drivers on Melbourne's roads cover 88 million kilometres every day, the same as last year.
- The average all-day speed on Melbourne's roads has fallen to 40.8 km/h, a kilometre slower than the previous year.
- Average freeway speeds are just 59 km/h in the morning peak, three kilometres faster than the previous year.
- Fewer inner-city residents are driving their cars
- Traffic on the Western Ring Road had surged by between 11% and 20% in the five years to 2007.
Source: Peak-hour motorists forced into 23 km/h crawl, The Age

CO2 levels in atmosphere and sea levels
Linking levels of CO2 (ppm) and global average temperatures (referenced to pre-industrial) with sea level rise:
- 180 ppm gives a temperature of -5 C and a sea level of -120 m
- 280 ppm gives a temperature of 0 C and a sea level of 0 m
- 280-300 ppm gives a temperature of 1.7 to 2.7 C and a sea level of 4-6 m
- 380 (360-400) ppm gives a temperature of 2.7 to 3.7 C and a sea level of 15 to 35 m
- 425 (350-500) ppm gives a temperature of 5.7 C and a sea level of 75 m
World CO2 levels are now at a record high of 387 parts per million (ppm), up almost 40% since the industrial revolution and the highest for at least the last 650,000 years. This could translate to eventual sea level rises in the range of 15 to 35 m, which would be catastrophic. So we need to reduce CO2 emissions now.

Australia meeting emission reduction targets
- For Australia to meet a greenhouse gas emission reduction target of 50% by 2020, one coal fired power station the size of Hazelwood needs to be decommissioned every year from 2009 to 2020.
- The Victorian Labor government has announced plans for the construction of a new brown coal fired power station in the Latrobe valley and describes this as a "clean coal" initiative.

Extinctions
- Australia has one of the worst rates of animal extinction in the world
- In 2007, more than 1,500 kinds of animals and plants were close to disappearing forever

Where are we at in 2007?
- In the past 20 years Australian homes have increased in size by 40%, while our families are getting smaller.
- Australians spend 90% of their time inside.
- 20 years ago there was no Internet. Today, if MySpace were a country it would be the 11th largest in the world.
- Every year 125 million computers are thrown out across the world; most of these go to landfill.
- 10 years ago, half the people in the world had never made a phone call. Today, half the people in the world own a mobile phone.
- Demand for rooftop solar panels is increasing by 16% per year in Australia and by 40% globally.
- 0.25 hectares of land is required to feed each person. By 2025 there will be less than one third of that area each.
- The world's population is 6.5 billion, and is increasing by 77 million people per year.
Car fuel consumption standards
- Japanese cars are required by law to get more than 45 miles per gallon, whereas for cars in the U.S. the standard is under 25 mpg.
- Australian cars have a voluntary target, set in 2003, of 6.8 L/100km for petrol passenger cars by 2010. This represents an 18% improvement in the fuel efficiency of new vehicles between 2002 and 2010.

Cycling saves carbon emissions
A cyclist who commutes 18 km each way every day on a relatively flat commute will save each year:
- 2.6 tonnes CO2 and $7000 compared to a large car like a Land Cruiser
- 0.9 tonnes CO2 and $3000 compared to a small car like a Corolla

Trains are the best form of urban transport
Rail passenger transport has the lowest carbon emissions - full trains are clearly much more energy efficient than cars. Relative to a trip in a car, carbon emissions are:
- Train trips - one eighth (8 times better)
- Light rail - one quarter (4 times better)
- Buses - one half (2 times better)

Australian households create 9 tonnes of CO2 per year from electricity usage
- The average Australian home uses about 20 kWh of electricity per day, which translates to about 9 tonnes of carbon dioxide emissions per year.
- A solar efficient house with a solar array can greatly reduce or eliminate these emissions.
- The British Government estimates eight percent of all domestic electricity is consumed by devices in standby. Source: The Energy Challenge (PDF)

Rail vs road - some points to consider
- A modern small automobile with two passengers generates almost 25 times the air pollution, per passenger mile, as a four car commuter train at 35% capacity.
- Two sets of commuter rail tracks will handle the passenger traffic of at least six lanes of highway.
- A new light-rail line costs about a third of a new highway or loop road, and recent developments in track-laying technology can shave 60% to 70% off that cost.
- Trains are faster, quieter, and smoother than buses. In addition, they avoid traffic jams and most accident scenes.
- Modern commuter and light-rail trains are built to run forward or backward, eliminating the need for huge turnaround loops.
- Rail deaths and injuries are much lower compared to those in automobiles.
- Rail cars and locomotives last much longer than cars and trucks (in some cases up to 100 years) with appropriate maintenance.
- Railroad tracks are cheaper and easier to maintain than roads and highways.
- There is no rubber tire disposal problem with trains (a much bigger issue than many people realize).
Source: 13 Reasons We Need Passenger Rail, Rails - New Mexico's Passenger Rail Action Group

Melbourne house price rises
- The median house price in Melbourne soared 13.1% ($50,000) to $431,000 in 2006.
- This is the largest dollar increase over a twelve month period.
- $50,000 would pay for a solar panel system that would supply more than the average house's electricity usage.
Source: Median house price soars in Melbourne, Sydney Morning Herald

Water consumption in Australia
- Melburnians' daily average water consumption in 2007 was 277 litres per person, down from 303 litres per person in 2006. This reveals a massive change in habits from the 1990s, when the average for personal use was 422 litres a day.
- However, while the figure of 277 litres per day is celebrated by the Victorian State Government, it is still almost double the amount being used by residents of Brisbane and south-east Queensland, who have been limited to 140 litres per person a day since May 2007.
Source: The Age

Carbon emission offsets
- By the end of 2007, over half a million Australians had purchased carbon credits to help neutralise their greenhouse gas emissions.
See Green travel for more information.

Plastic shopping bags
In 2008 in Australia, plastic shopping bags are given out at no direct cost to shoppers. Here are the facts:
- The energy consumed in the life cycle of a plastic bag is estimated to be equivalent to 13.8 millilitres of crude oil, or about a teaspoonful.
- 3.9 to 4.5 billion plastic bags are thought to have been used in Australia in 2005.
- 34% fewer bags were used in 2005 than in 2002.
- Most lightweight plastic bags in Australia are made overseas.
Source: The Age
- Americans use 100 billion plastic shopping bags a year, according to Washington-based think tank Worldwatch Institute, or more than 330 a year for every person in the country. Most of them are discarded.
- They can take from 400 to 1000 years to break down. Their constituent chemicals remain in the environment long after that.
- They are made from crude oil, natural gas and other petrochemical derivatives; an estimated 12 million barrels of oil are used to make the bags the US consumes each year.
- Countries from Taiwan to Uganda, and cities including Dhaka in Bangladesh, have either banned plastic bags outright or imposed a consumer levy on them.
- Britons use 13 billion single-use plastic bags a year, or more than 200 per person. Prime Minister Gordon Brown has urged the country's biggest supermarket chains to cut use faster than planned and said Britain could eliminate them altogether.
Source: The Age

Shipping
- Shipping is now a booming global industry, with most manufacturing being concentrated thousands of miles from consumer centres in Europe and the United States.
- Nearly 100,000 cargo ships transport 95% of world trade by sea
- The world shipping industry is expanding rapidly as countries such as India and China become major players in the global economy.
- The cost of shipping or "bunker" fuel has nearly doubled in the past two years, forcing the industry to consider alternatives.
- Concerns have grown about climate change and air pollution from shipping.
- It is estimated that commercial shipping uses nearly 2 billion barrels of oil a year and emits as much as 800 million tonnes of carbon dioxide, or 4% of the world's man-made emissions.
- Shipping also releases more sulphur dioxide than all the world's cars and lorries.
- The industry has so far failed to harness renewable energy, either because conventional fuel has been cheap, or because modern cargoes, mostly carried in containers, need to remain stable on deck or in holds.
- Sails or spinnakers have been proposed for merchant ships, but these can take up storage space and cause vessels to keel.
- Sails could pay off their cost within 3 years with oil priced at $US60 per barrel
- One kite on one ship over one year would save the equivalent amount of oil as converting every single automobile in California to a hybrid
- A United Nations study in 2008 has reportedly found that carbon emissions from merchant shipping are nearly three times greater than previously estimated - annual emissions from global shipping equal about 1.12 billion tonnes of CO2, or an estimated 4.5 per cent of global carbon emissions.
- Emissions from merchant shipping are not taken into account by the European Union (EU) when making its targets for cutting greenhouse gases.
Sources:
- Green shipping blowing in the wind, The Age
- Video: Kiteship - a kite that generates 10,000 horsepower
- SkySails - Turn Wind into Profit
- Shipping carbon emissions greater than thought: UN report, ABC News, February 13, 2008

How to live an extra 14 years
- People who drink moderately, exercise, quit smoking and eat five servings of fruit and vegetables each day live on average 14 years longer than people who don't.
- Overwhelming evidence has shown that these things contribute to healthier and longer lives, but a new British study actually quantified their combined impact.
Source: What could you do in 14 years?, The Age

Forest destruction and climate change
- Forest destruction around the globe is the largest single source of carbon emissions after energy, contributing up to 10 times as much as aviation.
- The Stern Report warned that rainforest destruction alone would, in the next four years, release more carbon into the atmosphere than every flight from the dawn of aviation until 2025.
Source: Flying clouds the real climate culprit, BBC News

Antarctica, climate change and sea levels
- Antarctica, a deep freeze holding 90 percent of the world's ice, is one of the biggest puzzles in the debate on global warming, with risks that any thaw could raise sea levels faster than U.N. projections.
- If a fraction of Antarctica's ice melted, this could damage nations from Bangladesh to Tuvalu in the Pacific and cities from Shanghai to New York.
- Antarctica has enough ice to raise sea levels by 57 metres (187 ft) if it melted, over thousands of years.
- A year after the U.N.'s Intergovernmental Panel on Climate Change (IPCC) projected sea level rises by 2100 of about 20 to 80 cms (8-32 inches), a Reuters poll of 10 of the world's top climatologists showed none think that range is alarmist.
Source: Antarctic ice riddle keeps sea-level secrets, Reuters, January 31, 2008

Facts about business computing
Some facts about business computing:
- Leaving a computer running consumes electricity and adds to computing costs.
- The use of screen savers does not save energy.
- It is estimated that a typical desktop PC with a 17-inch flat panel LCD monitor requires about 100 watts - 65 watts for the computer and 35 watts for the monitor.
- If left on 24x7 for one year, this system will consume 874 kilowatt hours of electricity - enough to release 341 kg of carbon dioxide into the atmosphere and the equivalent of driving 1312 km in an average car.
- According to the Columbia University Guide to Green Computing, if the paper used each year for personal computing were laid end to end, it would circle the Earth more than 800 times.

Household airconditioner power consumption
Typical wall mounted household airconditioning units consume a lot of power.
For example, Mitsubishi Electric reverse cycle airconditioners advertised in Australia in early 2008 are:
- from 2.5 kW cooling / 3.2 kW heating (A$831)
- to 8.1 kW cooling / 9.0 kW heating (A$2397)
The smallest of these units consumes 1 kW more electricity than a solar panel array of twenty 75 W panels, which produces 1.5 kW. The larger unit consumes over five times the amount of energy the array produces. The increased usage of household air conditioners is one of the major factors causing increases in peak load - usually experienced on very hot days - which is a contributing factor to the possibility of more coal fired power stations being constructed. A good passive solar design house can avoid the use of airconditioners entirely.

Data centre power consumption
Data center owners such as Google are building data centers in places where power is cheap. A decade ago, the main consideration was where broadband would be cheapest. Now, data centers can take up 50,000 square meters of floor space and require 40 to 50 megawatts of power.
Source: Will thin clients rebound with higher power prices?, Green Tech blog, CNET News.com

Biodiversity in Victoria, Australia
- 30% of Victoria's native animals are extinct or threatened
- 44% of our native plants are extinct or threatened (ref: Environmental Sustainability Issues Analysis for Victoria, CSIRO)
- Forests that provide habitat for threatened species such as Leadbeater's Possum and the Powerful Owl are still clearfelled in Victoria
- more than 70% of Victoria's native bushland has been cleared in just 170 years
- 92% of Victoria's native bushland on private land has been cleared
Source: Victoria Naturally Alliance

Computer games consoles are not green
- Microsoft chief executive Steve Ballmer espouses what the world's largest software company is doing for the environment - but the company's Xbox games console does not rate a mention in this context.
- Microsoft had sold 18 million Xbox 360s as at March 2008.
- Worldwide computer use requires 14 power stations for the necessary electricity, producing more harmful carbon dioxide emissions than the entire airline industry - not including the emissions created in manufacturing and shipping the products.
- Games consoles - of which 62 million of various brands were sold in 2007 - are the high consumers of this industry, using huge amounts of energy to generate the necessary graphics and sounds.
- When played online, games consoles link up to huge server farms which use even more energy.
- With each generation of console - we are currently on the seventh - previous platforms are made obsolete by the newest technology. Millions of consoles, games and other accessories are thrown away.
- A personal computer setup for gaming with a powerful processor and video cards can have a power supply with a peak load rating of 1 kW and consume up to that amount of power.
Source: Greenpeace takes on 'gas guzzling' gamers, The Age, March 9, 2008

Why GM is not green
- 114 million hectares of genetically modified (GM) crops is just 1.3% of the world's productive land.
- 98% of all GM crops are grown in just seven countries, and five of those are in North and South America, where most GM is used for animal feed or ethanol production.
- The US alone grows over half of all GM crops
- Since the GM cotton crop in Australia topped 90%, the area of cotton has fallen each year, from 230,000 hectares to 134,000 hectares, to ~65,000 in 2008.
GM cotton is no magic bullet.
- Concerns have been raised about whether GM licences are subject to a rigorous 'science-based' assessment process.
- In Australia, the ACT has a ban on GM, and SA now prohibits the passage into or through its territory of any GM crop product - seed for planting, cleaning, crushing or export.
More information: https://www.truefood.org.au/

Energy efficiency measures can repay their cost
- You can get a financial payback from embracing energy efficiency measures such as using low energy lighting.
- Councils and governments can save money by establishing revolving energy funds to provide a financial incentive to implement energy efficiency measures and practices.
Energy efficiency projects reduce energy use and greenhouse gas emissions. They can also save money.

Transport trips in Melbourne, Australia
On a typical weekday in 2008, Melburnians:
- make 13.5 million personal trips
- 10 million of these trips are made by car.
On a daily basis:
- around 78 per cent of people travel by car
- 7 per cent by public transport (train, tram or bus)
- 15 per cent by walking or cycling.
Source: Eddington Report

Australian coal exports
- Australia exports approximately 300 million tonnes of coal per year
- This coal potentially generates 720 million tonnes of CO2 if all of it were burnt for power generation.
- That means that while Australia emits approximately 20 tonnes per capita of CO2 domestically, it is exporting, in coal terms, the equivalent of 720 million tonnes / 24 million Australians = an additional 30 tonnes per capita (a short calculation sketch appears at the end of this article).
- In 2006, Australia's exports of black coal were worth $24 billion, which makes coal easily Australia's biggest commodity export.
Source: Lateline - 15/02/2007: Coalfields to become climate change battle ground

Minimising and offsetting carbon emissions
- Average yearly emissions per car: 4.3 tonnes of CO2
- Emissions for an economy return flight Sydney-London: 10 tonnes of CO2
- Average computer left on for 1 year: 340 kg of CO2
- Average yearly emissions per Australian household: 14 tonnes of CO2
- Australia's yearly emissions per citizen: 28 tonnes of CO2
Source: Yourbit - doing your bit easy. Act against climate change. Erase your carbon footprint.

Electric cars are more efficient
Electric cars are 30% more efficient (in terms of greenhouse gas emissions) than cars powered by internal combustion engines (that use petrol, diesel or gas), even taking into consideration coal-fired power generation. Transitioning to electric drive trains for cars would reduce global carbon emissions. For those that need more range, a plug-in hybrid would suffice - fuel can be used to generate electricity when needed for longer trips.
In November 2008, the numbers of electric vehicles of car size were:
- less than 300 on Australian roads
- about 500,000 worldwide

Coal industry costs environment, society $717b each year
A report commissioned by Greenpeace has found the coal industry contributes more than $700 billion in damage to the environment and society every year. The research, by a Dutch environmental consultancy:
- calculates the cost of dealing with natural disasters caused by climate change
- looks at the health problems caused by air pollution such as respiratory diseases
- found that over the next 10 years the coal industry will have caused damages in excess of $7 trillion.
- Source: Coal industry costs environment, society $717b each year, ABC News (Australian Broadcasting Corporation)
- See also: Cost of Coal report, Greenpeace

Awareness of Green and Environmental Jobs Grows
Job seekers and employers are placing more importance on being green, and companies are now promoting this to attract potential employees.
- A 12% increase has been measured in the importance job seekers put on a company being green.
- 14.7% of companies that responded are taking steps to be greener and are using this to attract candidates.
- Source: Environmental Jobs

See also
- Ecohouse challenge "factoids" on saving energy
- Home Builders Australia - Save the environment by building an eco friendly house. Find your nearest eco friendly Australian builder here.
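Returning to the Australian coal-export figures quoted earlier, here is a minimal sketch that reproduces the per-capita arithmetic. It simply reuses the article's own round numbers (300 Mt of coal, 720 Mt of CO2, 24 million people); the emission factor it derives is implied by those figures rather than taken from an official inventory.

```python
# A rough check of the coal-export arithmetic quoted earlier in this article.
# Inputs are the article's own figures, not independently sourced values.

coal_exported_mt = 300.0        # Mt of coal exported per year
exported_co2_mt = 720.0         # Mt of CO2 if all exported coal were burnt
population_millions = 24.0      # rounded Australian population used in the article

implied_factor = exported_co2_mt / coal_exported_mt       # ~2.4 t CO2 per tonne of coal
per_capita_tonnes = exported_co2_mt / population_millions # Mt / millions = t per person

print(f"Implied factor: {implied_factor:.1f} t CO2 per tonne of coal")
print(f"Exported emissions per Australian: ~{per_capita_tonnes:.0f} t CO2 "
      f"(on top of ~20 t emitted domestically)")
```

This confirms the article's figure of roughly 30 tonnes of exported CO2 per Australian, which is the basis for its claim that exports add more per person than domestic emissions do.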
The end of the Black Hawk War in 1832 signaled the beginning of the development of the area called Lombard. According to C. W. Richmond's 1876 DuPage County History, the earliest settler on the land of the town proper was Luther Morton, who built a cabin near the present rail depot, the heart of present day Lombard. But the town was originally named for the brothers Ralph and Morgan Babcock, who had settled farther west, near the DuPage River. New Englanders were eager to take up the newly opened land sites, now that the threat of Indian attacks no longer existed. The year 1834 saw the arrival of the Deacon Winslow Churchill family. One son, Winslow Jr., settled on the east side of Babcock's Grove. Another early settler, Sheldon Peck, a portrait painter whose primitive paintings still command high prices, established his farm on the eastern edge of Babcock's Grove, near the present Lombard Commons Park. His house, begun in 1837, remains standing, and is the oldest in the village. Alyce Mertz, the granddaughter of Sheldon Peck, occupies the house at this time. Her father, Frank Peck, reported in his journal that it took two years of dragging oak logs from a wood lot on the banks of the DuPage to complete the dwelling. It was here that the first school convened, and that runaway slaves were to find refuge during the Civil War. Babcock's Grove developed along the trail from Chicago to St. Charles, Illinois. St. Charles Road today, old-timers in town remember, was once called Lake Street. Across the marshy prairie it followed the high ridges that the Indians favored; they, in turn, followed age-old buffalo trails. For eleven years the Frink and Walker Stage line used this trail from Chicago to St. Charles. But in time even this "commodious, efficient" method of transportation was to give way to something better.
Sheldon and Harriet Corey Peck. Courtesy Lombard Historical Society.
The Galena and Chicago Union Railroad ran its first engine to Babcock's Grove in 1849, following the old stage line to St. Charles by slicing through the lands of Winslow Churchill Jr., Sheldon Peck, Hiram Whittemore, Reuben Mink and John Rumble. Farmers could see the advantages the railroad offered and eagerly accepted the $15 per acre for right-of-way. No more bogging down on muddy trails to the markets at Chicago! "The Pioneer," a secondhand steam engine now on display at the Chicago Historical Society, steamed westward at the admirable speed of 25 mph. A temporary turntable was built west of town, and the train was hand turned for its return run to Chicago. In time the line was extended to Newton's Station (Glen Ellyn), Wheaton, and beyond. By 1851 the nucleus of a small community had begun to form around a makeshift depot and "railroad hotel." There were at least five frame houses and a store clustered around St. Charles Road and the future Park Avenue. Reuben Mink, a Pennsylvanian, purchased the Morton homestead and other land tracts. Either through luck or a sense of the future he found himself the possessor of much land in what was to become Lombard. But he was to give back to the town two pieces of property which were essential to its development. The first, located on Main Street at Washington Boulevard, was deeded to the small community for a burying ground. The earliest recorded burial was July 1. The second piece of property, located on St. Charles Road near Mr. Mink's home, was given for the first permanent schoolhouse, built about 1861. The schoolhouse, built in two sections, still survives in part.
One portion, exhibiting Greek Revival characteristics, is now a private home located at 210 South Lincoln, across the street from St. John's Lutheran School. During the 1850s Babcock's Grove saw a period of development as a farming community. The earliest settlers in the area were from New York and New England states who had come west with the opening of the Erie Canal. But soon names of other origin began to appear in the early land records. German immigrants, refugees from civil and religious strife in their native country, began to take up land. Soon York Township was dotted with farms owned by persons with names like Backhaus, Stueve, Meyer, Schoene, Klusmeyer and Heinberg. They would prove to be industrious and supportive of education. An early school was established by the German community on the Schoene farm, just east of Sheldon Peck's house, with Julius Schoene as teacher. An influx of Irish settlers had its beginnings in this period as well. Although many came as laborers on the canals and railroads, others, with a typical love of the land and the money to purchase it, were able to pursue the occupation they knew best: farming. James Sheahan took up land on the western outskirts of Babcock's Grove. This farm, later purchased by Danish immigrant Peter Hoy, was to be one of the last operating farms in the area. Scheduled for development in the 1980s, the old Hoy barns, visible from Route 53 in Flowerfield, are the last reminder of this period of agricultural growth. Of the early Irish settlers, only the names remain on the tombstones in little old St. Mary's Cemetery on Finley Road. The 1850s saw other firsts. Dietrich Klusmeyer's three-story stone hotel, at the present intersection of St. Charles Road and Park Avenue, was erected in 1858. It still stands today, housing a handful of commercial establishments. Its wide porches, bracketing and popular Dram Shop are gone; but the building has survived, a silent observer of years of Lombard's development. Nearby, to the west, stands another survivor of those early days. J. B. Hull, the town's first postmaster, operated the first store, depot and post office in a small board and batten building next to the tracks. Nationwide, the slavery issue had erupted into Civil War. The farmers of York Township joined the cause. While the women folk knitted and rolled bandages at home, a number of Babcock's Grove men marched off to war. General Benjamin Sweet, a retired Civil War general, once charged with the direction of Fort Douglas in Chicago, where captured Southern soldiers were held, moved to a farm on the outskirts of Babcock's Grove. As pension agent in Chicago, he may have been acquainted with Josiah L. Lombard, a successful realtor who became interested in the yet untapped possibilities of the area. Joined by Captain Silas Janes of Danby, the three platted the town in 1868 and petitioned the state for a charter. The townspeople, impressed with Josiah L. Lombard's plans for the area, voted to name the town after him. The town of Lombard was organized in short order, after the granting of a charter in 1869. Isaac Claflin was named president and Colonel William Plum of Civil War fame became clerk. Josiah Torrey Reade followed Isaac Claflin as president. He is remembered also for founding the first town library from his own personal collection of books. That first library had its beginning with a peach basket full of books carried by Reade between his home and the First Church, where a room behind the sanctuary was donated as a library.
That wooden Gothic chapel, a symbol of Lombard’s beginning, still stands on the corner of Main and Maple Streets. When dedicated in 1870, it was the home of the First Congregational Church. Today it is known as the First Church of Lombard, United Church of Christ, a landmark which attracts photographers and artists alike with its board and batten exterior, lancet-shaped calico stained glass windows and classic lines. It is on the National Register of Historic Places. By 1870 Lombard had become a “commuter” town, with many residents traveling by rail daily to Chicago. A small frame building on Park Avenue served as ticket depot, waiting room and freight office. Nearby was a small stockyard which held cattle being transported to the markets at Chicago. Although the railroad offered a modern, up-to-date means of transportation between towns, the horse-drawn wagon and buggy were still an essential part of every day living. John Fisher came to Lombard in 1874 and built a carriage and blacksmith’s shop at 19 West St. Charles Road. Fisher served as the town’s justice of the peace for twenty years, setting up a courtroom in the basement of his house. As there was no lock-up in Lombard, prisoners had to be taken to the jail in Wheaton. Fisher once averted what might have become the town’s only lynching by spiriting his prisoner, a man named Bo Creek, who had killed his foreman in a dispute over wages at the stone quarry west of town, to the relative safety of the jail in Wheaton. The temperance question continued to plague the townspeople. Dr. Richard Oleson, son of the town’s first doctor, sums it up this way: The temperance-dram shop fights had been long and bitter. It always was an issue at election time. When the town was incorporated, the wets prevailed by one vote. The next year the opposition won by one vote. One year the town would have prohibition, the next year it wouldn’t, until Rev. Caverno … got the boys together. The Rev. Charles Caverno, pastor of First Church for many years after his 1870 arrival, pointed out to the New Englanders that the Germans who wanted their beer were honest, law-abiding citizens, and who were they to deny them. His argument prevailed. It was decided that dram shops might operate between the hours of 4:00 a. m. and 11:00 p. m. each day; the early opening accommodated farmers bringing their milk to town to meet the railroad “milk run.” In 1877 Dr. Charles Wilmot Oleson came to Lombard to serve for many years as the town’s first doctor. Many people in later years remembered the compassion of this gentle man. He built a stylish Victorian home on North Main Street in a section of town which had come to be called “Quality Row.” In later years Dr. Oleson suffered a stroke and was followed in his practice by his son, Dr. Richard Oleson. Lombard continued to attract persons with leadership ability. In 1878 twenty-four year old William Hammerschmidt came from Naperville and bought land to develop a clay pit. He started manufacturing tiles and bricks. Many of Lombard’s homes and commercial buildings were built of Hammerschmidt brick, which was sold widely throughout six midwest states. In spite of the town’s residential nature, a few small industries did operate in Lombard. There was a boot factory, with shoemaker Phil Carroll employing several men. Well drilling was an important business. Lightning rod salesmen did a brisk door-to-door business. 
Butter and cheese were made in the home until another early Lombard businessman arrived in 1879 and built a cheese factory on the south side of Lake Street, at the western edge of town. William Stuenkel's name appeared frequently in the town board minutes during the five years he operated his cheese factory. He had often to be advised to control the "offensive odors" coming from his establishment. But keeping 300 lbs. of cheese a day from smelling was an insoluble problem. The cheese factory continued after Stuenkel's tenure under the directorship of a corporation of local businessmen, including Frederick Marquardt. The name was changed to Lombard Butter and Cheese Company, and moved by Frederick Marquardt to the north side of Lake Street. In 1880 Frederick Deicke operated a creamery near his home, adding to it a small general store. He married a local girl, Regina Goltermann, and took a special interest in the establishment of Trinity Lutheran Church at York Center, where he lived. His son, Edwin, has been instrumental in the development of many educational and cultural efforts throughout DuPage County. Mention should be made at this point of those local craftsmen like William Zabel, Karl Mech, the Assmans, and others who used their hands and their talents to erect many of the homes and business establishments of Lombard. In many cases their names have been forgotten while Historical Society plaques commemorate the buildings which they embellished. Telephone service in Lombard began in 1882. The town was fortunate in being situated on the Chicago-Geneva toll line, the Chicago Telephone Company's first experimental line. Also in 1882 Colonel William R. Plum's The History of the Military Telegraph Corps During the Civil War was published. Copies were placed in the Federal Archives, and Lombard had its first published author! In 1886 the town "saw the light" in the form of gas street lamps. The residents celebrated by hiring a band and holding a street dance. A tightwire was stretched from roof to roof of Gray's Hardware and Marquardt's Grocery, across the street, and a tightrope walker pranced across it. The tracks were laid in 1886 for a second rail line, the Chicago and Great Western Railroad. The first train arrived August 1, 1887. Peter Hoy, a native of Denmark, came to America in 1880. By 1890 he had saved enough to buy the old John Sheahan farm southwest of town where he operated a dairy farm, bottling and selling the milk in Lombard and surrounding towns. He often invited the children of the Lombard School to his property to observe firsthand the operations of a real dairy farm. At Christmas he would take Lombard schoolchildren for horse-drawn sleigh rides. His barn was often used to shelter vagrants, who had no other place to sleep. Jobless men often followed the rail tracks through Lombard, looking for work. Today Peter Hoy School commemorates this hardworking, gentle man. Lombard enjoys the distinction of being one of the first towns in the nation where women voted before the passage of the 19th Amendment. However, this distinction applied only to the 1891 election. In that year, Ellen A. Martin, a woman attorney residing in the community, marched into the polling place and demanded to be allowed to vote, basing her claim on the fact that the town charter enfranchised all citizens, with no mention of sex. She and fourteen wives and daughters of prominent Lombard residents voted that day. But the men of Lombard won out by "reorganizing" the town charter in line with the state charter.
The ladies did not vote the following year in the town election. As a result of ensuing litigation, women were allowed to vote in school elections. Unfortunately, Ellen Martin did not live to see the passage of the 19th amendment, since she died in 1916, having returned to her native New York State. By 1893, after many long battles, Lombard finally reached an agreement with the Chicago and Northwestern Railroad, and a viaduct was constructed at Main Street under the tracks. Lombard was one of the earliest towns to deal with the problem of traffic flow in this manner. In 1899 the Aurora, Elgin and Chicago Railway was authorized to build a railway through the south end of town. Begun in 1902, the railway, variously known as the Chicago Aurora and Elgin, the "Roarin' Elgin," or "the Third Rail," sped past the back yards of the town's residents at a decent clip, lights flashing and bells ringing, warning stragglers to clear the track. On October 19, 1903, Lombard was reincorporated, now as a village. Instead of a town council, a board of trustees served the village. At the turn of the century several churches were well established in Lombard. Diagonally from First Congregational Church, a Methodist Church was organized in 1909, and a building was subsequently erected. Several blocks west stood St. John's Lutheran Church with its two tall steeples and classic design; it has served the area's Lutherans since 1893. In fact, when the Roman Catholics erected their combination church-school on Maple Street at Elizabeth Street in 1912, Maple Street's name could as well have been changed to Church Street! In 1959 Sacred Heart parish welcomed the grandson of Martin Hogan to the dedication of the new Sacred Heart Church, located on his grandfather's farm. The grandson, Rev. Martin D. McNamara, had grown up to become the first bishop of the newly-created Joliet Diocese. Although a few automobiles were beginning to be seen regularly in town, throughout the early 1920s horses were still the common mode of transportation. When in 1914 the Methodists requested permission to set hitching-posts in front of their church, the request seemed reasonable. But Lombard did have an up-to-date silent movie theater in a building on Parkside between Main Street and Park Avenue. And in 1928 the Parkside Theater, as it was called, was replaced by a new movie house on Main Street. The DuPage Theater had gilded pillars and a starlit sky, complete with drifting clouds. Outside, the "waterfall" marquee has been restored, recreating this Lombard landmark as it was in the thirties. The end of World War I brought a return to simpler pleasures. One source of hometown pride was the comic strip "Little Orphan Annie," created by Harold Gray in 1924. The comic strip, which became a part of American life by way of the Chicago Tribune, was created by Mr. Gray when he lived in Lombard on South Stewart Street. He was later to purchase a spectacular Victorian home at 119 North Main Street for his parents, with whom he lived for several years between marriages. The house, which has retained nearly all of its original gingerbread trim and bracketing, was built by William LeRoy, one of the original town board members, and had formerly been known as "Chateau LeRoy." With the return of soldiers from the war, a building boom was in the making. Townspeople began lobbying for a park and general beautification.
On April 28, 1927, during lilac time, Colonel William R. Plum died, leaving his house and grounds to the village with the provision that they be used for a library and public park. Josiah Reade's small library, which had outgrown its quarters at the First Church, finally had a permanent home. Mr. Reade lived to see his collection of 3,000 books moved to the Helen M. Plum Memorial Library across the street. Jens Jensen, a prominent landscape architect who had a special interest in Lombard and Lilacia, agreed to design the park for the modest sum of $600. His crowning touch was the limestone waterfall and pool designed especially for the new park. In November 1929 plans were laid for holding the first annual Lilac Pageant. The highlight of the pageant was the choosing of the first Lilac Queen, Adeline Fleege. In 1931, when the Depression was in full swing, both Lombard banks were forced to close, and several emergency measures were necessary to sustain the town. A canning project, set up by Father Jones of the newly organized Epiphany Mission, provided food for the needy, as well as assisting the townspeople. In a cooperative venture the churches lent pots and kettles, while the village supplied the gas. Canning equipment was procured, and farmers donated their surpluses. All who could pay were charged 3 cents per can, and every 20th can went on the shelf for the destitute. Government agencies like the WPA and the CCC stepped in to provide jobs for local men grading streets, removing and planting trees, and repairing village property. Building in Lombard almost came to a standstill during the Depression, and many houses ended up on the market, or were repossessed when their owners were unable to meet monthly payments. Some residents argued for local beautification as a means of raising town morale. Shrubs and trees, dug from the Churchill farm west of town, were planted on the fringes of the village hall grounds, along streets, and around the sewage disposal plant by unemployed men. Seventy trees, purchased by individuals at $3.75 per tree, were planted along North Main Street between North Avenue and Pleasant Lane. Each tree was a memorial, and "Memory Lane" was the result. In time the economy began to recover, and newcomers discovered the town, moving into the long-empty houses. The State Bank reopened in 1945. A Village Hostess program was initiated, with Estelle K. Wasz as the first hostess, greeting newcomers and answering their questions. World War II brought the need for a Municipal Defense Council to Lombard. The East St. Charles Road pumping station was protected against sabotage by being fenced and lighted. Air Raid Wardens were appointed, and first-aid kits distributed. The townspeople were asked to salvage rubber, metals, waste paper and rags to aid the war effort. Even the cannon on the village hall grounds, a memorial to the veterans of the Spanish American War, was turned in for scrap. Ration coupons were carefully counted out on Lombard dinner tables, and even the smallest child in the family could aid the war effort by flattening tin cans. As World War II ended and the veterans returned, there was a need for new housing. Shell houses with unfinished second stories, pre-fabricated housing, tri-levels and ranch houses were soon being built throughout the area. Lustron enamelled steel houses, introduced early in Lombard, became very popular. Public buildings, as well, were being erected or added to.
Green Valley School, closed during the Depression because of declining enrollment, was re-opened in 1947. The building boom continued into the fifties. The small shopping center on South Main Street was enlarged. Begun in the late twenties, it had developed on both sides of the Chicago, Aurora and Elgin Railroad crossing at Main Street. As the population grew, the school system began to feel the pinch. There was now a definite need for additional schools, and a number of present-day Lombard schools had their beginning during this period.
In 1954 Mildred Robinson Dunning decided to record the history of her home town, as told to her by native-born and long-time residents of Lombard. Her effort, The Story of Lombard 1833-1955, was a simple recounting of the highlights of the town’s history. Others had thought the events of the town’s growth of sufficient interest to preserve them for future generations. Frank Peck kept a journal for many years. Amy Collings wrote a series of newspaper columns in the 1940s for the local paper, The Lombard Spectator. Hubert Mogle chronicled the growth of American Legion Post 391. Katherine Reynolds, editor of the Lombard Breeze and author of several novels, used thinly disguised local residents for some of her characters, much to the enjoyment of the townspeople. Her best-known work was entitled Green Valley, for which the subdivision and school were named. Lillian Budd, also an established author and former Lombard resident, was approached in 1973 by the Lombard Historical Society to do an updated version of the town’s history as a bicentennial project. The result of her labors was entitled Footsteps On The Tall Grass Prairie.
Lombard adopted the city manager plan in the spring of 1955. Lombard’s first city manager, Hugh T. Henry, took office the following fall. A special census that year showed the village to have a population of 16,284. In 1962 the village received the highest rating in Illinois municipal waterworks operation from the American Waterworks Association. A design for a new village flag was selected as the result of a contest that year, with Susan Mills, a Willowbrook student, submitting the winning design. In 1968, after four years of construction, Yorktown Shopping Center was completed. Yorktown, encompassing 100 stores, covers 130 acres.
On the 100th anniversary of the founding of the town, the Centennial Lilac Parade depicted events and personalities reminiscent of the town’s history. As the 1969 celebration drew to a close, Lombard Centennial, Inc. donated its assets to the establishment of a Lombard Historical Society. Today, the Lombard Historical Society maintains its museum in an 1870s-style frame cottage at 23 West Maple. Other recent changes include the closing of Lincoln School, built in 1916 near the site of the old brick Lombard School, and the still older frame school house. The village hall was also closed; it has been replaced by the Lombard Civic Center, a complex of modern buildings of pre-cast white quartz, located at 255 East Wilson. Morris the Cat, star of Nine-Lives Cat Food commercials, has passed on to his reward from his home in Lombard. Morris’ replacement also lives in Lombard.
Today Lombard is a village with a population of 38,500. It covers 10.5 square miles, on which 8,950 single family homes and 3,650 multiple family units have been built. Almost forgotten are the log cabins of the Mortons, the Babcocks and the Churchills. Faded into the past are the small clapboard cottages along the St. Charles Trail.
Yet, in 1976, in celebration of the nation’s bicentennial, a local group erected a permanent symbol of Lombard’s past, a log cabin which is used today by all the groups in town. Thus the community has come full circle, from the little log cabin of the 1830s on the banks of the DuPage River to its sentimental recreation in a Lombard Park. Margot Fruehe is Director of the Midwest College of Engineering Library in Lombard. She is a member of the Lombard Historical Society and DuPage County Genealogical Society.
Answers to Exercise 1.1 Thinking about management and business research 3.a Write down a paragraph on why it is important that Business and Management Research is conducted in different countries and settings. By conducting research in different countries and settings, we can gain a deeper and more accurate understanding of phenomena related to business and management. We can develop more nuanced theories and gain a clearer sense of when or where they can (or cannot) be applied. Our research findings are likely to be useful to a larger number of people. And we gain a clearer understanding of the role of context, which is quite an important concern in its own right. See e.g. Johns, 2006 and Jia et al., 2011, for more information on the role of context. Johns, G. (2006). The essential impact of context on organizational behavior. Academy of management review, 31(2), 386-408. Or Jia, L., You, S., & Du, Y. (2012). Chinese context and theoretical contributions to management and organization research: A three-decade review. Management and Organization Review, 8(1), 173-209. 1.2 Research with impact 2. Discuss research and impact in relation to the job of academics and different practitioners (Head of Marketing, CEO, Data Analyst, etc.). What are the main similarities, and what are the differences? In most universities academics on research contracts are required to produce research as part of their conditions of employment. In the 1960s, when management was seen as an applied discipline, emphasis was placed on the applied end of the research continuum. However, as a consequence of the rise of management and business departments within traditional universities, the kind of research conducted by research-active staff became more focussed on achieving academic rather than empirical impact. The ability to conduct ‘rigorous research’ became increasingly seen as being synonymous with the publication of articles in leading journals – the higher the impact factors the better. Evaluated in this way, research appeared as both a means and an end in itself. More recently, the pendulum has begun to swing back as the contribution research makes to management practice has become an important criterion for the allocation of research funding. Academics conducting research are asked to demonstrate how their research may help to address societal challenges, such as the UK’s relatively low productivity. Assessments of ‘research impact’ require the researcher to show how their research has led to change in policy areas or practice. Such change may occur as a result of the dissemination of research outcomes but also from engagement with practitioners. Some academic researchers manage to inspire both fellow academics (e.g. with new theories) and practitioners (e.g. with new solutions). However, it can be difficult to achieve both with the same research. When compared to academic research, practitioner research often appears less theory-driven and more problem-oriented. It tends to aim at improving business and professional practice within a particular organization rather than across a certain type of organizations or entire industries. This is not to say that practitioner research is less useful or does not have to be rigorous. Like academic research, the quality of practitioner research does not only depend on asking the right questions but also on using the appropriate methods in a research process that is both systematic and rigorous. 
With the rise of the so-called Big Data agenda, the work of practitioner researchers in management and business research has changed in significant ways. Today business analytics is seen as one of many bridges between academic and practitioner research. Collaborative research between academics and practitioners opens up new avenues for bringing into dialogue research conducted in both realms. Therefore, the methods introduced in this volume can be of great use to both academic and practitioner researchers.
1.3 Proposal writing
1. What are the functions of each of the elements of an academic research proposal? (Why do we need an introduction? What is the literature review for? Why is it important to highlight limitations? etc.)
It has been said that a good research proposal should contain all the information needed for a similarly qualified individual to conduct the research themselves. The function of a research proposal therefore is to spell out the focus, and the steps that need to be taken to evaluate and deliver a given research project. It is usual for a research proposal to begin with the title of the study. This should be intelligible and, if possible, eye-catching, so that an informed reader will want to know more. What usually follows next is a brief overview of the study, indicating its focus and, perhaps briefly, the manner of exploration and its importance. George Bernard Shaw has been quoted as once saying in a letter to his grandson, ‘I’m sorry that this letter is so long – I didn’t have time to write a shorter one’. The point here is that boiling an introduction down to its essence is no easy task. Chief executives of large companies practice what has been called ‘an elevator pitch’: the message they want to convey about an aspect of their company, or what they are trying to achieve for their organization, delivered quickly and concisely as if to a complete stranger – in the time that it would take an elevator to ascend six or seven floors. Clarity, then, is what the researcher is trying to achieve, for themselves and the reader.
What might then follow are four or five aims – certainly no more than this. The first aim usually relates to the focus of the study. For example, ‘this study examines the effectiveness of performance-related pay’. A second aim might involve an investigation of previous research. A third aim might then relate to how this study would ideally be designed, the methods that would be adopted and the analysis that would be conducted on data collected. Another aim could relate to how the findings would be located within the existing literature, indicating what literatures the researcher hopes to add to or extend, or what gaps in knowledge might be filled. A final aim might speculate on how the knowledge produced would be translated into practice and who the stakeholders for this knowledge would be. There is no specific requirement to focus this aim on users – but management, as we have indicated, is an applied discipline, and the improvement of management practice ought to be an outcome of management research.
Once the aims are established, proposals usually offer a literature review section on the current state of knowledge in relation to one or more pertinent fields or disciplines that relate to the focus of the research. References are usually provided within this overview. In academic research it would be normal to include key seminal papers which indicate that the student has a good grasp of the important current debates.
Literature reviews are written in a way that identifies research gaps and controversies calling for the kind of research the author of the proposal proposes to undertake. The literature review also has the function of informing the subsequent section on the conceptual framework and the more specific research questions. Then would follow a methods section which outlines how the proposed research will be conducted. This section should be precise enough that the reader can evaluate whether the envisaged research design makes sense and enables the researcher to conduct the proposed research in the best possible way. It is therefore important for the researcher to include information on things like sample sizes or the number of case studies, the number of respondents the researcher would expect to question, and how access would be achieved. It may also touch upon ethical issues. Methods sections should cover both data collection and methods for data analysis that are well aligned with the research questions and objectives, the research setting and the conceptual framework. In the next section, the researcher would usually articulate the significance of the proposed research along with its limitations. This is often followed by a project schedule and timeline, and an overview of resource or training requirements. This last section is important for the reader to be able to evaluate the feasibility of the study and its expected contribution in relation to its costs. The last section of an academic proposal is the list of references. Sometimes there is also an Appendix with supporting information and documents.
2. In pairs, or on your own, draw up a list of the similarities and differences between writing proposals for business and research. Take notes and develop a visual illustration, table or figure that illustrates these differences.
This question is one about where emphasis is placed in different kinds of research. If the emphasis is more of an academic one, then the research will focus more on contributing to an existing literature. Therefore, a literature review for an academic project is likely to be more detailed than for a piece of practitioner research. A big prize for academics is to develop, improve or establish a new theory which helps others explain the world around them. For this to happen academics must have a fairly comprehensive overview of the existing literature on the subject(s) they are exploring. Contributions to knowledge can also come from methodological innovations. Academic proposals sometimes emphasize aspects of this so as to indicate the rigour or novelty of the methodology to be adopted. When writing proposals for business, the emphasis is more on what the research will do for an organization in a particular context of practice. Therefore, the appraisal of the situation or setting is key. This is not to say that the literature is not important, but researchers often only need to address literatures that are particularly useful for their purpose. All objectives require justification with a view to the expected benefits of the proposed activities for the organizations involved. Rather than an abstract and introduction, consultants offer a summary which explains why the study is important and what benefits and value will ensue from its conduct. Academic research proposals often give heavy emphasis to a justification of the methodology to be adopted.
Sometimes this goes into the detail of its philosophical stance, the methodological implications that derive from this and the methods to be used. This is presented so as to indicate rigour. In contrast, a business proposal focuses on the content of the research and the processes that will be adopted – both in the context of what is possible within the constraints of time, cost and the resources that will be required. In business proposals the methods are often split into two sections – one outlining the proposed research activities, and another detailing how the impact or success of certain activities or interventions will be evaluated. Academics focus on the significance of their study (in relation to a literature) and set out the limitations (something business reports don’t always emphasize). They also tend to address ethical issues. Ethical issues are of no less importance to practitioner research, but they are not foregrounded in quite the same way. A clearly articulated work plan and assessment of the resources required tend to be important elements of both research and business proposals.
3. Tony Morgan argues that good proposals are built on empathy. He highlights the importance of understanding your audience. In practice, this can sometimes be difficult to achieve. Have a look at the scenarios outlined below. Make a list of activities you can pursue to find out more about the respective audience of a proposal:
- You are a new business consultant and you are asked to write a proposal for a consultancy for a small fast food chain you know relatively little about. They are interested in ways to enhance their online order and delivery service.
All business improvement begins with as good an understanding as you can get of the benefits you can offer to your customer (often in comparison to the competition). When there are many competitors to choose from – and fast food would fit into this – superior online ordering and delivery could be a key success factor. This requires a consultant to conduct some research. One approach might be to focus on understanding just what the customer needs are, and what exactly the deficiencies with the current arrangements are, in order to ascertain what customers would ideally like. This might take the form of a survey or some in-depth qualitative interviews. Choice of sample would be important here, as it would need to include individuals who actually buy this kind of food in the way proposed. Sometimes it can also be helpful to visit the business as a customer and to talk to other customers, to get a feel for what they are looking for (the importance of empathy) but also for where the ‘pain points’ are in the order and delivery service. An analysis of reviews on the Internet could complement such research and provide useful insights into the challenges and strengths of the business. Secondly, it is also important to examine the website of the business and to interview both members of its management and customer-facing employees. Their experience of the business, of its strengths and weaknesses, and of how order and delivery could be improved may differ. It can also be helpful to ask oneself why the business has requested consultancy services at this time, and who is likely to support or reject change in the way orders and deliveries are organized.
Once the researcher has come to a better understanding of the business and of the main problems (real or perceived) that make the leadership of the business search for a new solution, the researcher is in a much better position to do exactly that: search for and identify an appropriate solution. This would require some desk research into how other businesses handle their online ordering and delivery. It is important to be careful about the ethics here – looking at businesses in a different sector or part of the country might help to overcome this concern. Again, it can be really helpful to make an effort to imagine oneself as a customer when evaluating the benefits and limitations of different order and delivery systems. Finally, the third component might involve an investigation and comparison of the different proprietary online ordering and delivery systems that are available. Consideration could then be given to which would best deliver those benefits that the customers had indicated, and an assessment of any trade-offs (cost or otherwise) could be made. Again, an empathic approach may help to decide if an existing system could be adapted or if the development of a new system may provide some competitive edge. At all times it is important to bear in mind that the ultimate objective of your work is to identify and ‘sell’ a solution to the management of the fast food chain.
- You plan a research project on women’s leadership in the IT industry. As part of your research you would like to observe team meetings but you need to establish field access.
Achieving access for conducting research is always difficult, but there are ways to improve your success. First, it will be helpful to review the literature on women in leadership roles. Women and management is a topic of great current interest to academics, but also to professional bodies and the media – so your research could have a large audience. It should not be too difficult to identify individuals who have recently been profiled as leaders and to narrow this down to those in the IT field. Once done, a direct (and authentic) approach setting out what you would like to do might well gain you the access you require. Senior managers have often spent a great deal of time getting to the positions they have, so they won’t want to grant access to someone they don’t know, who might come along and waste their time or even cause them harm. However, many leaders also want others to be able to achieve what they have achieved, so doing anything that helps and promotes this may well be of interest to them. In that sense their ‘audience’ and reason to be involved is likely to differ from yours. So it is important to put oneself in their shoes (i.e. be empathic) when considering how best to approach them. The underlying message here may be to find ways to reassure them of confidentiality and of your ethical standards. A high-quality information sheet and consent form can help to convey serious scholarly intent. Another approach might be to use your own social networks to identify individuals who perform this kind of leadership role in the IT industry and use them to broker a contact. This has the advantage that you come ‘recommended’, so to speak, and the ‘risk’ to them of participating is consequently reduced. However, it can be quite daunting to be confronted with a request for observational research.
While we do not want to mislead our research participants about our intentions, it is possible to pursue a more incremental approach where one conducts an interview and builds rapport before asking to observe team meetings. A third approach could involve a blog post or an advert in an appropriate practitioner journal explaining the research and asking for volunteers to take part in the study; spelling out the benefits of their participation might also yield some success. Again, in order to decide where to place such an advert or post, it is helpful to do some research on the experiences and networks of women working in IT, and on women’s leadership networks more generally. You need to know your audience – in this case your potential research participants. This audience differs from the academics that are evaluating your research proposal. Therefore, it is usually not helpful (and a sign of a lack of engagement and empathy) if you provide them all with the same proposal!
- You are asked to write a research proposal for a dissertation project on supply chain management in the apparel industry in China.
There are two approaches that the initial design of a study might take when researching a topic like this – particularly in relation to a dissertation. One would be to take a case study approach, where the case is a single company selling apparel with supply chains in China. If this approach were adopted the student would need to identify a suitable company and then make approaches to interview the appropriate management personnel, such as the design team, the production and operations managers, the buyers and merchandisers. Using a case design, the methodology would be a fairly inductive one, with the student finding out as much as they can and learning about the issues involved in supply chain management in the apparel industry – particularly in the context of China. As with any research, there would be an expectation for the student to read whatever literature they could find on the subject in order to inform the questions they ask and to be able to give more structure to the answers they receive. Following some scandals in the apparel industry in China, it may be difficult to gain access. Such concerns should therefore be kept in mind when developing an engagement strategy and corresponding information sheets and contact emails. Again, as the audience of an information sheet and of an academic proposal differs, the documents should differ both in structure and content. In terms of a proposal, there would need to be some indication as to how the company case would be selected; some reference to the current literature on supply chain management in the apparel industry, indicating an understanding of the issues involved; and a section on methodology indicating what levels of management would need to be interviewed, whether visits overseas (to China) would be required, and how the information would be marshalled into the final dissertation. References would also need to be included, both on supply chain management and on research methodology in relation to case study research. In contrast, an information sheet should explain the purpose of the research in clear terms and outline the role of research participants in the study. It should cover the benefits, risks and ethical challenges of the research from the perspective of the participant rather than the researcher. Again, an empathic approach can be helpful when designing information sheets, contact emails and consent forms.
After all, these documents should encourage potential research participants to take part rather than making them feel overwhelmed or even threatened.
A second approach that might be considered could be a survey of companies in the clothing industry. Adopting a survey design would require a much more extensive literature review than the case study approach above. This is because the survey would require more detailed questions to be asked of individuals within the organizations being surveyed. Prior to any questionnaires being sent out, it would probably be sensible to telephone the company to clarify that they do have supply chains in China, as well as to gain an understanding of exactly to whom the questionnaire should be sent. Questionnaires sent to a named person are much more likely to be returned. To improve the success rate still further, telephoning this individual in advance to alert them that a questionnaire is coming and to reassure them of its importance (even offering a précis of the findings) should also increase participation levels. This strategy is particularly helpful when the researcher combines background research with empathy: Who should be called or emailed about the survey? What time of the day may be convenient? How can one make it as easy as possible for participants to take part in the research? How can they be assured that the research is conducted in an ethical way? In terms of the proposal, indication should be offered as to the key issues that emerge from the literature on supply chain management – particularly in relation to clothing and China – so as to suggest the likely questions that will need to be answered. Indication should also be given as to how access will be gained, how many companies will be approached, how many people in each organization the questionnaire will be addressed to, and how the analysis will take place. As with the case study proposal, a reference section would also be expected.
(I originally published this article on author Graham Hancock´s website on May 25, 2016.) A major discovery set forth in my new book, The Missing Link, has the potential to upend everything we learned in school about ancient civilizations and ancient religions. During the course of my research and travels to visit the ruins and artifacts of Antiquity, I repeatedly found variations of the same mysterious “icon” worldwide. The “GodSelf Icon”—the term I use to express my discovery—is a prominent feature in most ancient cultures, as the collage below shows: “GodSelf Icons,” a prominent feature of most ancient cultures, cast serious doubts about conventional theories of human spiritual evolution and the emergence of civilization. I first set forth my initial discovery of the GodSelf Icon in my 2011 book, Written in Stone. Since then, I have found even more powerful reasons to focus attention on this remarkable pattern. My new e-book, The Missing Link: Powerful New Evidence of an Advanced “Golden Age” Mother Culture in Remote Prehistory, provides a more in-depth analysis of my GodSelf Icon discovery. The GodSelf Icon is a depiction of a central figure, a forward-facing man or woman who holds in his/her hands outward from the body either a pair of animals or a pair of staffs symmetrically. These twin objects stand for opposing principles, and the central figure represents the hero or sage who combines those two opposing principles to create a spiritual balance that opens new doors of perception and creates centeredness of being. Students of the occult will immediately recognize in this description the age-old coincidentia oppositorum (“coincidence of opposites”) concept. This is one of the central meanings of the GodSelf Icon, as we’ll see below. The GodSelf Icon is a central feature of art and artifacts found in ancient Egypt, Assyria, Babylon, Peru, Mexico, Columbia, Costa Rica, Africa, China, Cambodia, Mesopotamia, India, Crete and many other places. In almost every one of these civilizations, the GodSelf Icon can be traced back to a very ancient and formative era. The further back in time we look, the more we see the GodSelf Icon. A clear example of this is in Peru, where the Incas were merely the latest and final culture to use the GodSelf Icon. If we look to a much deeper antiquity, we see multiple versions of the GodSelf Icon, from one side of the country to the other: All of the pre-Inca civilizations that once flourished in Peru not only used the GodSelf Icon, but regarded this symbol as the pinnacle of their culture. These cultures include the Chachapoyas, Chancay, Chavin, Chimu, Moche, Nazca, Paracas, Sican-Lambayeque, Tiahanaco (Bolivia) and the Wari. Scholars of the New World have noted the importance of this symbol and they call this symbol the “staff god.” We find the following explanation for the “staff god” in Wikipedia: “The Staff God is a major deity in Andean cultures. Usually pictured holding a staff in each hand…his other characteristics are unknown, although he is often pictured with snakes in his headdress or clothes. The oldest known depiction of the Staff God was found on some broken gourd fragments in a burial site in the Pativilca River Valley…and carbon dated to 2250 BC. This makes it the oldest image of a god to be found in the Americas.” This very same Icon, with the very same “pose”, was widespread among the Old World cultures of the Eastern Hemisphere. 
Scholars of the Old World call the GodSelf Icon “the Master of Animals.” Here is the Wikipedia entry for “Master of Animals”: “The Master of (the) Animals or Lord of the Animals is a motif in ancient art showing a human between and grasping two confronted animals. It is very widespread in the art of the Ancient Near East and Egypt…They sometimes also have female equivalents, the so-called Mistress of the Animals…They may all have a Stone Age precursor…” Here are several representations of the GodSelf Icon—called Master of Animals in the Old World—from the Old World civilization of Jiroft, which is dated to Persia (late 3rd millennium BC): GodSelf Icons from Jiroft, 3rd millennium BCE. Despite recognizing the icon in their respective disciplines, scholars of Old World cultures and scholars of New World cultures have: (a) failed to recognize the icon’s presence worldwide (b) failed to understand the icon holds the same meaning worldwide (c) failed to connect (a) and (b), and thus remain unaware that THE “GODSELF ICON” IS THE LOST SYMBOL OF AN ANCIENT UNIVERSAL RELIGION once known worldwide. My book presents multiple comparative analyses of GodSelf Icons that stem from cultures that have long been considered alien from each other by heritage or through lack of trade possibilities. Here is an example: The similarity between the images above is truly striking. Not only is the overall shape a perfect match, but even the small details are a perfect match—the parallel hands, elbows, squat body, and elongated “staffs” in each hand symmetrically. Here is a closer look: This rock-cut image of the Egyptian god Bes bears a strong resemblance to the stone engraving of Viracocha on the Gate of the Sun in Tiahuanaco, Bolivia. Stylistic differences aside, the ancient Egyptian master masons (in North Africa) who created this GodSelf Icon, named Bes, and the ancient Tiahuanacan master masons (in South America) who created the GodSelf image in Bolivia, named Viracocha, seem to have been working off the same basic blueprint. Each would have recognized the other’s GodSelf icon as such. The visual similarity of these GodSelf Icons is only the tip of the iceberg. Scholars tell us that the Egyptians and the ancient Andean cultures followed the same “balance-of-duality-to-find-the-center” religion—which is precisely the teaching conveyed by, and encoded in the pose of, the GodSelf Icon: By extending both arms and hands outward from the body, the GodSelf Icon conveys the concept of duality—an idea expressed by the twin objects depicted “symmetrically” in each hand (twin serpents, twin staffs, twin animals, etc.). Standing between the representations of duality, the hero figure marks the “center” or “balance” point, thus giving us the central message of the GodSelf Icon—to find the center between the opposites. The GodSelf Icon has been preserved through the ages in the occult tradition, which has also retained its “balance-of-duality-to-find-the-center” meaning. I found that there exists a modern “memory” of this ancient global icon, called the Rebis, which has been linked symbolically to Freemasonry: Rebis from Theoria Philosophiae Hermetica (1617) by Heinrich Nollius. Sun (and Masonic compass) in the right hand, Moon (and Masonic square) in the left hand. The icon has two heads. Male right, female left. In an article subtitled Secrecy and Symbolic Power in American Freemasonry, which appeared in the Journal of Religion & Society (Volume 3, 2001), comparative religion scholar Hugh B. 
Urban of Ohio State University describes this figure: “…the “Mystery of Balance” or coincidence of opposites…This is…the secret of universal equilibrium between good and evil, light and darkness…Male and female, sun and moon, light and dark—symbolized by the Masonic compass and square…all come from the same source…” In my 2011 book, Written in Stone, I proposed an alternative history of religion, one that views ancient spirituality as a process of overcoming opposite forces within the physical (bodily) self to discover spiritual balance and inner strength. To support this idea, I pointed out how ancient cultures worldwide—and especially the pyramid cultures—all built “Triptych Temples” (a term I coined) to express this “balance-of-opposites-to-find-the-center” wisdom: The true secret about God is that there never was an outward God in the biblical sense. The only god is you, the inner you (your spiritual “soul”) as opposed to the outer physical you (your material “body”); but you have amnesia of who you really are. Noted American Theosophist Alvin Boyd Kuhn once wrote, “Man is a god in the body of an animal according to the pronouncement of ancient philosophy…” The truth of that statement was known in ancient times and has been preserved up to the present, in defiance of religious orthodoxy and superstition, in large measure thanks to the careful safeguarding of ancient spiritual truths by Freemasons and other members of Secret Societies, which conveyed this idea using the same GodSelf Icons. The following GodSelf Icons from modern esoteric manuscripts share the same posture. In each case the centered deity, mimicking the pose of the Rebis above, holds a solar staff in his/her right hand and a lunar staff in his/her left hand: Left: The alchemical Mercury, from Tripus aureus (The Golden Tripod) by Michael Maier, c. 1618. Middle: From a mysterious alchemical treatise titled “The Hermetic and Alchemical Figures of Claudius de Dominico Celentano Vallis Novi From A Manuscript Written And Illuminated At Naples A.D. 1606” Right: From a 16th-century alchemical treatise called “The Rosary of the Philosophers.” Discovering the Rebis was an important moment for me because it was clear that the Rebis is a modern version of the ancient GodSelf Icon motif. I began learning about the Rebis´ significance in esoteric manuscripts, which described the sun and moon in the Rebis´ hands as emblems signifying duality. When I applied this key to ancient cultures, their GodSelf Icons began to come to life. The similarity of their GodSelf Icons makes it appear that ancient cultures across different continents were somehow related, even though these areas are geographically distant, so distant that any direct relationship seems impossible. Scholars who study these areas believe that these civilizations developed independently, but side-by-side comparison of artefacts and monuments from cultural centers throughout the world seem to support the idea that we are looking at two peas from the same pod. I believe that the existence of pyramids, corbeled vault architecture and mummification on different continents is beyond coincidence: In Written in Stone, I presented evidence supporting the idea that an earlier “Golden Age” mother culture now lost to time—Graham Hancock’s “lost civilization”—may have been the common thread that united these cultures. 
The story of an advanced “Mother Culture” in remote prehistory was first set forth by the Greek philosopher Plato, who called it “Atlantis.” According to Plato’s account, the peoples of this Mother Civilization were not technologically advanced but spiritually advanced. As Plato explains, the Atlanteans grew weak due to their materializing tendencies, weak enough that they began to lose touch with the inner divinity that granted them their power:
“For many generations…they obeyed the laws and loved the divine to which they were akin. …they reckoned that qualities of character were far more important than their present prosperity. So they bore the burden of their wealth and possessions lightly, and did not let their high standard of living intoxicate them or make them lose their self-control… But when the divine element in them became weakened…and their human traits became predominant, they ceased to be able to carry their prosperity with moderation.”
As Plato saw it, the Atlanteans were sophisticated because of their identification with their own “divine” nature, rather than their “human” traits. In The Missing Link, I put forth the idea that the GodSelf Icon is a memory of this divine nature in man—a way to remember the divinity within the body. The story of Demeter and Persephone, one of the foundational myths of ancient Greece, is closely related to this idea. The Greek gods descended in large part from Demeter, goddess of the harvest, who preceded most of the Olympians, and whose oldest images are represented by GodSelf icons.
Demeter, Goddess of Harvest, depicted as a GodSelf Icon, Roman, Augustan period.
Demeter’s young daughter Persephone strayed one day from her home in Arcadia (heaven) while picking flowers in the green fields. Suddenly, Persephone “fell” into the Underworld; Hades below had made the ground open to swallow her. Overcome with sorrow, Demeter searched for her daughter ceaselessly, preoccupied with her loss and her grief. The seasons halted; living things ceased their growth, then began to die. A desperate Demeter pleaded with the Supreme God, Zeus, to free her. Zeus concluded that if Persephone had not eaten of the fruit of the lower world, she could return to Arcadia. But if she had, she would have to live a part of each year in the Underworld with Hades. Persephone had indeed eaten a pomegranate while in the Underworld, condemning her to return below for a fraction of each year. Persephone’s time spent in the underworld is thus linked to Fall and Winter, and her return to the upper world to Spring and Summer.
To interpret this myth correctly, it’s necessary first and foremost to understand that the myth does not describe anyone or anything external to you. The myth is all about you. It simultaneously describes the dichotomy of your immortal spiritual condition and your mortal human condition. Demeter symbolizes your soul (the divine element in you, as Plato would say) while Persephone symbolizes your body (the human trait, according to Plato). Demeter, your soul, is eternal, powerful, wise and divine. Persephone, your body (who is the offspring or “child” of Demeter just as much as your body is the offspring or “child” of your soul), is naïve, unwise, playful and blissfully ignorant; as such, Persephone is subjected to, and indeed becomes a victim of, the pull and passions of material earthly existence. As evidence of this Demeter/Soul vs.
Persephone/Body interpretation, the myth clearly compares and contrasts the higher world of heaven where Demeter resides with the “underworld” or lower world of earth, where Persephone eventually resides. The myth teaches that we’ve fallen from Heaven down to the Underworld (earth), just like Persephone. We have eaten – and we continue to eat – the fruit of this lower world, with its myriad seeds. When we die, we leave this place and ascend back to the source. But, having eaten of the fruit, the soul will necessarily gravitate back down again because, in the words of Socrates, “it is always full of body when it departs, so that it soon falls back into another body and grows with it as if it had been sewn into it.” This is the cycle of reincarnation, a central teaching of the Mysteries. It is an almost endless cycle that will continue until, after learning “the lessons of material/earthly life,” we cease to identify with the material bodies we acquire during incarnation and begin to find our true inner Self – the soul.
According to Plato, Socrates said that the key “lesson of material/earthly life” is to recognize that earthly existence is made up of “pairs of opposites,” which imprison the soul in the body by preventing it from knowing itself. To elucidate this idea, Socrates uses a certain “pair” of opposites, namely, “pleasure” and “pain”:
“…every pleasure and every pain provides, as it were, another nail to rivet the soul to the body and to weld them together. It makes the soul corporeal, so that it believes that truth is what the body says it is. As it shares the beliefs and delights of the body, I think it inevitably comes to share its ways and manner of life and is unable to reach…a pure state; it is always full of body when it departs, so that it soon falls back to another body and grows with it as if it had been sewn into it. Because of this, it can have no part in the company of the divine, the pure and uniform.”
There is a parallel to this in the ancient Zoroastrian religion, founded by the prophet Zarathustra (Zoroaster), which sees the world as an arena for the struggle of the two fundamentals of being, Light/Good and Darkness/Evil, represented in two antagonistic divine figures: Ahura Mazda on the side of good against Ahriman on the side of evil.
“… the phenomenal world exists of a pair of conflicting opposites: light/dark, truth/falsehood, health/sickness, rain/drought…life/death, heaven/hell.” —Karigoudar Ishwaran (ed.), Ascetic Culture: Renunciation and Worldly Engagement
This duality was personified in the primeval “Creator” deity of the Persian religion, an “androgynous” figure named “Zurvan,” depicted in the center below:
The androgynous figure of Zurvan, Luristan, Persia, c. 7th century BC
The so-called “god of time and eternity,” Zurvan, is described by scholars as the “neutral father” of the “good” god Ahura Mazda and the “evil” god Ahriman. These twins are born and emanate from either side of him, as shown in this image from an ancient silver plaque. With his children representing the two opposites, Zurvan is “centered” between them, facing forward. Zurvan’s neutrality between opposites is personified here by his striking the GodSelf pose. He appears to share an arm with both Ahura Mazda (good) and Ahriman (evil), and he is said to be passing along one flame in his “good” hand and one in his “evil” hand. But Zurvan is neither good nor evil; he is the eternal being between these two temporal opposites. He is neutral.
Zurvan is for this reason referred to as the god of light and darkness, good and evil, right and wrong, and so on. In fact, those aren’t his arms, though they appear to be. They are the arms of his two lower halves—his left and right sides, good (Ahura Mazda) and evil (Ahriman), that appear to be emanating from him, like the twin male and female faces that emanate from the Buddha. Zoroastrianism emphasizes high moral standards, with salvation achieved by he who strikes the balance and realizes that he is neither good nor evil; rather, he is the eternal being temporarily experiencing these terrestrial apparitions here in the material realm. Ahura Mazda is not a personal god like the God of the Bible, but more of a template that encodes wisdom pertaining to the physical and spiritual constitutions of every man and woman. Zurvan is also a model that the masses should strive to follow. Worship is centered on this idea, not on a personal relationship with God. The GodSelf icon was an important part of the vocabulary of religious and political expression in ancient times. Artists who depicted Alexander used the GodSelf Icon pose, perhaps as a message for posterity, telling us exactly how he became so powerful: Images of Alexander the Great striking the GodSelf Icon pose. Religion for the ancients was not a homogenizing force as it is for most believers today. Your purpose as a spiritual being is not to obey the dictates of an age-old set of rules, nor to pray in prescribed sentences at certain times of the day, nor to sit with a massive group of other like-minded people nodding heads at the time-worn platitudes of a priest. Instead, the GodSelf Icon calls upon us to develop our talents to the fullest, to meditate about and act upon our individual purposes (our Will), and to become the greatest exemplars of our highest purposes. We are not sheep to be herded by priests; rather, we are independent self-sufficient spiritual beings who have a purpose that transcends our bodily functions and social needs. The Missing Link builds on the case I made in Written in Stone, to show that one of the most important symbols in human history has been overlooked and misunderstood. What’s more, that symbol, properly appreciated, is as powerful for people today as it was for our Stone Age ancestors. In fact, one of the spurs to my interest in ancient civilizations was my chance acquaintance with the Freemason movement. Freemasons consciously imitated past symbols in a way that stressed spiritual concepts. I feel immensely grateful that they did so in a manner that preserved something of the past that would otherwise have been completely forgotten. Richard Cassaro’s new book, The Missing Link, explores the meaning, transformations and propagation of the ancient world’s most important religious icon. His first book, Written in Stone, is a wide-ranging exploration of hitherto-unknown connections among Freemasons, medieval cathedral builders and the creators of important ancient monuments, in support of his theory that a spiritually advanced mother culture, lost to history, is behind many of the world’s architectural and artistic traditions. Prior to the publication of Written in Stone, Cassaro enjoyed a successful career as a U.S. correspondent, professional journalist, and photo researcher for Rizzoli Publications, one of the world’s leading media organizations. 
Cassaro, who is a graduate of Pace University in New York City, has examined first-hand the ancient ruins and mystical traditions of Egypt, Mexico, Greece, Italy, Sicily, France, England, India, Peru and Spain; he has lectured on his theories to great acclaim in the United States, Egypt, Italy, Spain and Peru. Richard Cassaro © Copyright, All Rights Reserved. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to www.RichardCassaro.com with appropriate and specific direction to the original content.
Factors of everyday life can put an enormous strain on a relationship. Severe stressors may include resentment, infidelity, intimacy issues, lack of trust, and miscommunication. When problems go unresolved, or a partner is suffering from mental illness or health complications, one can feel helpless or have feelings of guilt or shame. Effective communication from both partners can alleviate emotional anxiety about subjects of all kinds. Couples often seek couples or marriage counseling when the relationship is at a standstill, or if they are unsure whether or not the relationship is worth salvaging. This type of therapy can benefit families with children who have been affected by relationship issues such as divorce, and can help confront the source of the conflict. Treatment techniques may include the following, depending on the therapist:
- Gottman Method
- Narrative Therapy
- Emotionally Focused Therapy
- Positive Psychology
- Imago Relationship Therapy
- Analyzing Your Communication
- Getting to the root of the problem
- Enhancing Intimacy
- Individual Counseling
- Couple Retreat
When a relationship is showing signs of addiction, emotional abuse, sexual abuse, and/or severe depression, seek guidance from a counselor immediately, for your safety and your partner’s.
Depression commonly manifests physically, through stomach pains, headaches, disrupted or excessive sleep, and motor control difficulty. While the causes of depression are unknown, a predisposition for it runs in families and it can be triggered by trauma and adverse life circumstances. Depression is diagnosed more frequently in women and tends to display differently in women than in men. People tend to suffer higher rates of depression after giving birth and in late fall. Depression and anxiety often exacerbate each other, and people with depression commonly have difficulty concentrating on tasks and conversations. Some people abuse alcohol and drugs or overeat as a way of coping, causing them to develop other medical problems. Depressed people are also at increased risk for self-harm. Depression is a mental illness characterized by prolonged emotional symptoms. Diagnosing depression involves a psychiatric evaluation and physical tests to determine whether a person’s symptoms are actually being caused by a different disorder. A person must have been experiencing symptoms for at least two weeks to be diagnosed with depression. Every case is unique and requires individual attention, but there are a number of effective complementary ways of treating depression, including:
- Talk therapy
- Adopting a healthier lifestyle
Body image is the mental representation that one creates of one’s own body in the mind, and it may or may not reflect how others actually see an individual. A skewed view of one’s body affects people across the globe, regardless of ethnicity, culture, gender, or age. According to ANAD (National Association of Anorexia Nervosa and Associated Disorders), about 30 million Americans suffer from some sort of eating disorder. Eating disorders have the highest mortality rate of any mental illness; someone dies of an eating disorder every 62 minutes.
Types of Eating Disorders
There are many kinds of eating disorders and, unfortunately, the statistics mentioned above don’t begin to scratch the surface.
Here are a few examples of eating disorders:
- Anorexia Nervosa: People restrict their energy intake below what is required for their weight, age, gender, development and physical health.
- Bulimia Nervosa: Individuals consume large amounts of food and then induce themselves to vomit to stop weight gain.
- Binge Eating Disorder (BED): Eating large amounts of food in short periods of time.
- Avoidant/Restrictive Food Intake Disorder (ARFID): Children with this disorder are not just finicky; they become malnourished because they restrict themselves from eating certain foods.
- Diabulimia: People with Type 1 diabetes purposely underuse insulin to control their weight.
As with other mental disorders and illnesses, care should involve a diverse team of experts. It’s recommended that professional caretakers include the following:
- Social worker
- Primary care physician
Due to the severe toll that eating disorders may have on an individual’s physical health, psychological therapy alone is not enough. It’s also important, if possible, to incorporate family therapy and support groups. Family-Based Treatment, according to NEDA, is a method used for patients who are minors. In severe cases, inpatient care may be necessary; the person suffering from the eating disorder will be hospitalized or placed in residential care. If you or someone you care about is suffering from an eating disorder, call the helpline now at 1-800-931-2237. An eating disorder is a serious medical and health concern that needs to be addressed as soon as possible.
Is your family going through a rough patch? Whether the issue stems from a lack of understanding between those involved, sibling conflict, or developmental disagreements with a child, a difficult situation can have an effect on the entire family. Family counseling, or family therapy, can be helpful when problems arise and can help restore and improve communication. Some situations that may benefit from family counseling include:
- If a family is going through a loss
- If a family member is suffering from substance abuse
- Issues between parents (parenting issues or going through a divorce)
- If a teenager is experiencing behavior issues such as anger outbursts
- Sibling conflict
How can family counseling help?
Family counseling can help open up a line of dialogue and communication and can help family members understand each other’s perspectives. This makes it easier to resolve disputes. During the counseling sessions, each member has the ability to learn ways of communicating better, as well as to develop techniques to de-escalate arguments while making sure that everybody is heard. This can also help with parenting problems such as conflicting parenting styles, rule enforcement and remaining consistent with your child once the rules are established.
How is it accomplished?
Family therapy or counseling can be used in addition to individual treatment. The goal is to improve relationships and improve methods of communication and conflict resolution. Families are a unique ecosystem, and issues affecting one member of a family can reverberate and affect the whole unit. An additional benefit of this type of counseling is that, in some instances, the sessions can heal emotional wounds in a short period of time.
Throughout the course of our lives, we all experience loss at some point. In fact, statistics show that 1 in 5 children will experience the death of someone close to them before 18 years of age.
Feelings of grief and loss are not always associated with death, however, but commonly surface after a loss of some kind – whether it is the loss of a loved one, a severed relationship, a pregnancy, a pet, or a job. When a person loses something or someone valuable to them, feelings of grief can be overbearing. Grief can leave a person feeling sad, hopeless, isolated, irritable, and numb by affecting them mentally, emotionally, and physically. It’s important to understand that healing from grief is a process and everyone copes with this emotion differently. Many people don’t know what to say or do when a person is grieving, but be sure to have patience with the individual (including yourself) throughout the entire process. An alternative treatment method includes psychotherapy. Through psychotherapy, a patient may: - Improve coping skills - Reduce feelings of blame and guilt - Explore and process emotions Consider seeking professional support if feelings of grief do not ease over time. Grief is the emotional response to any type of loss. Perhaps of a loved one due to death or divorce, but also the loss of a job, a pet, financial stability, or safety after trauma. Feelings of grief can be overwhelming, and it can be hard to know how to manage and overcome these emotions. It is important to have patience with yourself and others during this process as it is a healthy part of healing. If you are having trouble coping on your own, or know of someone who could use extra support, a therapist can assist. There is no orderly process of passing through stages of anger, denial and acceptance. Everyone experiences loss differently based on their personality, culture, and beliefs, among many other factors. Common symptoms of grief include: - Shock and disbelief: feeling numb about the event, having trouble believing it happened, denying it, or expecting to suddenly see the person you lost. - Sadness: crying, or having feelings of emptiness, despair, yearning, or loneliness. - Guilt: regret over things unsaid or undone, feeling responsible for the death or the event, or shame from feeling relieved by a person’s passing. - Anger: blaming someone for injustice. - Fear: feelings of anxiety, helplessness, and insecurity, or having panic attacks. - Physical symptoms: fatigue, nausea, weight loss or gain, aches and pains, and insomnia. Coping with Grief and Loss An important part of healing is knowing that you are not alone. Seek support from your friends, family, or faith, or join a bereavement support group. Sharing your loss can make the grieving process easier. Remember to take care of yourself; to eat, sleep, and exercise even when you’re too stressed or fatigued to do so. A healthy alternative is to seek the help of a professional therapist. A therapist can help you work through your intense emotions in a safe environment. Post-traumatic stress disorder (PTSD) is a disorder that develops in some people who have experienced a shocking, scary, or dangerous event. It is natural to feel afraid during and after a traumatic situation. Fear triggers many split-second changes in the body to help defend against danger or to avoid it. This “fight-or-flight” response is a typical reaction meant to protect a person from harm. Nearly everyone will experience a range of reactions after trauma, yet most people recover from initial symptoms naturally. Those who continue to experience problems may be diagnosed with PTSD. People who have PTSD may feel stressed or frightened, even when they are not in danger. 
When to See a Therapist If you have disturbing thoughts and feelings about a traumatic event for more than a month, if they’re severe, or if you feel you’re having trouble getting your life back under control, talk to a mental health professional. Getting treatment as soon as possible can help prevent PTSD symptoms from getting worse. If You Have Suicidal Thoughts If you or someone you know has suicidal thoughts, get help right away through one or more of these resources: - Reach out to a close friend or loved one. - Contact a minister, a spiritual leader or someone in your faith community. - Call a suicide hotline number — in the United States, call the National Suicide Prevention Lifeline at 1-800-273-TALK (1-800-273-8255) to reach a trained counselor. Use that same number and press 1 to reach the Veterans Crisis Line. - Make an appointment with your doctor or a mental health professional. When to Get Emergency Help If you think you may hurt yourself or attempt suicide, call 911 or your local emergency number immediately. If you know someone who’s in danger of attempting suicide or has made a suicide attempt, make sure someone stays with that person to keep him or her safe. Call 911 or your local emergency number immediately. Or, if you can do so safely, take the person to the nearest hospital emergency room. Child and Adolescent Counseling Children, just like adults, can participate in and benefit from counseling. Counseling can help children and adolescents learn how to identify causes of their distress, develop their skills in asking for help and expressing emotions, and improve their problem-solving abilities. Why would I send my child or adolescent to counseling? Children, just like adults, experience stress. Common stressors for children include school and family issues. School stressors may include excessive or difficult homework, test anxiety, peer pressure, bullying, and learning difficulties. Family issues may include parental arguing, divorce, moving homes, new sibling, major illness, death, loss, and transitions. If you notice a change in your child’s behavior (e.g., inattention, arguing, withdrawing) or emotions (e.g. depressed, angry, worried, stress) and think they may need help, child/adolescent therapy may be a good resource. What is the goal of child/adolescent therapy? Specific therapy goals are customized to meet the needs of the child and their family. The overall goal of our child and adolescent therapy approach is to alleviate symptoms of distress; improve the child’s social and emotional resources; increase their use of effective communication skills; and strengthen family, community, and peer relationships. Intimacy problems widely occur behind a variety of closed doors. Conflicts may include a loss of harmony between the sheets, a lack of sexual desire between either partners or failure in communication. There are often psychological factors that may contribute to a sexual disorder such as erectile dysfunction, or a lowered desire after a new mother has given birth. Intimacy issues are common, but if one or more become severe and there is no resolution in sight, it may be time to seek therapy for guidance. What Makes a Satisfactory Relationship? 
- Mutual Respect Some of the signs that sex problems are affecting a relationship include: - Disappointment in oneself or the relationship - One or both partners feeling dissatisfied - A lack of communication, with the couple disconnecting from one another - One or both partners feeling neglected or unwanted - A feeling of sexual boredom or unhappiness Steps for treating intimacy issues begin with: - Psychosexual Therapy: this technique allows couples to express themselves in a safe environment with a trusted and supportive professional. - Relationship Counseling: healthy relationships require strong connections and time to build trust. Whatever the issue may be, a counselor can work with partners together or separately to overcome the problem. Panic attacks are brief episodes of extreme fear. They may be mistaken for heart attacks or strokes, but they are psychological rather than physical. Panic attacks can occur suddenly and usually peak within ten minutes. Most panic attacks end within 20 to 30 minutes. Some symptoms include: - Chest pain - Feelings of suffocation Sometimes panic attacks are isolated incidents, but if a person has had at least two panic attacks and lives in fear of having another, they may have panic disorder. A panic attack can happen without an obvious cause, but people with panic disorder may develop phobias related to things they associate with panic attacks, including open spaces and large crowds. Panic disorder is classified as an anxiety disorder, and like other forms of anxiety, it is commonly treated with a combination of therapy, medication, and healthy lifestyle changes. Patients with anxiety are also encouraged to do breathing exercises, get regular exercise, and avoid stimulants. The number of situations associated with parenting and families is endless, but common conflicts include in-laws intruding in your relationship, differences of opinion when it comes to raising children, and even trauma such as domestic violence or alcohol and drug abuse. It can be challenging to watch family members struggle, and in most cases you may not know how to resolve the problem. Seeking support from a mental health professional can help parents and families develop acceptance and the skills to repair relationships that may seem unsalvageable. Other parenting and family issues may include: - Being a single parent - Problems caused by divorced parents entering new relationships - Fewer opportunities for parents and children to spend time together Parenting and family issues are often intertwined. Treatment methods vary and will depend on the individual or family situation. The healing process may focus on improving communication between family members, as well as finding healthy ways of resolving conflict. Setting clear boundaries and communicating effectively as a parental unit can set a good example for your children and/or your spouse. If a child is suffering from a genetic disorder or a mental health condition, these are topics that can be addressed with family or individual counseling. Couples counseling is also effective at supporting parents in child-rearing. You don’t have to face parenting and family challenges alone. Seek help from a qualified therapist or a professional support group to gain the perspective that will allow you to work through the problems at hand effectively. Being a parent carries a lot of responsibility, and the process can be difficult at times. 
Whether you are married or single, you may feel as though you are on your own, especially if you are dealing with a difficult situation or behavior issues with your child. It’s important to address these problems, and seeking the help of a therapist and/or a parent support group can alleviate the stress. Why is parenting support necessary? Sometimes a parent needs guidance when reinforcing rules and setting boundaries for a child. If a person is going through a divorce, this can affect the child or children involved. Each of these issues can affect a family unit, and it’s important that you don’t weather the storm alone. Parent support groups can help improve parenting skills, as well as the relationship between parent and child. What does parenting support look like? - Therapy can take the form of a support group with other parents, one-on-one sessions with a therapist, or family counseling. Support can be helpful if you have a young child who is going through some kind of developmental or genetic disorder. - Parenting support can take the form of group therapy, which involves meeting with other parents to discuss your children’s behaviors and offer advice to one another. Encountering certain obstacles or situations may leave one frightened, such as being afraid of the dark, heights, or animals. Most of us are able to remain calm, rationalize the situation, and find a way around it, but this doesn’t work for everyone. According to the National Institute of Mental Health, more than 10 million adults live with some kind of phobia. What is a phobia? Phobias, according to the American Psychological Association, are intense fears that result in distress and can be intrusive. Individuals with this anxiety disorder have an irrational fear of things that don’t pose any real threat. Here are a few examples of common phobias: - Arachnophobia, which is the fear of spiders - Acrophobia, which is the fear of heights - Agoraphobia, which is the fear of being in a situation you can’t escape from The American Psychiatric Association summarizes the symptoms in two points: - A reaction that is out of proportion to the actual danger or inappropriate for the person’s age - A compromised ability to function normally Unlike other anxiety disorders, such as Obsessive Compulsive Disorder, phobias have not been researched as extensively, but that hasn’t stopped mental health professionals from finding ways to help patients. - Therapists help treat phobias using psychotherapy, also known as talk therapy. Patients receive CBT (cognitive behavioral therapy), through which they can learn how to think about, react to, and behave toward whatever it is that they fear. It is meant to reduce the feeling of overwhelming anxiety. - Medications, on the other hand, aren’t a cure, but they help patients deal with symptoms. - Individuals can also learn stress-management techniques, such as meditation, yoga, or other holistic approaches. While one of these methods may work on its own for some people, professionals may provide their patients with a combination of these treatments and remedies. Unfortunately, the cause of anxiety disorders is unknown; it may be genetic, environmental, or developmental. In the meantime, people dealing with phobias should seek help. Everyone encounters stress at some point in their lives—never-ending bills, demanding schedules, work, and family responsibilities—and that can make stress seem inescapable and uncontrollable. 
Stress management skills are designed to help a person take control of their lifestyle, thoughts, and emotions and teach them healthy ways to cope with their problems. Find the Cause The first step in stress management is identifying your stressors. While this sounds fairly easy—it’s not hard to point to major changes or a lot of work piling up—chronic stress can be complicated, and most people don’t realize how their habits contribute to their stress. Maybe work piling up isn’t from the actual demands of your job, but more so from your procrastination. You have to claim responsibility for the role you play in creating your stress or you won’t be able to control it. Strategies for Stress Management Once you’ve found what causes your stress, focus on what you can control. Eliminate the realistic stressors and develop consistent de-stressing habits. Instead of watching TV or responding to texts in bed after work – take a walk, or read a book. Maintaining a healthy diet, exercising regularly, and getting enough quality sleep, will ease feelings of stress and help you relax. Also, make a conscious effort to set aside time for yourself and for relaxation. Alone time can be whatever you need it to be. Some people like doing activities such as tai chi, yoga, or meditation, but you can also treat yourself to something simple, like taking a bubble bath, listening to music, or watching a funny movie. Finally, don’t feel like you have to solve your stress on your own. Reach out to your family and friends. Whether you need help with a problem or just need someone to listen, find a person who will be there to positively reinforce and support you. If stress becomes chronic, don’t hesitate to seek the help of a therapist. Most of us spend more time at work than at home, therefore the workplace should be an environment where we feel safe and comfortable. However, because work is where a bunch of different personalities, communication styles, and worldviews gather around, things don’t always go smoothly. In fact, workplace bullying is on the rise and though statistics vary, some studies reveal that nearly half of all American workers have been affected by this problem, either as a target or as a witness to abusive behavior against a co-worker. Examples of common workplace issues include: - Poor job fit - Mental anguish - Sexual or verbal harassment - Low motivation and job dissatisfaction How a Therapist Can Help Therapy for work and career issues can help a person develop a better understanding of their wants and needs as well as approach alternative ways to handle tension while on the clock. Therapy is a neutral setting where patients can discuss their fears, worries, or stressors, and regain control of their happiness. Psychotherapy tends to work well when addressing workplace issues because talk therapy such as this can effectively treat depression and anxiety that can stem from these conflicts. A mental health professional can also teach coping skills that will help a person manage work-related stress. Challenges with life’s transitions occurs when a person has great difficulty coping with, or adjusting to, a particular source of stress, such as a major life change, loss, or event. 
Because people who are experiencing challenges with life transitions often have some of the symptoms of clinical depression, such as tearfulness, feelings of hopelessness, and loss of interest in work or activities, this is sometimes informally called “situational depression.” Unlike major depression, however, adjustment challenges don’t involve as many of the physical and emotional symptoms of clinical depression (such as changes in sleep, appetite and energy) or high levels of severity (such as suicidal thinking or behavior). Life is continually unfolding. Sometimes it meanders and other times it twists and turns abruptly. And other times it stagnates or gets stuck. If life’s transitions are troubling you, it can help to seek guidance on how to navigate your way onto a better course. Attention-deficit/hyperactivity disorder (ADHD) is a brain disorder marked by an ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development. - Inattention means a person wanders off task, lacks persistence, has difficulty sustaining focus, and is disorganized; and these problems are not due to defiance or lack of comprehension. - Hyperactivity means a person seems to move about constantly, including in situations in which it is not appropriate; or excessively fidgets, taps, or talks. In adults, it may be extreme restlessness or wearing others out with constant activity. - Impulsivity means a person makes hasty actions that occur in the moment without first thinking about them and that may have high potential for harm; or a desire for immediate rewards or inability to delay gratification. An impulsive person may be socially intrusive and excessively interrupt others or make important decisions without considering the long-term consequences. Signs and Symptoms Inattention and hyperactivity/impulsivity are the key behaviors of ADHD. Some people with ADHD only have problems with one of the behaviors, while others have both inattention and hyperactivity-impulsivity. Most children have the combined type of ADHD. In preschool, the most common ADHD symptom is hyperactivity. It is normal to have some inattention, unfocused motor activity and impulsivity, but for people with ADHD, these behaviors: - are more severe - occur more often - interfere with or reduce the quality of how they function socially, at school, or in a job
Many of us listen to music for several reasons: motivation, mood, anger, or emotional attachment. There may be many reasons why people turn to music daily, and one of them is that music is good for our health. There are mental, physical, and spiritual reasons why people listen to music, and here are some of the benefits of listening to music for your health. When people are stressed, they either look for music that makes them feel pity for themselves or the kind of music that will strengthen them. Stress can be a problem, especially when it is chronic, which may cause an imbalance leading to headaches and insomnia. This is when music plays a part in strengthening a person in any situation. Music like rhumba, dance, and other styles with a fast rhythm that make you want to dance can help one get over a stressful day. As mentioned above, music relieves stress. When stress is reduced, the mood is regulated, which may lead to the disappearance of stress. A positive mood may come from a song’s beat, melody, and lyrics, which may lead to the regulation of a bad mood. Music is a great regulator of a bad mood, and people should listen to music to get over one. The way music is used in therapy to help people is a sign that music benefits people health-wise. High blood pressure can be lowered by relaxing and listening to calm music that helps you relax and focus on your health. Music therapy may even make blood pressure medication more effective. In this way, music helps keep your heart healthy. If music can relieve stress and regulate mood, it follows that it can help someone think more clearly once that stress is relieved. Music does not only give strength to the body; it is also mental energy that helps one increase alertness and boost memory. Calm music is used in music therapy to clear and calm patients’ minds, which boosts memory. With so many genres of music available, you can choose the music that works for you so that you benefit from it health-wise. Music is one of the many things that can be used for health benefits. What happens after you put down your bet? Is it possible you have lost money because of a rogue betting site? The answer is yes. In fact, horse racing is one of the industries where rogue bookmaking has become rampant. When dealing with gambling sites, there are several things to consider. If you are looking for information on how to spot a rigged betting site or sportsbook, read our guide below. There are thousands of betting sites out there to choose from. Some even offer mobile apps; however, they don’t always guarantee fair gaming. They can also pose security risks, since they may allow third parties to access your data. This means you should only play at legitimate bookmakers who provide strong customer protection standards. As mentioned earlier, there are many ways to get cheated or ripped off when betting on horses. Below is how to spot a rogue horse betting site. A lot of online casinos and betting sites use the same software as brick-and-mortar establishments. However, these websites may not be trustworthy due to their lack of security measures. These include: Virus Protection – A virus can infect any computer used by a gambler, which could potentially erase all files and jeopardize important documents like tax returns. 
Customer Support System – Many websites do not have live chat agents available 24/7 to assist customers in an urgent situation; you can only contact them during regular business hours. Many websites that claim to be professional bookmakers have terms of service that will leave you feeling frustrated. For instance, some of these terms stipulate that you must open an account within 48 hours and deposit a $50 minimum. Others require you to deposit before placing your first bet. These kinds of unfair conditions exist everywhere, but most people would recognize such practices if they happened at physical casinos. It is therefore crucial that you familiarise yourself with the rules of a particular website before agreeing to its TOS. Many gamblers will overlook this aspect because they simply want to enjoy playing poker games. However, you should never give away your private information without knowing exactly what you are getting into. One way to protect your privacy is to sign up using a fake name so that nobody knows your real identity. Another thing you can do is create a different email address for each gambling account. Finally, avoid giving out information about yourself through social media platforms. Most websites let you place bets based on your location. However, it is very risky to gamble outside your country of residence. Bookies are legally required to impose strict regulations on international transactions. If you suspect that your money may have been sent overseas, check the transaction history on your bank statement to verify whether this is indeed true. Most banks now provide this feature. Also, avoid sending cash directly to online bookmakers. Instead, ask them to pay you via PayPal or similar services. Fashion is a way of life, and it’s not just about what you wear; it’s also about how you live your life. It’s the way you carry yourself, the way you speak, the way you act, and the way you treat others. If you want to be fashionable, then you need to know fashion. This article gives you a list of ways to keep up with a fashion-trends lifestyle. The first thing that you should do when you are trying to become more fashionable is to figure out what kind of style you like best. Do you prefer casual or dressy? Are you into trendy styles or classic ones? Once you have figured out what type of style suits you best, you will be able to find clothes that match your personal taste. You have to know which trends suit you. There are many different types of clothing in trend right now. For example, there are jeans, skirts, dresses, tops, jackets, and shoes. When you go shopping for new clothes, make sure that you look at all of these items so that you can pick up on any trends that might be going around. If you want to stay current with the latest trends, you need to keep an eye out for news stories about them. There are many websites online that provide information about the newest fashions. These sites can help you stay informed about the latest trends without having to pay attention to every single detail. When you buy new clothes, you need to get them in the right size. This means that you need to measure yourself before you start buying clothes. Make sure that you take measurements at the parts of your body where your clothes need to fit well. You’ve probably heard that saving money is something young adults should be good at. 
Even though you might be tempted to spend every penny you earn during your teenage years, you don’t want to miss out on opportunities later down the line. Let’s see below how teenagers can save money as they come of age. This is where you need to learn to prioritize spending and cut back when necessary. Make sure whatever gets bought has a long-lasting effect on you. If it doesn’t make you feel better or bring in more money, then it isn’t worth buying. This does not mean that you can’t ever splurge or treat yourself once you are an adult. It just means you have to think of the future before you buy something unnecessary. Once you know exactly how much you will have each month to spend, you’ll have no excuse for overspending. Budgeting is also useful because it shows you which expenses add value to your life so they get priority over others. While this may seem stressful now, it will pay off big time when you start earning your own income. Keeping a budget is also helpful because it makes you keep track of everything you earn and spend. The best way to save money is by getting creative about shopping. Do some research online or go to stores that often offer special promotions. For example, Target offers weekly ad circulars a week before their monthly ad goes live. Go through them and find items that you would normally purchase elsewhere, but at a lower price. These days, there are tons of ways to use coupons. When grocery store ads come out, check them carefully to figure out whether they have any coupon codes attached. Most retailers send emails with coupons, too. Also, look into services such as Groupon that give cash back on purchases. Rebate programs are common, and most manufacturers will provide them for free. In conclusion, saving money as a teenager is one of the best things you can do to prepare for adulthood. You won’t regret learning to handle finances responsibly. Are Disney movies just cartoons filled with songs and characters from fairy tales? Or does Walt Disney Studios make some good-quality films too? Some argue they only produce family-friendly stuff, while others claim their output has moved beyond the realm of pure entertainment. Still others swear they’re nothing special at all. Here are some of our favorite Disney movie releases… A classic Disney film that follows the adventures of a group of dalmatian puppies who find themselves in London during World War II. They eventually help save England from an invading Nazi force. This is one of Disney’s most popular animated features ever made. It was also nominated for two Academy Awards: Best Music Score and Best Song for “One Hundred Years Of Sunshine.” An old-school Disney cartoon about a stray mutt named Horatio who befriends a wealthy Englishman living on a farm. Together, they travel through time and space to meet other dogs like themselves who have become separated from their owners. This Disney masterpiece follows the story of Princess Aurora as she lives with her evil stepmother and stepsisters. Eventually, she falls asleep and dreams of seven enchanted princes who come to wake her up. Luckily, she wakes up before any harm happens. This movie was supposed to be released by Sony Pictures, but when that studio didn’t get the distribution rights, Disney decided to release it themselves. When it came out, it won 11 Oscars, including Best Picture. 
A live-action remake of Disney’s animated classic about Belle, the beautiful young woman who works in a castle. She’s taken prisoner by the beast after saving his life. He doesn’t turn into a prince until she agrees to marry him. Their love story teaches us all never to give up on what we want so badly. Disney movies have inspired so much, be it among the top-ranking actors of today, in clothing, or even in game development! No matter what people say about them, there’s no denying this American company will go down as the leader in animation. So if you haven’t seen the best Disney movies yet, you owe it to yourself to watch them! In this article we are going to discuss the Amazon gift card. By the end of this article you will have complete information about this card and you will also be able to use it. Do you want to buy an Amazon gift card? This is a great way to make purchases without using your credit card details. But first, you have to know where to buy them and how to use them. We will answer these questions throughout this short article. Large e-commerce platforms typically allow their users to make purchases using gift cards. For this reason, Amazon also allows you to buy Amazon gift cards to pay for your online shopping more conveniently, all without providing your credit card numbers. You can buy a gift card for yourself and then redeem it using a code. But first, we’ll tell you more about what Amazon gift cards are, where to buy them, and what denominations they come in. Amazon gift cards are physical or digital cards, which you can buy at certain merchants. After purchasing them, you must load them on the Amazon platform so that the balance becomes available and you can start paying for your orders. These cards can be used to buy any product on Amazon. Their great advantage is that you can pay for your purchases by using them instead of providing your credit card details, for example. Amazon gift cards are also still used to give credit to anyone, who can then use it to buy what they want. Amazon cards can only be used on the Amazon website of your country. To use them, you have to top up your balance through the Amazon platform. Then, you need to redeem your balance by performing the following steps: Enter Amazon from the browser of your laptop or mobile, or instead open the Amazon Shopping app, where you can also do this. In the menu, go to the “Gift cards” section. Select the “Redeem a Gift Card” option. Write the redemption code corresponding to your card in the corresponding field. You can also scan the card if you are on a mobile device. To finish, click on “Apply to balance”. When you finish all the steps, you are ready to use your Amazon card. The balance will be stored in your account and you have a maximum of 5 years to spend it. When you want to pay for a product with this amount, you only need to select that card as the payment method. We carry a wide selection of Bowie knives that are perfect for camping, hunting, and self-defense. We also offer a variety of other services such as animal butchering and skinning. So if you’re looking for a versatile knife that can handle anything, Yoyoknives is the place to go. The Best Bowie Knife has been used in many movies, including The Iron Maidens and Jim Bowie. It is most well known as a weapon, but it also comes in handy when you need to cut things at home or during camping trips! 
You can find endless versions of these knives with different designs and patterns – some even smaller than your average kitchen blade (perfect if size matters). Latest collection of Bowie Knives in 2022! If you are looking for a Bowie Knife that is different from the rest, then you need to check out our collection. We have a wide variety of knives that are perfect for any situation. So whether you’re looking for a knife to take camping or one to add to your collection, we have what you’re looking for. How to Choose a Bowie Knife? When it comes to choosing a Bowie Knife, there are a few things you need to keep in mind. First, you need to decide what you will be using the knife for. This will help you narrow down your choices and find the perfect knife for your needs. Size is another important factor to consider. Bowie knives come in a variety of sizes, so you need to choose one that is comfortable for you to hold and use. Finally, you need to decide what material you want your knife to be made from. The most common materials are stainless steel and carbon steel. Each has its own advantages and disadvantages, so it’s important to choose the right one for you. The handle is one of the most important parts of the knife. It needs to be comfortable to hold and should not slip in your hand when wet. A variety of materials are used for knife handles, including wood, metal, and plastic. The blade is the other most important part of the knife. It needs to be made from a strong material that can withstand a lot of wear and tear. The two most popular materials for knife blades are stainless steel and carbon steel. Each has its own advantages and disadvantages, so it’s important to choose the right one for you. Why Choose Yoyoknives? Yoyoknives is the leading and most trusted provider of high-quality Bowie knives. We have a wide selection of knives to choose from, so you’re sure to find the perfect one for your needs. We also offer a variety of other services, such as animal butchering and skinning. So if you’re looking for a versatile knife that can handle anything, Yoyoknives is the place to go. With Mother’s Day fast approaching, it’s time to start thinking seriously about what you can do or give to your mom to show her just how much you care about and appreciate her. While moms always know that we love them, giving them an extra-special Mother’s Day can be a great way to put your love into action. But rather than giving her a coupon book or basic flowers, try giving your mom a gift she’ll really love this year. To help give you some ideas, here are three great gifts you could give your mom this Mother’s Day. Schedule A Family Photoshoot Family is one of the most important things to every mom. And while spending time with your family is great, being able to document these good times can help your mom remember them for years to come. If the members of your family don’t live close together but will be coming together for a holiday in the near future, consider gifting your mom a family photoshoot when everyone is together. 
Give her the name of the photographer you plan to use so that she can have something to look forward to about your next family get-together. And if everyone does live close and gets together often, surprising your mom with a group photoshoot will be something she’ll never forget. Surprise Her With A Day Of Relaxation Regardless of how old your mom is, be it still raising kids at home or getting ready to move into an assisted living community, having a day of relaxation and pampering is something every mom loves. Especially for moms who have stressful jobs or spend the majority of their time taking care of other people, having a full day where she can unwind, destress, and feel relaxed can be a great way to show her that you’ve noticed how much she does for you and that you want to return the favor. Tackle A Home Improvement Project Most moms have a to-do list that’s miles long. So if you’re ready to get your sweat on, you may want to take on one of the home improvement projects that she’s been meaning to get to for months. If you choose to go this route, make sure you do your due diligence and learn how to do the job correctly and in the style she wants. This can be a great way to give your mom a Mother’s Day gift that keeps on giving. If you want to show your mom just how special she is to you by the gift you give her this Mother’s Day, consider using some of the options mentioned above to help you pick the perfect present for her. If you’re getting ready to sell your home, there are a few things that you’ll want to have in place to ensure that you’re able to get your home sold quickly and for the biggest profit possible. One important thing that can help you to reach these goals more easily is to show your home in the greatest light when taking photos of the property. Taking photos of a home is a little different than taking photos of people or other objects. So to ensure that you do this the right way, here are three tips for taking photos of your home before putting it on the market. Start With A Sunny Day The first thing you need to know about taking great photos of your home is that you need to choose the right day to take the photos. Ideally, you should take all of the photos of your home on the sunniest day you can find within your timeframe. All homes look better when they’re shown in natural light, and a sunny day will make it so everything about your home looks lighter and brighter in your photographs. So if you live in a place that has frequent sunny days, make sure you choose a day like this to have photos taken rather than taking them on a day that’s more overcast or rainy. Position Yourself In The Corner When you’re ready to start taking the actual photos, getting the right shot of each room in your home will likely happen by standing in the corner of the space. From the corner, you’ll be able to capture as much of the room as you possibly can. Additionally, getting a photo from this angle can help make the room appear to be bigger. And if you’ve just done some renovations on your home, make sure you stand in the corner of the room that will best showcase this investment. Consider The Angles And Heights Another thing you’ll want to think about when taking photos of your home as you attempt to sell it is the angle and height of the photos you take. To showcase the greatest amount of each room that you’re taking a photo of, you should try to use a wide-angle lens. This type of lens can help people feel the actual size of the room. 
And when you’re taking any photos, try to use a tripod for the camera and set it to about chest height, which will help people see the most real-life perspective of the space. If you’re going to be taking photos of your home before you put it on the market, consider using the tips mentioned above to help you get the best possible photos to show your home in the best possible light. It can be challenging for most people to live alone, especially when they always have someone around in the home with them. Some may worry about feeling lonely, coming home to an empty house at the end of the day. Others worry about the responsibilities of being independent and whether they can manage independently. And there are those excited to go on their own and enjoy their freedom, especially young adults who have reached the legal age to spread their wings and start a new life. The one thing they can all expect to do is adjust, which will ultimately be worth it. There are valuable lessons and excellent benefits to living alone. You get to make decisions for yourself and handle different situations the way you see fit. You may feel lonely initially, but you will soon discover that you are comfortable with yourself. Happiness, as they say, is a decision that is yours to make. Living alone makes you a stronger person who can fend for yourself. Eventually, you make wiser decisions for your benefit. For example, you can choose to visit youngautomotive.com and purchase a quality used car, saving you money, among its other advantages. Actions like this that you do on your own also give you that sense of freedom as you take control of your life. Below are some excellent reasons why living alone may be good for you. Living alone means you have your entire home to yourself. You do not need to worry about having to hide in your bedroom when you need privacy. Every area of your home is what you make it to be, and you can be yourself, live the way you want, and decorate the way you please. You can arrange your furniture in any manner and use any room for your home office or workout area. You become more creative in the process and start to appreciate living alone. Many people who live alone agree that one of the advantages of living alone is making their own decisions. For instance, you have the option not to interact with other people if you don’t feel like being social. You can choose to have guests over or meet up with friends when you feel like hanging out. You can decide to laze around on your couch the whole day and not worry about anybody else feeling uncomfortable. When you live alone, you can do whatever you want inside your home without distractions. If you decide to work from home, you have the privacy to focus on your tasks and adjust your schedules according to what is comfortable for you. If you are tired and want to settle in early or have to work late on something, there are no disruptions to keep you from doing what you have to do. Independence makes people more resilient and capable of taking care of themselves. When loneliness creeps in, you can always hang out with friends or family members and go back home afterward to enjoy your privacy.
Some highlights from SANDRP’s latest Publication on Riverine Fisheries of the Ganga The government is discussing the Ganga not only as ‘Ganga Mata’, but also as a ‘navigational corridor’, with plans to build barrages every 100 kilometers with funding from the World Bank. At her origin, hundreds of hydropower dams are changing the ecological character of the Ganga. However, as a rich ecosystem, the Ganga also supports about 10-13 million riverine fisherfolk and about 300 freshwater fish species! Riverine fisheries have been a blind spot in independent India, despite the fact that they provide nutritional and livelihood security to millions of people. In the post-independence water management discourse, the river has been equated with water, and water with irrigation, water supply, and hydropower. The profound impacts of irrigation, water supply and hydropower dams on sectors like riverine fisheries have been entirely ignored. Nachiket Kelkar looks at the status of riverine fisheries and fisher communities in the Gangetic Basin of India and highlights the devastating impacts of dams, barrages and water abstractions on both. Nachiket’s study on Gangetic fisheries is based on long-term engagement with fisher communities in the basin as well as robust scientific studies. SANDRP has published this work in the form of a Primer which will soon be available online. What follows are some glimpses from the Primer. Please write to us if you are interested in receiving a full soft copy of the Primer. Riverine fisheries of the Gangetic basin support one of the largest fishing populations of the world. However, the basin’s fish resources are rapidly declining due to large dams, barrages and hydropower projects, severely altered river flows, fragmentation of hydrological connectivity between rivers and wetlands, alarming levels of pollution, riverfront encroachment, rampant sand mining and unregulated overexploitation of fish resources. Across their range, the fisheries show indications of economic unviability and ecological collapse, with violent social conflicts as an outcome of the contest over scarce and declining resources as well as politics and access. A major factor behind the serious fisheries-related problems is severe alteration of river flow volume and seasonal dynamics by large dams, barrages and hydropower projects. The state of river fisheries directly indicates the declining biophysical, ecological and social integrity of the river basin. The existing in-river fisheries contribute merely about 10% of the overall inland fish production. Even this production is highly unsustainable today and has all the indicators of serious levels of overfishing. For instance, river fisheries in Bihar now even glean small-sized fish fry for markets in northern West Bengal (Siliguri) and Assam, where eating small fish is a delicacy (F.pers.comm). To understand the situation in the Gangetic Basin clearly, the author conducted a detailed, large-scale interview survey in 2012 of 372 fishers in 59 fisher groups spread over 17 rivers in 5 north Indian states. The survey objective was to document perceptions of traditional fishing communities about issues and problems in fishing in the Gangetic basin. Of the respondents, c. 90% singled out “large dams and poor river flows” as the main causes of a near-total decline in fisheries and fish resources over the past 4 decades. About 90% of respondents mentioned low water availability and the stoppage of fish migratory routes by large dams as the main causes of fish declines. 
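The percentages quoted above come from categorical survey responses in which a single respondent could name more than one cause. Purely as an illustration of how such responses might be tabulated, here is a minimal, hypothetical Python sketch; the file name, column name and category labels are assumptions made for the example and are not part of the Primer or the original survey data.

```python
# Hypothetical sketch: summarizing categorical survey responses into percentages.
# The CSV layout, file name and labels are illustrative assumptions only.
import csv
from collections import Counter

def summarize_causes(path):
    """Return the share of respondents naming each perceived cause of fish decline."""
    counts = Counter()
    total = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            # A respondent may list several causes, separated by semicolons,
            # so the percentages can legitimately sum to more than 100%.
            for cause in row["perceived_causes"].split(";"):
                counts[cause.strip()] += 1
    return {cause: 100.0 * n / total for cause, n in counts.items()}, total

if __name__ == "__main__":
    shares, n = summarize_causes("survey_responses.csv")
    print(f"{n} respondents")
    for cause, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
        print(f"{cause}: {pct:.0f}% of respondents")
```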
Almost 45% (from eastern and northern UP, and Bihar) singled out the Farakka barrage as the main problem. The Canvas of Gangetic River Fisheries The Ganga River, from her headwaters to the delta, along with hundreds of her tributaries, drains an area of approximately 0.9–1 million km² across northern and eastern India, flowing through 10 states in India and also through Nepal and Bangladesh. These rivers form one of the largest alluvial mega-fan regions of the world, and deliver huge quantities of sediment from the Himalayas to the northern Indian plains and to the Bay of Bengal in the Indian Ocean. The Gangetic floodplains shape not only landforms but also complex human cultures that attempt to stabilize themselves and adapt to the constantly changing riverine forces. Biodiversity, hydrology, geomorphology and social dynamics influence each other through constant interaction and multiple feedback mechanisms. The dynamic balance of these factors triggers opportunities for spawning, reproduction, population dynamics and viability, and migration and movement of freshwater species, including fishes, river dolphins, otters, crocodilians, turtles and invertebrates, as well as terrestrial biodiversity. In floodplain rivers, as floodwaters recede post-monsoon, fishers record the highest catches in October and November, as large post-breeding and migrating adult fishes (e.g. major carps, clupeids, mullet) become catchable. Winters, from December to early February, generally record low catches because many fish show slowed behavior and limited movement. But in spring, fisheries of minor carps and catfishes record high production. As water levels fall, fishes become more concentrated in specific river habitats like deep pools, where they are easy to fish. Summer fish catch biomass is also reasonably good due to the overall low water availability. In the Gangetic basin, fisheries are practiced in a range of diverse freshwater habitats including natural and man-made, lentic (stagnant water) and lotic (flowing water) ecosystems. Natural freshwater areas include large floodplain rivers, non-perennial rivers, perennial and seasonal streams, cold-water rivers and streams, glacial lakes, estuaries, tidal rivers, floodplain wetlands, oxbow lakes, grassland swamps and marshes. Man-made habitats include dug or built-up wetlands, ponds, man-made reservoirs, dam reservoirs and canals. To the fisher, flow velocity, depth profile, substrate type, vegetation structure, current patterns and habitat stability are key indicators for fishing effort allocation and logistical decisions. Fish Diversity in the Gangetic Basin The overall species pool of the Gangetic fish assemblage is estimated at around 300 species (53+ families, 150+ genera; 250 species). The floodplain fisheries are dominated by major and minor carps (Cyprinidae), catfishes (Siluriformes: 6-7 families), Clupeidae, Notopteridae and a mix of many other families. Major carps, the clupeid fish Hilsa (Tenualosa ilisha) and some large catfishes form the most valued catches across most parts of the Gangetic floodplains. Major carps, the most preferred freshwater food fishes, include species like Catla, Rohu, Mrigal and Mahseer, which exhibit potamodromous (along freshwater upstream-downstream gradients) migration. Though these fishes have suffered serious declines due to overfishing, pollution and dams, they have been mass-produced through artificial rearing in pond aquaculture. Farmed large carps form the major proportion of fish eaten anywhere in India today. 
In wild fisheries, catfishes come lower in the preference order, but with the decline of carps, medium and small catfishes have become the main fishing targets. Further, as most catfishes are sedentary and do not show long-distance movements, the fisheries have completely switched from carp- to catfish-targeting fisheries. Other deep-bodied, highly sought after fishes include the Chitala and Notopterus, or the featherfishes, and mullet. The estuarine fishery in the Hooghly and Sunderbans tidal rivers in West Bengal is dominated by shellfish (prawns, mud crabs and shrimp), Clupeidae and Engraulidae, Sciaenidae, catfishes of the Ariidae and a far more diverse set of families compared to truly inland fisheries. Other important components of the commercial fisheries include 5-6 species of shellfishes (mainly prawn and shrimp). Coldwater fisheries specialize on large-bodied, rapids-loving potamodromous migrant fishes such as Mahseer and Snow Trout. These fishes are of high commercial importance and are in high demand by professional sport fishers and anglers, apart from being highly prized as food locally. Mahseer in particular, have recently led to the opening of new markets of luxury wildlife tourism that is based on angling and recreation in the Western Himalaya. Dam reservoir fisheries are almost entirely based on managed stocking and breeding of commercial fishes in hatcheries, of major carps Catla, Rohu and Mrigal, catfishes like Pangasiodon, and minor catfishes. The state of river fisheries in the Gangetic basin has been affected over the last few decades by several threats described in the next section. Dams and Riverine Fisheries in India Fisheries across India have been severely affected by dams, flow regulation and associated human impacts, which have substantially altered ecological requirements of fisheries and biodiversity together. If one clinically investigated the fisheries’ decline, they would find it to coincide with the period of maximum dam building (1970s-80s) in India. Most commercially valuable fish species, especially major carps and Hilsa, have shown population-level collapse and even commercial extinction over large inland waters. Reduction in harvested fish size-class distributions, a classical indicator of overexploitation by fisheries, points to poor fish recruitment and adult survival, which may be further brought down by flow regulation by dams. Dams have acted as the major factor of disruption by blocking migratory routes of upriver or estuarine spawning fishes such as Hilsa and Anguilla eels. Dams have also caused loss of genetic connectivity between fish populations, most notably seen in major carp stocks. Erratic water releases, nutrient and sediment trapping behind dams and barrages, failure of breeding in carp and catfish species due to siltation, erosion, poor water availability, modified thermal regimes required for breeding (increase in temperatures due to low river depth/flow), and exceptional levels of hazardous pollution (again, magnified due to the poor flows reducing dilution capacity of river water), are other fallouts that adversely affect fisheries. The fact that there is just not enough water in the river must form the bottom line of any causal investigation of riverine fisheries. Lack of appropriate policy measures and pollution receive dominant mention as threats to fisheries by government research agencies, but they are mere outcomes of much larger shifted baselines because of dams. 
Dams, barrages and hydropower projects through flow regulation have increased uncertainty about fishing and driven fishing to desperate levels: fishers often resort to destructive practices, or even worse, exit the fishery altogether. Such exit does not solve the problem of existing fisher folk: water is critical to sustaining not just fisheries but the river and the people dependent on it. Detailed understanding of the lives of fishing communities of the Ganges is therefore critical. Fisher communities in Ganga: Around 10-13 million people in the Gangetic floodplains are estimated to be dependent on fish resources for their livelihoods, directly or indirectly. However, accurate estimates of active traditional and non-traditional fisher populations are still wanting. It is important for any discussion on fishing communities to clearly separate traditional fishing communities from ‘non-traditional fishers’, who may be practically from any other local community and with the possession of other livelihood options, but also opportunistic fishing, due to unrestricted access to imported nets and gear available in markets to anyone. Traditional fishing communities were always the craftsmen of their own nets and gear, and also possess remarkable ecological knowledge about rivers, fish and biodiversity, their breeding biology, ecology, seasonality, and distribution. Of course, with the degradation of fisheries throughout the Gangetic plains, the traditional knowledge and practices of fishing are eroding fast. Hence such knowledge needs to be documented well, especially from old fishers with whom it still persists, to identify historical baselines of river fisheries with a different, past ecological reference (pers.obs.; F.pers.comm). Traditional fishing communities today form a highly marginalized, politically unorganized and socio-economically impoverished people. Caste discriminations and political history form the chief reasons for their poverty and subjugation over centuries of fishworking. But the present condition of rivers does not seem to offer hope to any improvement in their economic position unless and until there is collective voicing of their concerns, especially against large-scale water engineering projects that threaten their livelihoods. Their livelihoods, one may argue, confined them to the river’s water, albeit the fact that they never owned the waters legally. However, they always have stated cultural claims of temporally confined territory, following their foraging preferences and site usage. But depending on the nature of the river’s hydrological dynamics, there may be variable maintenance of fixed ‘territories’ by fishers adopting a roving mode of fishing, and neither legal nor cultural claims can be reconciled to a level that the conflicting parties can reach mutually. With regards to their economic viability and status, a large proportion of the traditional fishworkers fall Below the Poverty Line (BPL), and are recorded as Economically Backward Castes, and also have been assigned the status of Scheduled Castes. Annual incomes from fishing alone, according to the few estimates available, range from INR 25,000/- to INR 50,000/- (pers.obs., F.pers.comm.). Large dams, flow regulation and Gangetic basin fisheries : The singular key problem of fisheries today is that it lacks water in the dry-season, because of flow regulation by dams, barrages and hydropower projects. More water flow releases are needed for the protection of riverine fisheries in the Gangetic basin. 
Widespread river habitat degradation, industrial, agricultural and domestic pollution, altered flows and modification of sediment and nutrient fluxes by dam projects, and resource overexploitation (by fisheries, agriculture or industry) have had major consequences for the unique biodiversity and fisheries of floodplain rivers across Asia. Obstruction and fragmentation of river flow, habitat destruction, accelerated erosion and siltation, long-distance water diversions (involving huge transmission losses and waste) and poor flow releases are the major direct threats of dam-canal systems in the Gangetic plains. Flow volume problems: Lower-than-minimum flows have been consistently recorded across the Ganga, Yamuna, Chambal, Kosi, Sone, Ken, Betwa, Ghaghra and Gandak rivers. Along with these large rivers, almost all others (Rapti, Baghmati, Mahananda, Teesta, Kamla, Burhi Gandak, Punpun, Gomti and others) have been highly regulated. The reduction of freshwater discharge reaching the Sunderbans because of the Farakka barrage has led to a high degree of saline ingress throughout the estuary, causing die-offs over large tracts of mangroves and aquatic vegetation, as well as severe losses to the upstream fishery. Downstream, fishing practices suited to brackish and fresh waters now have to adapt to saline intrusion into the estuary’s waters. Globally, fragmentation and flow regulation have caused the most severe impacts through drastic alterations to riverine biota and ecology. Low flows and fragmented connectivity of river channels lead inevitably to fish population declines and breeding failure. Over time, dams have probably led to genetic isolation of fish populations as well as river dolphin and crocodile populations, destruction of fish breeding habitats and spawning triggers, and loss of valuable wild fish germplasm. These losses are so large in their ecological value and opportunity costs that they cannot be recovered with artificial fish culture techniques or hatcheries. Aggravation of pollution effects: The Ganges basin is one of the most polluted large river basins in Asia, especially with regard to domestic sewage and agricultural runoff. Poor flows reduce the dilution and self-purification capacity of river water, which would otherwise lower the concentration of pollutants and their local impacts on fishes. Agricultural fertilizers (organophosphates, organochlorines, nitrates etc.), heavy metal pollution from industrial effluents, thermal power plants, oil refineries, distilleries and tanneries, and nitrogen-rich sewage, waste-water and non-biodegradable substances such as plastics, mercury, radioactive compounds and hospital wastes can cause fish kills or, even worse, lead to high levels of toxicity in tissues. Pollution problems are especially acute in highly regulated river reaches, especially in the Yamuna around Delhi and down to Panchnada in UP, the Gomti at Lucknow, and the Ganga at Kanpur, Allahabad, Varanasi, Patna, Barauni, Bhagalpur and Farakka. Siltation in dam reservoirs and barrage gates: Excessive siltation in the Ghaghra barrage has led, as per local fishers, to breeding failure in Labeo angra (Ghewri), a preferred spring-fisheries target in the region. The fishers claimed that over the past 5 years they have not captured a single fish with eggs inside it, and also added that catches have plummeted heavily (F.pers.comm). Siltation of gravel/sediment in reservoir or storage zones is a problem of huge magnitude for fisheries, especially through breeding failure. 
Accumulated silt in reservoirs is estimated to be so deep (tens of meters) that it cannot easily be flushed out, and it leads to nearly 60-90% reductions in the sediment fluxes of rivers in both monsoon and non-monsoon seasons. Siltation also obstructs flow releases through barrage gates. In the Farakka barrage, sediment accumulation is breaking gates every year, adding to maintenance costs.

Habitat destruction and alteration of erosion-deposition dynamics: Soil erosion caused by erratic and sudden releases before floods can alter and destroy fish breeding habitats and depress stocks. Changes in depth and flow velocity mean that fish no longer receive the natural physiological cues for movement and spawning that are otherwise provided by variability in discharge. Flow alteration also changes hydrological connectivity and sediment transport between the main channel, wetlands and confluence channels during flooding. As a result, these productive breeding habitats often become unavailable to catfishes and carps. Together, these factors become a problem for pre-settlement fish juveniles and recruits, which move into the main channels.

Threats to cold-water and foothills fisheries from hydropower dams: Overall, despite being projected as low-impact, hydropower projects can have serious large-scale effects on mountain streams as well as on rivers downstream. Globally, despite mitigation measures in hydropower construction, outcomes for fish migration and development have largely been deemed failures. In India, hydropower projects, especially run-of-river projects at higher altitudes, often have disastrous effects on natural thermal regimes, cause sediment blockages and perturb natural flow variability at diurnal timescales through releases varying across several orders of magnitude. These changes severely affect not just breeding and migration in the higher-altitude cold-water fisheries of snow trout and Mahseer in Himachal, Sikkim and Uttarakhand, but also the downstream fisheries of catfish and carps in the foothills and plains, owing to altered flows. Their cumulative downstream impacts also make it risky to use the river for fishing without being exposed to sudden flow releases every day. Globally, through extreme perturbation of natural flow dynamics, dams have homogenized and altered many crucial river-floodplain processes, with disastrous impacts on biodiversity and fisheries. There is an urgent need to ensure ecologically necessary, adequate and natural flow regimes in all rivers of the Gangetic basin. The current water scarcity is so severe that projects such as river interlinking, apart from their ridiculous proposed costs, are simply impossible to conceive of, water itself being the limitation. There is no doubt that further water developments will prove disastrous for a whole section of people and their livelihoods, and they must be scrapped. Rivers that need urgent attention in this respect are the Chambal, Yamuna, Ken, Betwa, Alaknanda, Bhagirathi, Mandakini, Sone, Damodar, the Ganges at Farakka and Allahabad, Sharada, Ghaghra and all other rivers, especially in Uttar Pradesh, Uttarakhand, Madhya Pradesh and Bihar. Run-of-river hydropower projects, flow diversions and links, pumped irrigation, embankments, agricultural intensification, groundwater depletion and sand mining are highly destructive threats that will affect not just fisheries but the whole social fabric of river users in the near future.
Despite the demonstrated folly of not allowing rivers to flow from headwaters to estuaries and deltas, engineers, technocrats and politicians speak of "rivers flowing wastefully into the sea". The statement implies that the thousands of species and millions of fisher livelihoods that need flowing water in rivers are of no value to state policy on water resource development. Such statements ignore important societal needs and are therefore plainly irresponsible. No post-dam-construction compensation schemes exist for fishers, who may lose their entire livelihood because of flow regulation and the loss of hydrological connectivity due to dams. Downstream fisher populations should ideally be compensated for lost fishing catch and livelihood opportunity, but in general there has been scant attention to these communities' livelihoods (F. pers. comm.). Downstream water allocations decided through on-ground consultations with fisher communities are urgently needed (F. pers. comm.). In India, water resources development is so strongly irrigation-focused (and now also strongly focused on industry and hydropower) that, in comparison, riverine fisheries are not even acknowledged as legitimate and in need of conservation and livelihood protection. These biases mean that only pond aquaculture receives any attention. If river conservation and development groups can actively work with fishing communities to develop an informed and aware constituency or interest group, fishers will gain a political voice in negotiations over water availability in river basins. Fisheries incur colossal losses every season due to irregularities in dam operations, and supply always falls severely short of demand. But now, through the boom of artificially managed pond aquaculture and wetland fishing, especially in Andhra Pradesh and West Bengal, the nature of supply itself has radically changed. This boom has contributed to India becoming one of the largest producers of inland freshwater fish in the world. But such a ranking hides many miserable facts about river degradation. Although net production shows increases, the collapse of river fisheries, which still support millions of poor people who have no access to aquaculture, is totally ignored under such swamping. Farmed fish from hatcheries can barely replace riverine fisheries, despite having cornered the attention of fisheries development. The failure of river fisheries has led to large-scale outmigration for labour from the Indo-Gangetic plains (F. pers. comm.), and it may be a significant contributor to the rising exodus of labour-related migration from the Gangetic plains. Today, fisher folk from Uttar Pradesh, Bihar and Bengal provide a large proportion (20-40%) of the construction and manual labour force across India (F. pers. comm.). Others who stay behind take up menial jobs as rickshaw-pullers or servants (F. pers. comm.; pers. obs.). Some are forced to turn to crime to be able to feed themselves and their families. These factors can weaken the social resilience of production systems and create poverty, disparity and community breakdown. It has been argued that ethnic conflicts between local Indian populations and Bangladeshi refugees who have immigrated illegally are linked to poor water releases from the Farakka barrage in West Bengal to the downstream floodplain reaches in Bangladesh.
Mitigation measures like fish ladders and hatcheries: There is little existing research on the construction design, functioning and efficiency of fish ladders in tropical and subtropical large floodplain rivers. Across the tropics, monitoring studies of fish ladders do not show positive results. A handful of barrages in India have constructed fish ladders, but owing to numerous problems they have largely been a failure. These problems are all related to the extremely low discharge rates from the dams: there is simply not enough water volume allocated for migrating fishes, which therefore cannot access the ladders and fish lifts. Other problems are linked to siltation in reservoirs and the turbulence of flows near the fish passages. For instance, the Farakka fish lifts do not seem to have been of any help, owing to the extremely low outflow of the Ganga River from the barrage, and the commercial extinction of the Hilsa fishery both upstream and downstream is clear, with an estimated 99.9% decline. Fish passes constructed at barrages on the Yamuna River (Hathnikund barrage) and at the Ganga barrage at Haridwar have been monitored by CIFRI, and the results suggest that they have had very low success for the migration of cold-water species like the Golden Mahseer Tor putitora. Similar structures on the Beas River and the Mahanadi River (Salandi dam, Orissa) have been found to be ineffective in buffering the adverse impacts on fisheries production in these rivers. India has predominantly followed reservoir hatcheries development, so consideration of effective fish ladders has always been a low priority. However, as we have seen, hatcheries themselves bring about several problems for native fish populations and are not an ecologically viable solution, despite being economically profitable to certain interests. Given the poor success of existing fish passages, it is important to consider, in existing and proposed dams, modern designs suited to the ecology of our own fishes. A whole body of interdisciplinary research, spanning engineering and ecology, is needed to address the significant gaps in our understanding of making fish passages work. We need to monitor existing examples well to assess the reasons for their failure. Again, simply allowing higher dry-season flows and timely, adequate releases in the river could be a far more effective strategy for fisheries improvement than other intensive, technology-driven practices to enhance fisheries production (F. pers. comm.).

River restoration and alternative livelihoods: Given the current state of riverine fisheries, there is an urgent need to consider possibilities for large-scale ecological restoration of rivers by modifying dam operations and improving ecological flows. Alongside restoration, it is crucial to consider alternative livelihoods for fishers that respect their traditional knowledge and provide them with clearly defined user rights and responsibilities over the management of wild-caught or cultured fish resources. Ecological restoration of all major and minor rivers in India needs to be undertaken urgently, to ensure ecologically adequate, naturally timed flow releases, consistent dry-season flow regimes, hydro-geomorphological habitat maintenance, flood maintenance and reduction in pollution. Dam re-operation to ensure adequate flows and variability in river discharge remains a neglected aspect of river management in most regions today.
Flow restoration can lead to improved health, numbers and availability of native commercial carps and a preponderance of larger fish sizes through improved juvenile recruitment, along with other benefits to surface hydrology and local groundwater availability. Large-scale scientific research and monitoring programs must be instituted to study the response of inland wild-capture fisheries and to take further steps to mitigate local threats. Restoration also needs to involve stringent restrictions on the release of untreated domestic and industrial effluent, especially in urban belts such as Kanpur, the National Capital Region of Delhi, Allahabad-Varanasi, Mathura-Agra and Lucknow in Uttar Pradesh; Patna and Barauni in Bihar; and the Durgapur and Kolkata regions in West Bengal. Strict restrictions are needed on sand-mining, riverfront encroachment and embankment construction, especially on the Chambal, Ghaghra, Gandak, Baghmati, Rapti and Kosi Rivers. In this regard, judicial interventions, such as the recent closures of river-bed sand-mining following a review by the National Green Tribunal, are critical in reducing wanton and unregulated destruction of riverfronts, when implemented effectively. In terms of reducing the most direct impacts, there is a need to regulate fishing pressure and completely curb destructive fishing practices such as dynamiting, the use of mosquito nets, beach seines and gillnets below allowable mesh sizes, poisoning, and the use of long-lines. Traditional fishers must be involved directly with the government monitoring agencies in monitoring and banning the use of destructive practices. Finally, the quest to sustain fisheries in the Ganga River basin over the long term will require rethinking current dominant paradigms so as to move towards ecological restoration of rivers and their biodiversity, as well as socially just, rights-based and equitable socio-political restoration of traditional fisher communities and fisheries management systems.

Ashoka Trust for Research in Ecology and the Environment, Srirampura Royal Enclave, Jakkur, Bangalore 560064, India. (The views expressed are those of the author and not of the institution where the author currently works.) Member, IUCN Cetacean Specialist Group, IUCN, Gland, Switzerland.

Twelve-point recommendations from traditional fisher communities for sustaining riverine fisheries and livelihoods in the Gangetic basin:

1. Water: Provide enough water and adequate natural flows in all rivers. Allow fish movements upriver, currently blocked by large dams and barrages. STOP new dams and mindless, high-cost, destructive and unsustainable engineering projects such as river interlinking.

2. Ban on destructive fishing practices: Curb destructive fishing practices, especially mosquito-netting, poisoning, dynamite-fishing, trawling and beach-seine netting, everywhere.

3. Poverty alleviation and social security: Fishers are in need of government dole or loans, technical know-how, permits and identity cards, housing, education and displacement packages. It is alleged that these benefits hardly reach them, although allocations of funds reach farmers easily.
4. Fishers need government security from criminals, mafia, anti-social elements and pirates who harass them and grab their fish catch.

5. Define fisher rights and responsibilities: Clearly define fishing use and access rights across all riverscapes, and provide clear guidelines on multi-objective management of fisheries amidst other economic activities.

6. Reduce pollution and mass fish-kills: There is an urgent need to reduce the presently excessive river pollution, especially industrial but also domestic wastes.

7. Alternative livelihoods: River fisheries are currently in a state of ecosystem-level decline or collapse. Trash fishes have become the most common catch, replacing many commercially viable carps and catfishes. People require alternative livelihoods in situ to check problems related to migration and the exodus to work as construction labourers or rickshaw-pullers. Community-based, cooperative pond carp-culture fisheries seem highly promising. Other alternative livelihoods include working with river management authorities, conservation agencies, ecotourism, agriculture, etc.

8. Fishery co-operatives: Focus on community-based management of river fisheries and help it develop in an ecologically friendly and sustainable manner. Replace the systems of private contracts and free-for-all fishing with power-equitable, socially dignified resource-sharing arrangements.

9. Ensure compliance of fishers with biodiversity conservation and monitoring: This needs to be ensured through continued monitoring of fishing activity and behaviour, including by-catch or hunting of species. It will help safeguard endangered wild species such as gharial, turtles, river dolphins and birds. It can also help check the spread of exotic food fishes that are rapidly invading our rivers (the worst examples are Tilapia species, Chinese and Common Carps and, more recently, the Red-bellied Piranha).

10. Use of the Food Security Act and rural labour programs: These can provide daily incomes by which fisheries losses could be offset, while also providing a solid community-level incentive to regulate and monitor fishing.

11. Restoration of native riverine fish communities: Very important given the huge decline in native carp species of high commercial value. Fisheries need to be protected not only by revival of stocks and facilitation of better fish recruitment, but also by protecting fish breeding habitats from destruction.

12. Adaptive management of water tenure in fishing areas: Owing to the natural uncertainty linked to flow regimes and channel course changes, new, flexible systems of tenure in fisheries are required. Such systems would fit in well with providing a clear definition of fishing rights in any riverine stretch.
Prescriptive Analysis. Statistics is a special branch of mathematics that deals with the collection of, and calculation over, numerical data. The data collection methods of quantitative research are more structured than those of qualitative research.

Paired t-test. Measures: dependent variable (continuous); independent variable (2 points in time, or 2 conditions with the same group). When to use: to compare the means of a single group at 2 points in time (pre-test/post-test). Assumptions: paired differences should be normally distributed (check with a histogram). Interpretation: if the p-value is below the chosen significance level, the means at the two time points differ.

In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. We emphasize that these are general guidelines and should not be construed as hard and fast rules. Here, you can use descriptive statistics tools to summarize the data. Statistics (or statistical analysis) is the process of collecting and analyzing data to identify patterns and trends. This technique is useful for collecting the interpretations of research, developing statistical models, and planning surveys and studies. Pay particular attention to the levels of measurement (categorical or metric) associated with variables in different types of statistical tests.

The formula for the independent-samples (two-sample) t-statistic is t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2), where x̄1 is the mean of sample 1, x̄2 is the mean of sample 2, s1² and s2² are the sample variances, n1 is the size of sample 1, and n2 is the size of sample 2.

Answer a handful of multiple-choice questions to see which statistical method is best for your data. A p-value indicates the probability that a difference found between interventions is due to chance rather than a true difference. Statistical validity is one of those things that is vitally important in conducting and consuming social science research, but less than riveting to learn about. Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. The intent is to determine whether there is enough evidence to "reject" a conjecture or hypothesis about the process. Note that not all numbers constitute quantitative data (e.g. a tax file number). Nonparametric statistical tests may be used on continuous data sets. Quantitative data is data which can be expressed numerically to indicate a quantity, amount, or measurement. Business intelligence. Here's an introduction to the most popular types of statistical analysis methods for surveys and how they work. Data presentation can also help you determine the best way to present the data based on its arrangement. Data analysis. Data presentation. Here are some of the fields where statistics play an important role: market research, data collection methods, and analysis. Types of statistics: descriptive statistics deals with enumeration, organization and graphical representation of the data. Researchers first make a null and alternative hypothesis regarding the nature of the effect (direction, magnitude, and variance). Design. What to use if assumptions are not met: if normality is violated, use the Friedman test; if sphericity is violated, use the Greenhouse-Geisser correction. For example, nQuery has a vast list of statistical procedures to calculate sample size; in fact, over 1000 sample size scenarios are covered. Types of statistical tests.
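To make the two-sample formula above concrete, here is a minimal sketch (not part of the original article) that computes the t-statistic directly and checks it against scipy. It assumes numpy and scipy are installed; the sample values are made up purely for illustration.

```python
# Minimal sketch: two-sample (Welch) t-statistic computed from the formula
# t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2), then cross-checked with scipy.
# The data below are invented for illustration only.
import numpy as np
from scipy import stats

sample1 = np.array([4.1, 5.3, 4.8, 5.9, 5.1, 4.6])
sample2 = np.array([5.8, 6.2, 5.5, 6.9, 6.1, 6.4])

x1, x2 = sample1.mean(), sample2.mean()              # sample means
v1, v2 = sample1.var(ddof=1), sample2.var(ddof=1)    # sample variances
n1, n2 = len(sample1), len(sample2)                  # sample sizes

t_manual = (x1 - x2) / np.sqrt(v1 / n1 + v2 / n2)

# Welch's t-test (no equal-variance assumption) uses the same statistic.
t_scipy, p_value = stats.ttest_ind(sample1, sample2, equal_var=False)

print(f"t (manual) = {t_manual:.3f}, t (scipy) = {t_scipy:.3f}, p = {p_value:.4f}")
```

The manual and scipy values should agree; the p-value is then interpreted against the chosen significance level, as described above.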
A two-factor ANOVA showed a significant individual effect of concentration (C) (F = 83.833, P < 0.0001), salt (S) (F = 26.158, P < 0.0001) and the interaction of these factors (S × C) (F = 3.402, P = 0.001) on the germinability percentage of Z. album seeds. The germination response of Z. album seeds to salinity was assessed by evaluating final germination. Identify your dependent and independent variables and know whether they are quantitative or categorical in order to choose the appropriate statistical test. The efficacy of the variance equality test in steady-state gait analysis is well documented; however, temporal information on where differences in variability occur during gait subtasks, especially during gait termination caused by unexpected stimulation, is poorly understood. The formulas have not been included here because they are not fundamental to understanding the common process used when we do hypothesis testing. Our Stats iQ product can perform the most complicated statistical tests at the click of a button using Qualtrics online survey software, or data brought in from other sources.

Correlational tests look for an association between variables. Pearson correlation tests the strength of association between two continuous variables. Spearman correlation tests the strength of association between two ordinal variables. Chi-square tests the strength of association between two categorical variables. There are different test statistics for each test.

Learn statistics and probability for free, in simple and easy steps, starting from basic to advanced concepts. In particular, statistical analysis is the process of consolidating and analyzing distinct samples of data to divulge patterns or trends and to anticipate future events or situations in order to make appropriate decisions. Statistics Solutions is the country's leader in statistical consulting and can assist with selecting and analyzing the appropriate statistical test for your dissertation. This article lists statistical tests by data type and sample requirements. Given below are the types of statistical analysis, beginning with descriptive statistical analysis. Data analysis. The t-test allows the user to interpret whether differences are statistically significant or merely coincidental. Statistical hypothesis. The t-statistic is the name given to the statistic in this hypothesis test. Data is best represented by analysing it with an appropriate and valid statistical test so that the truth of the data is revealed.

Student's t-test: (I) Application of the t-test for assessing the significance of the difference between a sample mean and the population mean. The computation of the t-value involves the following steps: (i) Null hypothesis: first of all, it is presumed that there is no difference between the mean of the small sample and the population mean (or hypothetical mean).

We will present sample programs for some basic statistical tests in SPSS, including t-tests, chi-square, correlation, regression, and analysis of variance. Some commonly used statistical tests and corresponding exploratory data analysis. The formula used to calculate the chi-square statistic is χ² = Σ [ (Or,c − Er,c)² / Er,c ], where Or,c is the observed frequency count at level r of Variable A and level c of Variable B, and Er,c is the corresponding expected frequency count. You may need to make decisions on the basis of statistical data, interpret statistical data in research papers, do your own research, and interpret the data.
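The chi-square formula above sums (observed − expected)² / expected over every cell of a contingency table. Here is a minimal sketch (not from the original article) of the test of independence between two categorical variables; it assumes scipy is available, and the 2×3 table of observed counts is made up for illustration.

```python
# Minimal sketch: chi-square test of independence for two categorical variables.
# chi2_contingency computes the expected counts E[r][c] and the statistic
# sum((O - E)**2 / E) from a table of observed counts O[r][c].
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 14, 6],    # Variable A, level 1 across levels of Variable B
                     [18, 22, 10]])  # Variable A, level 2 across levels of Variable B

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
print("expected counts:\n", expected.round(2))
```

A small p-value is evidence against the null hypothesis that the two variables are independent.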
Inferential statistics are used along with hypothesis testing to answer research questions. A badly designed study can never be retrieved, whereas a poorly analysed one can usually be reanalysed. Student's t-test. Rather than drawing conclusions, descriptive analysis simply makes complex data easy to read and understand. In statistics, the term non-parametric statistics covers a range of topics. Data presentation. Introduction and description of data. Statistical tests commonly used in intervention research (e.g., the F-test in analysis of variance) are associated with probability values, for example, p < 0.05. Experimental protocol. Statistical tests are useful for determining the relationship between variables, as they provide the statistical justification for the results. For each type and measurement level, this tutorial immediately points out the right statistical test. For example, if a participant is taking a test in a chilly room, the temperature would be considered an extraneous variable. The program below reads the data and creates a temporary SPSS data file. Nonparametric statistical tests. These examples use the auto data file. There are two main categories: QUANTITATIVE data express the amounts of things (e.g. the number of cigarettes in a pack), as distinct from qualitative data. Statistical analysis defined. A statistical hypothesis is a hypothesis that can be verified to be plausible on the basis of statistics. Statistical tests. For testing equality of variance: if the data are normally distributed, use Levene's test or the Bartlett test (also the Mauchly test for sphericity in repeated-measures analysis); if the data are non-parametric, use the Ansari-Bradley, Mood or Fligner-Killeen test. Once you have defined your independent and dependent variables and determined whether they are categorical or quantitative, you will be able to choose the correct statistical test. In terms of selecting a statistical test, the most important question is "what is the main study hypothesis?". Depending on the function of a particular study, data and statistical analysis may be used for different means. There are many statistical tests used for biomedical research. The Methodology column contains links to resources with more information about each test. The standard t-test is the most basic type of statistical test, for use when you are comparing the means from exactly two groups, such as control and experimental groups. However, it also throws out some information, as continuous data contains information in the way that variables are related. Statistical analysis gives meaning to otherwise meaningless numbers, thereby breathing life into lifeless data. The statistics decision tree will help in choosing the correct statistical test. Market research methods allow organizations and individual researchers to discover their target market, collect and document opinions and make informed decisions. Selection of a statistical test is not rocket science, but it is based on some assumptions. The decision of which statistical test to use depends on the research design, the distribution of the data, and the type of variable. You will also find a link near each test to a detailed tutorial on how to perform the test in statistical packages like R and IBM SPSS. Statistical analysis has several types, which depend considerably on the data types.
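The passage above mentions checking normality and equality of variance before settling on a test. Here is a minimal sketch (not from the original article, and using Python/scipy rather than SPSS or R) of those assumption checks, followed by a parametric test and one common nonparametric fallback, the Mann-Whitney U test; the data are invented for illustration.

```python
# Minimal sketch: check normality and equality of variance, then compare groups
# with a parametric test (t-test) or a nonparametric alternative (Mann-Whitney U).
import numpy as np
from scipy import stats

group_a = np.array([12.1, 13.4, 11.8, 12.9, 13.1, 12.5, 11.9, 13.0])
group_b = np.array([14.2, 13.9, 15.1, 14.8, 13.7, 14.5, 15.0, 14.1])

# Normality of each group (Shapiro-Wilk); a small p-value suggests non-normality.
print("Shapiro A p =", stats.shapiro(group_a).pvalue)
print("Shapiro B p =", stats.shapiro(group_b).pvalue)

# Equality of variances (Levene's test; Bartlett's test is an alternative
# when normality holds).
print("Levene p =", stats.levene(group_a, group_b).pvalue)

# If the assumptions hold, use the t-test; otherwise fall back to Mann-Whitney.
print("t-test p =", stats.ttest_ind(group_a, group_b).pvalue)
print("Mann-Whitney p =", stats.mannwhitneyu(group_a, group_b).pvalue)
```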
This table is designed to help you choose an appropriate statistical test for data with one dependent variable. Next, the p-value is calculated. Choosing the correct statistical test in SAS, Stata, SPSS and R: the following table shows general guidelines for choosing a statistical analysis. Find step-by-step guidance to complete your research project. Only correlation, regression, z- or t-tests, and cluster analysis were used by more than 50% of the participants in this research during the first half of 2017, and this sample probably over-represents people who use statistics often and under-represents those who use statistics less often. Learn more with market research types and examples. Consideration of design is also important, because the design of a study will govern how the data are to be analysed. Student B would need to conduct an independent t-test procedure, since his independent variable would be defined in terms of categories and his dependent variable would be measured continuously. Hover your mouse over the test name (in the Test column) to see its description. Data presentation is an extension of data cleaning, as it involves arranging the data for easy analysis. The statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Types of statistical tests. Quantitative data collection involves measurement of variables. SPSS is one of the dominant statistics tools that most statisticians use. TYPES OF VARIABLES. Fisher's Z-test (or Z-test). Statistical analysis methods for surveys. The statistics decision tree will help in choosing the correct statistical test. It doesn't help that people use the term "validated" very loosely. Most of the integrated data collection and analysis solutions, such as Askia, Qualtrics, Confirmit and Vision Critical, use statistics tools. Overview of univariate tests: the one-sample t-test tests the mean of a single group against a known mean. We require some basic information for selection of an appropriate statistical test, such as the objectives of the study, the type of variables, the type of analysis, the type of study design, the number of groups and data sets, and the type of distribution. To determine which statistical test to use, you need to know whether your data meet certain assumptions. The paired t-test tests for a difference between two related variables. By using data sampling and statistical knowledge, one can determine the plausibility of a statistical hypothesis and find out whether it stands true or not. Therefore, the purpose of the current study was to further verify the efficacy of the variance equality test during gait termination. For example, do women and men have different mean heights? Discover the different types of statistical tests that are employed in these analyses. (Related blog: z-test vs t-test.) Performing hypothesis testing: independent and dependent variables are used in experimental research.
Choosing the Right Statistical Test: Types and Examples. Which statistical test to choose will depend on several factors: the type of variables you have (interval, ordinal or nominal), and the distribution and structure of your data. With all the procedures that you need for research or to make a good, informative presentation, it can be used for teaching in a university. The type of research used is an analytic study with a cross-sectional design. Other fields where statistics plays a role include SEO and optimization for user search intent, and financial analysis, among many others. If the findings are significant, the alternative hypothesis should be accepted and the null hypothesis rejected. In a health coaching context, I hear mention of "validated instruments" and "validated outcomes" without a consistent meaning behind them. There are three common types of parametric tests: regression, comparison, and correlation tests. The following is the index of the different statistical tests. Sometimes an individual wants to know something about a group of people. The How To columns contain links with examples on how to run these tests in SPSS, Stata, SAS and R. Three factors determine the kind of statistical test(s) you should select. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Based on this qualitative data, create a survey that will allow you to collect quantitative data about the major themes of interest at a larger scale. In this tutorial, you will find everything from what types of statistical tests exist to how they can be used to demonstrate relationships among different variables. This research method includes different forms. The statistic for this hypothesis testing is called the t-statistic, the score for which is calculated with the formula given earlier. The computerized experiment was programmed using Z-tree and conducted in October 2020. We used G*Power to calculate the sample size with a power of 80%, a 5% significance level and an effect size of 0.5, and the results showed that at least 23 physicians were needed per group. As a statistician teaching statistics at a university, I have to say that NCSS is the tool that I have used since 1997. Non-parametric statistics refers to distribution-free methods which do not rely on the assumption that the data are drawn from a given probability distribution. As such, it is the opposite of parametric statistics; it includes non-parametric statistical models, inference and statistical tests. It requires a certain amount of intelligence to understand the meanings of different statistical tests and their implications. Types of statistical tests. They provide valuable evidence from which we make decisions about the significance or robustness of research findings. The type of test to be used depends on the type of data, population type, distribution, and number of groups.
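The passage above describes test selection as a lookup over the measurement levels of the variables involved. A minimal sketch (not from the original article) of the kind of mapping such a decision tree encodes is shown below; the pairings are a simplification, since real guidance also depends on the number of groups, pairing, and distributional assumptions.

```python
# Minimal sketch: suggest a test family from the measurement level of the
# independent and dependent variables. This is a deliberately coarse lookup;
# a full decision tree also considers group counts, pairing, and assumptions.
def suggest_test(independent: str, dependent: str) -> str:
    table = {
        ("categorical", "continuous"): "t-test / ANOVA (or Mann-Whitney / Kruskal-Wallis)",
        ("categorical", "categorical"): "chi-square test of independence",
        ("continuous", "continuous"): "Pearson or Spearman correlation / regression",
        ("continuous", "categorical"): "logistic regression",
    }
    return table.get((independent, dependent),
                     "unknown combination - consult a full decision tree")

print(suggest_test("categorical", "continuous"))
print(suggest_test("categorical", "categorical"))
```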
Discover the different types of statistical tests that are employed in these analyses. Sphericity (Mauchly's test). Interpretation: if the main ANOVA is significant, there is a difference between at least two time points (check where the differences occur with a Bonferroni post hoc test). The type of statistical treatment depends heavily on the way the data are going to be used. Chi-square test; ANOVA (analysis of variance): definition, types, research methods. The world of stats can seem bewildering to a beginner, but with the right tools and know-how these powerful techniques are yours to command, even without an advanced degree. For a statistical test to be valid, your sample size needs to be large enough to approximate the true distribution of the population being studied. Tests of significance. Click on each test and explore the details. A nonparametric test removes the requirement to assume a normal distribution. Tests like regression, t- and z-tests, correlation, and cluster analysis are used for research statistics data. Descriptive research: definitions. The following is the index of the different statistical tests. Given below are the six types of statistical analysis. Descriptive analysis: descriptive statistical analysis involves collecting, interpreting, analyzing, and summarizing data to present them in the form of charts, graphs, and tables. A parametric statistical test is used when the data can support strong inferences; such tests are conducted with data that adhere to the assumptions of the tests. Predictive analysis. Hypothesis-testing statistics is when statistical tests are used in experimental research to identify whether the alternative or null hypothesis should be accepted. Correlational research attempts to determine the extent of a relationship between two or more variables using statistical data. Statistical assumptions: there are four cases to think about. Large samples: what happens when you use a parametric test with data from a non-Gaussian population? Large samples: what happens when you use a nonparametric test with data from a Gaussian population? Small samples: what happens when you use a parametric test with data from non-Gaussian populations? Small samples: what happens when you use a nonparametric test with data from Gaussian populations? The previous page provides a summary of different kinds of statistical tests, but how does a researcher choose the right test based on the research design, variable type, and distribution? The statistic used to measure significance in this case is called the chi-square statistic. Before conducting research, it is essential to know what needs to be measured or analyzed and to choose a suitable statistical test to present your study's findings. Causal analysis mainly tests the hypothesis that is made about the significance of an observed sample. The easy way to run statistical analysis. Census data. Here are some of the fields where statistics play an important role: market research, data collection methods, and analysis. Researchers first make a null and alternative hypothesis regarding the nature of the effect (direction, magnitude, and variance). Statistical analysis is the process of collecting and analyzing data in order to discern patterns and trends; which test is right also depends on the types of variables that you're dealing with.
For the chi-square test, the null hypothesis is that Variable A and Variable B are independent; the alternate hypothesis is that Variable A and Variable B are not independent. These tests are useful when the independent and dependent variables are measured categorically. The conjecture is called the null hypothesis. Statistical tests can be powerful tools for researchers. Usually your data could be analyzed in multiple ways, each of which could yield legitimate answers. The first step in creating statistical personas is the same as that for qualitative personas: exploratory qualitative research to identify the main themes that come up repeatedly among users. The ability to analyze and interpret statistical data is a vital skill for researchers and professionals from a wide variety of disciplines. This chapter will discuss a few of the more commonly used tests. When you run a test in your statistical software program, the following steps occur: the test statistic is calculated. There are many statistical tests used for biomedical research. To find the appropriate procedure you need to know the basic type of test you're looking for and the measurement levels of the variables involved. The course covers study design, research methods, and statistical interpretation. The key types of statistical analysis are descriptive, inferential, predictive, prescriptive, exploratory, causal and mechanistic analysis. In many ways the design of a study is more important than the analysis. Here, you can use descriptive statistics tools to summarize the data. Statistical analysis is the science of organizing, exploring, summarizing and presenting large amounts of data to discover underlying patterns and trends (Daniel & Cross, 2013). Types of statistical analysis. Below are listings of the statistical tests by data type and sample requirements. Systematic collection of information requires careful selection of the units studied and careful measurement of each variable. Commonly used statistical tests in research (Dr Naqeeb Ullah Khan). Unsurprisingly, choosing the most fitting statistical test(s) for your research is a daunting task. Basically, the test statistic describes how much the relationship between variables differs from the null hypothesis (no relationship). There are many types of statistical tests that can be done, depending on the type of variables and the question being asked. To prepare, review the lead-in for the discussion and this week's Learning Resources. Mechanistic analysis. Types of statistical tests: there is a wide range of statistical tests. In general, if the data are normally distributed, parametric tests should be used. This subject is well known for research based on statistical surveys. Statistical tests can be performed when the collected data are valid from a statistical perspective, which means meeting certain assumptions and understanding the types of variables used in the study.
Do you have too many turnouts on your garden railway? Is it a pain to control them? It can be easier. There are many ways to control turnouts, but most of them require too much repetitive work. This article takes a hint from 1:1 scale railroad practice and describes a method of designing and building an interlocking plant such that turnout control becomes much easier. Building an interlocking is easy, inexpensive, flexible, and walk-around controllable. Once you try it, you'll never go back.

There are many methods currently in use for controlling turnouts. They can be controlled mechanically with a switch stand or ground throw, electrically with attached motors and remote switches, or pneumatically with air cylinders and remote air valves. However it is done, model railroad control systems usually activate only one, or maybe two, turnouts at a time. The method described here provides full interlocking control of sets of any number of turnouts. All you need are the control switches that you might have already and a handful of diodes. If you already use the Aristo Craft Train Engineer, you can make the A-E buttons control the routes as well. Other radio remote control systems can be easily adapted too.

On narrow gauge or branch lines, individual turnout control is prototypical: a brakeman must get off the train and throw each turnout by hand. On real Class 1 railroads and many modern smaller operations, turnouts and signals are often controlled by an "interlocking." An interlocking is some arrangement by which several related turnouts and signals are controlled together as sets. For example, most main line sidings, crossings, interchanges, and yard or terminal throats are controlled by an operator in a switch tower or a remote dispatch center. The operator manipulates controls which set up "routes" through the trackwork instead of individually setting turnouts and signal aspects for each device. This method of control reduces operator workload and significantly reduces the chance for errors and subsequent accidents.

This interlocking concept can be applied to garden railroads as well. While safety is a concern (train wrecks can be both embarrassing and expensive), the primary motivation behind implementation of an interlocking plant on a garden railroad is to reduce the work necessary to set up a route through complicated trackwork. The more complicated the trackwork is, the more effective interlocking control can be. An advantage for the model that does not apply to the prototype is that the interlocking can also perform automatic track power control and routing through the use of switch contacts on the turnout motors.

The simplest form of an interlocking is so basic that it may not be recognized as one. A crossover between two parallel lines is often connected as an interlocking. There is no reason to flip one turnout without flipping the other, so both turnout motors can be wired in parallel and operated from one control. A simple passing siding can also be controlled as an interlocking; however, this may not always be desirable. If both turnouts are controlled together, then a complicated switching maneuver such as a saw-by or double saw-by could be difficult or impossible. The previous examples are really too simple to make the creation of special circuits desirable. Interlocking control becomes really handy when a little more complicated trackwork is involved.
How this might apply to your railroad depends specifically on its design; the more complex it is, the more interlocking control can help. If you find yourself throwing two or more turnouts at a time to do any switching moves at all, read on.

One common application of interlocking control is for setting turnouts along the lead track of a stub yard. It would be convenient to press one button and automatically align all the turnouts to serve one particular stub. It would also be nice to have power routed to that one stub and to none of the others. The next section of this article steps through the design of this simple interlocking for a stub yard to show how it can be done at minimal cost and complexity. This straightforward example will serve to demonstrate a design method. Interlocking control for much more complicated trackwork can be designed by exactly the same step-by-step method.

My method uses a circuit called a "diode matrix." This is just a collection of rows and columns of wires with diodes connected at some of the crossovers. There is no fancy electronic design involved in this brute-force approach, and just about anybody who possesses the most basic of electrical skills should be able to build one. The use of a diode matrix to control routes is not at all new. Circuits similar to the ones shown below can be found in many model railroad electrical circuits books. The versions that I have seen can be made to work, but they are not completely flexible. Sometimes certain combinations of turnout logic cannot be accommodated. In other cases, a turnout that is not part of the route logic will be thrown too. The new feature of this diode matrix (new to me anyway, because I have never seen this done before) is the double row concept. This allows complete freedom to independently control each turnout motor with no ambiguity and no parasitic effects. The example shown below is very simple; however, the method is equally well suited to controlling incredibly complicated trackwork. The method involves five distinct steps.

There are a limited number of reasonable ways that a train could be expected to traverse any section of trackwork. Each of these ways is called a route. In the case of the simple stub yard there are five routes, one to each stub. Name each route something that makes sense. In this example, the names are Stub #1 through Stub #5. Also name each turnout; I've chosen Turnout #1 through Turnout #4. If the four turnouts were controlled individually, there would be four controls, one for each turnout. An interlocking control will have five controls, one for each route. At first, this may not seem to be much of a simplification. However, selecting any given stub track with the interlocking system requires activation of only one of those controls, without regard to the present state of any of the turnouts. The non-interlocked system requires manual evaluation of one to four of the four turnout positions and possibly changing all four of them. Also, the controls for the non-interlocked system are two-way, so there are really eight commands. If you want to use a walk-around control system with a limited number of available commands, conservation of commands is important. Signals can be controlled as well by treating them as turnouts and defining states for them. A signal device must latch with a pulse command to be integrated with the turnout control diode matrix. LGB signals do latch and should work fine.

A truth table defines the output state of a system based on its inputs.
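As an aside for readers who find it easier to see the idea in code, here is a minimal sketch (not from the article's figures) of the truth-table step. Each route maps each turnout to "straight", "curved", or nothing for "don't care". The particular states below assume a simple ladder where each stub diverges off the lead in order; your own track plan determines the actual values.

```python
# Minimal sketch: the route/turnout truth table as a dictionary.
# Turnouts omitted from a route's entry are "don't care" and are not touched.
TRUTH_TABLE = {
    "Stub #1": {"Turnout #1": "curved"},
    "Stub #2": {"Turnout #1": "straight", "Turnout #2": "curved"},
    "Stub #3": {"Turnout #1": "straight", "Turnout #2": "straight",
                "Turnout #3": "curved"},
    "Stub #4": {"Turnout #1": "straight", "Turnout #2": "straight",
                "Turnout #3": "straight", "Turnout #4": "curved"},
    "Stub #5": {"Turnout #1": "straight", "Turnout #2": "straight",
                "Turnout #3": "straight", "Turnout #4": "straight"},
}

def commands_for(route):
    """Return the turnout states to command when a route is selected."""
    return TRUTH_TABLE[route]

print(commands_for("Stub #3"))
```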
For the stub yard example, the inputs are the routes to be selected. The outputs are the positions of the turnouts. There are actually three possible states for each element of the truth table: "straight", "curved" and "don't care." The don't care case means that the turnout can be left in whatever position it is already in. Explicit states could be defined for those turnouts, but that would just add extra diodes to the diode matrix and cause some turnouts to be thrown where it doesn't really matter. The truth table for the simple stub yard has five rows, one for each of the five routes, and four columns, one for each of the four turnouts. Each intersection in the table defines the state of one turnout, depending on what is supposed to happen to that turnout when a particular route is selected. For your railroad, you will have to write and fill out your own truth table. It takes some thought to determine the proper routes, but after the routes are defined, filling in the table is nearly automatic. Just look at your track plan and fill in the proper state for each turnout so that a train can follow the chosen route.

A circuit called a diode matrix will be used to implement the truth table. A diode matrix is a set of wires arranged in rows and columns with diodes wired at some of the intersections. This is actually a very simple form of ROM, or read-only memory. It stores the "program" for how the interlocking is to operate. Due to the particular nature of large scale turnout motors, the diode matrix used in this method is actually two interspersed matrices. The motors require a pulse of current to operate: one polarity switches the motor one way, and the other polarity switches it back. One of the matrices provides the positive pulse and the other provides the negative pulse. The method can also be adapted to twin coil switch machines such as older LGB machines or typical small scale machines. Slow motion machines will need to be wired to complete their own cycle after being started by a pulse. Stall motors can be driven from Atlas Snap Relays. If you use air operated turnout motors, you can add routing control, but it'll cost you more. You'll have to find some electrically operated air valves and run them from Atlas Snap Relays wired like the twin coil switch machine shown. You'll need one control valve for each turnout.

Without interlocking control, a typical large scale turnout control switch can be wired as shown. A double-pole, double-throw momentary switch, DPDT (on)-off-(on), is used to create the current pulse from a single polarity source provided by the diode. The polarity of the current pulse depends on which way the switch is pushed. This method is not suitable for use with a diode matrix, as neither motor wire can be connected to a common terminal. Another way to control a turnout motor is to provide both polarities with two diodes and switch between them. A simple form of this circuit uses a single-pole, double-throw, SPDT (on)-off-(on), switch. With this method, one side of each turnout motor can be connected to a common return wire back to the AC power source. This is the method used in LGB and Aristo turnout motor control boxes. A modified version of the second form of control using the SPDT switch is used in the dual diode matrix. There are five inputs (control switches) on the left connected to ten rows.
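Continuing the small sketch from the truth-table step (again, not part of the article, just an illustration of the bookkeeping): each "straight" or "curved" entry becomes exactly one diode between that route's positive or negative row and the turnout's column, and a "don't care" gets no diode at all. The choice of which polarity row corresponds to "straight" is an assumption here; reverse it if your motors throw the wrong way.

```python
# Minimal sketch: derive the diode placements of the dual-row matrix from a
# route/turnout truth table. One diode per (row, column) pair; never two diodes
# from the same route's row pair to the same turnout column.
def diode_placements(truth_table):
    placements = []
    for route, states in truth_table.items():
        for turnout, state in states.items():
            if state is None:
                continue                                   # don't care: no diode
            row = f"{route} (+ row)" if state == "straight" else f"{route} (- row)"
            placements.append((row, turnout))
    return placements

example = {
    "Stub #1": {"Turnout #1": "curved"},
    "Stub #2": {"Turnout #1": "straight", "Turnout #2": "curved"},
}
for row, column in diode_placements(example):
    print(f"install a diode from {row} to column {column}")
```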
Each control switch is extended to be a dual pole momentary DPST off-(on) switch that activates both a positive and a negative row together, because there may be a need to throw turnouts in either direction with a single command. There are four columns, one for each turnout motor. If there is a diode at the intersection of one of the dual command rows and a motor column, that motor will be activated. Which direction it will go depends on which of the dual rows has a diode. The diodes that wire to the "positive" row are wired in one direction, and the diodes wired to the "negative" row are wired in the other direction. Note that there are NEVER two diodes installed from the same dual command row to any given motor. If there were, the matrix would be commanding the motor to go both ways at once. Nothing will burn out immediately, but the resultant AC voltage placed on the turnout motor will cause a strong buzz and it'll be obvious that something is wrong.

Depending on the size of a diode matrix, many turnouts may be thrown at once. This version throws as many as four. The rectifier diodes from the power pack are paralleled to increase their current handling capability, although one diode will actually handle five motors. You can also use larger diodes. The power pack will also have to provide enough current to power several motors. I was able to run nine turnouts from an MRC 9300 power pack, but it didn't quite have enough output capability to handle ten. A 24 VAC 2 amp transformer (available at Radio Shack) and 2 or 3 paralleled 1 amp rectifier diodes should be sufficient to power a large matrix. Any power source should be properly fused for safety, preferably on both the primary and secondary sides. With 24 volts applied to one of my matrices, ten motors, six with LGB 1203 accessory contacts, operate very smartly.

This diode matrix is wired on standard 0.1" centerline bare perforated circuit board with #18 copper wire for the matrix lines. The top side of the board has the diodes and the column wires leading to the motors. The backside has horizontal wires that are the rows. This board actually has two matrices of different sizes with only one column in common between the two circuits. The two matrices control two different regions of my layout, with only the one turnout that connects the two regions controlled by both matrices.

If you haven't dealt with diodes before, these devices are designed to pass electrical current in one direction only; therefore they must be wired in the correct direction. Buzzing turnouts indicate a mistake. The diagram shows the correspondence between the schematic symbol and typical diode case markings. The 1N4000 series diodes are very common and you shouldn't pay more than $0.10 each for them. Sometimes you can find them in packages of 100 or more for a penny each. If you can't find something marked as 1N400X, then use any diode rated at 50 volts or greater and 1 amp or greater. The diodes in the matrix do not all have to be the same type.

Avoid using filtered DC power to run LGB turnout motors. The LGB motor is designed to operate from half-wave rectified, unfiltered power. The turnout motor is "impedance protected" by the inductance of the motor winding and will withstand continuous application of half-wave current, at least for reasonable periods of time. With pure DC applied, there is almost no internal current limiting and the motor will burn out after a few seconds. This can easily happen if a control switch sticks or gets pressed continuously.
The Aristo motor is protected by internal limit switches, so application of continuous half-wave or DC power is not a problem. An LGB turnout motor that buzzes loudly but does not move may indicate that AC power is being applied to the motor, or that the turnout is stuck or jammed, perhaps by a piece of ballast. The motor may not actually burn out immediately, but continuous application of AC will ruin the permanent magnet inside. An Aristo motor that is fed AC power will madly cycle back and forth. In either case, if DC power is used you won't get the audible feedback that something is amiss before something else burns up.

Apply the AC power and use a voltmeter to check for positive DC voltage on the positive power bus. The black lead of your meter should go to the ground return of all the turnout motors. You should get a reading of about half of the AC input voltage. This may vary depending on what kind of meter you are using. Then check the negative power bus. You should get about the same reading, but the voltage will be negative.

First, test for proper turnout motor connections. Manually set each turnout to curved. Connect a clip lead to the first turnout column and momentarily touch it to the positive power bus. The motor should flip to straight. Then touch the clip lead to the negative power bus; the turnout should flip to curved. If the motor goes backwards, reverse the wires to the motor. If the motor doesn't flip at all, check the connections to the motor. Repeat this test for each column until you can reliably flip each motor. Test the matrix by selecting a route. All the turnouts for that route should flip together. If the routing is not correct, look for a diode wired at the wrong matrix junction or a missing or improperly connected diode. If fuses or diodes fail, look for unintended short circuits like solder splashes. If a turnout motor buzzes (or an Aristo motor cycles), look for a stuck control switch or two diodes wired from the same dual command row to that motor. If all the turnouts don't throw every time, get a bigger power source or check your motors for proper operation.

Track power routing can be implemented without interlocking control. All that is needed is an accessory switch connected to a turnout motor to do simple power routing. However, having a coordinated set of programmatically controlled contacts available as a result of interlocking control is too inviting an opportunity to pass up. In the case of the simple stub yard, fully automatic power routing is a natural extension to interlocking turnout control. When a particular stub is selected, power is routed to that stub and to none of the others. Both LGB and Aristo turnout motors can be used for power routing. The LGB 1203 accessory contact snaps on the end of a regular turnout motor and provides a DPDT set of heavy duty contacts. The Aristo turnout motor has a set of medium duty SPDT contacts already installed. Either can be used for most power routing situations, although circuitry involving reversing (for wyes and reverse loops) will require DPDT contacts. LGB and Aristo turnouts can be mixed and the diode matrix will still work fine. You also need to insulate one rail of each power-routed track. A plastic insulated rail joiner will work fine. The LGB 1203 places an additional mechanical load on its turnout motor, so the motor takes a little more electrical power to reliably flip. A 24 VAC power source really helps out here. This is a schematic diagram for power routing the stub yard.
It uses only SPDT contacts. Each switch either routes power to its stub or sends the power down the line for more routing. Note that one route is always powered. To turn the whole yard off, power to the yard can be routed from a turnout that enters the yard, or a master power switch can be provided. Note that with this method, no power wires have to run back to a control panel. Power is picked off the track leading to the yard right at the entrance to the yard.

Remote turnout control will usually tie the operator to a control panel. This is not a serious problem for most indoor layouts, but it is a real inconvenience for a garden railroad. There have been a lot of methods used to allow walk around train control on garden railroads, some form of radio control being the most common. Walk around remote control of turnouts is much less common.

Remote control of turnouts is not always necessary. Where there are few turnouts and the routes are simple, manual control with the aid of a broom handle is very effective. Turnouts that cannot be reached with the broom handle can be controlled by switches mounted on accessible posts near the turnout and powered from a hidden 9V battery. A standard 9V battery is strong enough to throw one turnout but not strong enough to burn out an LGB turnout motor in case of an accident. This form of turnout control is very prototypical: you are the brakeman who gets off the train when needed to throw a turnout. However, you still can't route your trains from your chaise lounge or when your other hand is holding a cylindrical fluid refreshment device.

For those of you that are already using the Train Engineer throttle by Aristo, you can have automatic interlocking control and walk around capability in the same hand held unit. A new product announced by Aristo is the ART-5475 Remote Accessory Panel. This is a remote receiver which will control up to five turnouts through the use of the A through E buttons on a Train Engineer transmitter. Up to ten Accessory Panels can be controlled from one 10 channel transmitter by using the track keys. Pressing an A thru E key will set one turnout. Pressing the same key again will set it back. If you reprogram the transmitter to other channels, or use other transmitters, you can control up to 50 turnouts.

The older ART-5474 receiver will also work; however, you do need to build some moderately complicated converter circuits to adapt the 5474 to the diode matrix. See the circuits at the bottom of this page for the schematics. The 5474 has the advantage that it can control up to 25 routes on one transmitter channel instead of 5 for the 5475.

The 5475 provides an alternating positive or negative pulse at each of its five outputs. This is great for controlling a single turnout, but it is not suitable for driving a diode matrix. Also, the 5475 really doesn't have enough output power capability to drive two motors from one output. To drive the diode matrix, a relay must be added to convert the bipolar output of the 5475 into a DPST switch closure. This relay can be connected to the diode matrix that we just designed as shown below. The relay coil does not care if the driving pulse is positive or negative, so each time an output is commanded, the associated relay will activate and work just like the manually activated toggle switch. Therefore each command selects a single route each time that it is pressed, without regard to any pre-existing turnout state.
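The point about route commands being independent of the existing turnout state can be shown with a couple more lines built on the earlier sketch (again my illustration, not the author's): applying a route simply overwrites the position of every turnout that has a diode on that route, so pressing the same key twice, or pressing it after any other route, always leaves the yard set the same way.

```python
# Route selection is stateless: a route command overwrites the position of every
# turnout it touches, so the result never depends on what was set before.
# The pulses dict is what press_route_switch() in the earlier sketch returns;
# per the test procedure above, a positive pulse throws a motor to straight
# and a negative pulse throws it to curved.

def apply_route(turnout_positions, pulses):
    """Update turnout positions from the pulses produced by one route command."""
    for turnout, polarity in pulses.items():
        turnout_positions[turnout] = "straight" if polarity == "+" else "curved"
    return turnout_positions

pulses = {"T1": "+", "T2": "-"}                        # e.g. "yard track 2" above
state_a = apply_route({"T1": "curved", "T2": "curved"}, pulses)
state_b = apply_route({"T1": "straight", "T2": "straight"}, pulses)
print(state_a == state_b)  # True: same command, same final positions
```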
The relay contacts are capable of handling a lot of current, so many turnouts can be driven at once. Since all that is needed to control the matrix by remote control is a relay closure, any receiver or decoder intended for turnout control can be adapted. Proprietary radio control systems that will work are available from RSC and Locolinc. Most DCC systems have stationary decoders that can be used. For systems intended to control twin coil type motors, each twin coil motor output can be used to control two routes by hooking a relay to each motor coil output.

This matrix is similar to the manually controlled matrix, but the DPST switches have been replaced by DPST relays. One relay is needed for each route. I have used a 12 volt printed circuit mount DPDT relay available from Radio Shack (part number 275-249) for about $4. It has 5 amp contacts and should handle any size matrix. Any other relay with a coil voltage between 9 and 24 volts and suitable contact configuration and ratings will work as well. If you wish, you can retain the manually operated switches in addition to the relays; just wire the switch contacts in parallel with the relay contacts.

I have been so pleased with the operation of the interlocking control system described that I've built five of them, three panel controlled versions on an indoor layout and another two radio controlled interlockings on a garden railway. On the garden railway, I've virtually abandoned the full control panel that I had previously built because I don't need it anymore.

If you can't wait for the new ART-5475 to come out, as I couldn't, you can use the circuit below to adapt the outputs of the ART-5474 to drive the diode matrix. These circuits work; I use them every day. Since these converter circuits are a moderately involved electronics project, I assume that anybody who tries them has sufficient skill to work from a schematic. If you are electronically challenged, I would recommend that you use the ART-5475, as the circuitry is significantly easier.

The heart of the circuits is a common integrated circuit timer, often called a 555. They also come in a dual version called the 556. You can get either of these ICs from Radio Shack. The timers take a negative going input trigger and generate a 200 ms output pulse, long enough to throw a turnout motor. The A and B outputs are wired straight through from the ART-5474. This is also the same wiring that would be used for all five outputs of the ART-5475. The C output is isolated by a generic optoisolator (NTE 3041 or 4N28), and then the signal is conditioned to generate a negative going pulse for either a positive or negative edge of the C input. This pulse is then used to trigger the timer. The D and E outputs are just triggered from the optoisolators that are wired inside the ART-5474. It may also be possible to use the D and E outputs of the 5474 to drive a transistor that drives the relay directly. I opted instead to control the pulse width with the 555 timer. The optoisolator in the 5474 does NOT have the current sinking capability to drive a relay directly. See ART-5474 Tips for more information on the ART-5474.

I built five of these circuits on circuit boards that I got at Radio Shack (276-168B). All five are laid out somewhat differently but they all fit on the boards.
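As a side note on the 200 ms figure: a 555 wired as a monostable produces an output pulse of T = 1.1 x R x C. The component values below are not taken from the original schematic - they are just one illustrative pair that lands near 200 ms.

```python
# Monostable 555 pulse width: T = 1.1 * R * C (seconds, ohms, farads).
# R and C here are illustrative values only, not the article's actual parts.

def pulse_width_seconds(r_ohms, c_farads):
    return 1.1 * r_ohms * c_farads

R = 180e3  # 180 kilohm timing resistor (example value)
C = 1e-6   # 1 microfarad timing capacitor (example value)

print(f"{pulse_width_seconds(R, C) * 1000:.0f} ms")  # ~198 ms, enough to throw a turnout motor
```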
<urn:uuid:410067db-8051-4420-a6f6-f28469b47c4a>
CC-MAIN-2017-51
http://www.trainweb.org/girr/tips/tips3/interlocking_tips.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512208.1/warc/CC-MAIN-20171211052406-20171211072406-00128.warc.gz
en
0.943174
5,074
2.609375
3
We’ve reached the end of this series on Mouvement laïque québécois v Saguenay (City), so let’s look back on the main points. One of the key questions throughout the Saguenay case was whether Canada is a secular country, and if so, what that means. Canada is in a strange place when it comes to whether or not it is a secular state. Unlike the US, Canada does not have an official policy of being a secular state. It is, in fact, technically a Christian nation, whose head of state was allegedly appointed by God. On the other hand, since repatriation, we have been subject to the Charter of Rights and Freedoms. One of the most important court cases in Canadian history is R v Big M Drug Mart Ltd, which was the first Supreme Court case whose judgment was based on Section 2 of the Charter. This is the infamous “Lord’s Day Act” case, where a store had been charged with doing business on a Sunday, in violation of the “Lord’s Day Act”. The ruling struck down the “Lord’s Day Act”, and established that Canada is – in practice if not literally de jure – a secular country. But while R v Big M Drug Mart Ltd established that government cannot favour or hinder religion, it did not clearly define the parameters of state secularism. Secularism was there, but you had to read carefully to see it, and it was all too easy to ignore the implications and fabricate your own understanding of what Canadian secularism should look like. This actually happened in the Court of Appeal ruling in the Saguenay case. In Mouvement laïque québécois v Saguenay (City) the Supreme Court was obligated to clarify Canadian secularism, even if only to explain why the Court of Appeal got it so wrong. Justice Clément Gascon took great care not to use the s-word, and instead talked about “religious neutrality”. In fact, he specifically ruled out secularism. However, what he ruled out was what he called “absolute secularism”, which was not actually secularism, but rather a virulent form of fascist anti-religious extremism that proponents often try to disguise as secularism. The reality is that the “neutrality” Justice Gascon describes is simply secularism, and that is how I will refer to it. The state has a duty to be religiously neutral. That means that it cannot favour or hinder any religious belief or the lack of belief in religion. It also means that it must protect every person’s freedom of conscience and religion, and thus cannot use its powers to promote the participation of certain believers or nonbelievers in public life. But that’s theory. In reality “the state” can’t be religious at all, nor can it promote or hinder religion. In fact, “the state” can’t do anything at all – it’s just a legal construct, not a person; it has no will and no capability to do anything on its own. The state can only act via its agents: state officials. Thus, if we’re going to talk about what the state can or can’t do, we must talk about the actions of its officials. So when we say that “the state” must be religiously neutral, and cannot favour or hinder any religion (or lack thereof), what we really mean is that state officials must be religiously neutral. But of course, it’s not quite that simple. State officials are still Canadians, and like all Canadians they still have Charter-guaranteed rights to freely have and practice religious beliefs. People cannot and should not have to sign away their Charter rights just to hold public office. This would seem to create a conflict. However, it really doesn’t create a conflict. 
The key to understanding this lies in realizing that state officials can act in two different capacities, at different times. In each action they perform they are either acting as “the state” (or rather, as an official agent of the state) or as private individuals; never as both simultaneously. When they are acting as private individuals, they have all the same Charter rights as any other private individual. When they are acting as “the state”, they enjoy all the privileges of acting as the state, but at the same time they have only those rights and freedoms that the state has. And the state does not have a right to be religious. This is not a particularly difficult concept to grasp, but because there are groups determined to sow confusion and misinformation, it’s worth discussing in some detail. Consider a provincial premier: When the Premier orders breakfast at Tim’s, who is ordering that breakfast: the person, or the province? The answer is pretty obvious: it’s the person ordering that breakfast. Thus, in that action (ordering breakfast), the Premier enjoys all the freedoms and rights of any Canadian. How about when the Premier makes a speech bestowing an official provincial honour on someone; who is making that speech: the person or the province? Again, the answer is obvious: it’s the province (a private citizen wouldn’t be able to make that speech, or bestow such an honour). As you can see, this isn’t a complicated idea. But let’s try some edgier examples. The Premier headlines an event organized by the province, and opens it with a prayer. Who is praying: the person or the province? If you answered “the person”, you’re almost certainly being deliberately dishonest. The Premier is not just “some person” who happens to be up there praying at an official provincial event. Clearly that’s the Premier of the province up there praying. Which means that ze is clearly not acting as a private person, but rather as an official of the state. And the state is not allowed to promote or favour religion. This is a violation of the state’s duty of secularism. How about this: The Premier is Sikh, and attends gurdwara every Sunday, and often leads the ardaas prayer. Now who is praying: the person or the province? In this case, it’s pretty obviously not the province. The Premier is simply enjoying the same freedom that every Canadian has to have a religion and practise it. This freedom is not magically voided upon being elected. Still not complicated, but let’s get even edgier. The Premier, being a kinky sort, decides to wear a large buttplug for a parliamentary session. Who is choosing what goes in the Premier’s ass: the person or the province? Pretty obviously the person. But this isn’t a particularly controversial case because the buttplug will not be visible. So let’s try something more visible. The Premier opts to wear bright red lipstick (sometimes called “whore lipstick”) for a parliamentary session. Who is choosing what the Premier puts on her… or his… lips: the person or the province? Once again, it’s obviously not the province. Now let’s get dangerous. The Premier, being a Sikh, opts to wear a dastar (turban) for a parliamentary session. Who is choosing to wear the dastar: the person or the province? The person, right? So there’s no problem. It’s as easy as that. 
And to spell it out – necessary because there are groups determined to confuse the issue – just because you are making a decision as a private individual and thus not subject to the limitations of acting as the state, it does not follow that you are free to make any decision you please without consequences. Choosing to wear a religious garment like a dastar is never going to be an inappropriate choice in practice; except in very specialized cases where health and safety are an issue, there’s nothing wrong with a person wearing one to work. Leather bondage fetish gear is rarely going to be an appropriate choice for work, given current social mores. So the fact that a state official can choose to wear a dastar to work doesn’t mean that they could also choose to go to work in bondage gear… or in a bunny costume… or wearing a T-shirt with a political slogan on it… or straight-up naked. Yes, they have a right to make all those choices as a private individual. No, they are not free from the consequences of those choices, which may include being sent home or even charged with contempt of Parliament. The standards of propriety for Parliament are set by the social mores. So long as their choices are within those standards, members of Parliament are grown-ups who are free to dress themselves. There is no boardroom in Canada that will refuse to allow someone entry wearing a turban… there is almost certainly no boardroom that would allow someone entry in bondage gear. You can argue that the social mores are wrong, and there is nothing untoward about being naked (or in bondage gear, or anything else like that), and I wouldn’t disagree, but they are what they are. So just because a state official is not bound by the limitations of the state when they dress themselves, it doesn’t mean they’re not bound by anything. They are still bound by social standards. And contemporary social standards don’t consider the turban (or kippah, or hijab, or chunni) to be inappropriate professional wear. (By contrast, a niqab or burqa is not considered appropriate professional wear in most contexts. And there are secular, practical concerns – utterly unrelated to the religiousness of the niqab or burqa – about state officials hiding their faces while doing their job. So there could be a reasonable prohibition against MPs wearing those garments.) To put it in simple terms: Whenever a person is acting as the state, they are subject to the privileges and the restrictions that come with acting as the state. But just because someone is a state official, it does not follow that every single action they perform is an official state action. When a person – any person, state official or not – makes decisions as a private person, they are protected by the Charter, and free to enjoy all the same rights and freedoms as any private person. Taking public office does not invalidate your Charter rights. To put it in even simpler terms: State officials cannot promote or hinder any religion or religious belief or practice, including non-religious beliefs and practices, unless there is a good secular reason for doing so (such as health or safety concerns). But state officials retain the same Charter rights and freedoms as any Canadian when they are making decisions outside of their official capacity – as private players. That means they are free to have a religion and to practice it – including wearing religious accessories like turbans and headscarves. 
Any actions they take as the state must be secular, but any actions they take as private players are subject only to the restrictions that apply to all private players. That includes dressing themselves, choosing what to eat, etc.; the state does not make those decisions, the person does.

Discrimination and reasonable accommodation

Alain Simoneau, the complainant in the Saguenay case, alleged that the city's prayer discriminated against him. Did it? In Canada, discrimination can only happen under one of the protected grounds. The protected grounds defined in the Charter are: race, national or ethnic origin, colour, religion, sex, age or mental or physical disability. (Note that the Charter specifically allows discrimination on those grounds if the goal is to ameliorate a disadvantage. For example, a government program to give grants only to visible minorities would not violate Section 15.) The Canadian Human Rights Act adds the following prohibited grounds: sexual orientation, marital status, family status, and pardoned convictions. (Some provincial human rights acts add further grounds. For example, the Ontario Human Rights Code adds gender identity and expression, ancestry, and more.)

Discrimination happens whenever there is an exclusion, preference, or distinction based on one of those grounds that has the effect of nullifying or impairing a person's rights. There seems no doubt that being forced to sit through a prayer impairs Simoneau's rights under the protected ground of religion (which includes freedom from religion, or freedom of conscience).

Now the Charter allows discrimination, in some cases. Section 1 allows for reasonable limits… as can be demonstrably justified in a free and democratic society. To find out whether Section 1 applies, courts use the Oakes test:
- There must be a pressing and substantial objective.
- The means must be proportional:
  - The means must be rationally connected to the objective;
  - there must be minimal impairment of rights; and
  - there must be proportionality between the infringement and objective.

It is hard to imagine what the objective behind a law that enshrines a religious practice might be, beyond simply favouring that religion. And that, of course, would be a preference under a protected ground (specifically: religion). So there doesn't seem to be any way that such a law could be acceptable as a reasonable limit to people's rights and freedoms.

So government prayer is simply not okay, but let's try a thought experiment. Let's pretend that government prayer is okay – that it doesn't fail the Oakes test. In that case, there would still be discrimination, only now it would be reasonable discrimination (by virtue of passing the Oakes test). That doesn't mean the end of the story, though. Even if all that were true, the state would still be obligated to provide a reasonable accommodation for those who don't want to be part of the prayer.

There is a lot of confusion about what reasonable accommodation is. It's not just arbitrarily hand-waving away rules that others have to follow. Reasonable accommodation doesn't even happen unless some very specific criteria have been met.
- First, the law, rule, or practice that one is getting a reasonable accommodation for must cause discrimination. (Recall that discrimination exists whenever there is an exclusion, preference, or distinction based on one of the protected grounds that has the effect of nullifying or impairing a person's rights.)
- Second, the discrimination must only exist for a good reason.
That is, it must satisfy the Oakes test: there must be a pressing and substantial objective, the discrimination must be rationally connected to that objective, and whatever impairment exists must be as minimal as possible.
- Third, the impairment caused by the discrimination must be nontrivial and substantial. You don't get reasonable accommodation for trivial and insubstantial things.

So basically, you need a sane and secular law that nevertheless discriminates, and you need the discrimination to be nontrivial and substantial. Only when you have all that can you even begin to discuss a reasonable accommodation for the people who are discriminated against. And, totally unlike what most people seem to think, once it has been decided that you deserve a reasonable accommodation, that doesn't mean you simply get an exemption from the discriminatory law. Recall that the law exists for a good reason. (If it didn't, it wouldn't survive the Oakes test, and would be thrown out completely, so there would be no need for any accommodation.) Because the law exists for a good reason, you can't simply go around handing out exemptions to it.

So accommodations are not exemptions. They are compromises. Both sides in the equation – the lawmakers and the discriminated persons – are expected to bend a little, to find a way that the intention of the rule can still be met while taking into account the rights of the discriminated people. Far from being an exemption, often this can leave the person receiving a reasonable accommodation with more rules and requirements than those who are simply following the original rule.

In the Saguenay case, the defendants argued that even if the prayer discriminated, and if the discrimination was nontrivial and substantial, they nevertheless provided a reasonable accommodation for those who didn't want to take part: They could simply leave the room and return after the prayer. Now, of course, since the prayer was not justifiable in any way, reasonable accommodation never really got serious consideration. You can't get reasonable accommodation from an unreasonable law. Nevertheless, Justice Gascon took pains to note that the so-called "reasonable accommodation" was anything but reasonable… in fact, far from ameliorating the discrimination against those who didn't want to take part in a prayer, it only made it much, much worse. Now people with different religious beliefs (or no religious beliefs) would have to stand up and be publicly identified by the entire audience as they left the room. Given that minority believers are already prone to victimization, this would be making it that much easier to target them.

So the city failed at just about every level in its argument. The prayer is indisputably discriminatory, it accomplishes nothing and there is no sane reason for the city to be praying at all, and the proposed accommodation of leaving the room while the prayer takes place is not reasonable.

Human rights complaints

The Canadian Human Rights Act and every provincial human rights act describe a process by which people who feel their rights have been violated can make a complaint. Traditionally human rights complaints have been a matter for the courts. But the hard reality is: the courts have often been lousy at handling human rights complaints. Human rights is a very complex and specialized topic, and it is very, very rare to find someone who is both well-versed in human rights and a judge.
And of course, requiring the courts to handle human rights cases takes up more precious legal resources, adding to the strain on our already overburdened court system. Thus, the Canadian Human Rights Act (and all provincial human rights acts) called for the creation of administrative tribunals to handle human rights complaints: human rights tribunals. They would be staffed by specialists that know the field, and would take some of the burden off the courts. They would still be subject to judicial review, of course, so they couldn’t get out of control. All in all, a brilliant idea that has worked wonderfully. That is, except for one small snag: the niggling question of just how much respect reviewing courts should be giving to the rulings of these administrative tribunals. It’s a tricky problem. On the one hand, you don’t want these tribunals to have so much freedom that they effectively operate without oversight. On the other, if courts – and particularly, judges without expertise in the field – can simply shrug off their rulings, what’s the point of them? One of the things the Supreme Court was trying to do in Saguenay was clarifying the rules for judicial review of administrative tribunals (such as human rights tribunals, as in Saguenay’s case). This was where the Court’s opinion split. The majority sided with a plan to give administrative tribunals deference right up to the point where their decisions have wide-scale impact. Justice Rosalie Abella dissented, preferring to give them deference so long as they were operating within their area of expertise. Either way, the result is that the rulings of human rights tribunals will now have much more weight. Reviewing courts will only be allowed to verify that they’re reasonable, nothing more. This is a good thing for atheists and secularists, because while the (lower) courts have always been spotty, human rights tribunals have shown themselves to be tremendous allies of reason. By all rights, Saguenay never should have happened, on many levels. Alain Simoneau should not have been treated like a second-class citizen by a bullying jackass Mayor, who wanted to use the city to promote his religious beliefs. Mayor Jean Tremblay’s associates should not have turned their back on their civic duty in favour of promoting their faith. The Québec Court of Appeal should not have presumptuously brushed aside the findings of the Tribunal des droits de la personne (TDP; or “Human Rights Tribunal”). None of these things should have happened, but they did. The God-bothered kept crossing the line, over and over relentlessly, trying to use the state to force their religion on all Canadians. And in the end, it all blew up in their faces, magnificently. First there was Alain Simoneau, just an ordinary citizen of Saguenay. He stood up to harassment, threats, and a celebrity mayor to defend his right to be treated equally. Even with the backing of Mouvement laïque québécois (MLQ), it wasn’t an easy task. Human rights heroes are almost never treated well by their society until long after their struggle has been won. It took a toll on him, both financially and personally. He ultimately left Saguenay before the case was finally won in the Supreme Court. It’s not hard to understand why; Mayor Jean Tremblay was a celebrity, and he flagrantly used his power to humiliate and demonize Simoneau – even publicly identifying Simoneau to potential harassers – in a campaign that was castigated in every ruling at every level of the case. 
Then there was Judge Michèle Pauzé, who was in charge of the TDP at the time of Simoneau's complaint. Her job could have been done quite easily; at the time, the Simoneau case was the second case in just a few years regarding government prayer. The previous case was in Laval, and Pauzé could simply have referred to that ruling and pretty much been done with it. Instead, Pauzé took on the challenge with gusto. In many ways, her ruling is actually a more interesting read than the final Supreme Court ruling. Though it is brief and somewhat perfunctory, it nevertheless goes into great detail considering the nature of secularism, discrimination, and the specific facts of the Saguenay prayer and the shenanigans of the Mayor and his cronies. Ultimately she set up the decision in favour of secularism that – while it got brushed aside by the Québec Court of Appeal – was eventually reinstated by the Supreme Court.

And finally there was Supreme Court Justice Clément Gascon. At the time the Saguenay case was heard, Gascon was the newest member of the Court, and Saguenay was the first case for which he wrote the majority opinion. I've read some critiques that call his writing pedestrian and complain that the ruling lacks sparklingly quotable passages, but personally I would describe his style as engineered, and actually quite subtle. On the surface, Gascon describes the nature of state secularism very clearly, and very simply, in terms that are hard to misunderstand (unless you're reading it with the intention of misunderstanding). But the real genius is in what he says between the lines. Without specifically identifying what he's talking about, some of what he says seems quite cleverly calculated to forestall future challenges to state secularism – particularly those by proponents of "Québec Charter of Values"-style religious symbol bans.

Together these three, and the rest of the TDP and the Supreme Court, gifted Canada with a wonderfully progressive ruling, simultaneously stamping down entrenched religious privilege while opening the door to more multicultural diversity in the Canadian public sphere.
<urn:uuid:6e9376ea-3e7f-4724-a5bf-97f811bddadf>
CC-MAIN-2022-33
https://www.canadianatheist.com/2016/06/indis-mlq-v-saguenay-review-9-wrap-up/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00297.warc.gz
en
0.963392
4,980
2.765625
3
The aim of the following draft is to offer some thoughts on a local name from thirteenth-century Lincolnshire, Macamathehou, that involves a version of the Arabic name Muhammad (Middle English Makomet/Macamethe, Old French Mahomet). Whilst it has been plausibly seen as an instance of a variant of the name of Muhammad being used to mean 'heathen', 'pagan idol' or similar (based on the false but common medieval Christian belief that the prophet Muhammad was worshipped as a god), here in reference to a barrow that was considered to be a pre-Christian site, it is worth noting that there are a small number of people with names and surnames derived from Arabic Muḥammad apparently living in twelfth- to fourteenth-century England.

|Figure 1: the location of Macamathehou between Spridlington and Faldingworth parishes in Lincolnshire (image: C. R. Green/OpenStreetMap and its contributors).|

The existence of the intriguing local name Macamathehou in the parish of Spridlington, Lincolnshire, was first noted in 2001 by Kenneth Cameron, John Field and John Insley in Place-Names of Lincolnshire VI (PNL), with both attestations of the name dating from the thirteenth century (the reign of King Henry III, 1216–72).(1) They identify the two elements of the name as being Old Norse haugr, 'mound, barrow', and Middle English Makomet/Macamethe, which derives from the name of the prophet Muhammad (Medieval Latin Machometus/Mahumetus, Anglo-Norman Mahumet/Mahomet/Machomete, Old French Mahomet < Arabic Muḥammad, probably via an Arabic regional form Maḥammad).(2) Needless to say, this solution is most intriguing and has, moreover, found favour with other place-name specialists, including the Vocabulary of English Place-Names (VEPN) and Richard Coates.(3)

As to the import of this name, the easiest conclusion—and the one endorsed by PNL, VEPN and Coates—is that the first element, Macamethe/Maumate etc, is not functioning simply as a normal Middle English rendering of the name Muhammad/Mahomet, but rather as a word indicative of heathen or pagan idolatry, based on the false but common medieval Christian belief that the prophet Muhammad was worshipped as a god. So, PNL describes the name as meaning 'the heathen mound', with the first element being 'a corrupt ME [Middle English] form of the name of the prophet Mohammed, for which v. MED [Middle English Dictionary], s.v. Makomete, also used to denote a pagan god or an idol'.(4) This is taken up by Richard Coates, who says that it has been suggested, 'with great plausibility', that Macamathehou in Spridlington parish 'is a Middle English name meaning "Mahomet mound", i.e. "heathen mound"', and points to 'the repeated compound of OE hæðen + byrgels "heathen burial"' as a potential comparison.(5) Likewise, the VEPN's draft section on M includes the following discussion:

makomet ME, 'idol, pagan god', an application of the name of the Arab prophet Mohammed (commonly though mistakenly believed by medieval Christians to have been worshipped as a god)... It occurs early in Macamathehou (f.n.) 1216–72 L:6·211 (haugr), presumably to be interpreted as 'heathen mound'.(6)

On the whole, this interpretation is probably the safest option.
There are certainly a handful of references to 'heathen' barrows in Old English charter bounds, for example of leofwynne mearce to þam hæþenan beorge, 'from Leofwine's boundary to the heathen barrow', in the charter S956 relating to Drayton, Hampshire, and dated AD 1019, although none are recorded from Lincolnshire.(7) It has also been suggested that the Lincolnshire names Bloater Hill (North Willingham) and Blod Hou (Barrow-on-Humber) derive from Old Norse blóthaugr, 'a sacrificial mound', whilst other names involving haugr certainly refer to supernatural/demonic creatures—for example, Gasthehowe/Gastehowe, Ashby Puerorum (Lincolnshire), recorded in the thirteenth century and deriving from Middle English gast/Old English gāst, 'ghost, dead-spirit', or names like Scratters (Scrathou, in Hayton, East Riding of Yorkshire) and Scrathowes (Scrathou, in Osmotherley, North Riding of Yorkshire), which derive from Old Norse skratti, 'devil, wizard' + haugr.(8) Furthermore, the Old English compound hæðen + byrgels, 'heathen burial', does indeed recur frequently in Late Saxon charter bounds, with these names often said to be identifiable with barrows in the landscape.(9) On the other hand, there are some possible issues with this explanation, and other interpretations are possible of Spridlington's Macamathehou. First, the comparison with the many instances of the OE compound hæðen + byrgels, ‘heathen burial’, is perhaps not as convincing as it might seem. Not only is a link between this term and barrows only demonstrable in a handful of instances, but Andrew Reynolds has also suggested that the sense of the term was primarily not ‘pagan’, but rather ‘unconsecrated’, and that it denoted burials of executed offenders and other social outcasts, which renders the proposed value of these names as support for interpreting Macamathehou as meaning ‘heathen mound’ open to significant debate.(10) Second, if the above is correct, then this would be the only known instance of a derivative of the Arabic name Muhammad being used in a place-name to indicate a 'heathen mound' or similar, which is potentially concerning—the other elements noted above all recur in multiple names. Third, the element identified by PNL and VEPN as being present in Macamethehou is Middle English Makomet(e). The Middle English Dictionary (MED) on Makomet(e)/Macamethe etc, however, makes it clear that the primary use of this word in Middle English is as a form of the name Muhammad, not as a word referring to an 'idol'/'pagan god', with the vast majority of quotations provided by the MED referring either the prophet Muhammad or people named Muhammad; the only exceptions are a single quotation from Layamon's Brut (c. 1200, mahimet, lacking the -c-), and three from two later texts.(11) The form of the name Muhammad that was primarily—although not exclusively—used in the sense 'pagan deity, idol', is rather Maumet/Maumate, mentioned above, deriving from Anglo-Norman Maumet, a reduced form of Mauhoumet, Old French Mahomet/Mahommet.(12) In this light, it is worth considering whether it is possible that the name Macamathehou could somehow be named from a person named Makomet/Muhammad or similar living in medieval England. Certainly, it should be noted that multiple local names relating to mounds/barrows do seem to be named after people who owned estates or land in the area. 
For example, Andrew Reynolds draws attention to the bounds of a mid-tenth-century charter for Swallowcliffe, Wiltshire (S468), that records the burial site of a seventh‐century woman whose grave had been cut into an existing mound as Posses hlaew, noting that 'Poss is a male name, and thus the mound is apparently not named after its Anglo‐Saxon occupant', implying that it was instead named after a later estate owner.(13) As Irene Bower long ago pointed out, such a situation can be credibly paralleled in Lincolnshire, with a number of Lincolnshire names involving haugr seeming to contain the same personal-name as is found in the same or a neighbouring parish-name—so, Scalehau (Skalli + haugr) was located near to Scawby (Skalli + bȳ), with Kenneth Cameron commenting that the two were 'no doubt named from the same man'; Leggeshou (Leggr + haugr) was located on the boundary of Legsby parish (Leggr + bȳ); Katehou/Catehowe (Kati + haugr) was located in South Cadeby (Kati + bȳ); and a Grimaldeshawe (Grimaldi + haugr) was recorded in the neighbouring parish to Grimoldby (Grimaldi + bȳ), perhaps on the boundary between the two.(14) |Figure 2: Section from the Pipe Roll Society publication of The Great Roll of the Pipe for the Seventh Year of the Reign of King Henry the Second, A.D. 1160–1161 (London: Wyman & Sons, 1885), p. 10, dealing with Mahumet of Wiltshire (image: Internet Archive).| As to the likelihood of someone named Muhammad or one of its Anglo-Norman/Middle English variants (Mahumet, Makomet and similar) actually living in medieval England, this is perhaps less far-fetched than might be assumed. Katharine Keats-Rohan and John Moore have directed attention to the Wiltshire entries of five consecutive Pipe Rolls of Henry II (1160/61–1164/65) that refer to a man named Mahumet, whose name-form Moore considers very difficult to explain as anything other than a rendering of Muhammad and which is accepted as such by the OED and MED. This Mahumet is recorded in the Pipe Rolls only because he was fined for his part in an unlicensed duel with a John de Merleberge, probably in or near Marlborough Castle, and it seems he was not an especially wealthy man, as he was pardoned the last mark of his fine due to his poverty.(15) Furthermore, Mahumet of Wiltshire was not the only man with this name for whom we have evidence from medieval England. For example, a Theobald filius Mahumet (or filius Mahomet) is recorded from early thirteenth-century Hampshire in the Pipe Rolls of Henry III for 1222–24; another man named Mahomet is recorded in 1327, when Edward III issued him and six others a pardon at Newton-on-Ouse, Yorkshire, for 'offenses in Ireland'; and a Mahummet Saraceno occurs in the Close Rolls of Henry III for 1254. 
Furthermore, a number of people surnamed Mahumet and similar are recorded in documents of the twelfth and thirteenth centuries, for example a Humphrey Mahumet in a charter of Southwick Priory, Hampshire, a Herbert Maumet who was sergeant of Portsmouth in the mid-thirteenth century, and a Radulphus Maumet who is recorded in the reign of King John.(16) Moore also notes the presence of someone bearing another 'apparent Arab name' in twelfth-century Hampshire, a certain Paucamatus, a name that he considers to probably reflect Bakmat, who is recorded in Winchester from 1159/60 until 1183/4 and who is associated with a man named Stephanus Sarracenus, one or both of whom may be of some relevance here.(17)

Looking more generally at the question of the presence of people who were Muslims or of potential Muslim ancestry in medieval England, and so who might bear names like Mahumet/Makomet and similar, Richard of Devizes in his description of London from c. 1192 certainly implies that there were 'Moors' in that city then, when he writes that:

You will arrive in London... do not mingle with the throngs in the eating-houses; avoid dice and gambling, the theatre and the tavern. You will encounter more braggarts than in the whole of France. The number of parasites is infinite. Actors, jesters, smooth-skinned lads, Moors, flatterers... All this sort of people fill all the houses.(18)

We do need to be careful here, however. The word translated 'Moors' here is actually garamantes, which may indicate an origin for this section in a classical or literary source, rather than reality, especially as influence from Horace's Satires has been identified in the subsequent sections of Richard's description of London.(19) More certainly relevant may be recent archaeological excavations at the medieval cemetery of St John's Hospital, Lichfield, which revealed the burials of between two and five people of African ancestry, some of apparently high status, and at Ipswich, where nine people out of a total of 150 excavated from a cemetery there appear to be of 'sub-Saharan' African descent, spread across the thirteenth to the sixteenth centuries, with the earliest having oxygen isotope results consistent with an early life spent in North Africa/Tunisia.(20) Likewise, recent work on burials in a mid-fourteenth-century cemetery at East Smithfield, London, indicated that 29% of a sample of 41 people buried there were of 'non-White European ancestry'.(21)

In the above light, it may also be worth noting that both Henry II and his son Richard I seem to have had 'Saracen mercenaries' in their employ, the latter having as many as 120 such mercenaries and apparently including at least some of them in the garrison of Domfront, Normandy.(22) Similarly, it is intriguing to note that knowledge of the location of medieval Lincoln on either side of the River Witham and the existence of the Foss Dyke as a waterway between that city and the River Trent seems to have reached the great Muslim scholar Muhammad al-Idrisi, who included these facts in his geographical encyclopaedia Nuzhat al-mushtaq fi ikhtiraq al-afaq, written for Roger II of Sicily and completed in 1154—indeed, it has been suggested that al-Idrisi probably travelled to England himself during the first half of the twelfth century, which is a point of some significance.(23)

|Figure 3: Al-Idrisi's mid-twelfth-century Arabic map of Britain, from a late sixteenth-century copy in the Bodleian Library, Oxford; the map is split across three different drawings which have been
combined together here so that the whole island can be seen (Bodleian Library MS. Pococke 375 folios 281b-282a, 308b, 310b-311a). Lincolnshire is on the left hand side, as the map is orientated with north at the bottom; the river flowing nearly horizontally from left to right is the Witham, with Boston near the sea and Lincoln upstream, where the river flows through the town, just as it did in the medieval period when it divided the old Lower City from its medieval southern suburbs (image: Bodleian Library)|

Finally, attention might also be directed to the evidence for at least some 'Saracens' having been unwillingly brought into England in the medieval period, although this is perhaps less directly relevant to the current enquiry. So, the Flores Historiarum under the year 1271 makes reference to Thomas de Clare having returned to England from the Holy Land with 'four Saracen prisoners',(24) and the Calendar of Patent Rolls for 1259 includes a mandate for the arrest of a runaway 'Ethiopian... sometime a Saracen' who had apparently escaped his master:

Mandate to all persons to arrest an Ethiopian of the name of Bartholomew, sometime a Saracen, slave (servus) of Roger de Lyntin, whom the said Roger brought with him to England; the said Ethiopian having run away from his said lord, who has sent an esquire of his to look for him: and they are to deliver him to the said esquire to the use of the said Roger.(25)

In sum, whilst we can point to no specific man named Mahumet/Makomet/Macamathe/Maumet (< Muhammad) present in twelfth-/thirteenth-century Lincolnshire after whom Macamathehou in Spridlington might be named, it seems clear that it is not entirely impossible that someone bearing such a personal name or something similar could lie behind this mound-/barrow-name, rather than it simply being a folkloric name intended to convey the meaning 'heathen barrow' or similar. Although such a usage of the name Muhammad might parallel names such as Scrathou and Gastehowe and be reflected in the usage of the medieval form Maumet and similar to mean 'pagan deity' or 'idol' in Middle English, there is significantly less evidence for the form Makomet being used in this way. Furthermore, not only are there no other instances of Makomet or Maumet being used in local names to indicate a perceived 'heathen' or 'pagan' character for landscape features such as mounds and barrows, but there is evidence for at least some people named variants of Muhammad living in medieval England between the twelfth and the fourteenth centuries. Additionally, there is also a small amount of textual evidence for Muslims and people of potential Muslim origins being present in England and Normandy in this era, some being clearly captured or enslaved, but others potentially living in cities such as London, Ipswich and Lichfield, and some even perhaps being relatively high-status or in the employ of the king. Such people were probably not present in England in great numbers, but the evidence we have for this is not insignificant, and it may at least give us further pause for thought when considering just what the meaning of Macamathehou might be.

Footnotes

1. K. Cameron, J. Field & J. Insley, The Place-Names of Lincolnshire: Part Six, The Wapentakes of Manley and Aslacoe, Survey of English Place-Names LXXVII (Nottingham: English Place-Name Society, 2001), p. 211; the name appears as both Macamathehou, which they treat as primary, and Mornmatehou. 2.
Cameron, Field and Insley, Place-Names of Lincolnshire VI, p. 211; Oxford English Dictionary, 'Mahomet, n.', OED Online, third edition, Oxford University Press, September 2020, www.oed.com/view/Entry/112410, accessed 10 November 2020; 'Makomet(e), n.', in S. M. Kuhn & Reidy (eds), Middle English Dictionary: Part M.1 (Ann Arbor: University of Michigan, 1975), p. 83. On haugr, see M. Gelling & A. Cole, The Landscape of Place-Names (Stamford: Shaun Tyas, 2000), p. 174. 3. R. Coates, 'Azure Mouse, Bloater Hill, Goose Puddings, and One Land called the Cow: continuity and conundrums in Lincolnshire minor names', Journal of the English Place-Name Society, 39 (2007), 73–143 at p. 85; VEPN, The Vocabulary of English Place-Names: M, draft version, online edition at www.nottingham.ac.uk/research/groups/ins/documents/vocabulary-of-english-place-names-m-draft.pdf, accessed 10 November 2020, p. 14. 4. Cameron, Field and Insley, Place-Names of Lincolnshire VI, p. 211. 5. Coates, 'Lincolnshire minor names', p. 85. 6. VEPN, The Vocabulary of English Place-Names: M, draft version, p. 14. 7. A. Reynolds, Anglo-Saxon Deviant Burial Customs (Oxford: Oxford University Press, 2009), p. 274. 8. Coates, 'Lincolnshire minor names', p. 85; K. Cameron, The Place-Names of Lincolnshire: Part Two, The Wapentake of Yarborough, Survey of English Place-Names LXIV/LXV (Nottingham: English Place-Name Society, 1991), p. 24—note, a similar name, Blodhowfeld/Blodhowgate, also occurs in Thurmaston parish, Leicestershire. On gastehowe/gasthehowe, see I. M. Bower, The Place-Names of Lindsey (North Lincolnshire) (University of Leeds PhD Thesis, 1940), pp. xviii, 200; for Scratters and Scrathowes, see, for example, A. H. Smith, English Place-Name Elements, Survey of English Place-Names XXVI (Cambridge: English Place-Name Society, 1956), Part 2, p. 126. 9. Reynolds, Anglo-Saxon Deviant Burial Customs, pp. 274–7. 10. Reynolds, Anglo-Saxon Deviant Burial Customs, pp. 219–22. 11. Middle English Dictionary, 'Makomet(e, n.', in Robert E. Lewis, et al. (eds), Middle English Dictionary (Ann Arbor: University of Michigan Press, 1952–2001), online edition in F. McSparran et al. (eds), Middle English Compendium (Ann Arbor, 2000–18), quod.lib.umich.edu/m/middle-english-dictionary/dictionary/MED26593, accessed 10 November 2020. 12. Middle English Dictionary, 'Maumet, n.', in Robert E. Lewis, et al. (eds), Middle English Dictionary (Ann Arbor: University of Michigan Press, 1952–2001), online edition in F. McSparran et al. (eds), Middle English Compendium (Ann Arbor, 2000–18), quod.lib.umich.edu/m/middle-english-dictionary/dictionary/MED27106, accessed 10 November 2020. For the use of Maumet and similar as a surname, see below and MED sense 2(d). 13. Reynolds, Anglo-Saxon Deviant Burial Customs, pp. 203–04. 14. Bower, Place-Names of Lindsey, pp. xviii, 253–4; K. Cameron, A Dictionary of Lincolnshire Place-Names (Nottingham: English Place-Name Society, 1998), pp. 26, 80, 107. See also Hawardeshou, the meeting-place of Haverstoe Wapentake, which was almost certainly a barrow in Hawerby (Hawardebi) parish, both names involving the Scandinavian personal name Hāwarth, and Calnodeshou, the meeting-place of Candleshoe Wapentake, which was probably on Candlesby Hill, named from Candlesby, Calnodesbi: Cameron, Dictionary, pp. 27–8, 61. Likewise, the meeting-place of the wapentake of Wraggoe was presumably a Wraghehou (Wraggi + haugr), which may well have been at Wragohill in Wragby (Wraggi + bȳ): Bowers, Place-Names of Lindsey, p. 
250; Cameron, Dictionary, pp. 143–4. 15. K. S. B. Keats-Rohan, 'Queries', Prosopon, 9 (1998), p. 6; J. S. Moore, 'Who was "Mahumet"? Arabs in Angevin England', Prosopon, 11 (2000), pp. 1–7; D. Thornton, K. Keats-Rohan & R. Wood, 'Mahumet', COEL Database: Continental Origins of English Landholders, 1066-1166, [data collection], UK Data Service SN: 5687, doi.org/10.5255/UKDA-SN-5687-1; OED third edition, 'Mahomet, n.'; Middle English Dictionary, 'Makomet(e, n.'. See The Great Roll of the Pipe for the Seventh Year of the Reign of King Henry the Second, A.D. 1160–1161, Publications of the Pipe Roll Society IV (London: Wyman & Sons, 1885), p. 10; The Great Roll of the Pipe for the Eighth Year of the Reign of King Henry the Second, A.D. 1161–1162, Publications of the Pipe Roll Society V (London: Wyman & Sons, 1885), p. 13; The Great Roll of the Pipe for the Ninth Year of the Reign of King Henry the Second, A.D. 1162–1163, Publications of the Pipe Roll Society VI (London: Wyman & Sons, 1886), p. 46; The Great Roll of the Pipe for the Tenth Year of the Reign of King Henry the Second, A.D. 1163–1164, Publications of the Pipe Roll Society VII (London: Wyman & Sons, 1886), p. 14; and The Great Roll of the Pipe for the Eleventh Year of the Reign of King Henry the Second, A.D. 1164–1165, Publications of the Pipe Roll Society VIII (London: Wyman & Sons, 1887), p. 57. 16. K. S. B. Keats-Rohan in Moore, 'Who was "Mahumet"?', pp. 6–7; The Great Roll of the Pipe for the Sixth Year of the Reign of King Henry III, Michaelmas 1222 (London: Pipe Roll Society, 1999), p. 96, and The Great Roll of the Pipe for the Eighth Year of the Reign of King Henry III, Michaelmas 1224 (London: Pipe Roll Society, 2005), p. 12; Calendar of Patent Rolls: Edward III, A.D. 1327–1330 (London: Eyre and Spottiswoode, 1891), p. 123; Close Rolls of the Reign of Henry III: A.D. 1253–1254 (London: HMSO, 1929), p. 211; K. A. Hanna (ed.), The Cartularies of Southwick Priory: Part 1 (Winchester: Hampshire County Council, 1988), pp. 16–17, and K. A. Hanna (ed.), The Cartularies of Southwick Priory: Part 2 (Winchester: Hampshire County Council, 1989); Middle English Dictionary, 'Maumet, n.', sense 2(d), as surname, and Rotuli de oblatis et finibus in Turri Londinensi asservati, tempore Regis Johannis, ed. T. D. Hardy (London: Eyre and Spottiswoode, 1845), p. 455. 17. Moore, 'Who was "Mahumet"?', p. 3. 18. Chronicle of Richard of Devizes of the Time of King Richard the First, ed. and trans. J. T. Appleby (London, 1963), pp. 65–6, with modifications by W. Johansson, 'London's Medieval Sodomites', in History of Homosexuality in Europe and America, ed. W. R. Dynes & S. Donaldson (New York and London: Garland, 1992), pp. 159–63. 19. J. Scattergood, ‘London and money: Chaucer’s Complaint to his Purse’, in Chaucer and the City, ed. A. Butterfield (Cambridge: D. S. Brewer, 2006), pp. 162–76 at pp. 171–2. 20. Ipswich: BBC, History Cold Case: Series 1, Episode 1—Ipswich Man (broadcast 27 July 2010); 'Skeleton of medieval African found in Ipswich sheds new light on Britain's ethnic history', BBC Press Office, 2 February 2010, online at www.bbc.co.uk/pressoffice/pressreleases/stories/2010/05_may/02/history.shtml, accessed 18 November 2020; K. Wade, Ipswich Archive Summaries: Franciscan Way, IAS 5003 (Ipswich: Suffolk County Council Archaeological Service, 2014), pp. 9, 10, 12, online at archaeologydataservice.ac.uk/archives/view/ipswich_5003_2015/downloads.cfm; and Xanthé Mallett, pers. comm.. Lichfield: C. 
Coutts, 'St John’s Hospital, Lichfield: a Black and White Medieval Cemetery', talk at the Market Hall Museum, Warwick, on 24 July 2017, online abstract at www.blackhistorymonth.org.uk/article/listings/region/west-midlands/st-johns-hospital-lichfield-black-white-medieval-cemetery/, accessed 18 November 2020; Jasmine Kilburn, pers. comm.. 21. R. Redfern and J. T. Hefner, ‘“Officially absent but actually present”: bioarchaeological evidence for population diversity in London during the Black Death, AD 1348–50’, in Bioarchaeology of Marginalized People, ed. M. L. Mant and A. J. Holland (London: Academic Press, 2019), pp. 69–114. 22. Moore, 'Who was "Mahumet"?', p. 1; F. M. Powicke, 'The Saracen mercenaries of Richard I', Scottish Historical Review, 8 (1911), 104–05. 23. C. R. Green, 'Al-Idrisi's twelfth-century map and description of eastern England', blog post, 28 March 2016, online at www.caitlingreen.org/2016/03/al-idrisi-twelfth-century-map.html, accessed 18 November 2020; A. F. L. Beeston, 'Idrisi's Account of the British Isles', Bulletin of the School of Oriental and African Studies, 13.2 (1950), 265–80 at pp. 278, 279–80; C. Loveluck, Northwest Europe in the Early Middle Ages, c. AD 600–1150: A Comparative Archaeology (Cambridge: Cambridge University Press, 2013), p. 323 ('Al-Idrisi... had visited England prior to his arrival in Sicily in c. 1138') 24. C. D. Yonge (trans.), The Flowers of History (London: Bohn, 1853), vol. 2, p. 453. 25. Calendar of Patent Rolls, Henry III: Volume 5, 1258–1266, ed. H. C. Maxwell Lyte (London: HMSO, 1910), p. 28, and see further M. Ray, 'A Black Slave on the run in Thirteenth-Century England', Nottingham Medieval Studies, 51 (2007), 111–9. Note, ‘Ethiopian’ here probably means simply someone of ‘Black African ancestry’, rather than someone from modern Ethiopia, given Late Antique and medieval uses of this term. The text content of this post and page is Copyright © Caitlin R. Green, 2020, All Rights Reserved, and should not be used without permission.
<urn:uuid:81923277-4e2c-49d0-a140-a5200f90494e>
CC-MAIN-2022-33
https://www.caitlingreen.org/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00695.warc.gz
en
0.911609
6,735
2.609375
3
- 1 The Sources of Islamic Law
- 2 Normal Relations
- 3 Justifications and Conditions for War
- 4 Righteous Intention
- 5 Jihad as an Obligation
- 6 Who Is To Be Fought? Discrimination and Proportionality
- 7 The Sword Verse
- 8 Cessation of Hostilities
- 9 Sanctity of Treaties
- 10 Prisoners of War
- 11 Resumption of Peaceful Relations
- 12 Humanitarian Intervention
- 13 International Co-operation

The Sources of Islamic Law

The Qur'an is the supreme authority in Islam and the primary source of Islamic Law, including the laws regulating war and peace. The second source is the hadith, the traditions of the Prophet Muhammad's acts and deeds, which can be used to confirm, explain or elaborate Qur'anic teachings, but may not contradict the Qur'an, since they derive their authority from the Qur'an itself. Together these form the basis for all other sources of Islamic law, such as ijma' (consensus of Muslim scholars on an opinion regarding any given subject) and qiyas (reasoning by analogy). These and others are merely methods to reach decisions based on the texts or the spirit of the Qur'an and hadith. The Qur'an and hadith are thus the only binding sources of Islamic law. Again, nothing is acceptable if it contradicts the text or the spirit of these two sources.

Any opinions arrived at by individual scholars or schools of Islamic law, including the recognized four Sunni schools, are no more than opinions. The founders of these schools never laid exclusive claim to the truth, or invited people to follow them rather than any other scholars. Western writers often take the views of this or that classical or modern Muslim writer as "the Islamic view", presumably on the basis of assumptions drawn from the Christian tradition, where the views of people like St Augustine or St Thomas Aquinas are often cited as authorities. In Islam, however, for any view of any scholar to gain credibility, it must demonstrate its textual basis in the Qur'an and authentic hadith, and its derivation from a sound linguistic understanding of these texts.

Ijtihad - exerting one's reason to reach judgments on the basis of these two sources - is the mechanism by which Muslims find solutions for the ever-changing and evolving life around them. The 'closing of the door of ijtihad' is a myth propagated by many Western scholars, some of whom imagine that "the door" still remains closed and that Muslims have nothing to fall back on except the decisions of the Schools of Law and scholars of the classical period. In fact, scholars in present-day Muslim countries reach their own decisions on laws governing all sorts of new situations, using the same methodology based on the Qur'an and hadith and the principles derived from them, without feeling necessarily bound by the conclusions of any former school of law.

Normal Relations

In the Qur'an and hadith, the fundamental sources of Islamic teachings on war and peace are to be found. The Islamic relationship between individuals and nations is one of peace. War is a contingency that becomes necessary at certain times and under certain conditions. Muslims learn from the Qur'an that God's objective in creating the human race in different communities was that they should relate to each other peacefully (Quran 49:13).1 The objective of forming the family unit is to foster affection and mercy, and that of creating a baby in its mother's womb is to form bonds of blood and marriage between people:

It is He who created the human being from fluid, making relationships of blood and marriage.
Quran 25:54 Sowing enmity and hatred amongst people is the work of Satan: Satan wishes to sow enmity and hatred between you with intoxicants and gambling. Quran 5:91 Division into warring factions is viewed as a punishment that God brings on people who revert to polytheism after He has delivered them from distress: ...He is able to divide you into discordant factions and make you taste the might of each other... Quran 6:65 War is hateful (2:216), and the changing of fear into a sense of safety is one of the rewards for those who believe and do good deeds (Quran 24:55). That God has given them the sanctuary of Mecca is a blessing for which its people should be thankful (Quran 29:67). Paradise is the Land of Peace - Dar al-Salam (Quran 6:127). Justifications and Conditions for War War may become necessary only to stop evil from triumphing in a way that would corrupt the earth (Quran 2:251). For Muslims to participate in war there must be valid justifications, and strict conditions must be fulfilled. A thorough survey of the relevant verses of the Qur'an shows that it is consistent throughout with regard to these rulings on the justification of war, and its conduct, termination and consequences. War in Islam as regulated by the Qur'an and hadith has been subject to many distortions by Western scholars and even by some Muslim writers. These are due either to misconceptions about terminology or - above all - to using quotations taken out of context.2 Nowhere in the Quran is changing people's religion given as a cause for waging war. The Qur'an gives a clear instruction that there is no compulsion in religion (Quran 2:256). It states that people will remain different (Quran 11:118); they will always have different religions and ways, and this is an unalterable fact (Quran 5:48) - God tells the Prophet that most people will not believe "even if you are eager that they should" (Quran 12:103).3 All the battles that took place during the Prophet's lifetime, under the guidance of the Qur'an and the Prophet, have been surveyed and shown to have been waged only in self-defense or to preempt an imminent attack.4 For more than ten years in Mecca, Muslims were persecuted, but before permission was given to fight they were instructed to restrain themselves (Quran 4:77) and endure with patience and fortitude: Pardon and forgive until God gives his command. Quran 2:109; see also 29:59; 16:42 After the Muslims were forced out of their homes and their town, and those who remained behind were subjected to even more abuse, God gave His permission to fight: Permission is given to those who fight because they have been wronged, and God is indeed able to give them victory; those who have been driven from their homes unjustly only because they said, "Our Lord is God" - for had it not been for God's repelling some men by means of others, monasteries, churches, synagogues and mosques, in which the name of God is much mentioned, would certainly have been destroyed. Verily God helps those that help Him - lo! God is Strong, Almighty - those who, if they are given power in the land, establish worship and pay the poor-due and enjoin what is good and forbid iniquity.
Quran 22:39-41 Here, war is seen as justifiable and necessary to defend people's right to their own beliefs, and once the believers have been given victory they should not become triumphant or arrogant or have a sense of being a superpower, because the promise of help given above and the rewards are for those who do not seek to exalt themselves on earth or spread corruption (Quran 28:83). Righteous intention is an essential condition. When fighting takes place, it should be fi sabil illah - in the way of God - as is often repeated in the Qur'an. His way is prescribed in the Qur'an as the way of truth and justice, including all the teaching it gives on the justifications and the conditions for the conduct of war and peace. The Prophet was asked about those who fight for the booty, and those who fight out of self-aggrandizement or to be seen as a hero. He said that none of these was in the way of God. The one who fights in the way of God is he who fights so that the word of God is uppermost (hadith: Bukhari). This expression of the word of God being "uppermost" was misunderstood by some to mean that Islam should gain political power over other religions. However, if we use the principle that "different parts of the Qur'an interpret each other", we find (Quran 9:40) that by simply concealing the Prophet in the cave from his trackers, after he had narrowly escaped an attempt to murder him, God made His word "uppermost", and the word of the wrongdoers "lowered". This could not be described as gaining military victory or political power. Another term which is misunderstood and misrepresented is jihad. This does not mean "Holy War". "Holy War" does not exist as a term in Arabic, and its translation into Arabic sounds quite alien. The term which is specifically used in the Qur'an for fighting is qital. Jihad can be by argumentation (25:52 ), financial help or actual fighting. Jihad is always described in the Qur'an as fi sabil illah. On returning from a military campaign, the Prophet said to his followers: "We have returned from the minor jihad to the major jihad - the struggle of the individual with his own self." Jihad as an Obligation When there is a just cause for jihad, which must have a righteous intention, it then becomes an obligation. It becomes an obligation for defending religious freedom (Quran 22:39-41), for self-defense (Quran 2:190) and defending those who are oppressed: men, women and children who cry for help (Quran 4:75). It is the duty of the Muslims to help the oppressed, except against a people with whom the Muslims have a treaty (Quran 8:72). These are the only valid justifications for war we find in the Qur'an. Even when war becomes necessary, we find that there is no "conscription" in the Qur'an. The Prophet is instructed only to "urge on the believers" (Quran 4:64). The Qur'an - and the hadith at greater length - urge on the Muslim fighters (those who are defending themselves or the oppressed) in the strongest way: by showing the justice of their cause, the bad conduct of the enemy, and promising great rewards in the afterlife for those who are prepared to sacrifice their lives and property in such a good cause.5 Who Is To Be Fought? Discrimination and Proportionality In this regard we must discuss two verses in the Qur'an which are normally quoted by those most eager to criticize Qur'anic teachings on war: 2:191 ("slay them wherever you find them") and verse 9:5, labeled the "Sword Verse". 
Both verses have been subjected to decontextualisation, misinterpretation and misrepresentation. The first verse comes in a passage that defines clearly who is to be fought: Fight in the way of God those who fight against you, but do not transgress. God does not love the transgressor. Quran 2:190 "Those who fight against you" means actual fighters - civilians are protected. The Prophet and his successors, when they sent out an army, gave clear instructions not to attack civilians - women, old people, religious people engaged in their worship - nor destroy crops or animals. Discrimination and proportionality should be strictly observed. Only the combatants are to be fought, and no more harm should be caused to them than they have caused (Quran 2:194). Thus wars and weapons of destruction that destroy civilians and their towns are ruled out by the Qur'an and the word and deed of the Prophet, these being the only binding authority in Islamic law. The prohibition is regularly reinforced by, "Do not transgress, God does not love the transgressor". Transgression has been interpreted by Qur'anic exegetes as meaning, "initiation of fighting, fighting those with whom a treaty has been concluded, surprising the enemy without first inviting them to make peace, destroying crops or killing those who should be protected" (Baydawu's commentary on Q. 2:190). The orders are always couched in restraining language, with much repetition of warnings, such as "do not transgress" and "God does not love the transgressors" and "He loves those who are conscious of Him". These are instructions given to people who, from the beginning, should have the intention of acting "in the way of God". Linguistically we notice that the verses in this passage always restrict actions in a legalistic way, which appeals strongly to Muslims' conscience. In six verses (Quran 2:190-5) we find four prohibitions (do not), six restrictions: two "until", two "if", two "who attack you", as well as such cautions as "in the way of God", "be conscious of God", "God does not like aggressors", "God is with those who are conscious of Him", "with those who do good deeds" and "God is Forgiving, Merciful." It should be noted that the Qur'an, in treating the theme of war, as with many other themes, regularly gives the reasons and justifications for any action it demands. Verse 2:191 begins: Slay them where you find them and expel them from where they expelled you; persecution [fitna] is worse than killing. "Slay them wherever you find them," has been made the title of an article on war in Islam.6 In this article "them" is removed from its context, where it refers back to "those who attack you" in the preceding verse. "Wherever you find them" is similarly misunderstood: the Muslims were anxious that if their enemies attacked them in Mecca (which is a sanctuary) and they retaliated, they would be breaking the law. Thus the Qur'an simply gave the Muslims permission to fight those enemies, whether outside or inside Mecca, and assured them that the persecution that had been committed by the unbelievers against them for believing in God was more sinful than the Muslims killing those who attacked them, wherever they were. Finally, it must be pointed out that the whole passage (Quran 2:190-5) comes in the context of fighting those who bar Muslims from reaching the Sacred Mosque at Mecca to perform the pilgrimage. This is clear from verse 189 before and verse 196 after the passage. 
In the same way, the verse giving the first permission to fight occurs in the Qur'an, also in the context of barring Muslims from reaching the Mosque to perform the pilgrimage (Quran 2:217). The Sword Verse We must also comment on another verse much referred to but notoriously misinterpreted and taken out of context - that which became labeled as the "Sword Verse": Then, when the sacred months have passed, slay the idolators wherever you find them, take them and besiege them and prepare for them every ambush. Quran 9:5 The hostility and "bitter enmity" of the polytheists and their fitna (persecution) (Quran 2:193; 8:39) of the Muslims grew so great that the unbelievers were determined to convert the Muslims back to paganism or finish them off. They would persist in fighting you until they turn you back from your religion, if they could. Quran 2:217 It was these hardened polytheists in Arabia, who would accept nothing other than the expulsion of the Muslims or their reversion to paganism, and who repeatedly broke their treaties, that the Muslims were ordered to treat in the same way - to fight them or expel them. Even with such an enemy Muslims were not simply ordered to pounce on them and reciprocate by breaking the treaty themselves; instead, an ultimatum was issued, giving the enemy notice, that after the four sacred months mentioned in 9:5 above, the Muslims would wage war on them. The main clause of the sentence "kill the polytheists" is singled out by some Western scholars to represent the Islamic attitude to war; even some Muslims take this view and allege that this verse abrogated other verses on war. This is pure fantasy, isolating and decontextualising a small part of a sentence. The full picture is given in 9:1-15, which gives many reasons for the order to fight such polytheists. They continuously broke their agreements and aided others against the Muslims, they started hostilities against the Muslims, barred others from becoming Muslims, expelled Muslims from the Holy Mosque and even from their own homes. At least eight times the passage mentions their misdeeds against the Muslims. Consistent with restrictions on war elsewhere in the Qur'an, the immediate context of this "Sword Verse" exempts such polytheists as do not break their agreements and who keep the peace with the Muslims (9:7). It orders that those enemies seeking safe conduct should be protected and delivered to the place of safety they seek (9:6). The whole of this context to v.5, with all its restrictions, is ignored by those who simply isolate one part of a sentence to build their theory of war in Islam on what is termed "The Sword Verse" even when the word "sword" does not occur anywhere in the Qur'an. Cessation of Hostilities Once the hostility of the enemy ceases, the Muslims must stop fighting (Quran 2:193; 8:39): And if they incline to peace, do so and put your trust in God. Even if they intend to deceive you, remember that God is sufficient for you. Quran 8:61-2 When the war is over, the Qur'an and hadith give instructions as to the treatment of prisoners of war and the new relationship with the non-Muslims. War is certainly not seen as a means in Islam of converting other people from their religions. The often-quoted division of the world into dar al-harb and dar al Islam is seen nowhere in the Qur'an or hadith, the only authoritative sources of Islam. The scholars who used these expressions were talking about the warring enemies in countries surrounding the Muslim lands. 
Even for such scholars there was not a dichotomy but a trichotomy, with a third division, dar al-sulh, the lands with which the Muslims had treaty obligations. The Qur'an and hadith talk about the different situations that exist between a Muslim state and a neighboring warring enemy. They mention a state of defensive war, within the prescriptions specified above, the state of peace treaty for a limited or unlimited period, the state of truce, and the state where a member of a hostile camp can come into a Muslim land for special purposes under safe conduct.7 Sanctity of Treaties The Prophet and his companions did make treaties, such as that of Hudaybiyya in the sixth year of the hijra and the one made by 'Umar with the people of Jerusalem.8 Faithfulness to a treaty is a most serious obligation which the Qur'an and hadith incessantly emphasize: Believers, fulfill your bonds. Quran 5:1 Keep the agreements of God when you have made them and do not break your oaths after you have made them with God as your bond ... Quran 16:91 Covenants should not be broken because one community feels stronger than another. Quran 16:92 Breaking treaties puts the culprit into a state lower than animals (Quran 8:55). As stated above, even defending a Muslim minority is not allowed when there is a treaty with the camp they are in. Prisoners of War There is nothing in the Qur'an or hadith to prevent Muslims from following the present international humanitarian conventions on war or prisoners of war. There is nothing in the Qur'an to say that prisoners of war must be held captive, but as this was the practice of the time and there was no international body to oversee exchanges of prisoners, the Qur'an deals with the subject. There are only two cases where it mentions their treatment: O Prophet! Tell the captives you have, "If God knows goodness in your heart He will give you better rewards than have been taken from you and forgive you. He is forgiving, merciful." And if they intend to be treacherous to you, they have been treacherous to God in the past and He has put them into your hands. 8:70-1 When you have fully overcome the enemy in the battle, then tighten their bonds, but thereafter set them free either by an act of grace or against ransom. 47:4 Grace is suggested first, before ransom. Even when some were not set free, for one reason or another, they were, according to the Qur'an and hadith, to be treated in a most humane way (Quran 76:8-9; 9:60; 2:177). In the Bible, where it mentions fighting, we find a different picture in the treatment administered to conquered peoples, for example: When you march up to attack a city, make its people an offer of peace. If they accept and open their gates, all the people in it shall be subject to forced labor and shall work for you. If they refuse to make peace with you in battle, lay siege to that city. When the Lord your God delivers it into your hand, put to the sword all the men in it. As for the women, the children, the livestock and everything else in the city, you may take these as plunder for yourselves. And you may use the plunder the Lord your God gives you from your enemies. This is how you are to treat all the cities that are at a distance from you and do not belong to the nations nearby. However, in the cities of the nations the Lord your God is giving you as an inheritance, do not leave alive anything that breathes. Completely destroy them - the Hittites, Amorites, Canaanites, Perizzites, Hivites, and Jebusites - as the Lord your God has commanded you.
Otherwise they will teach you to follow all the detestable things they do in worshipping their gods, and you will sin against the Lord your God. Deuteronomy 20:10-18.9 Resumption of Peaceful Relations We have already seen in the Qur'an 22:41 that God promises to help those who, when He has established them in a land after war, "... establish worship and pay the poor-due and enjoin what is good and forbid iniquity". In this spirit, when the Muslim army was victorious over the enemy, any of the defeated people who wished to remain in the land could do so under a guarantee of protection for their life, religion and freedom, and if they wished to leave they could do so with safe conduct. If they chose to stay among the Muslims, they could become members of the Muslim community. If they wished to continue in their faith they had the right to do so and were offered security. The only obligation on them then was to pay jizya, a tax exempting the person from military service and from paying zakat, which the Muslims have to pay - a tax considerably heavier than the jizya. Neither had the option of refusing to pay, but in return the non-Muslims were given the protection of the state. Jizya was not a poll-tax, and it was not charged on the old, or poor people, women or children.10 Humanitarian Intervention Humanitarian intervention is allowed, even advocated, in the Qur'an, under the category of defending the oppressed. However, it must be done within the restrictions specified in the Qur'an, as we have shown above. In intervening, it is quite permissible to co-operate with non-Muslims, under the proviso: Co-operate in what is good and pious and do not co-operate in what is sinful and aggression. Quran 5:2 In the sphere of war and peace, there is nothing in the Qur'an or hadith which should cause Muslims to feel unable to sign and act according to the modern international conventions, and there is much in the Qur'an and hadith from which modern international law can benefit. The Prophet Muhammad remembered an alliance he witnessed that was contracted between some chiefs of Mecca before his call to prophethood to protect the poor and weak against oppression, and said: I have witnessed in the house of Ibn Jud'an an alliance which I would not exchange for a herd of red camels, and if it were to be called for now that Islam is here, I would respond readily.11 There is nothing in Islam that prevents Muslims from having peaceful, amicable and good relations with other nations when they read and hear regularly the Qur'anic injunction, referring to members of other faiths: God does not forbid you from being kind and equitable to those who have neither made war on you on account of your religion nor driven you from your homes. God loves those who are equitable. Quran 60:8 This includes participation in international peace-making and peace-keeping efforts. The rule of arbitration in violent disputes between groups of Muslims is given in the Qur'an: If two of the believers take up arms against one another, make peace between them. If either of them commits aggression against the other, fight against the aggressors until they submit to God's judgment. When they submit make peace between them in equity and justice. God loves those who act in justice. 49:9 This could, in agreement with rules of Islamic jurisprudence, be applied more generally to disputes within the international community. For this reason, Muslims should, and do, participate in the arbitration of disputes by international bodies such as the United Nations.
Modern international organizations and easy travel should make it easier for different people, in accordance with the teachings of the Qur'an, to "get to know one another", "co-operate in what is good" and live in peace. The Qur'an affirms: There is no virtue in much of their counsels: only in his who enjoins charity, kindness and peace among people... Quran 4:114 Excerpted from "Understanding The Quran" by Muhammad Abdel Haleem 1. See Chapter 6 below. 2. Slay them wherever you find them: Humanitarian Law in Islam, by James J. Busuttil, Linacre College, Oxford, in Revue de Droit Penal Militaire et de Droit de la Guerre, 1991, pp. 113-40. 3. See Chapter 6 below. 4. See A. M. al-'Aqqad, op. cit. (Cairo, 1957), pp. 187-91, quoting a survey by Ahmad Zaki Pasha. 5. See for example 3:169-72; 9:120-1 and many hadiths in the chapters on jihad in the various collections of hadiths. 6. Busuttil, op. cit., p. 127. The rendering he uses runs: Idolatry is worse than carnage. This corrupts the meaning. It is clear from the preceding words, "those who have turned you out", that fitna means persecution. This meaning is borne out by the identical verb (turning out/expelling) preceding the only other verse (2:217) where the expression "fitna is worse than killing" appears. Here the statement is clearly explained: "Fighting in [the prohibited month] is a grave (offence) but graver is it in the sight of God to prevent access to the Sacred Mosque and drive out its people." 7. 'Aqqad, op. cit., pp. 204-9. 8. See Chapter 6. 9. In the New Testament Jesus gives the high ideal that if someone hits you on one cheek, you should turn the other cheek. Pardon and forgiveness on the individual level is also highly recommended in the Qur'an: "Good and evil deeds are not alike, Requite evil with good, and he who is your enemy will become your dearest friend, but none will attain this attribute save those who patiently endure; none will attain it save for those who are truly fortunate" (41:34-5). And see 45:14. But when it comes to the places of worship being subjected to destruction, and when helpless old men, women and children are persecuted, and when unbelievers try to force believers to renounce their religion, the Qur'an considers it total dereliction of duty for the Muslim state not to oppose such oppression and defend what is right. 10. See Chapter 6. 11. Red camels were proverbial in Arabia as the best one can have. Peace be with you. Since you cannot produce your own "translation" - and even if you could - I would advise you to take into account all the different views you would come across. Only then will you have a balanced view for your exposé. The Al-Qur'an alone, in its authentic Arabic form, is guaranteed consistency; anything else related to it from outside will have discrepancies or flaws. The Al-Qur'an and Issa (pbh) are both from the same source. Through the ARCHANGEL JIBRIL (pbh), they were brought to mankind for guidance; there are no FLAWS in them, by the will of Allah (swt) Himself. Look for more translations, but most importantly seek to expose the TRUTH; that is the main purpose behind the being of Issa (pbh) and the Qur'an and all those Prophets (peace be upon them). The Message they were carrying to mankind is not about them, but about our RELATIONSHIP with our Maker, the MOST HIGH, and between OURSELVES and the ENVIRONMENT (Fauna & Flora). Since the CHOICE comes to US as VICEGERENTS/GUARDIANS on this earth.
I would like to caution you though about your findings in the Qur'an. For they may not like it; after all, they do not consist of COMPLIMENTS, otherwise we would have the same Religion. Also, I would advise you against picking certain verses that only mention your three subjects. Consider the Word of Your Lord Most Gracious as a VIBRATION/PULSE/SOUND, therefore with UPS and DOWNS, TENSE and LOOSE FORMS, HARD and SOFT FORMS and HOT and COLD FORMS; He in HIS OWN WISDOM has made everything into a DUAL MODE, so that there can be a BALANCE/PROPORTIONALITY. Ask mathematicians, physicists and those engaged in EMPIRICAL SCIENCES how IMPORTANT these elements are for the TRUTH to be settled. Mark, may Allah (swt), Who is the BEST of guides, be your guide in this important assignment of yours. May He keep Sheytan, the enemy, at bay and soften the hearts and minds to the TRUE message you will be carrying. All the Best. I believe that to understand "Islamic History", one has to understand the pre-Islamic history of the peninsula and of the Arabs in general. Like any society it has its TRIBAL and CLAN divisions; fortunately, by the help of Allah (swt), these divisions lay dormant for some time during the lifetime of the Holy Prophet (pbh). I am sure you know what happened next soon after his "departure" from this life. The warfares that received so much attention from "Historians" are just legacies of those pre-Islamic divisions within the societies that embraced Islam, in general. Of course, some of those warfares had outside influences (from non-Muslim societies). If you refer to the "Pre-Islamic History", you will understand the differences between Imam Aliyun (may Allah be pleased with him) and his opponents. Sadly, Ya Rashid, "Islamic History" has not just been war after war, as it has been portrayed. The quick spread of Islam, to places as far away as you mentioned, is indeed the very proof that not all Muslims were indulging themselves in those warfares. Some cultures embraced Islam without a single drop of blood; it is as if they were waiting for it. Those societies had been in contact with Islam through TRADERS, TEACHERS and SEEKERS (sufi). We must also understand that ISLAM as a religion simply AWAKENS the ISLAM that is AN AUTOMATED SYSTEM installed within us and in Nature. Islam is the NATURAL WAY of things. Ya Rashid, you are indeed displaying a thirst for truth, common to all those great Men/Women who contributed so much to the spread of Islam. Perhaps it is time you followed in their footsteps as a SEEKER. Then, and only then, at some point you will come across a source to satisfy your THIRST, inshallah. I agree, this is an excellent article and one that, if followed, would serve as a standard for international relations everywhere. I am sorry you see me as someone out to bash Islam, as that certainly is not the case. As I have repeated many times on this site, it is not Islam with which I have a problem. I do have a problem with people who spew hate from behind a religion of any label. Your thinly veiled threat did not sound anything like what was said in the article. Perhaps you should read the Quran a bit more carefully and with a bit more understanding. Thank you so much. I hope I hear from you, especially regarding the first issue, what you say about us. I want to represent the Qur'an accurately, and I am hoping that despite some negative remarks I can find things that can teach Christians about themselves as well as Muslims and lead to better interfaith relations. Brothers & Sisters. Muhammad A.
Haleem (may Allah shower him and his friends and family members with HIS blessings in this world and the one to come; ameen) has, once again, produced a brilliant source of LIGHT/GUIDANCE to the WAY of ALLAH (swt). Sadly, Ya Abdel Haleem, these Qur'anic messages are not sinking into our minds; they are not being incorporated into our daily practices. We claim to be Muslims/Muslimahs, yet for the majority it only shows in our dress code or facial features. That is very sad. This is why we are in big trouble so far. Unless we abide by the principles of the Al-Qur'an and follow it to the letter, I am afraid our Umma will always be the underdog of all cultures, may Allah (swt) forbid. Well done! Well written! Insha Allah this article will clear the fog that has settled on Islam. Ameen. Jazak Allah Khair. The extraneous corpus of "Islamic History" raises a question in one's mind: how was it that the myriads of early inter-Muslim wars came to be, as propagated by the Muslim Historians? How come Muslims were slaughtering millions of Muslims at the embryonic stage of Islam, and yet it was able to spread East and West within the first century AH? This is a fact backed by the presence of Muslims in far-flung countries today. If the propagated history is correct, then logic tells us that there should have been nobody around to enlist in the armies to undertake such missions! Above all, questions must be asked as to how the Sahabaa (companions of the Messenger) such as Ali, Muawiya, al-Aa's, Ayisha etc. of the first generation, and custodians of Islamic knowledge, would indulge in such slaughters in contradiction to the Qur'an's instructions. If they were preoccupied with self-gratification - perish the thought - how come they had the time and energy to take part in spreading Islam? It must be asked, WHO were the so-called historians and Hadis compilers, and WHEN was this so-called history collated, that gives us the prevailing acrimonious understanding of war? Is it in some sections of the community's interest to perpetuate the whole story? The reality of the situation boggles the inquiring mind! Can some historian enlighten us, taking a dispassionate view of History, please? I bid you peace for now... until the rise of morn!
<urn:uuid:138c39bb-9e61-4195-bf7d-bb31fea39d15>
CC-MAIN-2022-33
https://www.islamicity.org/4270/war-and-peace-in-the-quran/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00498.warc.gz
en
0.965007
7,675
3.53125
4
Guest Post by Wim Röst Water, H2O, determines the ‘General Background Temperature’ for the Earth, resulting in Hothouse and Ice House Climate States. During geological periods the movement of continents changes the position of continents, oceans and seas. Because of the different configurations, a dominant warm or a dominant cold deep-water production configuration ‘sets’ average temperatures for the deep oceans. Changing vertical oceanic circulation changes surface temperatures, especially in the higher latitudes. During a Hot House State, higher temperatures in the high latitudes result in a high water-vapor concentration that prevents a rapid loss of thermal energy by the Earth. These three processes, plate tectonics (continental drift), vertical oceanic circulation variability and variations in atmospheric water vapor concentration and distribution, caused previous Hot House and Warm House Climate States. A change in the working of those mechanisms resulted in a transition from the previous Hot House Climate State to the very cold ‘Ice House State’ that we live in now. That change was set in motion by the changing configuration of continents, oceans and seas. The Earth has known Hothouse periods because of two things. The Earth warmed because of storage of thermal energy in the oceans (H2O) and because a higher quantity of water vapor (H2O) in the air (especially at the poles) prevented the Earth from cooling. The Earth has known Ice House periods because of a lack of storage of thermal energy in the oceans and because the resulting loss of atmospheric water vapor (especially at the poles) accelerated the cooling until the Earth reached an Ice House State. That is, more thermal energy can be radiated to outer space from the polar regions if they have a lower concentration of water vapor. In three ways the changing Earth created Hot House Climate States. The first and the most important was the creation of [relatively] warm deep oceans. The second important mechanism was a reversal of the vertical water circulation that resulted in a far more effective distribution of absorbed sun-energy over all latitudes. The third mechanism was the rise in the quantity of water vapor, the main infrared radiation absorbing gas of the lower atmosphere, a rise that prevented strong night and winter cooling especially at the high latitudes. All together the three mechanisms resulted in much warmer average global temperatures during Hot House climate states with far more evenly distributed temperatures over all latitudes. During Hot House (and Warm House) climate states, the whole Earth became lush and green, from the tropics to the poles. All changes were due to water, H2O. Physics did do the work, no humans involved. On a geological timescale, the configuration of ‘continents’ and ‘oceans’ determines the general climate state: warm or cold. Continents gave shape to different combinations of oceans and seas. Different configurations of oceans, seas and continents caused the production of warm or cold deep-ocean water. The temperatures of the deep-ocean set the ‘general background temperature’ for the Earth during different geological periods. Creating warm and cold climate states. Now we are living in an era of long glacials and short interglacials, our present era is an era within an Ice House Climate State. The temperature of the deep ocean is the main factor. Deep-ocean temperatures from -1 to +3 degrees Celsius, as we have now, keep the Earth in an Ice House State. 
Slightly warmer deep-oceans with temperatures from 6 to 10 degrees Celsius* bring the Earth to a Warm House or a Hot House Climate State. The underlying system for this switch is characterized by three mechanisms, as discussed below. 1. Warm deep oceans Warm House and Hot House Climate States were characterized by ‘warm’ deep oceans. ‘Warm’ has been warm in a relative way: as compared to our present ice-cold deep oceans of -1 to +3 degrees Celsius, the ‘warm’ deep oceans of the past probably had average temperatures of 6 to 10 degrees Celsius. As we shall see, this relatively small temperature difference had huge consequences for the global average surface temperature of the Earth and for the Earth’s climate state. In oceans and seas certain water goes down and other water wells up from the deep oceans, both in huge quantities. Think in terms of a million or more cubic kilometers a year. For the final temperature of the deep-ocean it is important which water wells down: relatively cold or relatively warm water. Present seas like the Mediterranean, the Red Sea and the Arabian Gulf demonstrate that it is possible to produce warm deep water: in arid regions the local evaporation produces high salinity surface water that is that dense that it goes down (‘sinks’) as warm salty water. After welling down, the warm and now ‘deep’ water is covered by less dense ocean water. In this way, even in our present Ice House State, the above-mentioned seas produce warm deep water of around 12 degrees Celsius and more. When the warm deep-water flows back into the oceans, it sinks to depths of 1000 to 4000 meters, depending on the salinity and the density of the local deep ocean. The deep warm and saline water is produced at latitudes where evaporation is higher than rainfall, often around 30 degrees North. Sea surfaces at 30 degrees contain very saline warm water that is still able to ‘float’ because of high water temperatures. But during wintertime this saline surface water cools and sinks. Shallow, enclosed seas like the Mediterranean still produce warm, deep water, but only have a small surface area. Too small, to get the Earth out of her present Ice House State. But in the geologic past, fifty- to one-hundred million years ago, those shallow and nearly enclosed seas were very extended at latitudes where they could produce warm deep-water: around 30°N, See Figure 1. Figure 1: The position of continents, oceans and shallow seas 100 million years ago, according to Christopher Scotese, as shown in this animation. Because of their large total surface area, these shallow and nearly enclosed seas produced huge quantities of deep warm water. That warm deep-water production warmed the deep oceans that characterized Warm House and Hot House periods, like the Cretaceous. During those Warm House and Hot House periods there was no dominating cold deep-water production. The poles were cooler than the tropics, but not frozen, because of upwelling warm deep-water. Because of the still relatively low temperatures at the poles, evaporation was low and was exceeded by rainfall, which resulted in relatively fresh surface waters in the polar seas, a freshness that became further enhanced by (fresh) poleward river runoff. Because of the freshness of the surface waters, their density was low. Low density waters don’t sink. Because of that, there was no massive cold deep-water production at the high latitudes during Warm House and Hot House states. This was the second reason why warm, deep water production dominated. 
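As a rough illustration of why warm but very saline water can nevertheless sink while cold but fresher polar surface water stays afloat, the short sketch below (added here for illustration, not part of the original post) uses a linearised equation of state for seawater with typical textbook coefficients; the specific water masses and coefficient values are illustrative assumptions, not figures given by the author.

```python
# Illustrative sketch (not from the post): a linearised equation of state for seawater,
# with typical textbook coefficients, to show why warm but very saline water can sink
# while colder but fresher polar surface water stays at the surface.

RHO_REF = 1027.0               # reference density (kg/m^3) at T_REF, S_REF
T_REF, S_REF = 10.0, 35.0      # reference temperature (deg C) and salinity (psu)
ALPHA = 2.0e-4                 # thermal expansion coefficient (1/deg C), approximate value
BETA = 7.6e-4                  # haline contraction coefficient (1/psu), approximate value

def density(temp_c: float, salinity_psu: float) -> float:
    """Linearised seawater density; real equations of state are nonlinear."""
    return RHO_REF * (1.0 - ALPHA * (temp_c - T_REF) + BETA * (salinity_psu - S_REF))

# Hypothetical water masses, loosely based on the values mentioned in the post:
water_masses = {
    "warm, very saline outflow (~13 C, 38.5 psu)": (13.0, 38.5),
    "open-ocean surface water (~10 C, 35 psu)":    (10.0, 35.0),
    "fresh polar surface water (~5 C, 33 psu)":    (5.0, 33.0),
}

for name, (t, s) in water_masses.items():
    print(f"{name}: {density(t, s):7.2f} kg/m^3")

# Typical output: the warm, saline water is the densest of the three (~1029 kg/m^3),
# so it sinks below fresher, colder water despite being several degrees warmer.
```

Under these assumed values, the extra salinity more than compensates for the thermal expansion, which is the essence of the warm deep-water argument made above.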
Warm, deep oceans were the result, see Figure 2. Figure 2: Warm and cold deep-water production 100 million years ago, same figure as figure 1, red and blue squares added. In red: the areas around 30 degrees with a supposed massive warm deep-water production. In blue: present cold deep-water production areas, not functioning 100 million years ago: surface waters were too fresh. 2. A reversed vertical water circulation In our present Ice House State, we find a massive sink of cold water at the higher latitudes. This sinking water is replenished by warm and saline surface water, transported over the surface. Currents like the warm Gulf Stream do the work. However, the present poleward transport consists of a current over only part of the total width of the ocean and using only a minor layer at the surface. Currently, the quantity of warm water transported to the poles is far less than the warm, deep water transport during Warm House and Hot House periods. This is shown below in the figures 3a and 3b. Figure 3a: Present oceanic transport in the North Atlantic, over latitude 40N, schematic. The red block represents the poleward surface transport by the Warm Gulf Stream. The depth of the surface layer is exaggerated. Figure 3b: Hot House oceanic transport in the North Atlantic, over latitude 40N, schematic. The big block with relatively warm water represents the deep ocean, transporting warm water pole ward. The volume of warm-water poleward transport is important. In a hot-house scenario it is ocean-wide and ocean-deep. The moderate water at the surface flows from the north pole to 30N, to replenish the sinking waters at 30N. The depth of the surface layer is exaggerated. Warm, deep water was transported to the poles where at that time deep water was welling up (Golovneva 2000**). The warm water prevented formation of polar ice and the poles stayed ice free, even in winter time. During the winter half year, at the North Pole cold land areas were bordering the relatively warm Arctic Ocean, resulting in a high temperature gradient between the two. The high gradient resulted in strong winds that caused the upwelling of warm deep-water. The warm polar upwelling and the downwelling at 30N together resulted in a reversed vertical ocean circulation, when compared to the present one. Our present vertical oceanic circulation is shown in figure 4a. Figure 4a: Present (Ice House) vertical oceanic circulation. North-South transect, simplified, schematic. South Pole at the left (90°S), North Pole at the right (90°N), equator (0°) in the centre. Present cold deep-water production at the poles dominates the deep oceans. Our present deep oceans are ice-cold, and that cold water is welling up at lower latitudes, cooling the warm surface layer. After upwelling, cool surface waters (shown in orange) are warmed in the tropics and transported poleward over the surface by warm currents. But, in a Warm House or a Hot House State, the vertical water circulation is the reverse of the circulation as shown in figure 4a. See below, figure 4b. Figure 4b: Hot House vertical oceanic circulation. North-South transect, simplified, schematic. South Pole at the left (90S), North Pole at the right (90N), equator (O) in the centre. Warm saline waters went down at 30°N, filling up all the world’s deep oceans. The upwelling warm water at the poles had to flow back to the downwelling areas at 30°N to replenish the downwelling waters. 
Because surface waters at the poles became fresh and less dense, polar water stayed at the surface when it flowed back to 30N. As compared to the present situation (figure 4a), the vertical circulation in the oceans was reversed during Warm and Hot House periods (4b). Deep, warm water was not only transported basin-wide and basin-deep but had the advantage that it was not cooled at the surface. For these reasons, this deep redistribution of tropical thermal energy was superior to the present ‘Gulf Stream like’ thermal energy transport over the surface. It was the perfect way to transport absorbed tropical/subtropical energy over all latitudes. The results were higher average surface temperatures and a more uniform climate from the poles to the equator. That is, a smaller pole-to-equator temperature gradient, see Figure 5. Figure 5: Temperatures per latitude over the Northern Hemisphere for the Maastrichtian period, 72- 66 million years ago. The graphic below is fig. 2 from Golovneva, 2000, see the abstract at the end of the post (Golovneva 2000)**. From the paper: “Temperature gradients for present (continuous line, after Barron (1983)) and the Maastrichtian stage (dotted line).” The dotted line is based on fossil plant evidence. Source: (Golovneva 2000) Only a reversed vertical oceanic circulation can create the pole-to-equator gradient that is shown in the figure above, for the Maastrichtian. Notice the lower temperatures around the equator during the Maastrichtian (Warm House) period, as compared to the present period. Much higher temperatures than todays were not only found at and around the North Pole, but also in Antarctica (Francis and Poole 2002). *** The very different and more uniform distribution of surface temperatures over the latitudes of the Earth as caused by the oceans had important consequences for the role of water vapor, our main surface infrared radiation absorbing gas. 3. Water Vapor effects Together, the warm deep-water production and the reversed circulation created temperatures at the high latitudes that were much higher than today’s minus 30 to minus 50 degrees Celsius during winter time. During Warm House and Hot House periods, even in winter time temperatures above zero were normal at the poles and for summer time, moderate temperatures were found (Golovneva 2000)**. The much warmer poles and middle latitudes lifted the average temperature of the Earth. As surface temperatures rose, the rate of evaporation over the oceans at the higher latitudes rose exponentially, resulting in a huge rise in water vapor content in the regional lower atmosphere. Water vapor is by far our main infrared radiation absorbing gas. Because of the warm deep-water production and because of the reversed circulation during Hot House periods, water vapor became also abundant over the middle and high latitudes. Reducing the speed of heat loss to outer space. And resulting in higher surface temperatures than could have been caused by only polar upwelling of relatively warm deep-water. Especially during night time and during the long dark polar winters, the water vapor effect enhanced the polar temperature increase that is initially caused by the reversed circulation and the relatively warm deep-water. Figure 6: Water vapor content over latitudes, in our present Ice House State (on the left) and water vapor in a Warm House / Hot House State (on the right). North Pole on top, the Antarctic at the bottom of the figures. 
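To give a feel for how steeply the atmosphere's capacity to hold water vapour grows with surface temperature, the following sketch (added for illustration, not taken from the post) evaluates the Magnus approximation for saturation vapour pressure at a few assumed surface temperatures; the "hot-house polar winter" value is purely illustrative.

```python
# Illustrative sketch (not from the post): the Magnus approximation for saturation
# vapour pressure over water, showing how steeply the atmosphere's capacity to hold
# water vapour grows with surface temperature.

import math

def saturation_vapour_pressure_hpa(temp_c: float) -> float:
    """Magnus formula (approximate), valid roughly from -40 C to +50 C over water."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Compare a present-day polar winter surface with the milder polar winters the post
# describes for Warm House / Hot House states (the temperatures here are assumptions).
for label, t in [("present polar winter", -40.0),
                 ("present polar summer", 0.0),
                 ("hot-house polar winter (assumed)", 5.0),
                 ("mid-latitude ocean surface", 15.0)]:
    print(f"{label:35s} {t:6.1f} C -> {saturation_vapour_pressure_hpa(t):8.2f} hPa")

# Saturation vapour pressure roughly doubles for every ~10 C of warming, so a polar
# surface that is tens of degrees warmer can hold one to two orders of magnitude more
# water vapour -- the post's third H2O mechanism.
```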
Clearly, the Earth loses more energy in an Ice House State because of the lower content of water vapor over the middle and higher latitudes, resulting in a strong cooling for the Earth as a whole. Therefore, our present average temperature for the Earth as a whole is low. The massive heat loss at the poles in combination with the poleward transport of very saline surface water results in the massive production of ice-cold, deep water that has been filling the present deep oceans. Elsewhere, upwelling deep cold water cools the surface, even far from the poles, lowering average temperatures. During a Hot House / Warm House State, a large quantity of water vapor over the high latitudes prevents the fast loss of thermal energy at the poles, leading to a higher temperature level for the poles and for the Earth as a whole. The higher temperature level at the poles also prevents 'Ice and Snow albedo cooling'. A Hot House / Warm House State was the result. Warm periods are characterized by a high stability. During Hot House periods climates were also very equal over most of the surface of the Earth. A warm Earth meant a stable, equal and in general moderate Earth. During Hot House / Warm House climate states, the higher water content of the atmosphere over the middle latitudes also enhances rainfall over the poles, creating lower salinity ocean surfaces at the high latitudes that prevent cold deep-water production. Fresh polar waters stabilized the deep-warm-water heating system over the whole Earth as thermal energy was absorbed by the oceans at low latitudes. Water stores and transports huge quantities of thermal energy, especially if distributed ocean-wide and ocean-deep. Even if transport is very slow, it results in a huge redistribution of thermal energy over latitudes. Because of the role of water vapor, deep, warm oceans did not need to be very warm to create ice-free poles. A moderate rise in deep-water temperature and a reversed circulation were enough to create the right circumstances for an important rise in water vapor content over the middle and high latitudes. Altogether, the 'Triple H2O System' prevented a strong heat loss at the poles and created a big rise in temperature, which resulted in higher average temperatures for the Earth as a whole. 3 x H2O + 1: Water vapor creates 'weather' The far more evenly distributed temperatures over all latitudes, together with the enhanced water vapor content at the higher latitudes, also changed the whole atmospheric system of Warm House and Hot House eras. Low- and high-pressure systems, wind direction, wind speed, evaporation, convection, clouds and cloudiness, etc., all changed. All acted completely differently compared to our present atmospheric system. The more equal circumstances over latitudes reduced the pole-to-equator temperature gradient. Together with 'temperature', it is water vapor, H2O, that creates the daily and seasonal variations in the atmosphere that we call 'weather'. And because 'climate' is defined as the average of 30 years of 'weather', it is the water vapor molecule, H2O, that changes climate. Water vapor creates differences in the density of air, just as salinity creates differences in the density of ocean water. Both water vapor and salt create the movements in the two 'fluids' that create our weather and our climate, the oceans and atmosphere. Water vapor and temperature rule convection in the air, and salt and temperature rule convection in the oceans.
In short, the system of 'weather', 'climate' and 'climate states': Oceans create 'climate states', and in the atmosphere 'weather' is created by water vapor. 'Orbit' creates seasons. Slight differences in orbit create glacials and interglacials in our present Ice House State. Stadials (or glacial periods) start when the temperature of the oceans cools enough that snow and ice can start enhancing the Earth's albedo. The lower temperatures cause much lower water vapor concentrations in the higher latitudes. Less water vapor in the air cools the poles during night time and during the winter, enhancing ice and snow effects and increasing the pole-to-equator temperature gradient. Weather changes, and climate changes. All this happens in a certain setting of continents and oceans, and each specific continental configuration for every geological era results in a set of possibilities for weather and climate. This set of possibilities is limited by a 'general background temperature' that mainly depends upon the temperature of the deep oceans. Warmer deep-ocean water, a reversed vertical oceanic circulation and a far higher quantity of atmospheric water vapor over the higher latitudes together create Warm House and Hot House Climate States. 'Water' is the main constituent of the three mechanisms: H2O, H2O and H2O. Another positioning of continents and oceans in previous geological eras enabled a dominant warm deep-water production. Nearly enclosed seas and shallow seas at 30N produced warm and very saline water that, because of its high density, filled the deep oceans. Over the poles, a higher rainfall than evaporation caused fresh polar surface waters; this prevented massive, deep cold-water production. Warm deep-water production dominated. Warm deep-water production at 30N resulted in warm upwelling at the poles and in a reversed vertical water circulation in the oceans. Higher temperatures at the poles and the mid-latitudes were the first result. In Warm and Hot House climate states, the poles stayed ice-free and polar winter temperatures became very moderate. Over the higher latitudes, during Hot House periods the quantity of the most important infrared radiation absorbing gas in the atmosphere, water vapor, increased because of higher polar surface temperatures. That polar water vapor prevented a large heat loss for the Earth as a whole, warming the Earth. Water vapor also kept the poles and the higher latitudes warm. And because of the strong poleward rise in temperatures, the average temperatures of the Earth rose to what we know now as the higher average temperatures of Warm House and Hot House Climate States. Because of those three H2O mechanisms, Hot House Climate States developed in periods when the positioning of continents enabled a dominant warm deep-water production and prevented a massive production of cold deep-water. Nearly all of the past 250 million years were [much] warmer than the Ice House State of the last 3 million years. The disappearance of warm, deep-water producing seas was the cause, in combination with the geologically recent development of a system of deep-cold-water production near the poles. Simple physics did the work. All natural. Triple H2O. With regard to commenting: please adhere to the rules known for this site: quote and react, no personal insults. In commenting, please remember you are on an international website: for foreigners it is difficult to understand abbreviations.
Foreigners only understand words and (within the context) easy to guess abbreviations like ‘60N’ or ’SH’. About the author: Wim Röst studied human geography in Utrecht, the Netherlands. The above is his personal view. He is not connected to firms or foundations nor is he funded by government(s). Andy May was so kind as to read the original text and improve the English and text where necessary. Thanks again Andy. * According to this Bill Illis, graphic: ** Source of the data: (h/t Philip Mulholland) *** Cretaceous and early Tertiary climates of Antarctica: evidence from fossil wood Francis, Jane and Imogen Poole Fossil wood is abundant in Cretaceous and early Tertiary sediments of the northern Antarctic Peninsula region. The wood represents the remains of vegetation that once grew in high palaeolatitudes when the polar regions were warmer, during former greenhouse climates. Fossil wood is a unique data store of palaeoclimate information. Analyses of growth rings and anatomical characters in fossil wood provide important information about temperature, rainfall, seasonality and climate trends for this time period in Antarctica. Climate signals from fossil wood, supported by sedimentary and geochemical evidence, indicate a trend of cool climates during the Early Cretaceous, followed by peak warmth during the Coniacian to early Campanian. Narrower growth rings suggest that the climate cooled during the Maastrichtian and Palaeocene. Cool, wet and possibly seasonal climates prevailed at this time, with tentatively estimated mean annual temperatures (MATs) falling from 7°C to 4–8°C respectively, determined from dicotyledonous (dicot) wood anatomy. The Late Palaeocene/Early Eocene was once again warm, with estimated MATs of 7–15°C from dicot wood analysis, but conditions subsequently deteriorated through the latter part of the Eocene, when cold seasonal climates developed, ultimately leading to the onset of Cenozoic ice sheets and the elimination of vegetation from most of Antarctica. Source. Francis, Jane and Imogen Poole.2002. „Cretaceous and early Tertiary climates of Antarctica: evidence from fossil wood.” Palaeogeography, Palaeoclimatology, Palaeoecology 182 (1-2). https://www.sciencedirect.com/science/article/pii/S0031018201004527. Golovneva, Lena. 2000. The Maastrichtian (Late Cretaceous) climate in the Northern Hemisphere.” Geological Society, London, Special Publications 181: 43-54.
<urn:uuid:bce31e30-6ca8-42de-b707-19afcadce1d2>
CC-MAIN-2022-33
https://wattsupwiththat.com/2018/06/15/how-the-earth-became-a-hothouse-by-h2o/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00697.warc.gz
en
0.913841
5,046
4
4
Rapid progress in sequencing technologies is poised to set the imagination of biomedical researchers on fire. Experts now believe that progress is about to make possible what seemed utopian a few years ago – it seems likely that it will soon be possible to sequence the human genome in only a few minutes and store and automatically analyse it using tiny automated devices. However, is everything that is technically feasible also reasonable? While basic researchers and technology providers radiate confidence, human geneticists and clinicians tend to attenuate the expectations raised by genomic medicine. The road from genetic/genomic diagnostics into clinical routine is a long one, the one exception to the rule being commercial genetic tests for monogenic diseases. In the foreseeable future, the predictive genetic testing of common diseases will continue to make no medical sense and to be of little informative value. Recent publications by the international genome research community (www.encodeproject.org/ENCODE/ and www.1000genomes.org/) have disillusioned many researchers. Scientists have discovered that the human genome harbours around four million gene switches in DNA regions that were previously dismissed as "junk". It is becoming increasingly clear that the 22,000 or so human genes are subject to far more complex regulation processes than previously thought. The function of only half of the human genome is known. Less than two percent of the DNA (the exome) codes for proteins; the remaining, seemingly useless amount of DNA is vast and largely unknown territory. What implications does this have for genetic diagnostics? Independent experts agree that the routine clinical application of genetic diagnostics remains a distant prospect. As things stand at the moment, genetic diagnostics only makes medical sense for the diagnosis of monogenic diseases and for pharmacokinetic applications; genetic diagnostic testing is a long way from becoming standard practice (Harper, A.). Genome sequencing is currently largely limited to basic research. Speaking at the public hearing of the German Ethics Council in March 2012, Karl J. Lackner, director of the Institute of Clinical Chemistry and Laboratory Medicine at the University of Mainz, expressed the opinion that the untargeted sequencing of the entire genome does not make any medical sense at all. "As long as we do not understand how certain genes affect a person's susceptibility to a certain disease, it is of little use knowing that he or she has a moderately increased genetic disease risk." The human geneticist Karsten Held believes that the same applies to prenatal diagnostics: "The higher the resolution, the more nightmarish it can be for a genetic counsellor to deal with. The level we are referring to is not the issue. It will always be a problem that the more information we have, the more difficult it becomes to communicate it." All experts agree that sequencing technologies have developed at a revolutionary speed over the last five years. The "$1000 genome" seems to be just around the corner and, according to experts, the trend towards further miniaturisation and cost reduction will continue. Next-generation sequencing (NGS) is currently replacing Frederick Sanger's method of DNA sequencing developed in the 1980s. Basically, high-throughput DNA analysis methods enable miniaturisation and optimisation, which allows individual sequencing reactions to be carried out in a highly parallel manner.
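As a back-of-the-envelope illustration of how per-base coverage, instrument throughput and raw error rates interact, the following sketch (not from the original article) combines the approximate figures quoted in the text below; the genome size, the choice of the upper throughput value and the error-rate figure are simplifying assumptions, not specifications of any particular instrument.

```python
# Back-of-the-envelope sketch (illustrative only): how per-base coverage, daily
# throughput and raw error rate interact for whole-genome sequencing.
# The figures mirror the rough numbers quoted in the article; they are not
# specifications of any particular sequencing platform.

GENOME_SIZE = 3.2e9          # haploid human genome, base pairs (approximate)
COVERAGE = 100               # the "100-fold per-base coverage" mentioned in the text
THROUGHPUT_PER_DAY = 1e10    # upper end of the quoted 10^7 - 10^10 bases/day
RAW_ERROR_RATE = 0.01        # "error rates of up to one percent"

bases_needed = GENOME_SIZE * COVERAGE
days_needed = bases_needed / THROUGHPUT_PER_DAY
raw_errors_per_pass = GENOME_SIZE * RAW_ERROR_RATE   # expected wrong calls in one read-through

print(f"Bases to sequence for {COVERAGE}x coverage: {bases_needed:.2e}")
print(f"Days on a 1e10 bases/day instrument:        {days_needed:.0f}")
print(f"Expected raw base-call errors per 1x pass:  {raw_errors_per_pass:.1e}")

# Roughly 3.2e11 bases and about a month of instrument time per genome at 100x on a
# 1e10 bases/day system -- which is why high coverage "reduces the throughput rate",
# and why a ~1% raw error rate (tens of millions of wrong calls per single pass) must
# be beaten down by deep coverage before results approach medical usability.
```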
At present, 10⁷ to 10¹⁰ nucleotides can be sequenced per day and system, surpassing the capacity of earlier technologies by several orders of magnitude. State-of-the-art NGS systems are able to sequence and compare four human genomes within two weeks, not including data analysis. Experts believe that around 100-fold per-base coverage is required to provide a high enough level of precision; this in turn reduces the effective throughput. The NGS market is currently dominated by three companies offering different though basically comparable technologies. The most innovative approach is single-molecule sequencing. State-of-the-art sequencing technologies already play a major role in basic research, helping to explain the pathogenesis of diseases on the molecular level and to detect microbes such as those that caused the EHEC outbreak in Germany in the summer of 2011. Oncology centres in the USA are looking into several hundred tumour genes in an effort to correlate genomics with therapy response. Another example: a pilot programme was recently started in Norway that aims to bring next-generation DNA sequencing into the country’s national healthcare system in order to personalise cancer treatments and increase the chances of curing cancer (Theurillat, Jean-Philippe). That said, NGS technologies still suffer from one major drawback: they produce high false-positive rates (Timmermann, Bernd). All whole-genome sequencers produce their own specific errors. Although these errors are difficult to quantify, their number (error rates of up to one percent) is nevertheless too high for the data to be used for medical purposes.

Experts believe that genome sequencing can have far-reaching consequences for research into and treatment of monogenic diseases. They believe that determining the molecular causes of monogenic defects (e.g. the Neuromics project at the University of Tübingen) will become easier and cheaper. Modifications in up to two thirds of all human genes can lead to monogenic diseases. In contrast to multifactorial diseases, monogenic diseases are relatively rare. However, they affect a large number of people worldwide, as Hans-Hilger Ropers from the Max Planck Institute for Molecular Genetics explains. Three to four percent of all newborn babies have a monogenic disease. Around 7,000 monogenic diseases have been studied in detail; each is caused by a modification in a single gene (out of a total of around 3,000 genes known to cause monogenic disease), and they are difficult to detect. It is assumed that between 33 and 50% of all monogenic diseases are known. Treatment (e.g. a combination of diet and medication) is available for about 500 of them. A universal heterozygote screening test for 448 autosomal recessive diseases can be bought in the USA for the equivalent of €400; it is not approved for sale in Germany. This test is specifically aimed at couples who want to rule out the possibility of being carriers of a rare but severe hereditary disease and of passing it on to their children. The relevant medical associations in Germany reject the test.

Ethical issues related to genetic diagnostics become most obvious when dealing with monogenic diseases. There is a high chance of a monogenic disease being passed on during pregnancy and leading to serious disorders. Moreover, monogenic diseases often run in families, and the quality of life of those affected is significantly reduced.
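To put the throughput, coverage and error-rate figures quoted earlier in this section into perspective, a quick back-of-the-envelope calculation can help. The short Python sketch below is purely illustrative: the genome size, throughput range and error rate are rounded assumptions taken from the numbers mentioned in this article, not the specifications of any particular sequencing platform.

```python
# Back-of-the-envelope arithmetic for the sequencing figures quoted above.
# All numbers are illustrative assumptions, not vendor specifications.

GENOME_SIZE = 3.2e9          # approximate human genome size in base pairs
COVERAGE = 100               # ~100-fold per-base coverage, as suggested above
PER_BASE_ERROR_RATE = 0.01   # "error rates of up to one percent"

# Total bases that must be read for 100-fold coverage of one genome
bases_needed = GENOME_SIZE * COVERAGE

# Days required at the low and high ends of the quoted throughput range
for throughput_per_day in (1e7, 1e10):
    days = bases_needed / throughput_per_day
    print(f"{throughput_per_day:.0e} nt/day -> {days:,.0f} days for one genome at 100x")

# Expected number of erroneous raw base calls in a single pass over the genome
print(f"Raw errors expected in one pass: {GENOME_SIZE * PER_BASE_ERROR_RATE:,.0f}")
```

The point of the exercise is simply to show why the high coverage needed to average out a roughly one-percent raw error rate consumes so much of the nominal sequencing capacity, and why raw throughput numbers alone say little about clinical readiness.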
The ethics of testing individuals for monogenic diseases was also discussed at a public debate held at the Berlin-Brandenburg Academy of Sciences in September 2012. Human geneticists like Karsten R. Held, the medical director of the Centre of Human Genetics in Hamburg, only consider genetic diagnostics acceptable on condition that it is an integral part of genetic counselling, which means that advice needs to be given both before and after diagnosis. The reason for this is that the increasing sensitivity of the tests leads to a higher number of erroneous diagnoses and makes the interpretation of data more difficult. Held further highlights that progress in the ability to predict diseases will be fostered by epigenetic analyses and the identification of new biomarkers, which provide information not only about a person’s genetic make-up but, to an even greater extent, about the activity of his or her genes. Experts therefore believe that the use of NGS is only reasonable when it is carried out and discussed in close cooperation with doctors who have specialist knowledge of human diseases and syndromes. This may prevent patients with a rare disease from having to visit one doctor after another, or at least cut back on the number of visits. With regard to monogenic diseases there is always the question as to why one should try to diagnose a disease when causal therapies are (still) lacking for most of them. Huntington’s disease, an autosomal dominant monogenic disease for which no cure is available, has raised several ethical issues, particularly with regard to the use of genetic tests for diagnosing the disease. For example, German law prohibits prenatal genetic testing for diseases such as Huntington’s that manifest themselves only in adulthood. Familial (hereditary) breast cancer is another disease that manifests itself in adulthood.

According to experts, successful prediction of diseases using genetic tests is only achieved in rare cases, for example those in which only one or a few gene variants have an impact on disease predisposition. They believe that predictive genetic diagnostics is only possible for those psychiatric diseases that are linked to copy number variations (CNVs), which are quite rare. Examples include early-onset Alzheimer’s disease, in which three genes have been shown to have a major effect on the risk of developing the disease in one’s 30s or 40s, diabetes risk caused by mutations in the insulin receptor gene, and the risk of myocardial infarction caused by mutations in genes that code for certain enzymes. The results obtained by the ENCODE and 1000 Genomes projects are a real goldmine for scientists. However, the data are a bitter disappointment for those who believe that genetic analyses are able to predict susceptibility to common diseases such as diabetes, Alzheimer’s or cancer. The data show unambiguously that it is impossible to deduce a reliable risk of developing multifactorial diseases from the human genome. Numerous scientific studies have come to similar conclusions (e.g. Roberts, Nicholas J.).
Speaking at the public hearing of the German Ethics Council on 3rd May 2012, Thomas Wienker from the Max Planck Institute for Molecular Medicine summarised the outcome of such landmark studies as follows: “The information that can be deduced from the genetic architecture of many multifactorial diseases is extremely sobering.” The causes of genetic disease can relate to alterations in a large number of gene loci and alleles, all of which can have different effects. According to experts, many people have underestimated this heterogeneity, especially as far as common diseases are concerned. Every individual differs from every other in around one thousandth (around 4 million bp) of his or her genome. That said, a person suffering from a disease might differ from a healthy person in only one mutation. The targeted search for such mutations requires access to information deduced from the genomes of millions of individuals whose diseases would have to be known. In addition, complex diseases can arise from interactions between a person’s genes and the environment; these interactions, however, are usually unknown or not quantified to the degree that would be necessary to predict susceptibility to disease. Efforts undertaken under the German National Cancer Plan to develop criteria and conditions for the risk-adapted prevention of diseases based on new genetic risk factors are less ambitious but potentially more successful. The goal of these approaches is to protect patients and people seeking advice against useless and unfounded predictive genetic analyses as well as to strengthen cancer prevention efforts (Schmutzler, R. et al.).

The quantity of data related to the human genome is increasing at enormous speed, and this also generates a greater demand for genetic counselling. Or to put it another way: the increasing application of high-tech medicine also requires counselling services that have to be paid for. In Germany, prenatal diagnostics has been integrated into the Genetic Diagnostics Act. This has serious consequences: far too few human genetics specialists or doctors with in-depth genetic expertise are available to deal with the people undergoing diagnostic testing. It is estimated that around 500,000 prenatal genetic examinations and disease risk assessments are carried out in Germany per year, which means that around one million genetic counselling sessions need to be offered, one prior to and one after each examination. Up until now, human genetics specialists have provided genetic counselling to around 50,000 people per year. In order to increase the number of qualified counsellors, the German government has given doctors the option of qualifying in genetic counselling through a further training course of 72 training hours (Schwerdtfeger, p. 56, in: Duttge et al.). It did not come as a surprise that the highly controversial Genetic Diagnostics Act was once again heavily criticised. The German Medical Association and the Association of German Human Geneticists called the new regulation “the worst ever legal error”, an “inflationary medical service designed by politicians without any consideration for patients and doctors” (Schulze, Bernt). The German Ethics Council has announced that a comprehensive report on genetic diagnostics is to be published in 2013 and believes that this report will fuel the discussion on genetic testing. In fact, the vast majority of experts believe that the German Genetic Diagnostics Act needs to be amended.
Many people have criticised the fact that research that generates huge quantities of genetic information does not fall under the scope of this law. Another controversial aspect relates to the information study volunteers need to be given before and after testing. Calls for guidelines for researchers, clinicians and patients are getting louder. In addition to medically questionable lifestyle genetic tests (so-called direct-to-consumer tests), the economic dimension of genetic diagnostics is another controversial issue – for example, the question of whether insurance companies should pay for genome sequencing in cases where lifestyle changes might prevent or improve the diagnosed or predicted condition. The Netherlands is evaluating the introduction of a targeted exome test (comprising 1,200 genes) to assess the disease risk of seriously ill children. By the end of the year, the interdisciplinary Genetic Diagnostics Commission will provide information about the current price of genome sequencing. Human geneticists are keen to find out about the counselling requirements associated with newborn hearing disorder screening, which has recently been implemented in all German states. Approximately 50% of all hearing disorders in children have a genetic basis (Henn, in: Duttge et al., p. 27). And there is yet another serious unsolved problem that needs to be dealt with, namely quality assurance. A survey of human molecular genetic testing laboratories published in 2012 (Berwouts, S. et al.) concluded that quality practices vary widely in European genetic testing laboratories and that this is associated with potential risks for patients as well as compromised patient care and treatment. Human geneticists are also calling for measures to ensure the quality of genetic counselling (Henn, in: Duttge et al., p. 30) and for the approval of genetic tests that enable people who have been tested to take the necessary steps to improve their health. Recent genome research results might have disproved the dogma of the exceptionality of genes. The bioethical debate nevertheless needs to be continued, as it is governments’ responsibility to regulate the use of the flood of genetic data while respecting basic individual rights. Still, it is safe to assume that if something is technically feasible, it will be put into practice – now and in the future; detailed knowledge of a person’s genome will not be an exception to this rule.

wp - 23.11.2012 © BIOPRO Baden-Württemberg GmbH

Abecasis, G.R. / The 1000 Genomes Project Consortium: An integrated map of genetic variation from 1,092 human genomes, Nature, 491, 1st November 2012, doi: 10.1038/nature11632.
Berwouts, S. et al.: Quality assurance practices in Europe: a survey of molecular genetic testing laboratories, European Journal of Human Genetics (2012), 20, p. 1118-1126, doi: 10.1038/ejhg.2012.125, online 27th June 2012.
Goldsmith, L. et al.: Direct-to-consumer genomic testing: systematic review of the literature on user perspectives, European Journal of Human Genetics (2012), 20, p. 811-816, doi: 10.1038/ejhg.2012.18.
Harper, A. / Topol, E.: Pharmacogenomics in clinical practice and drug development, Nature Biotechnology, Vol. 30, No. 11, Nov. 2012, p. 1117-1124, doi: 10.1038/nbt.2424.
Kaiser, J.: A Reality Check for Personal Genomes, ScienceNOW, 2.4.2012.
https://news.sciencemag.org/sciencenow/2012/04/a-reality-check-for-personal-gen.html
Mullard, A.: Consumer gene tests poised for regulatory green light, Nature Medicine, Vol. 18, No. 9, September 2012, p. 1306.
Roberts, Nicholas J. et al.: The Predictive Capacity of Personal Genome Sequencing, Science Translational Medicine, rapid publication 2nd April 2012, doi: 10.1126/scitranslmed.3003380.
Challenges associated with the ENCODE project, e.g.: Ward, L. / Kellis, M.: Interpreting noncoding genetic variation in complex traits and human disease, Nature Biotechnology, Vol. 30, No. 11, Nov. 2012, p. 1095-1106, doi: 10.1038/nbt.2422.
Articles in German:
Theurillat, Jean-Philippe, NZZ, 23.5.2012, https://www.nzz.ch/wissen/wissenschaft/die-krebstherapie-der-zukunft_1.16997844.html
Timmermann, Bernd, MPI for Molecular Genetics, March 2012, public hearing of the German Ethics Council.
Orth, M. et al.: Praktische Umsetzung des Gendiagnostikgesetzes (GenDG) in der Laboratoriumsmedizin, dem humangenetischen Laboratorium und der humangenetischen Beratung / Practical implications of the German Genetic Diagnostics Act (GenDG) for laboratory medicine, the human genetics laboratory and genetic counselling, in: LaboratoriumsMedizin, Vol. 35, H. 5, p. 243ff.
Schulze, Bernt: Gendiagnostikgesetz und genetische Beratung I: Geschichte eines Irrwegs, Dt. Ärzteblatt, H. 16, 20. April 2012.
Schmutzler, R. et al.: Hoffnung und Fluch der Genanalyse, in: Deutsches Ärzteblatt, H. 26, 29.6.2012, p. 1371ff.
Duttge, G. / Engel, W. / Zoll, B. (Eds.): Das Gendiagnostikgesetz im Spannungsfeld von Humangenetik und Recht, Göttingen 2012 (Göttinger Schriften zum Medizinrecht, Vol. 11).
Klinkhammer, G.: Arbeitskreis medizinischer Ethikkommissionen: Eine klare Vereinbarung treffen, Dt. Ärzteblatt, H. 33-34, 17.8.2012.
Public discussion of the Berlin-Brandenburg Academy of Sciences on genetic diagnostics, German title: 'Schicksal Gendiagnostik', 'Gentechnologiebericht' work group, 10th September 2012, Berlin (audio recording): https://www.bbaw.de/mediathek/schicksal_gendiagnostik/?searchterm=schicksal_gendiagnostik
Public hearing of the German Ethics Council, 3rd May 2012: Opportunities and limits of predictive genetic diagnostics of multifactorial diseases, https://www.ethikrat.org/veranstaltungen/anhoerungen/praediktive-genetische-diagnostik-multifaktorieller-erkrankungen
Public hearing of the German Ethics Council, 22nd March 2012: Scientific and technological developments in the field of multiplex and high-throughput diagnostics, https://www.ethikrat.org/veranstaltungen/anhoerungen/multiplex-und-high-throughput-diagnostik
Opinion of the German Society of Human Genetics on the qualification related to "human genetic diagnostics and consultation" as stipulated in § 7 paragraph 3 of the German Genetic Diagnostics Act, 15th Feb. 2012, and other documents relating to the GEKO guidelines, https://www.gfhev.de/de/leitlinien/gfh.htm?Submit2=Liste+anzeigen#GEKO
German Society of Human Genetics / Association of German Human Geneticists (HG): S2k guideline "Human genetic diagnostics and genetic counselling", online publication 21st June 2011, doi: 10.1007/s11825-011-0284-x; guideline available at: https://www.awmf.org/uploads/tx_szleitlinien/078-015l_S2k_Humangenetische_Diagnostik_genetische_Beratung.pdf
Genetic Diagnostics Commission at the Robert Koch Institute: https://www.rki.de/DE/Content/Kommissionen/GendiagnostikKommission/GEKO_node.html
German Academy of Natural Scientists Leopoldina / acatech / Berlin-Brandenburg Academy of Sciences: Opinion on predictive genetic diagnostics as a tool for disease prevention, November 2010
<urn:uuid:7157cfb0-5948-4a4d-abd0-82d07174b504>
CC-MAIN-2022-33
https://www.gesundheitsindustrie-bw.de/en/article/dossier/genetic-diagnostics-technology-reaches-the-limits-of-what-is-medically-reasonable
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00498.warc.gz
en
0.901708
4,639
2.90625
3
Chemistry is, in general, the most important subject in the class 12 science stream. The NCERT Solutions for class 12 chemistry are prepared by experts to assist students in their learning. The team at Physics Wallah comprises experts with years of experience, and these NCERT solutions for class 12 chemistry can support your overall preparation for the upcoming board examination. The team at Physics Wallah has maintained a top-class standard over the years and is well acquainted with the type of question papers set in the final examination, which makes the NCERT solutions for class 12 chemistry a valuable resource for students. The versatility of our study material is not restricted to board exams; students can also refer to it for various other competitive exams.

Chemistry consists of demanding theory as well as a numerical part, and students often get stuck on problems and complex theories, which hinders a smooth learning curve. To prevent any waste of time, the team has prepared the NCERT solutions for class 12 chemistry study material. Since chemistry has a wide syllabus to cover, it can be difficult to comprehend at the time of revision; with this material, students can easily revise right before their examinations. The NCERT solutions for class 12 chemistry are divided into chapter-wise sections. Our team has made extensive efforts to research the most probable questions from previous years' question papers, so students will definitely find various important marked questions in the study material. Because the class 12 chemistry syllabus is very wide and contains various less important topics, the team at Physics Wallah has removed the unnecessary parts and made learning easier. Step-by-step guidance is provided for critical numericals, and students can also find short tricks for solving difficult concepts and problems.

The experts on our team drafting the NCERT solutions for class 12 chemistry are well versed in the topics, and students will find many intrinsic details in the material they have prepared. Because chemistry is a subject with wide scope, we have built short solving techniques into our material. Whether it is your board examination or a national-level competitive examination, the NCERT solutions for class 12 chemistry are a one-stop solution for all your needs. Students can also find dedicated sections in the study material with information on critical problems. The very ideology we follow at Physics Wallah is to help students learn hassle-free, and these NCERT solutions for class 12 chemistry can help students clear all their conceptual doubts. Students looking for expert guidance can study from the NCERT solutions for class 12 chemistry provided by Physics Wallah. We have been helping students by providing this much-sought-after study material free of charge; all they need to do is sign up with us and avail themselves of study material created by subject experts right at home, at no cost.

Physical chemistry requires conceptual clarity; one must know how to apply each concept in numericals.
To become an expert in numericals, one should start with the Solid State chapter: it does not depend on any other chapter and is interesting too. In the NCERT textbook this chapter is explained with excellent figures that will help you understand the structure of solids and what the Bravais lattices describe. Try to solve the questions in the textbook first and only then take the help of the NCERT solutions for class 12 chemistry; don't jump to the solutions directly. The Solutions (liquid solutions) chapter requires revision of the stoichiometry chapter of class 11. Before starting it you must know the terms molarity, normality and the other concentration terms. Read the NCERT textbook for the theory, understand the concepts of Raoult's law and colligative properties, solve the questions given in the NCERT textbook, and take the NCERT solutions for class 12 chemistry as a reference. Electrochemistry and chemical kinetics are very important topics and carry high weightage in the class 12 board exam; all questions given in the NCERT textbook must be solved completely. You can take help from the Physics Wallah chemistry formula page to learn all the formulas required for these chapters.

At the beginning of organic chemistry for class 12, one must revise a few key topics of class 11 organic chemistry. It is very important to have clarity in these topics before starting the class 12 alkyl halide chapter. After revision, start with alkyl halides: read the theory of the SN1 and SN2 reaction mechanisms, try to work out the mechanism of each reaction and how it proceeds, and for mechanism details you can refer to the Physics Wallah organic section. Read the chapter, make your own notes, write out all the reactions, solve the textbook questions and, at the end, take the help of the NCERT solutions for class 12 chemistry.

Inorganic chemistry is the part of chemistry that is most often neglected because of its theoretical nature, but one must understand that it carries about 30 percent of the weightage of the entire chemistry paper. Students need a different approach to score good marks in inorganic chemistry, and notes play an extremely important role here. From day one, make notes for this part from the NCERT textbook and make sure you have written down all the important points. While making notes, take help from the Physics Wallah inorganic theory section, which contains all the information you require, and always solve the questions given in the NCERT textbook, taking the NCERT solutions for class 12 chemistry as a reference.

Chapter 1: Solid State
Classification of solids; crystalline state, seven crystal systems (only cell parameters: a, b, c, alpha, beta and gamma); packing in fcc, bcc and hcp lattices; nearest neighbours, simple ionic compounds, point defects.
a) Crystalline solids: Crystalline solids have long-range order, which means that the atoms, ions or molecules are arranged in a regular fashion and this symmetrical arrangement extends throughout the crystal.
b) Amorphous solids: An amorphous solid differs from a crystalline substance in having no shape of its own and a completely random particle arrangement, i.e. no regular arrangement. Examples: glass, plastic.

Chapter 2: Solutions
Solutions are homogeneous mixtures of two or more components. By homogeneous mixture we mean that its composition and properties are uniform throughout the mixture. Generally, the component that is present in the largest quantity is known as the solvent. The solvent determines the physical state in which the solution exists.
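Because the Solutions chapter leans heavily on concentration terms such as molarity and molality, a minimal worked sketch of these two definitions may help before moving on. It is written in Python, and the masses and volumes are made-up practice values rather than questions from the NCERT textbook.

```python
# Minimal illustration of two concentration terms used in the Solutions chapter.
# The numbers are invented practice values, not NCERT exercise data.

def molarity(moles_solute, volume_solution_litres):
    """Molarity (M) = moles of solute per litre of solution."""
    return moles_solute / volume_solution_litres

def molality(moles_solute, mass_solvent_kg):
    """Molality (m) = moles of solute per kilogram of solvent."""
    return moles_solute / mass_solvent_kg

mass_nacl = 5.85              # g of NaCl taken as the solute
molar_mass_nacl = 58.5        # g/mol
moles_nacl = mass_nacl / molar_mass_nacl    # = 0.1 mol

print(molarity(moles_nacl, 0.500))   # 0.1 mol in 500 mL of solution -> 0.2 M
print(molality(moles_nacl, 0.200))   # 0.1 mol in 200 g of water     -> 0.5 m
```

The same moles-first habit (convert grams to moles, then divide by litres of solution or kilograms of solvent) carries over directly to colligative-property numericals such as Raoult's law and boiling-point elevation.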
One or more components present in the solution other than the solvent are called solutes. In this unit we shall consider only binary solutions (i.e., those consisting of only two components).

Chapter 3: Electrochemistry
The connection between chemistry and electricity is a very old one, going back to Alessandro Volta's discovery in 1793 that electricity could be produced by placing two dissimilar metals on opposite sides of a moistened paper. Oxidation–reduction reactions involve a transfer of electrons. Because a flow of electrons constitutes an electric current, we can, at least in principle, use any redox reaction to produce an electric current. Conversely, we can use an electric current to carry out redox reactions that do not proceed spontaneously. Electrochemical cells are devices in which the interconversion of electrical energy and chemical energy takes place; they are of two basic types.

Chapter 4: Chemical Kinetics
Chemical kinetics is the branch of science that deals with the rate of reaction, the factors affecting the rate of reaction and the reaction mechanism. Different reactions occur at different rates. In fact, a chemical reaction involves a redistribution of bonds – the breaking of bond(s) in the reactant molecule(s) and the making of bonds in the product molecule(s). The rate of a chemical reaction therefore depends upon the strength and the number of bonds to be broken during the reaction. It takes longer for the reactant molecules to acquire a higher amount of energy, which they do by collision. Hence, reactions involving the breaking of strong bonds proceed at a relatively slower rate, while those involving the breaking of weak bonds proceed at a relatively faster rate. On this basis, reactions are classified according to their rate.

Chapter 5: Surface Chemistry
Surface chemistry is the branch of chemistry that deals with the study of phenomena occurring at the surface separating two bulk phases. These two bulk phases can be pure compounds or solutions. Adsorption is the phenomenon of attracting and retaining the molecules of a substance on the surface of a liquid or a solid, resulting in a higher concentration of molecules on the surface. The substance thus adsorbed on the surface is called the adsorbate, and the substance on which it is adsorbed is called the adsorbent. The reverse process, i.e. the removal of the adsorbed substance from the surface, is called desorption (which can be brought about by heating or by reducing the pressure). The adsorption of gases on the surface of metals is called occlusion.

Chapter 6: General Principles and Processes of Isolation of Elements
Commercially important ores of iron, copper, lead, magnesium, aluminium, tin and silver; carbon reduction process (iron and tin), self-reduction process (copper and lead), electrolytic reduction process (magnesium and aluminium), cyanide process (silver and gold). The materials available for making tools and weapons, houses and skyscrapers, computers and lasers have had a profound effect on the development of human civilization. The earth's crust is the main source of metals. The occurrence of a metal in the native or combined state in the earth's crust, along with a number of rocky and other impurities, depends upon the chemical nature of the metal. Metals of less electropositive character have less affinity for oxygen and moisture and occur in the free, metallic or native state. Minerals are naturally occurring chemical substances in the earth's crust that are obtainable by mining.
Out of the many minerals in which a metal may be found, only a few are viable for use as sources of that metal. Such minerals are known as ores.

Chapter 7: The p-Block Elements
Group 13 to 18 elements (except helium), in which the last electron enters the p-orbitals, constitute the p-block. In this chapter we study the systematic, group-wise details of the p-block elements. For group 13, the most common oxidation states are +3 and +1. The stability of the +1 oxidation state increases on going down the group from aluminium to Tl; thus, in aqueous solution Tl+ is more stable. Metallic character increases down the group owing to decreasing ionisation energy and the easier loss of electrons. Boron, having a small size, cannot easily lose its electrons and is hence an exception to this trend in metallic character.

Chapter 8: The d & f Block Elements
Transition elements (only the first row, the 3d series): definition, Werner's approach to coordination compounds, general characteristic properties [viz. variable oxidation states, colour (excluding the details of electronic transitions), calculation of the spin-only magnetic moment, formation of complexes (stereochemistry excluded), nomenclature of simple coordination compounds, valence bond approach to the geometries of coordination compounds with linear, tetrahedral, octahedral and square planar geometries]. Three series of elements are formed by filling the 3d, 4d and 5d subshells with electrons; together these comprise the d-block elements. They are often called 'transition elements' because their position in the periodic table is between the s-block and p-block elements. These elements, either in their atomic state or in any of their common oxidation states, have partly filled (n–1)d orbitals of the (n–1)th main shell. In these elements the differentiating electron enters the (n–1)d orbitals of the (n–1)th main shell, and as such they are called d-block elements. Their properties are transitional between those of the highly reactive metallic elements of the s-block, which typically form ionic compounds, and those of the elements of the p-block, which are largely covalent. In the s- and p-blocks, electrons are added to the outer shell of the atom; in the d-block, electrons are added to the penultimate shell, expanding it from 8 to 18 electrons. Typically the transition elements have an incompletely filled d-level. The transition elements make up three complete rows of ten elements and an incomplete fourth row.

Chapter 9: Coordination Compounds
When solutions of two or more stable compounds are mixed in stoichiometric (simple molecular) proportions, new crystalline compounds called molecular or addition compounds are formed. These are of two types.

Chapter 10: Haloalkanes and Haloarenes
Alkyl halides are halogen-substituted alkanes. A monohaloalkane is written as RX, where X is any halogen atom (F, Cl, Br or I). The general formula of monohaloalkanes is CnH2n+1X, while that of dihaloalkanes is CnH2nX2. Alkyl halides of a particular kind, in which the halogen atom is attached to a saturated carbon that is in turn linked to an unsaturated carbon, are called allyl halides; when the halogen atom is attached directly to an unsaturated (sp2) carbon, they are called vinyl halides. Alkyl halides are classified as primary (1°), secondary (2°) or tertiary (3°), depending upon the type of carbon to which X is bonded. When X is bonded to a carbon that is bonded to one other carbon, the compound is called a 1° halide, with the general representation RCH2X. When X is linked to a carbon that is bonded to two other carbons, it is called a 2° halide and is denoted by R2CHX.
When X is bonded to a carbon that is attached to three other carbons, it is called a 3° halide and is designated R3CX. CH3X is unique (not classified as 1°, 2° or 3°), as its carbon is bonded only to hydrogens, and it is simply called a methyl halide. Dihaloalkanes with both halogens on the same carbon are called gem-dihalides, and those with halogens on adjacent carbons are called vicinal dihalides.

Chapter 11: Alcohols, Phenols, and Ethers
Alcohols (esterification, dehydration and oxidation); reactions of alcohols with sodium, phosphorus halides and ZnCl2/conc. HCl. Phenol: acidity of phenols, halogenation, nitration, sulfonation and the Reimer–Tiemann reaction. All alcohols contain the hydroxyl (-OH) group which, as the functional group, determines the properties characteristic of this family. Variations in the structure of the R group may affect the rate at which the alcohol undergoes certain reactions and even, in a few cases, the kind of reaction. When the hydroxyl group is attached directly to an aromatic ring, the compounds are phenols, and they differ so markedly from the alcohols that we shall consider them separately.

Chapter 12: Aldehydes, Ketones, and Carboxylic Acids
Aldehydes and ketones (oxidation, reduction, oxime and hydrazone formation, aldol condensation, Perkin reaction, Cannizzaro reaction, haloform reaction only). Aldehydes and ketones both possess a carbonyl group and are therefore called carbonyl compounds. Formaldehyde is the simplest aldehyde; its carbonyl carbon is bonded to two hydrogens. In all other aldehydes, the carbonyl group is bonded to a hydrogen and to an alkyl (or aryl) group. The carbonyl group of a ketone is bonded to two alkyl (or aryl) groups.

Chapter 13: Amines
Basicity of aniline and aliphatic amines, preparation from nitro compounds, reaction with nitrous acid, formation and reactions of diazonium salts and their coupling with phenols, carbylamine reaction. Amines are organic derivatives of ammonia in which one or more hydrogen atoms are replaced by other atoms or groups of atoms. They are classified as primary (RNH2 or 1°), secondary (R2NH or 2°) or tertiary (R3N or 3°) depending upon whether one, two or all three hydrogen atoms of ammonia are replaced by alkyl groups (R). Amines are commonly named as alkyl amines: the alkyl groups attached to the N atom are named in alphabetical order, followed by 'amine'. According to the IUPAC system of nomenclature they are named as alkanamines: the longest carbon chain attached to the N atom is chosen as the parent compound, and in the name of the parent hydrocarbon the last letter 'e' is replaced by the suffix 'amine'. The substituents are named as prefixes in alphabetical order.

Chapter 14: Biomolecules
Carbohydrates: classification – mono-, di- and polysaccharides (glucose, sucrose and starch only); hydrolysis of sucrose. Amino acids and peptides: general structure and physical properties. Properties and uses of some important polymers (natural rubber, cellulose, nylon, Teflon, PVC). The group of compounds known as carbohydrates received their general name because of early observations that they often have the formula Cx(H2O)y – that is, they appear to be hydrates of carbon. Limitations: this definition did not survive long, for several reasons.

Chapter 15: Polymers
Sit quietly and think about your activities today, from the morning. You wake up and want to brush your teeth, so you fetch your toothpaste: the tube is made of a polymer, and so is your brush. When you want to rinse your mouth, you open your plastic (polymer) tap.
The pipelines that bring water to your tap are made of PVC (a polymer). You start preparing your breakfast and take out a non-stick tawa. Non-stick? What does that mean? What is it made of? It is polytetrafluoroethylene, abbreviated as Teflon – a polymer. See how polymers play an important role in our daily life from dawn to dusk. The moulded chair in which you are sitting is a polymer; the pen with which I am writing this is a polymer. Want to know more about polymers? Read further. Polymers can be called macromolecules, and a macromolecule can be thought of as an association of small molecules joined to give a big molecule. Macromolecules can be man-made, too. The first syntheses were aimed at making substitutes for the natural macromolecules rubber and silk, but a vast technology has grown up that now produces hundreds of substances that have no natural counterparts. Synthetic macromolecular compounds include: elastomers, which have the particular kind of elasticity characteristic of rubber; fibers, long, thin and threadlike, with the great strength along the fiber that characterizes cotton, wool and silk; and plastics, which can be extruded as sheets or pipes, painted onto surfaces, or molded to form countless objects. We wear these man-made materials, eat and drink from them, sleep between them, sit and stand on them; we turn knobs, pull switches and grasp handles made of them; with their help we hear sounds and see sights remote from us in time and space; and we live in houses and move about in vehicles that are increasingly made of them.

Q1. How can I score good marks in CBSE board class 12 chemistry with the help of the NCERT textbook?
Ans. To score good marks in class 12 chemistry you must be strong in all three parts of chemistry: physical, inorganic and organic. You need a different approach for each part. For example, to do well in physical chemistry, focus on the theory and solve the numericals given in the NCERT textbook. Inorganic chemistry is highly dependent on your notes, so prepare good notes from the NCERT textbook and note down all the bullet points in them. For organic chemistry, understand all the reaction mechanisms given in the NCERT textbook.

Q2. What is the right approach to reading the NCERT class 12 chemistry textbook for the board exam?
Ans. For the board exam, the NCERT textbook is highly recommended and must be read line by line. The best approach to scoring good marks in class 12 chemistry is to make good notes from chapter 1 to chapter 16 and to solve all the questions given in the NCERT exercises.

Q3. What is the right strategy for inorganic chemistry in the board exam?
Ans. Inorganic chemistry should be prepared from the NCERT textbook. It is sufficient and contains all the important points required to score good marks in class 12 chemistry.

Q4. Is the NCERT class 12 chemistry textbook enough for entrance exams?
Ans. Yes. Nowadays almost all entrance exams, including JEE and NEET, are conducted by the NTA, and the book recommended by the NTA is NCERT. If you analyse previous years' papers, you will find that almost 75% of the questions are asked directly from the NCERT textbook.

Q5. What are the most important chapters of class 12 chemistry for the board exam?
Ans. The most important chapters of class 12 chemistry are electrochemistry, liquid solutions, coordination compounds and carbonyl compounds.
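Finally, since the FAQ above singles out electrochemistry as one of the highest-weightage chapters, here is one minimal worked numerical of the kind students are advised to practise. The cell, the standard EMF and the ion concentrations below are ordinary illustrative textbook-style values, not a question taken from the NCERT exercises.

```python
import math

# Nernst equation at 298 K for a Daniell cell:
#   Zn(s) | Zn2+(aq) || Cu2+(aq) | Cu(s)
#   E_cell = E°_cell - (0.0592 / n) * log10(Q)

E_standard = 1.10    # V, standard EMF of the Daniell cell
n = 2                # electrons transferred in the cell reaction
conc_zn = 0.10       # mol/L of Zn2+ (illustrative value)
conc_cu = 0.010      # mol/L of Cu2+ (illustrative value)

Q = conc_zn / conc_cu                            # reaction quotient for Zn + Cu2+ -> Zn2+ + Cu
E_cell = E_standard - (0.0592 / n) * math.log10(Q)
print(f"Q = {Q:.1f}, E_cell = {E_cell:.3f} V")   # Q = 10.0, E_cell ≈ 1.070 V
```

Working a handful of such numericals by hand first, and only then checking them against the NCERT solutions, is the approach recommended throughout this guide.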
<urn:uuid:3d2a58b5-c718-4728-ad82-3b17cd0606b8>
CC-MAIN-2022-33
https://www.pw.live/ncert-solutions-for-class-12-chemistry
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00098.warc.gz
en
0.934602
4,516
2.65625
3
Jason Ponic works in the exciting world of Hollywood film and television by day and writes by night.

The Most Interesting Shipwrecks in History
Nothing compares to the forbidden and mysterious aura of a shipwreck. Since the dawn of civilization, water has claimed countless vessels of humanity. It is believed that some three million shipwrecks lie scattered across the world's oceans. Like seashells scattered across a beach, each ship has a different story to tell; each is unique. It is in this spirit that I have collected some of the world's most famous and fascinating shipwrecks. Some are well known, others all but forgotten.

The Top 17 Most Intriguing Famous Shipwrecks
- R.M.S. Titanic
- R.M.S. Lusitania
- S.S. Andrea Doria
- S.S. American Star
- M.S. World Discoverer
- S.S. Edmund Fitzgerald
- H.M.S. Investigator
- R.M.S. Queen Elizabeth
- R.M.S. Carpathia
- M.V. Wilhelm Gustloff
- H.M.S. Erebus
- H.M.S. Terror
- The Bismarck
- U.S.S. Arizona
- U.S.S. Yorktown (CV-5)
- U.S.S. Hornet (CV-8)
- U.S.S. Lexington (CV-2)

1. R.M.S. Titanic
Some 400 miles off the coast of Newfoundland, about two and a half miles below the surface, the Atlantic Ocean holds tight to its most famous prisoner, the R.M.S. Titanic. The ship, upon her completion in 1912, was considered the largest moving object ever made. Eight hundred eighty-two feet in length and 48,000 tons, she was a floating luxury palace. Additional refits, equipment, and design changes even separated Titanic from her sister ship Olympic, outsizing her by 1,500 tons. Intended to be the White Star Line's flagship for the next 20 years, the vessel carried high expectations. Instead, she would sail into history as the greatest maritime disaster of all time. Over 1,500 souls perished on the night of April 15, 1912, two hours and forty minutes after the ship collided with an iceberg. The sinking of the great liner ushered in a new era of maritime emergency procedures, many of which are still in effect today. Since the discovery of the wreck in 1985 by Dr. Robert Ballard, the Titanic has been one of the most frequently visited wrecks on the ocean floor. In total, 5,500 artifacts have been recovered and are now displayed in exhibitions worldwide. In the last ten years, the deterioration of the wreck has accelerated, and experts estimate that it will completely collapse by 2030.

2. R.M.S. Lusitania
She was one of the fastest liners on the ocean and the direct rival of White Star Line's R.M.S. Titanic. Launched in 1906, the R.M.S. Lusitania was the Cunard Line's answer to the booming passenger trade of the trans-Atlantic shipping lanes. Her size allowed for 50% greater passenger space, and until the Titanic and Olympic were built, she remained unchallenged in the market. Her four steam turbine engines produced an astonishing 76,000 horsepower, capable of driving the ship through the water at 26.1 knots. In her eight years of service, she made 202 crossings from Liverpool to New York. In 1914, World War I broke out, and the trans-Atlantic trade quickly became bottlenecked by the threat of German U-Boats. The Germans grew more and more aggressive with their U-Boat campaign, and soon they were targeting any vessel, military or civilian, that crossed British waters. On May 7, 1915, Lusitania was hit by a German torpedo off the coast of Ireland. The resulting explosion blew a gaping hole in the starboard side.
The ship began to list dangerously, and while the order was given to abandon ship, the captain made a last-ditch attempt to beach the vessel on the shore. Charging ahead at full speed only added to the chaos of the evacuation. Lifeboats snagged on the ship's hull plating, and some were sucked into the still-turning propellers. In less than 20 minutes, the ship plunged beneath the waves, taking 1,198 of her 1,959 passengers and crew to their graves. The sinking helped to change public views on Germany and would help push the United States into the war.

Unlike the Titanic's, the Lusitania's wreck has been known since her sinking, and it has been subject to decades of abuse and destruction. Salvage attempts were made as early as the 1930s. A dive tunnel was proposed to be built over the wreck that would allow divers to access the ship relatively safely; the goal was to salvage the purser's safe and any other items of value. Due to financial problems, this never happened. During World War II, the Lusitania was depth-charged several times, both for target practice and out of fear that Nazi U-Boats might use the wreck as a hiding spot. The wreck even changed owners several times. In the 1960s, it was sold by its insurance company to John Light, a U.S. Navy dive officer, for 1,000 pounds. He was later bought out by Gregg Bemis in 1986, and the site was declared a national heritage site by Ireland in 1995, which created a legal nightmare for Bemis. Today the Lusitania lies on her side, draped in snagged fishing nets, having almost completely collapsed into an unrecognizable mass of steel.

3. S.S. Andrea Doria
On July 25, 1956, the world tuned in to an extraordinary yet tragic event. Broadcast live to a stunned world, the stricken trans-Atlantic liner S.S. Andrea Doria made her final plunge into the ocean. For the first time, the world watched such a disaster unfold in real time. The loss of the ship would signal the end of the trans-Atlantic passenger service; from then on, air travel would become the preferred method of transportation. The flagship of a defeated country, the Andrea Doria was Italy's phoenix from the ashes. Launched in 1951, the S.S. Andrea Doria was hailed as one of the safest vessels ever built. Small at only 697 feet and a displacement of 29,083 tons, she was neither the largest nor the fastest vessel of her day. She was, however, the most luxurious. The first vessel ever to feature three outdoor swimming pools, she was adorned with over $1 million worth of artwork and décor. Despite her safety features, which included a double hull, a watertight compartment design, and early-warning radar, the ship had a number of design flaws. The most problematic of these was a tendency to list dangerously after any significant force hit the ship. This would ultimately contribute to her fate. A heavy fog rolled in on the night of July 25, 1956. The Andrea Doria, nearing the end of her voyage, was on approach to New York; meanwhile, the Swedish liner Stockholm was departing from New York. Misinterpretation of radar signals and the fog were a fateful recipe for disaster. At approximately 11:10 PM, the two ships collided. The Stockholm struck the Andrea Doria amidships, killing 57 people and tearing a huge hole in the side of the ship. The Stockholm survived, but the Andrea Doria did not: she sank eleven hours later. Her slow sinking allowed virtually all of her passengers and crew to be saved. The wreck of the Andrea Doria lies in just 160 feet of water.
To many scuba divers, the ship is hailed as the Mt. Everest of diving. Hundreds of divers have explored the interior and recovered artifacts from the wreck, including jewelry, china, statues, and even the ship's bells. The wreck's location has also contributed to its rapid deterioration: strong ocean currents continuously batter the wreck. Over the last 50 years, the superstructure has all but collapsed, and the hull has begun to split. While few lives were lost during the sinking, the Andrea Doria has developed a fatal reputation among divers. In the last 20 years, over 16 people have lost their lives diving at the wreck site, the most recent death occurring in 2015.

4. S.S. American Star
A ship with a colorful history, the S.S. American Star was originally built for the United States Lines in 1936 as the S.S. America. Until 1964, she served as one of the fastest passenger liners on the ocean. Renamed U.S.S. West Point during the war, the S.S. America returned to civilian service in 1946 and continued to serve until being sold in 1964. From 1964 to 1998, the S.S. America was sold and renamed many times. From 1964 to 1978, she was owned by the Chandris Group and renamed Australis. She was retired in 1978 and sold to Venture Cruises, which renamed the ship America. She was in major disrepair and, after several lawsuits, was seized by the United States and sold at auction later that same year. Repurchased by Chandris, the ship was extensively refitted and renamed Italis. She served for two years before being laid up and sold to the Intercommerce Corporation in 1980 and renamed S.S. Noga, and preparations began to convert her into a prison ship. This never happened; instead, she was sold to Silver Moon Ferries, renamed Alferdoss, and began experiencing mechanical problems after years of neglect. Beyond repair, the ship was sold for scrapping in the 1980s, a process that would take ten years and several more owners to carry out. In 1993, the ship was saved from scrapping by a company that intended to refit her into a five-star hotel. She was renamed a final time, to American Star, and towed toward Thailand. The ship never made it; a thunderstorm severed the tow lines, and the American Star ran aground, breaking in two. Declared a total loss, the ship was left to 14 years of abuse by the tides, which battered her into an unrecognizable pile of steel by 2008. During those years, she was a favorite of surfers and enthusiasts who would brave the dangerous waves to enter and explore the ship. After 2019, nothing remained of the ship but a few pieces of twisted steel.

5. M.S. World Discoverer
The M.S. World Discoverer, originally the BEWA Discoverer, was a German cruise ship built in 1974. A tiny vessel, only 287 feet long, she changed owners a couple of times. Her 8,000-mile range allowed the ship to travel the most remote of routes, including the Northwest Passage and Antarctica. On Sunday, April 30, 2000, in the Solomon Islands, the World Discoverer struck an uncharted reef, ripping a huge hole in her keel. The captain then gave the order to beach the ship to avoid her sinking. The damage would later prove too much, and the ship was declared a loss. She has remained where she beached ever since. Several unsuccessful salvage attempts were made, and the scrappers noted that the ship had been repeatedly ransacked by locals, since the islands were in a time of civil war. Anything of value was taken from the ship.
Since then, the ship has become a tourist destination and is even a featured stop on Princess Cruises.

6. S.S. Edmund Fitzgerald
What exactly happened on the night of November 10, 1975, will probably never be known. That was the night the S.S. Edmund Fitzgerald went down with all hands during a Great Lakes storm. Launched in 1958, the Edmund Fitzgerald was a cargo freighter. At 729 feet, she was the largest vessel ever to sail the Great Lakes, and for 17 years that is what she did. On the night of her disappearance, the Big Fitz was fully loaded with a cargo of taconite pellets bound for a steel mill in Michigan. On the night of November 9, the Fitz and her convoy partner encountered a massive winter storm with hurricane-force winds and waves hitting 35 feet. The Fitz did not survive the storm. Weighed down by her cargo, the vessel snapped in two and sank to the bottom, killing all 29 on board. For decades afterward, speculation and controversy surrounded the Fitz's legacy. The ship's convoy partner, the S.S. Arthur M. Anderson, survived the storm, and her crew reported receiving several transmissions from the Fitzgerald that the ship had lost hatch covers and was taking on water. Many theories have developed over the years as to why the ship sank, but nothing has been proven. The US Navy located the wreck on November 14, 1975. Remotely operated vehicles conducted surveys and revealed that the wreck was lying in two pieces on the lake bottom, its cargo of taconite scattered everywhere. In the 1980s, Jean-Michel Cousteau took the first submarine to the wreck. Several other expeditions followed throughout the '80s and '90s, and the wreck was extensively mapped and surveyed. One dive located the remains of a crew member lying alongside the wreck. Another recovered the bell from the wreck, replaced it with a memorial replica, and even placed a can of beer inside the pilothouse as a salute to the sailors entombed forever in the wreck. In the 2000s, legislation was enacted in both the United States and Canada declaring the wreck of the Edmund Fitzgerald an archaeological site, thus requiring permits before diving. This was done in part to prevent the unauthorized salvage of the ship's taconite cargo, valued at over $24 million.

7. H.M.S. Investigator
Finding a preserved wooden sailing ship is incredibly rare, so when it happens, it is nothing short of breathtaking. In 2010, a Canadian expedition discovered the remains of the H.M.S. Investigator at the bottom of the Arctic Ocean. Built in the 1840s, the Investigator was purchased by the Royal Navy in 1848 to search for Sir John Franklin and his 120 men, who had disappeared several years earlier in the Arctic. When she was purchased, she underwent a refit to outfit her for Arctic exploration, including steel plating reinforcement, ventilation systems, and heating systems. She was first commanded by James Clark Ross in 1848 and then by Robert McClure in 1853. During her second voyage, she became hopelessly locked in the ice and had to be abandoned. She sat abandoned in the ice for nearly 40 years, providing a source of copper and iron for the local Inuit people. Eventually the ship sank after the ice ripped holes in her hull. The wreckage of the Investigator was discovered in 2010 by Parks Canada, sitting upright in 30 feet of water. Her masts were missing, likely ripped off by the ice floes. Artifacts were recovered, including the ship's wheel and several old muskets.

8. R.M.S. Queen Elizabeth
Alongside her sister ship R.M.S. Queen Mary, the original Queen Elizabeth dominated the seas for over 20 years. Launched in 1938, the largest liner in the world at the time was pressed immediately into service as a troopship. She helped bring an end to World War II by ferrying over 750,000 troops to both the Atlantic and Pacific theaters. Once Japan surrendered, the Queen Elizabeth enjoyed over 20 years of passenger travel across the Atlantic. It was air travel, not old age, that finally forced the Queen Elizabeth into retirement. Like the Queen Mary, she originally became a tourist attraction in Florida in 1969, but the harsh Florida weather forced the venture into bankruptcy after the ship was declared a fire hazard. The ship was then sold at auction in 1970 to a Chinese businessman who intended to convert her into a floating university. During her conversion, however, she caught fire and sank in Hong Kong harbor in 1972. Seen as a shipping hazard, the Queen Elizabeth was partially scrapped where she lay in 1974. Her boilers and keel remained at the bottom of Hong Kong harbor until the 1990s, when land reclamation buried them forever beneath tons of earth and concrete.

9. R.M.S. Carpathia
She gained superstardom in April of 1912 as the ship that saved Titanic's survivors after the great liner's sinking. Built in 1903 for the Cunard Line, the R.M.S. Carpathia was a small 13,000-ton vessel that was on her way to a Mediterranean cruise the night Titanic sank. During World War I, she became a troop ship and ferried American and Canadian troops to Europe. In 1918, she met her end at the hands of the German U-Boat U-55, which drove three torpedoes into her. Not a single passenger or crew member was lost. Today, the Carpathia lies at the bottom of the Atlantic, 120 miles from the coast of Ireland. Sitting upright on the sea floor, the wreck has begun to collapse in on itself. There have been only a handful of expeditions to the wreck.

10. M.V. Wilhelm Gustloff
More than 9,500 souls perished on board the M.V. Wilhelm Gustloff when she was torpedoed in the final months of World War II. She was not a warship or a troop transport; she was a cruise ship packed with 10,000 German refugees fleeing the Red Army. A Soviet submarine, the S-13, fired three torpedoes into the side of the Gustloff in the dead of night. Only 996 would survive the sinking. The Wilhelm Gustloff lies in the Baltic Sea; her wreck is off-limits to all diving and salvage.

11. H.M.S. Erebus
On September 9, 2014, Parks Canada announced the discovery of one of the two long-lost vessels of the fabled Franklin Expedition. She was positively identified as the H.M.S. Erebus when her bronze bell was found and recovered. The H.M.S. Terror and H.M.S. Erebus had become hopelessly trapped in the ice in 1845 and were abandoned by Sir John Franklin's men. This decision sealed their fate, and the men would spend the rest of their days wandering the Canadian Arctic before finally perishing from starvation. The Terror and Erebus were bomb vessels built for the British Navy in 1813 and 1826, respectively. The Erebus had already served successfully on a polar expedition in 1841 and was refitted for Franklin's expedition four years later. The wreck site was declared a Canadian National Historic Site in 2015.

12. H.M.S. Terror
Two years almost to the day after the discovery of the H.M.S. Erebus, headlines around the world once again flashed the discovery of the other Franklin vessel, the H.M.S. Terror.
Thirty-one miles from her running mate, in approximately 80 feet of icy water, the Terror's condition is remarkable. Originally constructed as a bomb vessel for the British Royal Navy, the Terror fought against the United States in the War of 1812. Afterward, she served on several exploration expeditions until being assigned to John Franklin in 1845. Like the Erebus, the Terror became locked in the ice floe and was abandoned by the expedition. The discovery of this ship marks the end of a search that spanned some 140 years. She was found some 50 miles from where her ice-locked position was recorded. Interestingly enough, evidence was found on board suggesting that the ship was re-manned after being abandoned; perhaps some of Franklin's men returned to the ship and attempted once again to free her from the ice. In 2019, Parks Canada entered the ship for the very first time, and what they saw inside stunned everyone: perfectly preserved rooms, complete with furniture, pots, pans, bunks, and even the captain's desk. The condition of the wreck has opened up the possibility of recovering first-hand Franklin relics and information that could help shed light on the expedition's failure.

13. The Bismarck
The infamous German warship, destroyed in a dramatic battle of air and sea, marked the beginning of the end for battleships. Launched in 1939, her size in full violation of the Anglo-German Naval Agreement, Bismarck and her sister Tirpitz were the largest battleships ever built by Germany. She entered into a combined naval operation with the heavy cruiser Prinz Eugen in May 1941 and quickly became a target for the Royal Navy. In a series of battles, the Bismarck was crippled by aircraft. The final battle pitted three British ships against the crippled Bismarck; in just 30 minutes, the German ship was reduced to an inferno of steel and ash. The Bismarck was scuttled by her crew shortly after, and she sank. Her wreck was discovered by Dr. Robert Ballard in June of 1989; he and his team found her standing upright in over 15,000 feet of water. The extreme depth has kept the ship in remarkable condition.

14. U.S.S. Arizona
Bombed and sent to the bottom of Pearl Harbor during the Japanese attack of December 7, 1941, the U.S.S. Arizona was a Pennsylvania-class super-dreadnought launched in 1915. While she never fired her guns in anger, Arizona served the US Navy for 26 years before sinking. During the First World War, she served primarily as a training ship, avoiding German U-Boats altogether. Modernized in the 1920s and serving as a flagship in the 1930s, Arizona had a successful career. During the attack, she was hit by a bomb that detonated in a powder magazine. The explosion was so powerful that it actually lifted the ship out of the water before sending her to the bottom. Over 1,100 sailors were killed. Two of her four main turrets were salvaged and reused, and her superstructure was removed and scrapped. The hull remains today where it sank, and over two quarts of oil a day still leak from the ship. Diving is strictly forbidden, as the wreck was declared a shrine by Congress in 1962.

15. U.S.S. Yorktown (CV-5)
America's carrier casualty of the Battle of Midway, the U.S.S. Yorktown was a flattop and the lead ship of her class. Launched just before World War II, the Yorktown was originally stationed in the Atlantic. After Pearl Harbor, Yorktown transferred to the Pacific and began operations against Japan.
After suffering serious damage at the Battle of the Coral Sea, the ship received hasty repairs and joined the Battle of Midway. During that battle, three of the four Japanese carriers involved were sunk; the remaining carrier launched an assault on Yorktown. Despite an effective scramble and counterattack by Yorktown's planes, the ship took three direct bomb hits. One struck the flight deck, blowing a hole ten feet across. Another pierced the flight deck and exploded down the funnel, disabling most of her boilers; the ship temporarily lost maneuvering ability and had barely enough power to maintain auxiliary systems. The third bomb pierced the starboard elevator shaft and started fires deep below decks. Despite this, crews were able to contain the fires enough to refuel and launch another wave of planes to intercept the Japanese aircraft as they closed in for a second assault, and power was restored enough for Yorktown to begin steaming again. She successfully evaded two torpedoes before taking two hits on the port side, which jammed her rudder. The resulting list extinguished all her boilers, a total loss of power, and the order was given to abandon ship. Surprisingly, the ship did not sink that night. The following morning, an attempt was made to salvage her. Five destroyers formed an anti-torpedo screen around the carrier while crews boarded and pushed planes, guns, and other heavy objects off the flight and hangar decks in an attempt to level the ship. Unknown to the crews, the Japanese submarine I-168 had worked into a favorable position and fired four torpedoes. One struck the destroyer U.S.S. Hammann, sinking her, and two others struck Yorktown. The salvage attempt was abandoned, and a day later the Yorktown finally sank.

In 1998, Dr. Robert Ballard, fabled discoverer of the Titanic, located the Yorktown wreck. Despite capsizing as she went down, the ship landed upright on the ocean floor at a depth of three miles, a mile deeper than the Titanic. Such a depth has left the Yorktown in surprisingly pristine condition; with no biological growth, the carrier is expected to survive in the deep for centuries before deteriorating.

16. U.S.S. Hornet (CV-8)

In January 2019, another Yorktown-class aircraft carrier, the last American fleet carrier ever sunk in combat, was discovered some 17,000 feet below the surface. The U.S.S. Hornet was barely a year old when she was sunk by Japanese destroyers after the Battle of the Santa Cruz Islands. A veteran of the Battle of Midway and the Solomon Islands campaign, Hornet was attacked by several waves of Japanese dive bombers. She took three bombs to the deck, one damaged bomber that deliberately crashed into her island, and another that struck near the bow; in 15 minutes the carrier was crippled. Unable to land on her flaming, powerless deck, her planes were forced to either land on the nearby U.S.S. Enterprise or ditch in the ocean. While under tow by the cruiser U.S.S. Northampton, crews raced to restore power. They had nearly succeeded when the Japanese attacked again, this time with torpedo planes. A single torpedo hit on the carrier's starboard side permanently destroyed the electrical plant, and the order was given to abandon ship. The Americans attempted to scuttle the carrier, but after nine torpedoes the ship still refused to sink, so she was left to the Japanese, who finished her off. Like her sister Yorktown, Hornet has been kept in amazing condition by the three-mile depth of her resting place. When she was surveyed, she was found upright and largely intact, with much of her battle damage still clearly visible.
The most notable artifact was an aircraft tug, still sitting on its wheels on the hangar deck.

17. U.S.S. Lexington (CV-2)

One of America's first aircraft carriers, the U.S.S. Lexington was laid down as a battlecruiser but converted during construction, launching in 1925 and commissioning as an aircraft carrier in 1927. Together with her sister, Saratoga, these early flattops helped the Navy refine carrier design and tactics. When Japan attacked Pearl Harbor on December 7, 1941, Lexington and the other carriers were out at sea and were spared destruction. The loss of battleship power in the attack forced the Navy to rely on carrier offensives and fueled a rapid advance in carrier technology. This fundamentally changed naval warfare for good, as battleships would ultimately be phased out in favor of carrier-borne air power.

Lexington served in multiple Pacific offensives, with her final battle, the Battle of the Coral Sea, coming in May 1942. Japanese aircraft attacked the American carrier, which took two torpedo hits and two bomb hits. An explosion, caused by sparks igniting fuel vapors leaking from her damaged aviation-fuel tanks below decks, knocked out her damage-control and aircraft-refueling systems. A second explosion jammed the forward aircraft elevator, started several fires on the hangar deck, and knocked out forward power. A third explosion knocked out the water-pressure system and forced the evacuation of all compartments below the waterline, permanently disabling the ship. The order to abandon ship was given shortly afterward, and over 2,000 men safely evacuated. During the evacuation, several more explosions blew the flight deck apart, sending aircraft flying into the sea. The destroyer U.S.S. Phelps was ordered to scuttle the burning carrier; five torpedoes later, Lexington sank.

Lexington's wreck was discovered in 2018 in about 10,000 feet of water. She lies in three pieces, the tip of the bow and the stern having broken away from the main hull as the ship sank.

This content is accurate and true to the best of the author's knowledge and is not meant to substitute for formal and individualized advice from a qualified professional.

© 2012 Jason Ponic

Freedom of the sea from The US on July 17, 2018: Thanks, good article.

Kathleen Cochran from Atlanta, Georgia on August 25, 2017: OMG! An article! Somebody here still writes articles - not just starts discussions! And what an interesting article. Research and photos! This was such an enjoyable read. Thank you for the work that went into it.

Paul Sam on March 18, 2017: Reading this was just like going through all of those tragedies in real life. It is hard to believe that these pieces of steel were some of the most glorious ships ever. Great work, man!!

toad on September 25, 2015: Suggestions for a few next: Bismarck, Empress of Ireland, Britannic.

Jessee R from Gurgaon, India on May 19, 2012: Great hub! Very detailed, and a very unique thing to write on... My maternal grandfather was born on the day Titanic sank...

Jason Ponic (author) from Albuquerque on May 15, 2012: Why thank you! Shipwrecks have always fascinated me. There are a couple more I want to add to this list too.

Marites Mabugat-Simbajon from Toronto, Ontario on May 15, 2012: An interesting read, Jason! Shipwrecks are so poignant. One of the worst accidents one could ever imagine, hold one's breath, grieve one's heart. What a collection there and learning back history of these unfortunate ships! Awesome information and very interesting!

Jason Ponic (author) from Albuquerque on May 15, 2012: Thank you so much for your comment.
Yes, I agree, the human factor is a powerful element for each of these ships. What's interesting is the human factor that prompts changes.

Patricia Scott from North Central Florida on May 15, 2012: You covered these tragedies well. It is so sad that the words 'safest' and 'unsinkable' were used to describe vessels that defied the odds and sank. Every time I read of these, my heart is wrenched by the thoughts of those who lost their lives or lost loved ones in the catastrophe... Well done... voted up and interesting.

Jason Ponic (author) from Albuquerque on May 15, 2012: Thank you very much!!

buddinglinguist on May 14, 2012: Very cool list!