1. Why is extremism an issue in prisons?
Extremist groups often pose special security risks in prisons. They may encourage the overthrow of the government, and prison officials can be targeted as agents of "illegal" government authority. Further, their literature often encourages ethnic hatred, promoting a violent and racially charged prison atmosphere.
Since the 1980s, white supremacist organizations have spread throughout the American prison system, beginning with the growth of Aryan Brotherhood.1 Aryan Nations, although not permitting inmates to become members, has engaged in "prison outreach" since 1979. In 1987, it began publishing a "prison outreach newsletter" called The Way to facilitate recruitment. Aryan Nations also disseminates its literature and letters to inmates. The World Church of the Creator and some Identity Church groups engage in similar outreach activity, as do other racist groups, such as Nation of Islam. The situation is further complicated by the fact that nonideological criminal prison gangs are often organized based on race, which increases racial polarization.
Imprisoned extremists also pose a security threat by continuing their activities while incarcerated. They recruit inmates, and teach other inmates extremist tactics. Some imprisoned extremists also have attempted to continue to influence adherents outside of prison by, for instance, publishing newsletters from the prison to maintain their outside following.
Prison officials have responded in various ways, reflecting the fact that each state has its own prison system (as do cities, counties and the federal government), and that prisons have varying populations. At times, prison officials have tried to limit access to extremist literature, and these responses have occasionally given rise to litigation because they potentially impinge upon inmates' First Amendment rights. The questions are especially complicated when the censored material comes from a group that claims to be religious.
1 Aryan Brotherhood, at one time associated with Aryan Nations, began as a virulent racist and anti-Semitic prison gang, and has since developed into a crime gang associated with extortion, drug operations and prison violence.
2. Do inmates have the same First Amendment rights as everybody else?
The United States Supreme Court has said that "prison walls do not form a barrier separating prison inmates from the protections of the Constitution." Nevertheless, inmates' First Amendment rights are less extensive than other citizens' and their rights can be limited due to security or other penological concerns. Because of the particular challenges administrators face running prisons, the Supreme Court has acknowledged there is a compelling government interest which warrants limiting prisoners' rights. Courts have been deferential to prison officials' assessments of security threats, and sensitive to their related regulatory decisions, even if such decisions impact inmates' First Amendment rights.
A prison regulation that impinges on an inmate's constitutional rights will be upheld in court if that regulation is reasonably related to legitimate penological objectives. This means that, generally, prison officials can ban extremist materials from prisons because of concerns that the distribution of such material will undermine prison security. Extremist books, leaflets, and magazines have been forbidden to prisoners on this basis. Such material has not been allowed through the mail and has not been kept in the prison library.
However, prisons have less discretion to limit inmates' religious practices than other First Amendment rights due to a new federal law. Because of the Religious Land Use and Institutionalized Persons Act (RLUIPA), prison officials' discretion in limiting access to extremist material may depend in part on whether such material is related to an inmate's religious exercise. Therefore, prison regulations that affect religious exercise, including access to religious literature, will be reviewed carefully if challenged in court.
3. What legal standard is used to determine the constitutionality of prison regulations?
The Supreme Court announced the standard under which it would review the constitutionality of prison regulations in Turner v. Safley, a case involving a challenge to a complete prohibition on inmate marriage. As noted earlier, a prison regulation is constitutional if it is reasonably related to legitimate penological objectives. Under this standard, courts have upheld regulations based on the consideration of certain factors:
- Is there a valid, rational connection between the prison regulation and the legitimate governmental interest put forward to justify it?
- Are there alternative means of exercising the asserted right that remain open to inmates?
- How great a negative impact will accommodating the inmates' rights have on guards, other inmates, and on the allocation of prison resources?
Courts will consider the existence of obvious and easy alternatives to a challenged regulation as evidence of a regulation's arbitrariness.
4. Is the same legal standard used to determine the constitutionality of prison regulations that implicate an inmate's right to free exercise of religion?
No, the same standard is not applicable to prison regulations alleged to violate inmates' free exercise rights. The constitutionality of such regulations is determined under the more stringent standard set forth in RLUIPA. RLUIPA provides that the government cannot impose a substantial burden on the religious exercise of an inmate, even if the burden results from a generally applicable rule. However, an inmate's religious practices can be limited if the prison official demonstrates that the regulations in question (i) further a compelling interest and (ii) are the least restrictive means of serving that interest.
RLUIPA was enacted in September 2000 and has not yet been interpreted by the courts. Therefore, how the statute will affect prison regulations that touch on inmates' religious exercise remains unclear.
5. How should prison officials evaluate whether particular material can be withheld from inmates?
Generally, the First Amendment does not allow speech to be censored by the government because of the content of that speech. The government can only limit the time, place, and manner of speech. However, because inmates have more limited First Amendment rights than other citizens, some content-based discrimination is allowed for security reasons. For example, the United States Court of Appeals for the 10th Circuit upheld a prison official's decision to withhold entire issues of the magazine, Muhammad Speaks, because certain articles in the magazine created a danger of violence by advocating racial, religious, or national hatred. This decision was prior to the passage of RLUIPA, and therefore the Court's analysis might be somewhat different today. Under current law, if having the entire magazine withheld was determined to be a substantial burden on inmates' free exercise rights, the Court might require that the offending material be removed rather than the entire issue being withheld.
Regulations that exclude publications from a prison because of security concerns have been found constitutional when the regulations have required individualized review of any material before it is banned, notification to inmates that the material has been denied, and the possibility of review of such decisions. Courts have tended to find prison regulations that ban all literature from particular groups unconstitutional. However, the determination of the constitutionality of a given regulation or the implementation of the regulation has tended to be very fact-specific. Courts look not only at the regulation at issue but also consider the nature of the prison (high, medium, or low security) and the particular administrative challenges faced by the prison (such as crowding and quantity of incoming mail) in determining reasonableness, or the practical existence of less restrictive alternative measures.
6. Can prison officials apply the same restrictions to outgoing prison material?
The Supreme Court does not allow general content regulation of outgoing mail from inmates. While outgoing mail can be searched for contraband,2 its content is more protected because it implicates the First Amendment rights of non-prisoner addressees.3 In addition, outgoing material does not pose a threat to internal prison security, so content limitations have been considered less urgent. However, regulations can bar certain categories of outgoing mail: escape plans, threats, blackmail and correspondence used to run a business, for example, have all been disallowed. Therefore, correspondence from prisoners to extremist groups cannot be banned outright because of its content, but inmates can be prevented from distributing a newsletter from prison when doing so constitutes running a business.
2 Special rules exist with respect to attorney-client correspondence or mail that implicates an inmate's right to access the courts that are beyond the scope of this discussion.
3 However, prison officials can forbid all correspondence between incarcerated individuals.
7. Can extremist "missionaries" be prevented from visiting prisons?
Prison officials can ban categories of prison visitors, such as former inmates or visitors who have previously broken visiting rules. An extremist "missionary" can be barred from a prison because of generally applicable rules. In addition, prisons can create procedures for requesting visiting ministers, and impose conditions on the selection of the ministers, such as sponsorship by an outside religious organization. Prison officials can also exclude prison "missionaries" if they are advocating violence or otherwise fomenting prison unrest by encouraging racial tension. However, under RLUIPA, the prison would have to show that any restrictions on visiting clergy are the least restrictive means of achieving its end.
Prison officials do not have a responsibility to hire a minister for each religious denomination represented in the prison population. However, if visiting ministers of one denomination are compensated, visiting ministers of other denominations must be equally compensated. Security limitations can be placed on inmate-led prayer or services, but again, under RLUIPA, the prison would have to show that any restrictions on such gatherings are the least restrictive means of achieving its end. For example, it is more likely that the prison could limit the frequency of such meetings, limit the number of attendees and require supervision than that it could ban such gatherings outright.
8. Under what circumstances must prisons accommodate prisoners' religious dietary requirements?
Accommodating religiously based dietary rules has become an issue when dealing with extremists because incidents have raised concern that extremists "adopt" religious practices that are not based on sincere beliefs in order to obtain special privileges, such as specialized diets. Generally, if an inmate's request for a special diet arises from a sincerely held belief and is religious in nature, the inmate has a constitutionally protected interest. Under RLUIPA, a request for a special religious diet can be refused only on the basis of a compelling prison interest, and only if the refusal is the least restrictive means of protecting that interest. Prisons may offer a more limited food selection to prisoners with religious dietary limitations, such as providing only cold kosher meals rather than hot food. In the past, when determining whether a prison was required to provide a special diet for a prisoner, courts considered whether the dietary restrictions were central to the prisoner's religious observance. Under RLUIPA, such a determination would probably not be relevant. The threshold question in evaluating the prison's obligation to accommodate a request would still be whether the inmate's dietary request arose out of sincerely held beliefs that were religious in nature.
A vaccine is a biological preparation that improves immunity to a particular disease. A vaccine typically contains an agent that resembles a disease-causing microorganism, and is often made from weakened or killed forms of the microbe, its toxins or one of its surface proteins. The agent stimulates the body's immune system to recognize the agent as foreign, destroy it, and "remember" it, so that the immune system can more easily recognize and destroy any of these microorganisms that it later encounters.
Vaccines can be prophylactic (e.g., to prevent or ameliorate the effects of a future infection by any natural or "wild" pathogen) or therapeutic (e.g., vaccines against cancer are being investigated; see cancer vaccine).
The term vaccine derives from Edward Jenner's 1796 use of cowpox (Latin variola vaccinia, adapted from the Latin vaccīn-us, from vacca, cow) to inoculate humans, providing them protection against smallpox.
Vaccines do not guarantee complete protection from a disease. Sometimes, this is because the host's immune system simply does not respond adequately or at all. This may be due to a lowered immunity in general (diabetes, steroid use, HIV infection, age) or because the host's immune system does not have a B cell capable of generating antibodies to that antigen.
Even if the host develops antibodies, the human immune system is not perfect and in any case the immune system might still not be able to defeat the infection immediately. In this case, the infection will be less severe and heal faster.
Adjuvants are typically used to boost immune response. Most often aluminium adjuvants are used, but adjuvants like squalene are also used in some vaccines and more vaccines with squalene and phosphate adjuvants are being tested. Larger doses are used in some cases for older people (50–75 years and up), whose immune response to a given vaccine is not as strong.
The efficacy or performance of a vaccine depends on a number of factors.
When a vaccinated individual does develop the disease vaccinated against, the disease is likely to be milder than without vaccination.
Several considerations are important to the effectiveness of a vaccination program.
In 1958 there were 763,094 cases of measles and 552 deaths in the United States. With the help of new vaccines, the number of cases dropped to fewer than 150 per year (median of 56). In early 2008, there were 64 suspected cases of measles. 54 out of 64 infections were associated with importation from another country, although only 13% were actually acquired outside of the United States; 63 of these 64 individuals either had never been vaccinated against measles, or were uncertain whether they had been vaccinated.
Vaccines are dead or inactivated organisms or purified products derived from them.
There are several types of vaccines in use. These represent different strategies used to try to reduce risk of illness, while retaining the ability to induce a beneficial immune response.
Some vaccines contain killed, but previously virulent, micro-organisms that have been destroyed with chemicals, heat, radioactivity or antibiotics. Examples are the influenza vaccine, cholera vaccine, bubonic plague vaccine, polio vaccine, hepatitis A vaccine, and rabies vaccine.
Some vaccines contain live, attenuated microorganisms. Many of these are live viruses that have been cultivated under conditions that disable their virulent properties, or which use closely related but less dangerous organisms to produce a broad immune response. Although most attenuated vaccines are viral, some are bacterial in nature. They typically provoke more durable immunological responses and are the preferred type for healthy adults. Examples include the viral diseases yellow fever, measles, rubella, and mumps and the bacterial disease typhoid. The live Mycobacterium tuberculosis vaccine developed by Calmette and Guérin is not made of a contagious strain, but contains a virulently modified strain called "BCG" used to elicit an immune response to the vaccine. The live attenuated vaccine containing strain Yersinia pestis EV is used for plague immunization. Attenuated vaccines have advantages and disadvantages: because they are capable of transient growth, they give prolonged protection and no booster dose is required, but they may revert to the virulent form and cause the disease.
Toxoid vaccines are made from inactivated toxic compounds that cause illness rather than the micro-organism. Examples of toxoid-based vaccines include tetanus and diphtheria. Toxoid vaccines are known for their efficacy. Not all toxoids are for micro-organisms; for example, Crotalus atrox toxoid is used to vaccinate dogs against rattlesnake bites.
Protein subunit – rather than introducing an inactivated or attenuated micro-organism to an immune system (which would constitute a "whole-agent" vaccine), a fragment of it can create an immune response. Examples include the subunit vaccine against Hepatitis B virus that is composed of only the surface proteins of the virus (previously extracted from the blood serum of chronically infected patients, but now produced by recombination of the viral genes into yeast), the virus-like particle (VLP) vaccine against human papillomavirus (HPV) that is composed of the viral major capsid protein, and the hemagglutinin and neuraminidase subunits of the influenza virus. Subunit vaccine is being used for plague immunization.
Conjugate – certain bacteria have polysaccharide outer coats that are poorly immunogenic. By linking these outer coats to proteins (e.g. toxins), the immune system can be led to recognize the polysaccharide as if it were a protein antigen. This approach is used in the Haemophilus influenzae type B vaccine.
A number of innovative vaccines are also in development and in use.
While most vaccines are created using inactivated or attenuated compounds from micro-organisms, synthetic vaccines are composed mainly or wholly of synthetic peptides, carbohydrates or antigens.
Vaccines may be monovalent (also called univalent) or multivalent (also called polyvalent). A monovalent vaccine is designed to immunize against a single antigen or single microorganism. A multivalent or polyvalent vaccine is designed to immunize against two or more strains of the same microorganism, or against two or more microorganisms. In certain cases a monovalent vaccine may be preferable for rapidly developing a strong immune response.
The immune system recognizes vaccine agents as foreign, destroys them, and "remembers" them. When the virulent version of an agent comes along, the body recognizes the protein coat on the virus, and thus is prepared to respond by (1) neutralizing the target agent before it can enter cells, and (2) recognizing and destroying infected cells before that agent can multiply to vast numbers.
When two or more vaccines are mixed together in the same formulation, the two vaccines can interfere. This most frequently occurs with live attenuated vaccines, where one of the vaccine components is more robust than the others and suppresses the growth and immune response to the other components. This phenomenon was first noted in the trivalent Sabin polio vaccine, where the amount of serotype 2 virus in the vaccine had to be reduced to stop it from interfering with the "take" of the serotype 1 and 3 viruses in the vaccine. This phenomenon has also been found to be a problem with the dengue vaccines currently being researched, where the DEN-3 serotype was found to predominate and suppress the response to DEN-1, -2 and -4 serotypes.
Vaccines have contributed to the eradication of smallpox, one of the most contagious and deadly diseases known to man. Other diseases such as rubella, polio, measles, mumps, chickenpox, and typhoid are nowhere near as common as they were a hundred years ago. As long as the vast majority of people are vaccinated, it is much more difficult for an outbreak of disease to occur, let alone spread. This effect is called herd immunity. Polio, which is transmitted only between humans, is targeted by an extensive eradication campaign that has seen endemic polio restricted to only parts of four countries (Afghanistan, India, Nigeria and Pakistan). The difficulty of reaching all children as well as cultural misunderstandings, however, have caused the anticipated eradication date to be missed several times.
In order to provide best protection, children are recommended to receive vaccinations as soon as their immune systems are sufficiently developed to respond to particular vaccines, with additional "booster" shots often required to achieve "full immunity". This has led to the development of complex vaccination schedules. In the United States, the Advisory Committee on Immunization Practices, which recommends schedule additions for the Centers for Disease Control and Prevention, recommends routine vaccination of children against: hepatitis A, hepatitis B, polio, mumps, measles, rubella, diphtheria, pertussis, tetanus, HiB, chickenpox, rotavirus, influenza, meningococcal disease and pneumonia. The large number of vaccines and boosters recommended (up to 24 injections by age two) has led to problems with achieving full compliance. In order to combat declining compliance rates, various notification systems have been instituted and a number of combination injections are now marketed (e.g., Pneumococcal conjugate vaccine and MMRV vaccine), which provide protection against multiple diseases.
Besides recommendations for infant vaccinations and boosters, many specific vaccines are recommended at other ages or for repeated injections throughout life—most commonly for measles, tetanus, influenza, and pneumonia. Pregnant women are often screened for continued resistance to rubella. The human papillomavirus vaccine is recommended in the U.S. (as of 2011) and UK (as of 2009). Vaccine recommendations for the elderly concentrate on pneumonia and influenza, which are more deadly to that group. In 2006, a vaccine was introduced against shingles, a disease caused by the chickenpox virus, which usually affects the elderly.
Sometime during the 1770s Edward Jenner heard a milkmaid boast that she would never have the often-fatal or disfiguring disease smallpox, because she had already had cowpox, which has a very mild effect in humans. In 1796, Jenner took pus from the hand of a milkmaid with cowpox, inoculated an 8-year-old boy with it, and six weeks later variolated the boy's arm with smallpox, afterwards observing that the boy did not catch smallpox. Further experimentation demonstrated the efficacy of the procedure on an infant. Since vaccination with cowpox was much safer than smallpox inoculation, the latter, though still widely practiced in England, was banned in 1840. Louis Pasteur generalized Jenner's idea by developing what he called a rabies vaccine, and in the nineteenth century vaccines were considered a matter of national prestige, and compulsory vaccination laws were passed.
The twentieth century saw the introduction of several successful vaccines, including those against diphtheria, measles, mumps, and rubella. Major achievements included the development of the polio vaccine in the 1950s and the eradication of smallpox during the 1960s and 1970s. Maurice Hilleman was the most prolific of the developers of the vaccines in the twentieth century. As vaccines became more common, many people began taking them for granted. However, vaccines remain elusive for many important diseases, including malaria and HIV.
Opposition to vaccination, from a wide array of vaccine critics, has existed since the earliest vaccination campaigns. Although the benefits of preventing suffering and death from serious infectious diseases greatly outweigh the risks of rare adverse effects following immunization, disputes have arisen over the morality, ethics, effectiveness, and safety of vaccination. Some vaccination critics say that vaccines are ineffective against disease or that vaccine safety studies are inadequate. Some religious groups do not allow vaccination, and some political groups oppose mandatory vaccination on the grounds of individual liberty. In response, concern has been raised that spreading unfounded information about the medical risks of vaccines increases rates of life-threatening infections, not only in the children whose parents refused vaccinations, but also in other children, perhaps too young for vaccines, who could contract infections from unvaccinated carriers (see herd immunity).
One challenge in vaccine development is economic: many of the diseases most demanding a vaccine, including HIV, malaria and tuberculosis, exist principally in poor countries. Pharmaceutical firms and biotechnology companies have little incentive to develop vaccines for these diseases, because there is little revenue potential. Even in more affluent countries, financial returns are usually minimal and the financial and other risks are great.
Most vaccine development to date has relied on "push" funding by government, universities and non-profit organizations. Many vaccines have been highly cost effective and beneficial for public health. The number of vaccines actually administered has risen dramatically in recent decades. This increase, particularly in the number of different vaccines administered to children before entry into schools, may be due to government mandates and support, rather than economic incentive.
The filing of patents on vaccine development processes can also be viewed as an obstacle to the development of new vaccines. Because a patent on the final product offers only weak protection, innovation in vaccines is often protected through patents on the processes used to develop new vaccines, as well as through secrecy.
Vaccine production has several stages. First, the antigen itself is generated. Viruses are grown either on primary cells such as chicken eggs (e.g., for influenza), or on continuous cell lines such as cultured human cells (e.g., for hepatitis A). Bacteria are grown in bioreactors (e.g., Haemophilus influenzae type b). Alternatively, a recombinant protein derived from the viruses or bacteria can be generated in yeast, bacteria, or cell cultures. After the antigen is generated, it is isolated from the cells used to generate it. A virus may need to be inactivated, possibly with no further purification required. Recombinant proteins need many operations involving ultrafiltration and column chromatography. Finally, the vaccine is formulated by adding adjuvant, stabilizers, and preservatives as needed. The adjuvant enhances the immune response of the antigen, stabilizers increase the storage life, and preservatives allow the use of multidose vials. Combination vaccines are harder to develop and produce, because of potential incompatibilities and interactions among the antigens and other ingredients involved.
Vaccine production techniques are evolving. Cultured mammalian cells are expected to become increasingly important, compared to conventional options such as chicken eggs, due to greater productivity and low incidence of problems with contamination. Recombination technology that produces genetically detoxified vaccine is expected to grow in popularity for the production of bacterial vaccines that use toxoids. Combination vaccines are expected to reduce the quantities of antigens they contain, and thereby decrease undesirable interactions, by using pathogen-associated molecular patterns.
In 2010, India produced 60 percent of the world's vaccines, worth about $900 million.
Many vaccines need preservatives to prevent serious adverse effects such as Staphylococcus infection that, in one 1928 incident, killed 12 of 21 children inoculated with a diphtheria vaccine that lacked a preservative. Several preservatives are available, including thiomersal, phenoxyethanol, and formaldehyde. Thiomersal is more effective against bacteria, has better shelf life, and improves vaccine stability, potency, and safety, but in the U.S., the European Union, and a few other affluent countries, it is no longer used as a preservative in childhood vaccines, as a precautionary measure due to its mercury content. Although controversial claims have been made that thiomersal contributes to autism, no convincing scientific evidence supports these claims.
There are several new delivery systems in development that may make vaccines more efficient to deliver. Possible methods include liposomes and ISCOM (immune stimulating complex).
Recent developments in vaccine delivery technologies have resulted in oral vaccines. A polio vaccine was developed and tested by volunteer vaccinators with no formal training; the results were positive, demonstrating the increased ease of administration. With an oral vaccine, there is no risk of blood contamination. Oral vaccines are likely to be solid, which has proven to be more stable and less likely to freeze; this stability reduces the need for a "cold chain": the resources required to keep vaccines within a restricted temperature range from the manufacturing stage to the point of administration, which, in turn, may decrease the cost of vaccines. A microneedle approach, which is still in development, uses "pointed projections fabricated into arrays that can create vaccine delivery pathways through the skin".
A nanopatch is a needle-free vaccine delivery system under development. A stamp-sized patch similar to an adhesive bandage contains about 20,000 microscopic projections per square inch. Worn on the skin, it delivers vaccine directly to the skin, which has a higher concentration of immune cells than the muscles, where needles and syringes deliver. It thus increases the effectiveness of the vaccination while using a lower amount of vaccine than a traditional syringe delivery system.
The use of plasmids has been validated in preclinical studies as a protective vaccine strategy for cancer and infectious diseases. However, in human studies this approach has failed to provide clinically relevant benefit. The overall efficacy of plasmid DNA immunization depends on increasing the plasmid's immunogenicity while also correcting for factors involved in the specific activation of immune effector cells.
Vaccinations of animals are used both to prevent their contracting diseases and to prevent transmission of disease to humans. Both animals kept as pets and animals raised as livestock are routinely vaccinated. In some instances, wild populations may be vaccinated. This is sometimes accomplished with vaccine-laced food spread in a disease-prone area and has been used to attempt to control rabies in raccoons.
Where rabies occurs, rabies vaccination of dogs may be required by law. Other canine vaccines include canine distemper, canine parvovirus, infectious canine hepatitis, adenovirus-2, leptospirosis, Bordetella, canine parainfluenza virus, and Lyme disease, among others.
Vaccine development has several trends.
Principles that govern the immune response can now be used in tailor-made vaccines against many noninfectious human diseases, such as cancers and autoimmune disorders. For example, the experimental vaccine CYT006-AngQb has been investigated as a possible treatment for high blood pressure. Factors that have impact on the trends of vaccine development include progress in translatory medicine, demographics, regulatory science, political, cultural, and social responses.
Wars have given us the Jeep, the computer and even the microwave.
Will the war in Iraq give us the Tiger?
Military scientists at Edgewood Chemical Biological Center at Aberdeen Proving Ground hope so. The machine - its full name is the Tactical Garbage to Energy Refinery - combines a chute, an engine, chemical tanks and other components, giving it the appearance of a lunar rover. It's designed to turn food and waste into fuel. If it works, it could save scores of American and Iraqi lives.
Among the biggest threats that soldiers face in the war in Iraq are the roadside bombs that have killed or maimed thousands since the U.S.-led invasion in 2003. Because some military bases lack a landfill, transporting garbage to dumps miles away in the desert has become a potentially fatal routine for U.S. troops and military contractors.
The Tiger would attempt to solve two problems at once: It would sharply reduce those trash hauls and provide the military with an alternative source of fuel.
It is the latest in a long line of wartime innovations, from can openers to desert boots. The conflict in Iraq has produced innovations such as "warlocks," which jam electronic signals from cell phones, garage door openers and other electronic devices that insurgents use to detonate roadside bombs, according to Inventors Digest.
"In wartime, you're not worried about making a profit necessarily. You're worried about getting the latest technology on the street," said Peter Kindsvatter, a military historian at Aberdeen Proving Ground, who added that money is spent more freely for research when a nation is at war. "Basically, you find yourself in a technology race with your enemy."
The Tiger, now being tested in Baghdad, would not be the first device to turn garbage into energy - a large incinerator near Baltimore's downtown stadiums does it. But it would be among the first to attempt to do it on a small scale. Its creators say it could one day become widely used in civilian life, following the lead of other wartime innovations.
During World War II, contractors developed the Jeep to meet the military's desire for a light, all-purpose vehicle that could transport supplies.
The development of radar technology to spot Nazi planes led to the microwave, according to historians.
The World War II era also gave birth to the first electronic digital computer, the Electronic Numerical Integrator and Computer, or ENIAC. Funded by the Defense Department, the machine was built to compute ballistics tables that soldiers used to mechanically aim large guns. For years it was located at Aberdeen Proving Ground.
This decade, the Pentagon determined that garbage on military bases poses a serious logistical problem.
"When you're over in a combat area and people are shooting at you, you still have to deal with your trash," said John Spiller, project officer with the Army's Rapid Equipping Force, which is funding the Tiger project. "How would you feel if somebody was shooting at you every other time you pushed it down the curb?"
He and other Army officials said they could not recall any specific attacks against troops or contractors heading to dumpsites.

For years, large incinerators have burned trash to generate power. Baltimore Refuse Energy Systems Co., the waste-to-energy plant near the stadiums, consumes up to 2,250 tons of refuse a day while producing steam and electricity.
The process is so expensive that it has only made sense to do it on a large scale, scientists say.
The military has spent almost $3 million on two Tiger prototypes, each weighing nearly 5 tons and small enough to fit into a 20- to 40-foot wide container. The project is being developed by scientists from the Edgewood, Va.-based Defense Life Sciences LLC and Indiana's Purdue University.
The biggest challenge was getting the parts to work together, said Donald Kennedy, an Edgewood spokesman. Because the Tiger is a hybrid consisting of a gasifier, bioreactor and generator, much of it is built with off-the-shelf items, including a grinder.
Another big challenge: expectations.
"When we would initially talk to people about the Tiger system, a large percentage would refuse to believe it could actually work," Kennedy wrote in an e-mail. "Alternatively, a similar percentage would be so intrigued by the idea that they would demand to know when they could buy one for their neighborhood."
The Tiger works like this: A shredder rips up waste and soaks it in water. A bioreactor metabolizes the sludge into ethanol. A pelletizer compresses undigested waste into pellets that are fed into a gasification unit, which produces composite gas.
The ethanol, composite gas and a 10-percent diesel drip are injected into a diesel generator to produce electricity, according to scientists. It takes about six hours for the Tiger to power up. When it works, the device can power a 60-kilowatt generator.
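Those figures permit a rough back-of-the-envelope estimate of what one unit might deliver. The sketch below uses only the 60-kilowatt rating, the six-hour power-up and the 10-percent diesel drip reported above; the daily run time, average load and genset fuel consumption are illustrative assumptions, not reported specifications.

```python
# Rough daily-output sketch for one Tiger unit.
GENERATOR_KW = 60          # peak generator rating (reported)
WARMUP_HOURS = 6           # time to power up (reported)
DIESEL_DRIP = 0.10         # share of fuel input that is still diesel (reported)

RUN_HOURS = 24 - WARMUP_HOURS   # assumption: one warm-up cycle per day
LOAD_FACTOR = 0.5               # assumption: average load at half of peak
DIESEL_L_PER_KWH = 0.3          # assumption: typical small-genset consumption

energy_kwh = GENERATOR_KW * RUN_HOURS * LOAD_FACTOR
diesel_saved_l = energy_kwh * DIESEL_L_PER_KWH * (1 - DIESEL_DRIP)

print(f"electricity produced: {energy_kwh:.0f} kWh/day")    # ~540 kWh/day
print(f"diesel displaced:     {diesel_saved_l:.0f} L/day")  # ~146 L/day
```

Under those assumptions, a single unit would displace on the order of 150 liters of diesel a day, fuel that would otherwise have to be trucked in over the same vulnerable roads as the garbage.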
The prototypes are being tested at Camp Victory in Baghdad.
Initial runs proved successful. The prototypes have been used to power an office trailer. At their peak, they could power two to three trailers.
In recent weeks, the scientists suffered a setback: temperatures above 100 degrees caused a chiller device to overheat and shut off occasionally. A new chiller from Edgewood just arrived at the site, Kennedy said.
After the 90-day testing phase that ends Aug. 10, the Army will decide whether to fund the project further.
Its developers envision the device being used to respond to crises such as Hurricane Katrina, when there is no lack of garbage but a great need for electricity.
Spiller, of the Army's Rapid Equipping Force, said he is optimistic.
"The mere fact we wrote a check means we think it's got a high chance of success," Spiller said. | <urn:uuid:749f4f2e-01bf-42ab-ac03-1dfa84af34dc> | CC-MAIN-2013-20 | http://articles.baltimoresun.com/2008-07-21/news/0807200131_1_garbage-aberdeen-proving-ground-war-in-iraq | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.959887 | 1,205 | 2.828125 | 3 |
For physicists, it was a moment like landing on the moon or the discovery of DNA.
The focus was the Higgs boson, a subatomic particle that exists for a mere fraction of a second. Long theorized but never glimpsed, the so-called God particle is thought to be key to understanding the existence of all mass in the universe. The revelation Wednesday that it -- or some version of it -- had almost certainly been detected amid hundreds of trillions of high-speed collisions in a 17-mile track near Geneva prompted a group of normally reserved scientists to erupt with joy.
For the record: An article in some copies of the July 5 edition about the machine used by physicists at the European Organization for Nuclear Research to search for the Higgs boson referred to the $5-billion Large Hadron Collider. The correct amount is $10 billion.
Peter Higgs, one of the scientists who first hypothesized the existence of the particle, reportedly shed tears as the data were presented in a jampacked and applause-heavy seminar at CERN, the European Organization for Nuclear Research.
"It's a gigantic triumph for physics," said Frank Wilczek, an MIT physicist and Nobel laureate. "It's a tremendous demonstration of a community dedicated to understanding nature."
The achievement, nearly 50 years in the making, confirms physicists' understanding of how mass -- the stuff that makes stars, planets and even people -- arose in the universe, they said.
It also points the way toward a new path of scientific inquiry into the mass-generating mechanism that was never before possible, said UCLA physicist Robert Cousins, a member of one of the two research teams that has been chasing the Higgs boson at CERN.
"I compare it to turning the corner and walking around a building -- there's a whole new set of things you can look at," he said. "It is a beginning, not an end."
Leaders of the two teams reported independent results that suggested the existence of a previously unseen subatomic particle with a mass of about 125 to 126 billion electron volts. Both groups got results at a "five sigma" level of confidence -- the statistical requirement for declaring a scientific "discovery."
"The chance that either of the two experiments had seen a fluke is less than three parts in 10 million," said UC San Diego physicist Vivek Sharma, a former leader of one of the Higgs research groups. "There is no doubt that we have found something."
But he and others stopped just shy of saying that this new particle was indeed the long-sought Higgs boson. "All we can tell right now is that it quacks like a duck and it walks like a duck," Sharma said.
In this case, quacking was enough for most.
"If it looks like a duck and quacks like a duck, it's probably at least a bird," said Wilczek, who stayed up past 3 a.m. to watch the seminar live over the Web while vacationing in New Hampshire.
Certainly CERN leaders in Geneva, even as they referred to their discovery simply as "a new particle," didn't bother hiding their excitement.
The original plan had been to present the latest results on the Higgs search at the International Conference on High Energy Physics, a big scientific meeting that began Wednesday in Melbourne.
But as it dawned on CERN scientists that they were on the verge of "a big announcement," Cousins said, officials decided to honor tradition and instead present the results on CERN's turf.
The small number of scientists who theorized the existence of the Higgs boson in the 1960s -- including Higgs of the University of Edinburgh -- were invited to fly to Geneva.
For the non-VIP set, lines to get into the auditorium began forming late Tuesday. Many spent the night in sleeping bags.
All the hubbub was due to the fact that the discovery of the Higgs boson is the last piece of the puzzle needed to complete the so-called Standard Model of particle physics -- the big picture that describes the subatomic particles that make up everything in the universe, and the forces that work between them.
Over the course of the 20th century, as physicists learned more about the Standard Model, they struggled to answer one very basic question: Why does matter exist?
Higgs and others came up with a possible explanation: that particles gain mass by traveling through an energy field. One way to think about it is that the field sticks to the particles, slowing them down and imparting mass.
That energy field came to be known as the Higgs field. The particle associated with the field was dubbed the Higgs boson.
Higgs published his theory in 1964. In the 48 years since, physicists have eagerly chased the Higgs boson. Finding it would provide the experimental confirmation they needed to show that their current understanding of the Standard Model was correct.
On the other hand, ruling it out would mean a return to the drawing board to look for an alternative Higgs particle, or several alternative Higgs particles, or perhaps to rethink the Standard Model from the bottom up.
Either outcome would be monumental, scientists said.
Nursing a critically ill state back to health

Indranill Basu Ray highlights the core problems that afflict Bengal's health sector and suggests a few ways to improve the situation
Despite many technological and other achievements that have propelled India from being a developing nation to one of the top economies of the world, one field in which India continues to lag is health. This is why stories of babies dying in large numbers haunt newspaper headlines. India is behind Bangladesh and Sri Lanka in life expectancy at birth and in under-five mortality. India accounts for about 17 per cent of the world population, but it contributes a fifth of the world's share of diseases. A third of all diarrhoeal diseases in the world occurs in India. The country has the second largest number of HIV/AIDS cases after South Africa. It is home to one-fifth of the world's population afflicted with diabetes and cardiovascular diseases.
A common excuse that I often hear is that we have limited resources to tackle the huge and burgeoning health problems. But even the richest country on earth, the United States of America, has failed to provide appropriate health services to a large section of the populace. The problem in India is quite different. Apart from being a poor nation with limited resources, it also has a sizeable population in need of basic health services. Furthermore, the lack of appropriate sanitary measures and education ensures an ever increasing presence of communicable disease that have been controlled and even eradicated in the developed nations.
India's list of woes does not stop here. Lack of foresight on the part of successive governments and selective and fragmented strategies to counter daily problems without a definite public health goal have been the mainstay of India's health policies. Resource allocation to this sector is influenced by the prevailing fiscal situation as well as by the priorities of the reigning government. Unfortunately, in Bengal — a state that faces a dismal fiscal situation — the government's priorities have been skewed as a result of political necessities. Although we have a new government at the helm, it is important to realize that gross changes at the practical level cannot be initiated without having a team with experience and knowledge define a well-thought-out strategy. It is also essential to have a government that is willing to fulfil the financial needs necessary for the strategy to work.
It is difficult, if not impossible, to paint a picture of the present state of public health in West Bengal and to suggest measures to rectify the same in a short article like this. My intention is to highlight the core problems plaguing the system and to suggest solutions based on accepted principles of public health and healthcare management. The steps that need to be taken are as follows: reducing disease burden, including infectious diseases as well as non-communicable epidemics like diabetes mellitus and coronary heart disease; restructuring the existing primary healthcare system to make it more accountable; creating a skilled and professional workforce which is quality driven; financial planning to bring more investment to the health sector.
Reducing disease burden is the cornerstone of any good health policy. The factors that help reduce communicable diseases are clean drinking water, improved sanitation and an effective vaccination programme. A paradigm shift, from the prevalent curative approach to a preventive approach, including health promotion by inculcating behavioural changes, is imperative to reduce disease burden. West Bengal is one of four states that urgently needs high investment in safe drinking water and toilet facilities. It is estimated that Rs 18,000 crore is required to provide effective drinking water and sanitation facilities for the entire country. Kerala, Maharashtra, West Bengal and Odisha would account for more than 60 per cent of the total outlay.
Similarly, a huge investment is required to provide nutritional supplements to malnourished children and to pregnant and lactating mothers living below the poverty line. According to a report by the national commission on macroeconomics and health, West Bengal would need to harness additional resources of Rs 1,286 crore for health, Rs 2,459 crore for water and sanitation, Rs 4,693 crore for nutrition, Rs 13,811 crore for primary schooling and Rs 8,485 crore for roads. It has been projected that in the next five years West Bengal will spend a large portion of its revenues on wages and salaries, interest payments and pensions, leaving very little for discretionary expenditure in the field of health. It is imperative that the present government rethink and strategize in collaboration with the Centre to ensure the appropriate funding necessary to make the state healthy.
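Laid out side by side, the commission's figures make the scale of the shortfall easier to see; the tabulation and total below are computed directly from the numbers quoted above.

```python
# Additional resource requirements projected for West Bengal
# (Rs crore, figures quoted from the national commission's report).
requirements = {
    "health": 1_286,
    "water and sanitation": 2_459,
    "nutrition": 4_693,
    "primary schooling": 13_811,
    "roads": 8_485,
}

total = sum(requirements.values())  # Rs 30,734 crore in all
for sector, amount in sorted(requirements.items(), key=lambda kv: -kv[1]):
    print(f"{sector:>22}: Rs {amount:>6,} crore ({amount / total:.0%})")
print(f"{'total':>22}: Rs {total:>6,} crore")
```

Notably, health proper is the smallest of the five heads; most of the money needed to make the state healthier sits in schooling, roads, and water and sanitation.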
Restructuring the present healthcare delivery system is also equally important. Most primary healthcare centres are old, dilapidated buildings with few or no facilities. Some do not even have basic resources like healthcare workers or pharmacists. What is required is a radical overhaul of the existing system. There are differences in health systems of different countries. A State-run health system, such as the one in Canada, suffers from delayed medical care. A privately-run health system like the one in the US provides only limited health services to its poor. India's healthcare should carve out the best of both systems. Private healthcare is thriving in India. It is uncontrolled and aimed at profit-making. Government-run hospitals are poorly managed, providing few or no facilities to those living below the poverty line.
Different models have been suggested to take care of this disparity. While private investment will always be geared towards profit-making, it is mandatory to rein in these bodies under well-defined rules. Large private hospitals in the US are non-profit bodies, which have to follow stringent rules in patient care. At the other end of the spectrum is the National Health Service in Britain in which small, medium and even a few large hospitals are making way for a more competent and accountable government-controlled health system with fewer hospitals.
Human resource management is very important in running an effective health system. One of the biggest lacunae of government health service is its poor human-resource management. Many physicians are not paid appropriate salaries or are posted in places that are not of their choice. Political intervention and favouritism play a big role in posting physicians. Consequently, dedicated physicians who want to serve the public or work in the academic setting found in government hospitals are forced to remain in private hospitals. To boost morale and efficacy, discipline needs to be instituted in the system and a transparent posting policy adopted. The doctor-population ratio needs to be improved by filling up vacancies in the West Bengal health service. It is important to free postings from the grip of bureaucrats to ensure the registration of quality candidates. Physicians failing to report to duty or indulging in indiscipline must be punished. Doctors who do sign up need to provide relevant and quality medical care. This can only be done if some form of recertification of doctors is made mandatory once every 10 years. Physicians' salaries in the state health service must be made on a par with those of the Central government to make sure that it remains a lucrative option. Senior physicians providing exemplary public service must be rewarded for the same. A commonly-held notion is that most physicians run after the lucrative salaries that are offered in private hospitals. Hence it is difficult to retain them in the government sector. This, however, is true of a minority. The majority of physicians are willing to work in a healthy, progressive and academic environment if there are appropriate non-financial incentives. Let us take the example of Christian Medical College, Vellore. Most of the faculty there are paid salaries that are much lower than those of the private sector. However, physicians are provided with other facilities such as good housing, free schools, free-to-highly-subsidized college education and, most importantly, a progressive and research oriented work environment.
West Bengal lags behind many other states when it comes to medical education. There is an urgent need to increase the number of medical colleges in the state. Private investment for the same should be welcomed but appropriate laws must be instituted so that huge capitation fees are not charged for seats. Furthermore, selection should be made through competitive examinations. A certain percentage of seats can be reserved for the economically weaker sections. Students passing out of such medical colleges must be given postings in rural hospitals. This has been true on paper for many decades now, but the rule has been poorly implemented even in government-run medical colleges.
Innovative schemes ought to be thought of to involve the cash-rich private sector to service the medical needs of the state. Private institutions using government money or land must be asked to provide free service to 20 per cent of their capacity. Appropriate punitive measures — such as temporarily withholding or cancelling licences — can be taken when a private institution fails to honour this commitment. Institutions willing to set up large hospitals, particularly around Calcutta, must be helped through the provision of low-cost land. But in return, promises to set up satellite hospitals in far-flung district headquarters have to be met.
The biggest challenge to the rejuvenation of the healthcare system is the garnering of funds. West Bengal is financially broke, thanks to the misrule of the communists. Unlike most other communist rulers, our home-grown variants failed to provide basic sanitation, good roads, a working healthcare system and appropriate nutritional supplements to women and children. The lack of social services resulted in poor health and in increased mortality among the vulnerable sections of society. Government efforts to improve basic health services must fund programmes that provide sanitation, nutritional supplements, and daily meals for school-going children. Substantial investments in these sectors can reduce mortality in children. It is popular to blame doctors for not being able to save severely ill, malnourished children. But things won't change unless determined steps are taken to root out the problems, such as poor funds, minimal resources and an incompetent workforce, that affect the West Bengal health service.
In the next five years, in collaboration with the Centre and the non-government organizations involved in public health, the state government must chalk out a definitive strategy to improve the supply of clean drinking water, provide better sanitation and one full meal to school-going children and arrange for nutritional supplements to pregnant women. Private investment should be wooed in the health sector to set up hospitals in large metropolitan areas as well as in small district towns. While government land is needed at an appropriate price to help investors build hospitals, steps must be taken to bring about the inclusion of the deprived sections in their service plans. Strong regulatory bodies that can monitor private hospitals and nursing homes must be instituted. Many of the profiteering health institutions do not provide basic facilities, lack trained nurses and paramedical staff, and some are even run by quacks without medical degrees. It is of utmost importance that a regulatory body conducts surprise checks on these institutions, registers complaints and takes remedial steps.
Many NGOs have been able to set up large projects benefiting thousands of people. They have also succeeded in bringing foreign aid to tackle malaria and HIV. The state government should help these NGOs achieve their goals while exercising control to prevent financial irregularities. Their services ought to be applauded and single-window processing of applications instituted to help them tackle bureaucratic delays. Health is a service industry and not a lucrative business. Unfortunately, in Bengal, most large hospitals are owned by corporates. Only a few are owned or run by doctors. There is thus a sustained effort to make profit. Poor consumer protection makes the man on the street vulnerable to substandard service at high prices.
These are trying times for Bengal, after years of mismanagement in the health sector. It is important for the present rulers to rectify the situation by laying down the stepping stones for a better tomorrow.
Public Papers - 1991
White House Fact Sheet on The Strategic Arms Reduction Treaty (START)
Today, the United States and the Soviet Union signed the Strategic Arms Reduction Treaty. This treaty marks the first agreement between the two countries in which the number of deployed strategic nuclear weapons will actually be reduced. Reductions will take place over a period of 7 years, and will result in parity between the strategic nuclear forces of the two sides at levels approximately 30 percent below currently deployed forces. Deeper cuts are required in the most dangerous and destabilizing systems.
START provisions are designed to strengthen strategic stability at lower levels and to encourage the restructuring of strategic forces in ways that make them more stable and less threatening. The treaty includes a wide variety of very demanding verification measures designed to ensure compliance and build confidence.
The treaty sets equal ceilings on the number of strategic nuclear forces that can be deployed by either side. In addition, the treaty establishes an equal ceiling on ballistic missile throw-weight (a measure of overall capability for ballistic missiles). Each side is limited to no more than the following (a small compliance sketch follows the list):
-- 1600 strategic nuclear delivery vehicles (deployed intercontinental ballistic missiles [ICBM's], submarine launched ballistic missiles [SLBM's], and heavy bombers), a limit that is 36 percent below the Soviet level declared in September 1990 and 29 percent below the U.S. level.
-- 6000 total accountable warheads, about 41 percent below the current Soviet level and 43 percent below the current U.S. level.
-- 4900 accountable warheads deployed on ICBM's or SLBM's, about 48 percent below the current Soviet level and 40 percent below the current U.S. level.
-- 1540 accountable warheads deployed on 154 heavy ICBM's, a 50-percent reduction in current Soviet forces. The U.S. has no heavy ICBM's.
-- 1100 accountable warheads deployed on mobile ICBM's.
-- Aggregate throw-weight of deployed ICBM's and SLBM's equal to about 54 percent of the current Soviet aggregate throw-weight.
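As a reader's aid (not part of the original fact sheet), the ceilings above can be expressed as a small compliance check. The function and dictionary keys below are invented for illustration:

```python
# START ceilings, taken from the list above.
CEILINGS = {
    "delivery_vehicles": 1600,     # deployed ICBM's, SLBM's and heavy bombers
    "total_warheads": 6000,        # total accountable warheads
    "ballistic_warheads": 4900,    # accountable warheads on ICBM's/SLBM's
    "heavy_icbm_warheads": 1540,   # warheads on 154 heavy ICBM's
    "mobile_icbm_warheads": 1100,  # warheads on mobile ICBM's
}

def excess_over_ceilings(declared):
    """Return {category: amount over the limit} for a declared force structure."""
    return {category: declared[category] - limit
            for category, limit in CEILINGS.items()
            if declared.get(category, 0) > limit}

# Example: at the delivery-vehicle limit, but 100 accountable warheads over.
print(excess_over_ceilings({"delivery_vehicles": 1600, "total_warheads": 6100}))
# -> {'total_warheads': 100}
```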
Ballistic Missile Warhead Accountability
The treaty uses detailed counting rules to ensure the accurate accounting of the number of warheads attributed to each type of ballistic missile.
-- Each deployed ballistic missile warhead counts as 1 under the 4900 ceiling and 1 under the 6000 overall warhead ceiling.
-- Each side is allowed 10 on-site inspections each year to verify that deployed ballistic missiles contain no more warheads than the number that is attributed to them under the treaty.
Downloading Ballistic Missile Warheads
The treaty also allows for a reduction in the number of warheads on certain ballistic missiles, which will help the sides transition their existing forces to the new regime. Such downloading is permitted in a carefully structured and limited fashion (a sketch of the ceilings follows the list).
-- The U.S. may download its three-warhead Minuteman III ICBM by either one or two warheads. The Soviet Union has already downloaded its seven-warhead SS-N-18 SLBM by four warheads.
-- In addition, each side may download up to 500 warheads on two other existing types of ballistic missiles, as long as the total number of warheads removed from downloaded missiles does not exceed 1250 at any one time.
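A minimal sketch of the downloading ceilings just described. Reading the 500-warhead cap as applying to each of the two "other" types is our interpretation of this summary, not treaty language:

```python
def downloading_within_limits(removed_by_type):
    """Check the downloading ceilings for the two 'other' missile types.

    removed_by_type maps a missile type to the total warheads currently
    removed from missiles of that type (hypothetical data layout).
    """
    if len(removed_by_type) > 2:                        # at most two other types
        return False
    if any(n > 500 for n in removed_by_type.values()):  # per-type cap (assumed reading)
        return False
    return sum(removed_by_type.values()) <= 1250        # aggregate cap at any one time

# Example: 500 + 400 warheads removed across two types -> within limits.
print(downloading_within_limits({"type_A": 500, "type_B": 400}))  # True
```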
The treaty places constraints on the characteristics of new types of ballistic missiles to ensure the accuracy of counting rules and prevent undercounting of missile warheads (a worked example follows the list).
-- The number of warheads attributed to a new type of ballistic missile must be no less than the number determined by dividing 40 percent of the missile's total throw-weight by the weight of the lightest RV tested on that missile.
-- The throw-weight attributed to a new type must be no less than the missile's throw-weight capability at specified reference ranges (11,000 km for ICBM's and 9,500 km for SLBM's).
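To make the attribution arithmetic concrete (the figures below are invented, and rounding up to a whole warhead is our assumption, not treaty text): a new ICBM with a total throw-weight of $W = 7{,}600$ kg whose lightest tested RV weighs $w_{\min} = 400$ kg must be attributed at least

$$n \;\ge\; \frac{0.40\,W}{w_{\min}} \;=\; \frac{0.40 \times 7600\ \text{kg}}{400\ \text{kg}} \;=\; 7.6,$$

that is, at least 8 warheads.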
START places significant restrictions on the Soviet SS-18 heavy ICBM.
-- A 50-percent reduction in the number of Soviet SS-18 ICBM's; a total reduction of 154 of these Soviet missiles.
-- New types of heavy ICBM's are banned.
-- Downloading of heavy ICBM's is banned.
-- Heavy SLBM's and heavy mobile ICBM's are banned.
-- Heavy ICBM's will be reduced on a more stringent schedule than other strategic arms.
Because mobile missiles are more difficult to verify than other types of ballistic missiles, START incorporates a number of special restrictions and notifications with regard to these missiles. These measures will significantly improve our confidence that START will be effectively verifiable.
-- Nondeployed mobile missiles and non-deployed mobile launchers are numerically and geographically limited so as to limit the possibility for reload and refire.
-- The verification regime includes continuous monitoring of mobile ICBM production, restrictions on movements, on-site inspections, and cooperative measures to improve the effectiveness of national technical means of intelligence collection.
Because heavy bombers are stabilizing strategic systems (e.g., they are less capable of a short-warning attack than ballistic missiles), START counting rules for weapons on bombers are different from those for ballistic missile warheads (a sketch of these rules follows the list).
-- Each heavy bomber counts as one strategic nuclear delivery vehicle.
-- Each heavy bomber equipped to carry only short-range missiles or gravity bombs is counted as one warhead under the 6000 limit.
-- Each U.S. heavy bomber equipped to carry long-range nuclear ALCM's (up to a maximum of 150 bombers) is counted as 10 warheads even though it may be equipped to carry up to 20 ALCM's.
-- A similar discount applies to Soviet heavy bombers equipped to carry long-range nuclear ALCM's. Each such Soviet heavy bomber (up to a maximum of 180) is counted as 8 warheads even though it may be equipped to carry up to 16 ALCM's.
-- Any heavy bomber equipped for long-range nuclear ALCM's deployed in excess of 150 for the U.S. or 180 for the Soviet Union will be accountable by the number of ALCM's the heavy bomber is actually equipped to carry.
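The bomber counting rules above lend themselves to a short sketch; the function below is illustrative only, and it assumes (for the over-ceiling case) that each excess bomber is equipped to the maximum figure quoted above:

```python
def bomber_accountability(country, alcm_bombers, other_bombers):
    """Apply the heavy-bomber counting rules summarized above (sketch).

    alcm_bombers:  heavy bombers equipped for long-range nuclear ALCM's
    other_bombers: heavy bombers carrying only gravity bombs or short-range missiles
    Returns (accountable delivery vehicles, accountable warheads).
    """
    if country == "U.S.":
        ceiling, discount, max_equipage = 150, 10, 20
    else:  # Soviet Union
        ceiling, discount, max_equipage = 180, 8, 16

    discounted = min(alcm_bombers, ceiling)   # counted at the discount rate
    excess = max(alcm_bombers - ceiling, 0)   # counted by actual equipage (max assumed)
    warheads = discounted * discount + excess * max_equipage + other_bombers * 1
    vehicles = alcm_bombers + other_bombers   # each heavy bomber is one delivery vehicle
    return vehicles, warheads

# Example: 150 U.S. ALCM bombers plus 100 gravity-bomb bombers.
print(bomber_accountability("U.S.", 150, 100))  # -> (250, 1600)
```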
Building on recent arms control agreements, START includes extensive and unprecedented verification provisions. This comprehensive verification regime greatly reduces the likelihood that violations would go undetected.
-- START bans the encryption and encapsulation of telemetric information and other forms of information denial on flight tests of ballistic missiles. However, strictly limited exemptions to this ban are granted sufficient to protect the flight-testing of sensitive research projects.
-- START allows 12 different types of on-site inspections and requires roughly 60 different types of notifications covering production, testing, movement, deployment, and destruction of strategic offensive arms.
START will have a duration of 15 years, unless it is superseded by a subsequent agreement. If the sides agree, the treaty may be extended for successive 5-year periods beyond the 15 years.
Noncircumvention and Third Countries
START prohibits the transfer of strategic offensive arms to third countries, except that the treaty will not interfere with existing patterns of cooperation. In addition, the treaty prohibits the permanent basing of strategic offensive arms outside the national territory of each side.
Air-Launched Cruise Missiles (ALCM's)
START does not directly count or limit ALCM's. ALCM's are limited indirectly through their association with heavy bombers.
-- Only nuclear-armed ALCM's with a range in excess of 600 km are covered by START.
-- Long-range, conventionally armed ALCM's that are distinguishable from nuclear-armed ALCM's are not affected.
-- Long-range nuclear-armed ALCM's may not be located at air bases for heavy bombers not accountable as being equipped for such ALCM's.
-- Multiple warhead long-range nuclear ALCM's are banned.
Sea Launched Cruise Missiles (SLCM's)
SLCM's are not constrained by the treaty. However, each side has made a politically binding declaration as to its plans for the deployment of nuclear-armed SLCM's. Conventionally armed SLCM's are not subject to such a declaration.
-- Each side will make an annual declaration of the maximum number of nuclear-armed SLCM's with a range greater than 600 km that it plans to deploy for each of the following 5 years.
-- This number will not be greater than 880 long-range nuclear-armed SLCM's.
-- In addition, as a confidence-building measure, nuclear-armed SLCM's with a range of 300-600 km will be the subject of a confidential annual data exchange.
The Soviet Backfire bomber is not constrained by the treaty. However, the Soviet side has made a politically binding declaration that it will not deploy more than 800 air force and 200 naval Backfire bombers, and that these bombers will not be given intercontinental capability.
The START agreement consists of the treaty document itself and a number of associated documents. Together they total more than 700 pages. The treaty was signed in a public ceremony by Presidents Bush and Gorbachev in St. Vladimir's Hall in the Kremlin. The associated documents were signed in a private ceremony at Novo Ogaryevo, President Gorbachev's weekend dacha. Seven of these documents were signed by Presidents Bush and Gorbachev. Three associated agreements were signed by Secretary Baker and Foreign Minister Bessmertnykh. In addition, the START negotiators, Ambassadors Brooks and Nazarkin, exchanged seven letters related to START in a separate event at the Soviet Ministry of Foreign Affairs in Moscow.
Magnitude of START -- Accountable Reductions
Following is the aggregate data from the Memorandum of Understanding, based upon agreed counting rules in START. (Because of those counting rules, the number of heavy bomber weapons actually deployed may be higher than the number shown in the aggregate.) This data is effective as of September 1990 and will be updated at entry into force:

                                     U.S.        Soviet Union
Delivery Vehicles                    2,246       2,500
Warheads                             10,563      10,271
Ballistic Missile Warheads           8,210       9,416
Heavy ICBM's/Warheads                None        308/3080
Throw-weight (metric tons)           2,361.3     6,626.3

As a result of the treaty, the above values will be reduced by the following percentages:

                                     U.S.        Soviet Union
Delivery Vehicles                    29 percent  36 percent
Warheads                             43 percent  41 percent
Ballistic Missile Warheads           40 percent  48 percent
Heavy ICBM's/Warheads                None        50 percent
Throw-weight (metric tons)           None        46 percent
Instructors: Andrea Dykstra, Curt Van Dam, Kelli Ten Haken and Tami De Jong
1. Students will gain interest in the Unit on Alaska.
2. Students will be introduced to Alaska and the Iditarod race that takes place
in Alaska every year.
3. Students will be able to appreciate the beauty of God's creation in Alaska.
4. Students will be able to see God's majesty and power in their personal experiences.
In this lesson, the students will discuss what they know about Alaska. They will watch
a movie and then discuss how God shows His power and majesty through creation. Next,
they will be introduced to the Iditarod race by reading a story and then the teachers will
explain the game the students will play about the Iditarod through the unit. At the end of
class, students will have a chance to start work on their maps of Alaska and then the
teachers will end in closing prayer.
- Psalm 19:1
The Heavens declare the glory of God; the skies proclaim the work of His hands.
- Other Scripture references that can be used throughout the unit:
The Creation story in Gen. 1 and 2
1. Alaska: Spirit of the Wild DVD
2. DVD player
5. Learning center and trade books
6. Example of the Iditarod Game
7. Book: Iditarod Dream by Ted Wood
8. Overhead projector, overhead and pen
9. Construction paper
10. Markers, crayons, colored pencils
1. On the first day of this unit, teachers should enter the room dressed in parkas,
snowshoes, scarves, mittens; anything that looks like what people in Alaska would
wear. Motion for the students to sit down. Once they are quiet, ask them where
they think the teachers are from and how they came to this conclusion. We would
expect conclusions such as the Arctic, Antarctica, and possibly Alaska.
2. Have students take out a sheet of paper and write five things down that come to
their minds when they think of Alaska. Have them get into groups of three and
share what they wrote with their group. The students will be encouraged to share
the combined ideas from their group with the whole class. The teacher will write
down these ideas on the overhead.
3. Explain to the students that they are going to be learning about all of these
things and even more about Alaska in the upcoming unit.
4. Have each student write down one or two things about Alaska they would like
to know more about. Suggest ideas such as: What sports do they play in Alaska?
How many people live there? Is it really cold and snowy year round? Take these
ideas into consideration when planning the rest of the unit.
1. Put in the DVD Alaska: Spirit of the Wild. Students will watch the movie. It is forty
minutes long. Before they watch it, share with them the beauty that can be found in
Alaska. Tell them to look specifically for how they can see God in the things that are
shown on the film.
2. After the movie, discuss with the students what they thought of the movie. Ask them
questions such as what surprised you about this film? What did you learn about Alaska
that you didn't know before? What can we discover about God by watching this movie?
How can we get to know God better by studying Alaska?
3. Read Psalm 19:1 aloud. Read it again, this time have the students say it after you. Ask
them how this verse relates to Alaska. Hopefully they will make the connection that
creation shouts God's praise. Alaska is so beautiful; this reflects on God's majesty,
creativity and mercy. God loves us enough to give us beautiful creation simply so we
can enjoy it. We can see his fingerprints in Alaska.
4. Read Psalm 8 aloud. Again, ask them how this verse relates to Alaska. They will probably
have similar responses as above in step three. Share a personal experience of how you
have seen God's power and majesty in His creation.
- For example, this is my own experience; you could share something similar to it:
One time I climbed the highpoint of Colorado with my dad. We started hiking
before the sun was up. As we were walking along the ridge of the mountain, the
sun began to rise; the colors were brilliant! We kept on hiking and hiking. I was
getting tired and hungry but soon we came close to the top. As I climbed up the
last little peak and the top of the mountain, I looked out and the view was
breathtaking!!! I had never seen so many snow capped mountains before. Sitting
up there on the mountaintop, I felt such a joy and peace. What a great God I
serve! He created all of this; His creation alone is enough to tell of His majesty.
5. Ask the students if any of them have had an experience like this; encourage them to
share if they would like.
6. Encourage them to find other verses that could relate to our study of Alaska and bring
them to class tomorrow to share.
1. Introduce the Iditarod race the students will be learning about by reading the book
Iditarod Dream by Ted Wood. As you are reading, stop periodically throughout the
book and ask them to jot down a few of their thoughts. At the end of the book ask
them to share a few thoughts they wrote down about the book.
2. Introduce the game the students will be playing throughout the unit. Tell the students
they will be having their own Iditarod race in the classroom. Each student will make a
map of Alaska on construction paper. On this map, they will draw the trail of the
Iditarod race. They will have to map out the different checkpoints of the race on their
trails. It is their job to find out how many miles are between each checkpoint and how
many miles they can travel in one day.
3. Each day the students will move their markers on their maps however many miles we
decide as a class they can travel in one day. Every morning the students will receive
a "racer's fate" card. These cards will say various things such as, "your dog has broken
a leg, move back twenty miles", or "you have found an extra bundle of food on the trail,
move ahead twelve miles".
the trail on their own maps and on a large map on the classroom bulletin board.
4. Each afternoon, students will have an opportunity to receive another card if they got
their homework done on time that day. This card could be good or bad, but the students
get to decide if they want to take it.
5. This activity will be incorporated into language arts. The students will be keeping a
race journal. As they play this game they can write their feelings about the race in the
journal as if they were an actual racer.
6. This game will also be incorporated into math. Students will need to do calculations to
play the game correctly. They will also discover how to find the mean, median and mode
using the game (a short worked example follows).
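A short worked example of the statistics mentioned in point 6, using Python's standard library; the daily mileages below are made up for illustration:

```python
from statistics import mean, median, mode

# Hypothetical miles a student's team travels each day of the classroom race.
daily_miles = [52, 40, 52, 61, 35, 52, 48]

print("mean:", round(mean(daily_miles), 1))  # average distance per day -> 48.6
print("median:", median(daily_miles))        # middle value when sorted -> 52
print("mode:", mode(daily_miles))            # most common distance     -> 52
```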
1. The students will begin making their maps of Alaska for the Iditarod game. The
outline of the map of Alaska will be projected on the overhead so the students have
something to follow when they draw. Copies of the outline of this map will be available
for students to trace if they do not want to draw the map freehand.
2. The students can use crayons or colored pencils to make their maps on.
3. The trail outline and checkpoints will be labeled on the overhead map, but the students
will need to research how many miles are in between each checkpoint in a later class period.
1. Read Psalm 8 one more time and end in prayer, thanking God for His creativity that
is evident in all of creation, especially as it has been seen in Alaska today.
1. Students can do more research about the real Iditarod race on the Internet.
2. Students can read one of the many books about Alaska set up in the learning center.
3. Students can complete any activity set up in the learning center, including: math
story problems, language arts writing activities, and social studies and science activities.
1. Observe how much students participate in the lesson. Have one teacher walk
around with a checklist and put checks by the names of the students who are
on task and participating by sharing, asking questions, and listening diligently.
2. Observe how diligently students work on their maps. Check the next day to see
if they have completed them. Give them a check if they are finished and the work is done well.
Is this bone a Neanderthal flute?
Cave Bear femur fragment from Slovenia, 43+kya
DOUBTS AIRED OVER NEANDERTHAL BONE 'FLUTE'
(AND REPLY BY MUSICOLOGIST BOB FINK)
Science News 153 (April 4, 1998): 215.
By B. Bower
Amid much media fanfare, a research team in 1996 trumpeted an ancient, hollowed-out bear bone pierced on one side with four complete or partial holes as the earliest known musical instrument. The perforated bone, found in an Eastern European cave, represents a flute made and played by Neandertals at least 43,000 years ago, the scientists contended.
Now it's time to stop the music, say two archaeologists who examined the purported flute last spring. On closer inspection, the bone appears to have been punctured and gnawed by the teeth of an animal -- perhaps a wolf -- as it stripped the limb of meat and marrow, report April Nowell and Philip G. Chase, both of the University of Pennsylvania in Philadelphia. "The bone was heavily chewed by one or more carnivores, creating holes that became more rounded due to natural processes after burial," Nowell says. "It provides very weak evidence for the origins of [Stone Age] music." Nowell presented the new analysis at the annual meeting of the Paleoanthropology Society in Seattle last week.
Nowell and Chase examined the bone with the permission of its discoverer, Ivan Turk of the Slovenian Academy of Sciences in Ljubljana (S.N.: 11/23/96, p. 328). Turk knows of their conclusion but still views the specimen as a flute.
Both open ends of the thighbone contain clear signs of gnawing by carnivores, Nowell asserts. Wolves and other animals typically bite off nutrient-rich tissue at the ends of limb bones and extract available marrow. If Neandertals had hollowed out the bone and fashioned holes in it, animals would not have bothered to gnaw it, she says.
Complete and partial holes on the bone's shaft were also made by carnivores, says Nowell. Carnivores typically break open bones with their scissor-like cheek teeth. Uneven bone thickness and signs of wear along the borders of the holes, products of extended burial in the soil, indicate that openings made by cheek teeth were at first less rounded and slightly smaller, the researchers hold.
Moreover, the simultaneous pressure of an upper and lower tooth produced a set of opposing holes, one partial and one complete, they maintain.
Prehistoric, carnivore-chewed bear bones in two Spanish caves display circular punctures aligned in much the same way as those on the Slovenian find. In the March Antiquity, Francesco d'Errico of the Institute of Quaternary Prehistory and Geology in Talence, France, and his colleagues describe the Spanish bones.
In a different twist, Bob Fink, an independent musicologist in Canada, has reported
on the Internet
(http://www.webster.sk.ca/greenwich/fl-compl.htm) that the spacing of the two complete and two partial holes on the back of the Slovenian bone conforms to musical notes on the diatonic (do, re, mi. . .) scale.
The bone is too short to incorporate the diatonic scale's seven notes, counter Nowell and Chase. Working with Pennsylvania musicologist Robert Judd, they estimate that the find's 5.7-inch length is less than half that needed to cover the diatonic spectrum. The recent meeting presentation is "a most convincing analysis," comments J. Desmond Clark of the University of California, Berkeley, although it's possible that Neandertals blew single notes through carnivore-chewed holes in the bone.
"We can't exclude that possibility," Nowell responds. "But it's a big leap of faith to conclude that this was an intentionally constructed flute."
TO THE EDITOR, SCIENCE NEWS (REPLY BY BOB FINK, May 1998)
(See an update of this discussion on Bob Fink's web site, November 2000)
The doubts raised by Nowell and Chase (April 4th, DOUBTS AIRED OVER NEANDERTHAL BONE 'FLUTE'), saying the Neanderthal bone is not a flute, have these weaknesses:
The alignment of the holes -- all in a row, and all of equivalent diameter -- appears to be contrary to most teeth marks, unless some holes were made independently by several animals. The latter case boggles the odds of the holes ending up in line. It would also be strange that animals homed in on this one bone in a cave full of bones, where no reports of similarly chewed bones have been made.
This claim is harder to believe when it is calculated that the chances for holes to
be arranged, by chance, in a pattern that matches the spacings of 4 notes of a
diatonic flute are only one in hundreds.
The analysis I made on the Internet (http://www.webster.sk.ca/greenwich/fl-compl.htm) regarding the bone being capable of matching 4 notes of the do, re, mi (diatonic) scale included the possibility that the bone was extended with another bone "mouthpiece" sufficiently long to make the notes sound fairly in tune. While Nowell says "it's a big leap of faith to conclude that this was an intentionally constructed flute," it's a bigger leap of faith to accept the immense coincidence that animals blindly created a hole-spacing pattern with holes all in line (in what clearly looks like so many other known bone flutes which are made to play notes in a step-wise scale) and blindly create a pattern that also could play a known acoustic scale if the bone was extended. That's too much coincidence for me to accept. It is more likely that it is an intentionally made flute, although admittedly with only the barest of clues regarding its original condition.
The 5.7 inch figure your article quoted appears erroneous, as the centimeter scale provided by its discoverer, Ivan Turk, indicates the artifact is about 4.3 inches long. However, the unbroken femur would originally have been about 8.5 inches, and the possibility of an additional hole or two exists, to complete a full scale, perhaps aided by the possible thumbhole. However, the full diatonic spectrum is not required as indicated by Nowell and Chase: It could also have been a simpler (but still diatonic) 4 or 5 note scale. Such short-scale flutes are plentiful in homo sapiens history.
Finally, a worn-out or broken flute bone can serve as a scoop for manipulation of food, explaining why animals might chew on its ends later. It is also well-known that dogs chase and maul even sticks, despite their non-nutritional nature. What appears "weak" is not the case for a flute, but the case against it by Nowell and Chase.
Letter to the Editor: Antiquity Journal:
"A Bone to Pick"
By Bob Fink
I have a bone to pick with Francesco d'Errico's viewpoint in the March issue of Antiquity (article too long to reproduce here) regarding the Neanderthal flute found in Slovenia by Ivan Turk. D'Errico argues the bone artifact is not a flute.
D'Errico omits dealing with the best evidence that this bone find is a flute.
Regarding the most important evidence, that of the holes being lined up, neither d'Errico nor Turk makes mention of it.
This line-up is remarkable especially if they were made by more than one carnivore, which apparently they'd have to be, based on Turk's analysis of the center-spans of the holes precluding their being made by a single carnivore or bite (Turk,* pp.171-175). To account for this possible difficulty, some doubters do mention "one or more" carnivores (Chase & Nowell, Science News 4/4/98).
My arguments over the past year pointed out the mathematical odds of the lining up of the holes occurring by chance-chewing are too difficult to believe.
The Appendix in my essay ("Neanderthal Flute -- A Musicological Analysis") proves that the number of ways a set of 4 random holes could be differently spaced (to produce an audibly different set of tones) is 680. The chances a random set would match the existing fragment's spacing [which also could produce a match to four diatonic notes of the scale] are therefore only one in hundreds. If, in calculating the odds, you also allowed the holes to be out of line, or to be less than 4 holes as well, then the chance of a line-up match is only one from many tens of thousands.
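For the record, the arithmetic behind "one in hundreds" (the count of 680 is Fink's own figure; only the division below is added here): if all 680 spacing patterns were equally likely, the probability that chance chewing reproduces the existing fragment's spacing would be

$$P \;=\; \frac{1}{680} \;\approx\; 0.0015,$$

roughly one chance in seven hundred.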
And yet randomness and animal bites still are acceptable to account for holes being in line that could also play some notes of the scale? This is too much coincidence for me to believe occurred by chance.
D'Errico mentions my essay in his article and what he thought it was about, but he overstates my case into being a less believable one. My case simply was that if the bone was long enough (or a shorter bone extended by a mouthpiece insert) then the 4 holes would be consistent and in tune with the sounds of Do, Re, Mi, Fa (or flat Mi, Fa, Sol, and flat La in a minor scale).
In the 5 points I list below, extracted from Turk's monograph in support of this being a flute, d'Errico omits dealing with much of the first, and all of the second, fourth and sixth points.
Turk & Co's monograph shows the presence on site of boring tools, and includes experiments made by Turk's colleague Giuliano Bastiani, who successfully produced similar holes in fresh bone using tools of the type found at the site (pp. 176-78 Turk).
They also wrote (pp. 171-75) that:
1. The center-to-center distances of the holes in the artifact are smaller than the tooth spans of most carnivores. The smallest tooth spans they found were 45 mm, and the holes on the bone are 35 mm (or less) apart;
2. Holes bitten are usually at the ends of bones rather than in the center of them;
3. There is an absence of dents, scratches and other signs of gnawing and counter-bites on the artifact;
4. The center-to-center distances do not correspond to the spans of carnivores which could pierce the bone;
5. The diameters of the holes are greater than that producible by a wolf exerting the greatest jaw pressure it had available -- it's doubtful that a wolf's jaws would be strong enough (like a hyena's) to have made the holes, especially in the thickest part of the wall of the artifact.
6. If you accept one or more carnivores, then why did they over-target one bone, when there were so many other bones in the cave site? Only about 4.5% of the juvenile bones were chewed or had holes, according to Turk (p. 117).
* Turk, Ivan (ed.) (1997). Mousterian Bone Flute. Znanstvenoraziskovalni center SAZU, Ljubljana, Slovenia.
[Image: Latin inscription in the Colosseum]

Spoken in: Roman republic, Roman empire
Region: mare nostrum (Mediterranean)
Era: 75 BC to the 3rd century AD, when it developed into Late Latin
Writing system: Latin alphabet
Official language in: Roman republic, Roman empire
Regulated by: Schools of grammar and rhetoric

[Map: The range of Latin, 60 AD]
Classical Latin in simplest terms is the socio-linguistic register of the Latin language regarded by the enfranchised and empowered populations of the late Roman republic and the Roman empire as good Latin. Most writers during this time made use of it. Any unabridged Latin dictionary informs moderns that Marcus Tullius Cicero and his contemporaries of the late republic, while using lingua Latina and sermo Latinus to mean the Latin language as opposed to Greek or other languages, and sermo vulgaris or sermo vulgi to refer to the vernacular of the uneducated masses, regarded the speech they valued most and in which they wrote as Latinitas, "Latinity", with the implication of good. Sometimes it is called sermo familiaris, "speech of the good families", sermo urbanus, "speech of the city", or rarely sermo nobilis, "noble speech", but mainly, besides Latinitas, it was called Latine (adverb), "in good Latin", or Latinius (its comparative), "in better Latin."
Latinitas was spoken as well as written. Moreover, it was the language taught by the schools. Prescriptive rules therefore applied to it, and where a special subject was concerned, such as poetry or rhetoric, additional rules applied as well. Now that spoken Latinitas has become extinct (in favor of various later registers), the rules of its mostly polished (politus) texts may give the appearance of an artificial language; but Latinitas was a form of sermo, or spoken language, and as such retains a spontaneity. No authors are noted for the type of rigidity evidenced by stylized art, except possibly the repetitious abbreviations and stock phrases of inscriptions.
Good Latin in philology is "classical" Latin literature. The term refers to the canonicity of works of literature written in Latin in the late Roman republic and the early to middle Roman empire: "that is to say, that of belonging to an exclusive group of authors (or works) that were considered to be emblematic of a certain genre." The term classicus (masculine plural classici) was devised by the Romans themselves to translate Greek ἐγκριθέντες (egkrithentes), "select", referring to authors who wrote in Greek that were considered model. Before then, classis, in addition to being a naval fleet, was a social class in one of the diachronic divisions of Roman society according to property ownership by the Roman constitution. The word is a transliteration of Greek κλῆσις (klēsis) "calling", used to rank army draftees by property from first to fifth class.
Classicus is anything primae classis, "first class", such as the authors of the polished works of Latinitas, or sermo urbanus. It had nuances of the certified and the authentic: testis classicus, "reliable witness." It was in this sense that Marcus Cornelius Fronto (an African-Roman lawyer and language teacher) in the 2nd century AD used scriptores classici, "first-class" or "reliable authors" whose works could be relied upon as model of good Latin. This is the first known reference, possibly innovated at this time, to classical applied to authors by virtue of the authentic language of their works.
In imitation of the Greek grammarians, the Roman ones, such as Quintilian, drew up lists termed indices or ordines on the model of the Greek lists, termed pinakes, considered classical: the recepti scriptores, "select writers." Aulus Gellius includes many authors, such as Plautus, who are currently considered writers of Old Latin and not strictly in the period of classical Latin. The classical Romans distinguished Old Latin as prisca Latinitas and not sermo vulgaris. Each author (and work) in the Roman lists was considered equivalent to one in the Greek; for example Ennius was the Latin Homer, the Aeneid was a new Iliad, and so on. The lists of classical authors were as far as the Roman grammarians went in developing a philology. The topic remained at that point while interest in the classici scriptores declined in the medieval period as the best Latin yielded to medieval Latin, somewhat less than the best by classical standards.
The Renaissance brought a revival of interest in restoring as much of Roman culture as could be restored and with it the return of the concept of classic, "the best." Thomas Sebillet in 1548 (Art Poétique) referred to "les bons et classiques poètes françois", meaning Jean de Meun and Alain Chartier, which was the first modern application of the word. According to Merriam Webster's Collegiate Dictionary, the term classical, from classicus, entered modern English in 1599, some 50 years after its re-introduction on the continent. Governor William Bradford in 1648 referred to synods of a separatist church as "classical meetings" in his Dialogue, a report of a meeting between New-England-born "young men" and "ancient men" from Holland and England. In 1715 Laurence Echard's Classical Geographical Dictionary was published. In 1736 Robert Ainsworth's Thesaurus Linguae Latinae Compendarius turned English words and expressions into "proper and classical Latin." In 1768 David Ruhnken (Critical History of the Greek Orators) recast the mold of the view of the classical by applying the word canon to the pinakes of orators, after the Biblical canon or list of authentic books of the Bible. Ruhnken had a kind of secular catechism in mind.
In 1870 Wilhelm Sigismund Teuffel in Geschichte der Römischen Literatur (A History of Roman Literature) innovated the definitive philological classification of classical Latin based on the metaphoric uses of the ancient myth of the Ages of Man, a practice then universally current: a Golden Age and a Silver Age of classical Latin were to be presumed. The practice and Teuffel's classification, with modifications, are still in use. His work was translated into English as soon as published in German by Wilhelm Wagner, who corresponded with Teuffel. Wagner published the English translation in 1873. Teuffel divides the chronology of classical Latin authors into several periods according to political events, rather than by style. Regarding the style of the literary Latin of those periods he had but few comments.
Teuffel was to go on with other editions of his history, but meanwhile it had come out in English almost as soon as it did in German and found immediate favorable reception. In 1877 Charles Thomas Cruttwell produced the first English work along the same lines. In his Preface he refers to "Teuffel's admirable history, without which many chapters in the present work could not have attained completeness" and also gives credit to Wagner.
Cruttwell adopts the same periods with minor differences; however, where Teuffel's work is mainly historical, Cruttwell's work contains detailed analyses of style. Nevertheless like Teuffel he encounters the same problem of trying to summarize the voluminous detail in a way that captures in brief the gist of a few phases of writing styles. Like Teuffel, he has trouble finding a name for the first of the three periods (the current Old Latin phase), calling it mainly "from Livius to Sulla." The language, he says, is "…marked by immaturity of art and language, by a vigorous but ill-disciplined imitation of Greek poetical models, and in prose by a dry sententiousness of style, gradually giving way to a clear and fluent strength…" These abstracts have little meaning to those not well-versed in Latin literature. In fact, Cruttwell admits "The ancients, indeed, saw a difference between Ennius, Pacuvius, and Accius, but it may be questioned whether the advance would be perceptible by us."
Some of Cruttwell's ideas have become stock in Latin philology for better or for worse. While praising the application of rules to classical Latin, most intensely in the Golden Age, he says "In gaining accuracy, however, classical Latin suffered a grievous loss. It became cultivated as distinct from a natural language… Spontaneity, therefore, became impossible and soon invention also ceased… In a certain sense, therefore, Latin was studied as a dead language, while it was still a living." These views are certainly debatable; one might ask how the upper classes of late 16th century Britain, who shared the Renaissance zealousness for the classics, managed to speak spontaneous Latin to each other officially and unofficially after being taught classical Latin by tutors hired for the purpose. Latinitas in the Golden Age was in fact sermo familiaris, the spoken Latin of the Roman upper classes, who sent their children to school to learn it. The debate continues.
A second problem is the appropriateness of Teuffel's scheme to the concept of classical Latin, which Teuffel does not discuss. Cruttwell addresses the problem, however, altering the concept of the classical. As the best Latin is defined as golden Latin, the second of the three periods, the other two periods considered classical are left hanging. While on the one hand assigning to Old Latin the term pre-classical and by implication the term post-classical (or post-Augustan) to silver Latin Cruttwell realizes that this construct is not according to ancient usage and asserts "…the epithet classical is by many restricted to the authors who wrote in it [golden Latin]. It is best, however, not to narrow unnecessarily the sphere of classicity; to exclude Terence on the one hand or Tacitus and Pliny on the other, would savour of artificial restriction rather than that of a natural classification." (This from a scholar who had just been complaining that golden Latin was not a natural language.) The contradiction remains; Terence is and is not a classical author depending on context.
After defining a "First Period" of inscriptional Latin and the literature of the earliest known authors and fragments, to which he assigns no definitive name (he does use the term "Old Roman" at one point), Teuffel presents "the second period", his major, "das goldene Zeitalter der römischen Literatur", the Golden Age of Roman Literature, dated 671 – 767 AUC or 83 BC – 14 AD according to his time reckoning, between the dictatorship of Lucius Cornelius Sulla and the death of the emperor Augustus. Of it Wagner translating Teuffel writes
The golden age of the Roman literature is that period in which the climax was reached in the perfection of form, and in most respects also in the methodical treatment of the subject-matters. It may be subdivided between the generations, in the first of which (the Ciceronian Age) prose culminated, while poetry was principally developed in the Augustan Age.
The Ciceronian Age was dated 671–711 AUC (83 BC – 43 BC), ending just after the assassination of Gaius Julius Caesar, and the Augustan 711–767 AUC (43 BC – 14 AD), ending with the death of Augustus. The Ciceronian Age is further divided by the consulship of Cicero in 691 AUC or 63 BC into a first and second half. Authors are assigned to these periods by years of principal achievements.
The Golden Age had already made an appearance in German philology but in a less systematic way. In Bielfeld's 1770 Elements of universal erudition the author says (in translation): "The Second Age of Latin began about the time of Caesar [his ages are different from Teuffel's], and ended with Tiberius. This is what is called the Augustan Age, which was perhaps of all others the most brilliant, a period at which it should seem as if the greatest men, and the immortal authors, had met together upon the earth, in order to write the Latin language in its utmost purity and perfection." and of Tacitus "…his conceits and sententious style is not that of the golden age…". Teuffel evidently received the ideas of a golden and silver Latin from an existing tradition and embedded them in a new system, transforming them as he thought best.
In Cruttwell's introduction, the Golden Age is dated 80 BC – 14 AD ("from Cicero to Ovid"), which is about the same as Teuffel's. Of this "Second Period" Cruttwell says that it "represents the highest excellence in prose and poetry," paraphrasing Teuffel. The Ciceronian Age is now "the Republican Period" and is dated 80–42 BC through the Battle of Philippi. Later in the book Cruttwell omits Teuffel's first half of the Ciceronian and starts the Golden Age at Cicero's consulship of 63 BC, an error perpetuated into Cruttwell's second edition as well. He must mean 80 BC as he includes Varro in Golden Latin. Teuffel's Augustan Age is Cruttwell's Augustan Epoch, 42 BC – 14 AD.
The literary histories list all authors canonical to the Ciceronian Age even though their works may be fragmentary or may not have survived at all. With the exception of a few major writers, such as Cicero, Caesar, Lucretius and Catullus, ancient accounts of Republican literature are glowing accounts of jurists and orators who wrote prolifically but who now can't be read because their works have been lost, or analyses of language and style that appear insightful but can't be verified because there are no surviving instances. In that sense the pages of literary history are peopled with shadows: Aquilius Gallus, Quintus Hortensius Hortalus, Lucius Licinius Lucullus and many others who left a reputation but no readable works; they are to be presumed in the Golden Age by their associations. A list of some canonical authors of the period, whose works have survived in whole or in part (typically in part, some only short fragments) is as follows:
The Golden Age is divided by the assassination of Julius Caesar. In the wars that followed the Republican generation of literary men was lost, as most of them had taken the losing side; Marcus Tullius Cicero was beheaded in the street as he enquired from his litter what the disturbance was. They were replaced by a new generation that had grown up and been educated under the old and were now to make their mark under the watchful eye of the new emperor. As the demand for great orators was more or less over, the talent shifted emphasis to poetry. Other than the historian Livy, the most remarkable writers of the period were the poets Vergil, Horace, and Ovid. Although Augustus evidenced some toleration to republican sympathizers, he exiled Ovid, and imperial tolerance ended with the continuance of the Julio-Claudian Dynasty.
Augustan writers include the poets Vergil, Horace and Ovid, and the historian Livy, among others.
In his second volume, on the Imperial Period, Teuffel initiated a slight alteration in approach, making it clearer that his terms applied to the Latin and not just to the age, and also changing his dating scheme from years AUC to modern. Although he introduces das silberne Zeitalter der römischen Literatur, "the Silver Age of Roman Literature", 14–117 AD, from the death of Augustus to the death of Trajan, he also mentions regarding a section of a work by Seneca the Elder a wenig Einfluss der silbernen Latinität, a "slight influence of silver Latin." It is clear that he had shifted in thought from golden and silver ages to golden and silver Latin, and not just Latin, but Latinitas, which must at this point be interpreted as classical Latin. He may have been influenced in that regard by one of his sources, E. Opitz, who in 1852 had published a title specimen lexilogiae argenteae latinitatis, mentioning silver Latinity. Although Teuffel's First Period was equivalent to Old Latin and his Second Period was equal to the Golden Age, his Third Period, die römische Kaiserheit, encompasses both the Silver Age and the centuries now termed Late Latin, in which the forms seemed to break loose from their foundation and float freely; that is, literary men appeared uncertain as to what "good Latin" should mean. The last of the Classical Latin is the Silver Latin. The Silver Age is the first of the Imperial Period and is divided into die Zeit der julischen Dynastie, 14–68; die Zeit der flavischen Dynastie, 69–96; and die Zeit des Nerva und Trajan, 96–117. Subsequently Teuffel goes over to a century scheme: 2nd, 3rd, etc., through 6th. His later editions (which came out in the rest of the late 19th century) divide the Imperial Age into parts: the 1st century (Silver Age), the 2nd century: Hadrian and the Antonines and the 3rd through the 6th Centuries. Of the Silver Age proper, pointing out that anything like freedom of speech had vanished with Tiberius, Teuffel says
…the continual apprehension in which men lived caused a restless versatility… Simple or natural composition was considered insipid; the aim of language was to be brilliant… Hence it was dressed up with abundant tinsel of epigrams, rhetorical figures and poetical terms… Mannerism supplanted style, and bombastic pathos took the place of quiet power.
The content of new literary works was continually proscribed by the emperor (by executing or exiling the author), who also played the role of literary man (typically badly). The talent therefore went into a repertory of new and dazzling mannerisms, which Teuffel calls "utter unreality." Cruttwell picks up this theme:
The foremost of these [characteristics] is unreality, arising from the extinction of freedom… Hence arose a declamatory tone, which strove by frigid and almost hysterical exaggeration to make up for the healthy stimulus afforded by daily contact with affairs. The vein of artificial rhetoric, antithesis and epigram… owes its origin to this forced contentment with an uncongenial sphere. With the decay of freedom, taste sank…
In Cruttwell's view (which had not been expressed by Teuffel), Silver Latin was a "rank, weed-grown garden", a "decline." Cruttwell had already decried what he saw as a loss of spontaneity in Golden Latin. That Teuffel should regard the Silver Age as a loss of natural language and therefore of spontaneity, implying that the Golden Age had it, is passed without comment. Instead, Tiberius brought about a "sudden collapse of letters." The idea of a decline had been dominant in English society since Edward Gibbon's Decline and Fall of the Roman Empire. Once again, Cruttwell evidences some unease with his stock pronouncements: "The Natural History of Pliny shows how much remained to be done in fields of great interest." The idea of Pliny as a model is not consistent with any sort of decline; moreover, Pliny did his best work under emperors at least as tolerant as Augustus had been. To include some of the best writings of the Silver Age, Cruttwell found he had to extend the period through the death of Marcus Aurelius, 180 AD. The philosophic prose of that good emperor was in no way compatible with either Teuffel's view of unnatural language or Cruttwell's depiction of a decline. Having created these constructs, the two philologists found they could not entirely justify them; apparently, in the worst implications of their views, there was no classical Latin by the ancient definition at all and some of the very best writing of any period in world history was a stilted and degenerate unnatural language.
Writers of the Silver Age include Seneca, Lucan, Quintilian, Martial, Juvenal, Pliny the Elder, Pliny the Younger and Tacitus, among others.
Of the additional century granted by Cruttwell and others of his point of view to Silver Latin but not by Teuffel, the latter says "The second century was a happy period for the Roman State, the happiest indeed during the whole Empire… But in the world of letters the lassitude and enervation, which told of Rome's decline, became unmistakeable… its forte is in imitation." Teuffel, however, excepts the jurists; others find other "exceptions," recasting Teuffel's view.
The style of language refers to repeatable features of speech that are somewhat less general than the fundamental characteristics of the language. The latter give it a unity allowing it to be referenced under a single name. Thus Old Latin, Classical Latin, Vulgar Latin, etc., are not considered different languages, but are all referenced under the name of Latin. This is an ancient practice continued by moderns rather than a philological innovation of recent times. That Latin had case endings is a fundamental feature of the language. Whether a given form of speech prefers to use prepositions such as ad, ex, de for "to", "from" and "of" rather than simple case endings is a matter of style. Latin has a large number of styles. Each and every author has a style, which typically allows his prose or poetry to be identified by experienced Latinists. The problem of comparative literature has been to group styles finding similarities by period, in which case one may speak of Old Latin, Silver Latin, Late Latin as styles or a phase of styles.
The ancient authors themselves first defined style by recognizing different kinds of sermo, or "speech." In making the value judgement that classical Latin was "first class" and that it was better to write with Latinitas, they were themselves selecting the literary and upper-class language of the city as a standard style; all sermo that differed from it was a different style. Thus in rhetoric Cicero was able to define sublime, intermediate, and low styles (within classical Latin), and St. Augustine to recommend the low style for sermons (from sermo). Style, therefore, is to be defined by differences in speech from a standard. Teuffel defined that standard as Golden Latin.
Born in 1940, Wangari Maathai is a Kenyan ecologist and environmental activist who founded the Green Belt Movement in 1977, causing the media to depict her as a latter-day Johnny Appleseed who has planted millions of trees in Africa. (The Green Belt Movement has been responsible for the planting of more than 10 million trees to prevent soil erosion and provide a source of firewood.)
As a member of the Green Belt Movement, Maathai has led sub-Saharan African women in provoking sometimes-violent clashes with police. Though casting herself as a hero of the downtrodden, she has demonstrated against peasants’ economic interests. When Kenyan autocratic leader Daniel arap Moi wanted to revive the nation’s dead economy by building the world’s largest skyscraper in the capital, her riotous actions dried up investment. Later, she led a protest to prevent “small-scale farming” on African forestland and called farmers “invaders” who were guilty of “rape.” In 1992, she and the women in her Green Belt Movement foreshadowed contemporary Western antiwar demonstrators by staging a public strip-in.
In 2004 she won the Nobel Peace Prize for her work in “human rights” and “reversing deforestation across Africa.”
When Maathai was awarded her Nobel Prize, United Nations Secretary-General Kofi Annan paid her a glowing tribute:
“Renowned and admired throughout her native Kenya and across Africa for her pioneering struggle against deforestation and for women’s rights and democracy, Ms. Maathai has also played an important role at UN conferences such as the Earth Summit, making an imprint on the global quest for sustainable development.... Selfless and steadfast, Ms. Maathai has been a champion of the environment, of women, of Africa, and of anyone concerned about our future security.”
Maathai is also an anti-white, anti-Western crusader for international socialism. She charges that “some sadistic [white] scientists” created the AIDS virus “to punish blacks” and, ultimately, “to wipe out the black race.” Maathai continues:
“Some say that AIDS came from the monkeys, and I doubt that, because we have been living with monkeys [since] time immemorial; others say it was a curse from God, but I say it cannot be that.... Us black people are dying more than any other people in this planet. It’s true that there are some people who create agents to wipe out other people.”
“Why is the rest of the world just watching,” Maathai asks, “doing nothing while Africans are being wiped out? The rest of the world has abandoned us.”
There is, of course, a very real genocide throughout sub-Saharan Africa, as Muslim Arabs murder indigenous black Christians and animists, 100,000 in Darfur alone. The repeated rape of young black boys by Arabs is now commonplace. These scenes first played out during the genocide in Rwanda, which began early in the Clinton administration, and have been seen all over the sub-continent for a decade. Maathai addressed this brutality at the World Women’s Conference in Beijing in 1995, where she blamed it on Western capitalists. She claims that Western governments laid the groundwork for present slaughter during the Cold War. “The carnage goes on in Somalia, Rwanda, Liberia and in the streets of many cities,” she says. “People of Africa continue to be sacrificed so that some factories may stay open, earn capital and save jobs.”
Thus in Maathai’s view, Arab genocide is the fault of wealthy whites.
Maathai has courted global socialism through her long association with the United Nations’ environmentalist agenda. She was a member of the Commission on Global Governance (CGG), founded in 1992 at the suggestion of former West German Chancellor and socialist Willy Brandt. Maathai worked on the CGG alongside Maurice Strong, Jimmy Carter, and Robert McNamara. The group’s manifesto, “Our Global Neighborhood,” calls for a dramatic reordering of the world’s political power – and redistribution of the world’s wealth.
Most importantly, the CGG’s proposals would phase out America’s veto in the Security Council. At the same time, the CGG would increase UN authority over member nations, declaring, “All member-states of the UN that have not already done so should accept the compulsory jurisdiction of the World Court.” It asks the UN to prevail upon member governments to enact proposals made by a wide range of NGOs – such as the Green Belt Movement. “Our Global Neighborhood” also suggested creating a 10,000-man “UN Volunteer Force” to be deployed with the UN’s approval on indefinite peacekeeping missions everywhere (except Iraq).
Maathai currently acts as a commissioner for the Earth Charter, along with the aforementioned Maurice Strong, Mikhail Gorbachev and Steven Rockefeller. She is also on the Earth Charter’s Steering Committee. In addition to calling for sharing the “benefits of development . . . equitably,” the Earth Charter calls on international bodies to “Promote the equitable distribution of wealth within nations and among nations.” Another Charter provision would disarm the entire world and use the money previously allocated for national defense to restore the environment. Additionally, the Earth Charter worries about the “unprecedented rise in human population,” and demands “universal access to health care.”
Maathai earned her biology degree from Mount St. Scholastica College in Kansas and a Master’s degree at the University of Pittsburgh. She later returned to Kenya and worked in veterinary medical research at the University of Nairobi, eventually earning a Ph.D. there and becoming head of the veterinary medicine faculty. | <urn:uuid:fae983e5-45da-44d4-86c0-5ccc7cb210d8> | CC-MAIN-2013-20 | http://discoverthenetworks.com/printindividualProfile.asp?indid=2007 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.955895 | 1,242 | 2.84375 | 3 |
Now he’s alerted me to a new study and related lecture on what he and his co-authors are calling “peak farmland” — an impending stabilization of the amount of land required for food as humanity’s growth spurt plays out. While laying out several important wild cards (expanded farming of biofuels among them), Ausubel and his co-authors see a reasonable prospect for conserving, and restoring, forests and other stressed terrestrial ecosystems even as humanity exerts an ever greater influence on the planet.
The study, “Peak Farmland and the Prospects for Sparing Nature,” is by Ausubel, Iddo K. Wernick and Paul E. Waggoner and will be published next year as part of a special supplement to the journal Population and Development Review, published by the Population Council.
Drawing on a host of data sets, the authors conclude that a combination of slowing population growth, moderated demand for land-intensive food (meat, for instance) and more efficient farming methods have resulted in a substantial “decoupling” of acreage and human appetites.
Here’s the optimistic opener:
Expecting that more and richer people will demand more from the land, cultivating wider fields, logging more forests, and pressing nature, comes naturally. The past half-century of disciplined and dematerializing demand and more intense and efficient land use encourage a rational hope that humanity’s pressure will not overwhelm nature.
Ausubel will describe the findings in a talk during a daylong symposium at his university on Tuesday honoring Paul Demeny, who at age 80 is stepping down as editor of the journal.
Ausubel’s prepared remarks are online. In his talk, he explains that while the common perception is that meeting humanity’s food needs is the task of farmers, there are many other players, including those of us who can choose what to eat and how many children to have:
[T]he main actors are parents changing population, workers changing affluence, consumers changing the diet (more or less calories, more or less meat) and also the portion of crops entering the food supply (corn can fuel people or cars), and farmers changing the crop production per hectare of cropland (yield).
The new paper builds on a long string of studies by Ausubel and others, including the 2001 paper “How Much Will Feeding More and Wealthier People Encroach on Forests?” Also relevant is “Restoring the Forests,” a 2000 article in Foreign Affairs co-written by Ausubel and David G. Victor (now at the University of California, San Diego).
This body of analysis is closely related to the core focus of this blog: finding ways to fit infinite human aspirations (and appetites) on a finite planet. The work presents a compelling case for concentrating agriculture through whatever hybrid mix of means — technological or traditional — that best fits particular situations, but also fostering moderation in consumption.
Here’s an excerpt from the paper’s conclusion, which notes the many wild cards that make the peak farmland scenario still only a plausible, and hardly inevitable, future:
[W]ild cards remain part of the game, both for and against land sparing. As discussed, the wild card of biofuels confounded expectations for the past 15 years. Most wild cards probably will continue to come from consumers. Will people choose to eat much more meat? If so, will it be beef, which requires more land than poultry and fish, which require less? Will people become vegetarian or even vegan? But if they become vegan, will they also choose clothing made from linen, hemp, and cotton, which require hectares? Will the average human continue to grow taller and thus require more calories? Will norms of beauty accept obesity and thus high average calories per capita? Will a global population with a median age of 40 eat less than one with a median age of 28? Will radical innovations in food production move humanity closer to landless agriculture (Ausubel 2010)? Will hunger or international investment encourage cropland expansion in Africa and South America? (Cropland may, of course, shrink in some countries while expanding in others as the global sum declines.) And will time moderate the disparities cloaked within global averages, in particular disparities of hunger and excess among regions and individuals?
Allowing for wild cards, we believe that projecting conservative values for population, affluence, consumers, and technology shows humanity peaking in the use of farmland. Over the next 50 years, the prospect is that humanity is likely to release at least 146 mHa [146 million hectares, or 563,710 square miles], one and a half times the size of Egypt, two and a half times that of France, or ten Iowas, and possibly multiples of this amount.
Notwithstanding the biofuels case, the trends of the past 15 years largely resemble those for the past 50 and 150. We see no evidence of exhaustion of the factors that allow the peaking of cropland and the subsequent restoration of nature.
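As a quick arithmetic check on those equivalences, here is a minimal back-of-the-envelope sketch. It is an editorial aside, not code from the study, and the reference areas for Egypt, France, and Iowa are approximate public figures I am assuming rather than values drawn from the paper:

```python
# Back-of-the-envelope check of the land-sparing equivalences quoted above.
# Reference areas are approximate public figures (assumptions), not values
# taken from the Ausubel, Wernick & Waggoner paper itself.

HA_PER_KM2 = 100.0        # 1 square kilometer = 100 hectares
KM2_PER_MI2 = 2.58999     # 1 square mile ~= 2.58999 square kilometers

released_ha = 146e6                          # 146 million hectares
released_km2 = released_ha / HA_PER_KM2      # 1.46 million km^2
released_mi2 = released_km2 / KM2_PER_MI2    # ~563,700 square miles

egypt_km2 = 1_002_450     # approximate area of Egypt
france_km2 = 551_695      # approximate area of metropolitan France
iowa_km2 = 145_746        # approximate area of Iowa

print(f"{released_mi2:,.0f} square miles")          # ~563,709
print(f"{released_km2 / egypt_km2:.1f} x Egypt")    # ~1.5
print(f"{released_km2 / france_km2:.1f} x France")  # ~2.6
print(f"{released_km2 / iowa_km2:.1f} x Iowa")      # ~10.0
```

The result matches the paper's 563,710 square miles to within rounding of the mile-to-kilometer conversion, and the Egypt, France, and Iowa multiples line up with the quoted comparisons.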
In an e-mail exchange today, I asked Ausubel about another issue touched on in the paper:
Looking around the planet, it’s clear from a biodiversity standpoint that all forests — or farming pressures — are not equal. For instance, in Southeast Asia, palm oil and orangutans are having a particularly hard time co-existing. So while the overall trend is great, do you see the need for maintaining a focus on particular “hot spots,” to use a term familiar in environmental circles?
So far, I don’t see lots of evidence that conservation campaigners (you are one on ocean resources) have found a way to accept this kind of good news and/or incorporate it in their prescriptions for sustaining a rich and variegated biological sheath on Earth. If you agree, any idea why?
Ausubel replied:

Indonesia is the number one place where letting the underlying trend work will not work fast enough. The list of threatened regions is quite well identified: parts of the central African forest, parts of the Amazon.
Some conservation groups have realized that the slow growth in demand for calories as well as pulp and paper are creating big chances to reserve or protect more land. In the right places, where crops are no longer profitable, some amounts of money can acquire large amounts of land for nature.
Conservation groups also ought to attend more to the ecological disaster called biofuels.
I encourage you to dig in on this paper and related work, which provides a useful guide for softening the human impact on a crowding planet. There’ll be plenty of losses, and surprises, but there are real prospects for sustaining a thriving, and peopled, orb.
6:57 p.m. | Addendum | For relevant work with somewhat different conclusions review the presentations from “Intensifying agriculture within planetary boundaries,” a session at the Planet Under Pressure conference in London last March. I’ll be adding links to other relevant analysis here. | <urn:uuid:d30effea-1f5f-4ff1-874d-1ea41dc97c53> | CC-MAIN-2013-20 | http://dotearth.blogs.nytimes.com/2012/12/17/scientists-see-promise-for-people-and-nature-in-peak-farmland/?n=Top%2FReference%2FTimes%20Topics%2FSubjects%2FA%2FAgriculture | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.931343 | 1,474 | 2.6875 | 3 |
Volume 4 Number 2
©The Author(s) 2002
The Continuity Framework: A Tool for Building Home, School, and Community Partnerships
Abstract

We will need to become savvy about how to build relationships, how to nurture growing, evolving things. All of us will need better skills in listening, communicating, and facilitating groups, because these are the talents that build strong relationships. (Wheatley, 1992, p. 38)
In the face of today's challenging social and family issues, many new efforts are underway to help children and families. One solution that many communities have adopted is the establishment of a collaborative partnership that involves all the relevant partners—home, school, and community—in the planning and monitoring of services for children. Unfortunately, achieving a strong partnership with meaningful participation can often be difficult and time-consuming. This article focuses on a set of training materials that has been developed to assist community partnerships in their efforts. These materials highlight eight elements of continuity and successful partnerships: (1) families as partners, (2) shared leadership, (3) comprehensive/responsive services, (4) culture and home language, (5) communication, (6) knowledge and skill development, (7) appropriate care and education, and (8) evaluation of partnership success. Results from a field study that included more than 200 reviewers and 8 pilot sites are summarized. Results indicate that a majority of reviewers found the training materials easy to understand, relevant to their work, and up-to-date. In addition, data gathered from the pilot sites indicate that the partnerships found the materials practical and useful for addressing a variety of issues, including time constraints, communication gaps, differences in professional training, and funding limitations.
Communities face a host of problems that threaten the health and well-being of their children and families. Poverty, unemployment, inadequate care/education, and poor health care are just a few of the difficult issues that communities must confront. What makes these issues particularly challenging is that children and families who experience one problem are often likely to experience other problems as well.
Compounding the problem is that delivery of services to help children and families is typically fragmented and scattered. Even efforts designed to increase the quality and supply of services to children and families have, at times, created greater fragmentation and discontinuity.
In previous years, those who sought to improve outcomes for children concentrated only on the child. Today, however, many service providers have come to understand that the best way to serve and preserve children is to serve and preserve the supportive networks that benefit children (Family Support America, 1996). An extensive body of research identifies the elements that contribute to children's well-being, beginning with those closest to the child and moving outward to encompass the family, early care/education, the neighborhood, the community, and beyond. This ecological perspective (Bronfenbrenner, 1979) has motivated a growing number of communities to focus more closely on the need for collaboration--engaging in a process that allows the community to address many problems at once rather than one at a time.
One solution that many communities have adopted is the establishment of a collaborative partnership involving all the relevant partners--home, school, and service providers--in the planning and monitoring of services for children (Kagan, 1992; Hoffman, 1991). The goal of most of these collaboration initiatives is to improve child outcomes, recognizing that many of the child's needs are closely linked to needs of the family and the community.
Challenges to Collaboration
Community collaboratives/partnerships represent one of the most challenging--yet one of the most effective--efforts for creating a flexible, comprehensive system that meets the needs of children and families. They involve new relationships among service providers and the children and families they serve. They require time, resources, and the willingness of collaborating agencies to learn about and establish trust with each other. In short, they require change (Bruner, Kunesh, & Knuth, 1992).
As a result of the new roles and responsibilities that service providers must assume, collaboratives/partnerships encounter many common difficulties, including (Melaville, Blank, & Asayesh, 1996):
- staff or agency representatives who are resistant to relinquishing power;
- policies and regulations within individual agencies that make it difficult to coordinate services, information, and resources;
- differences in prior knowledge, training, or experience that make it difficult for members to communicate and work together; and
- lack of time to meet and plan together.
Many factors contribute to the success or failure of a community collaborative, and no two collaboratives operate in exactly the same way. However, certain guidelines seem to help smooth the way for a more successful partnership, including (North Central Regional Educational Laboratory, 1993):
- involve all key stakeholders;
- establish a shared vision of how the partnership will operate and expected outcomes for the children and families served;
- build in ownership at all levels;
- establish communication and decision-making processes that are open and allow conflict to be addressed constructively;
- institutionalize changes through established policies, procedures, and program mandates; and
- provide adequate time for partners to meet, plan, and carry out activities.
The process of establishing and maintaining a collaborative partnership is not easy, and in the end, each partnership must find a way to proceed that is consistent with its community and unique set of circumstances. However, a number of resources and tools are available to help communities get started creating an effective system for delivering services. In this article, we describe one such tool that assembles elements essential to building a successful collaborative partnership.
Development of Continuity Framework Materials
For the past eight years, the 10 Regional Educational Laboratories (RELs) serving each region of the country have studied effective strategies for strengthening collaboration and increasing continuity among programs for young children and their families. The RELs are overseen by the U.S. Department of Education's Office of Educational Research and Improvement [now the Institute of Education Sciences], and their primary purpose is ensuring that those involved in educational improvement have access to the best information from research and practice. During the contract period of 1995-2000, the RELs established a program called the Laboratory Network Program (LNP), which convened representatives from each Laboratory as a national network working on common issues.
In 1995, the Early Childhood LNP developed Continuity in Early Childhood: A Framework for Home, School, and Community Linkages (U.S. Department of Education, 1995), a document designed with two key purposes in mind. First, it emphasized the need for children and families to receive comprehensive and responsive services, as reflected in the eight elements of continuity outlined in the Framework (see Figure 1). Taken together, the elements are intended to promote a comprehensive understanding of continuity and transition during early childhood. Second, the Framework offered a set of guidelines that partnerships could use to compare and assess their current policies and practices, as well as identify areas in need of improvement.
Figure 1. Elements of Continuity
(U.S. Department of Education, 1995)
An extensive field review of the Framework indicated that although the document was helpful and informative, many community partnerships continued to have difficulty "getting started." As a result, a Trainer's Guide was developed to support the use of the Framework and assist community partnerships in the first stages. These materials were developed by the Early Childhood LNP in collaboration with the National Center for Early Development & Learning.
The Trainer's Guide provides an overview of the content and potential uses of the Framework and includes all activities and materials necessary to conduct training sessions. The Guide itself consists of four training sessions that are organized around the eight elements of continuity. The materials are designed so that a local partnership has everything needed to conduct the training: background information, scripts, handouts, transparencies, sample agendas, and checklists for additional equipment and supplies:
- The first session, Understanding Continuity, is designed to introduce participants to the Framework document and help participants develop a greater understanding and appreciation for continuity.
- The second session, Developing a Continuity Team, highlights the importance of broad representation and shared leadership among partnership members.
- The third session, Planning for Continuity, emphasizes the need for a comprehensive approach to service delivery and encourages participants to examine their current partnership practices and policies.
- The final session, Formalizing Continuity, focuses on the importance of effective communication among group members and provides participants with an opportunity to formulate action plans.
The Guide is designed to be a flexible training tool, adaptable to meet the needs of a particular audience. The intended audience includes local partnerships for children and families (including Smart Start partnerships in North Carolina), Head Start Program representatives, public schools, and communities. The overall objectives of the training are (1) to enhance the collaborative's knowledge and understanding of continuity, (2) to strengthen and support collaborative groups in their efforts to work as partners, and (3) to maximize the benefit they might receive from using the Framework.
What follows is a description of the field test that was designed to assess the use and effectiveness of the Trainer's Guide. The field test focused exclusively on the Framework materials--no other instructional sources were employed. We will present the major findings of the field test and summarize recommendations based on those findings. In addition, we will highlight the work of several collaborative partnerships that took part in the field study, and we will describe some of the problems they encountered, how they used the Framework materials to address those problems, and where they are today. Specifically, the evaluation will explore:
- To what extent is the information contained in the Framework and Trainer's Guide relevant and useful to community partnerships?
- What is the perceived impact of the training and Framework on partnership activities?
- How do partnerships incorporate elements of the Framework into their ongoing activities?
- Of the review sites that indicated interest in the training materials, what proportion actually conducted the training?
The overall usefulness and effectiveness of the Trainer's Guide were studied in two phases. Phase One consisted of document review and feedback from individuals working in the early childhood field. In Phase Two of field testing, the training was actually piloted in eight partnership sites.
Phase One: Document Review
Reviewers for the Trainer's Guide were solicited through the Laboratory Network Program (LNP) and at conferences related to early childhood issues. Three hundred thirteen individuals/organizations requested a set of the Framework materials (participant manual, Trainer's Guide, and a sample color transparency) and feedback form. Feedback questions centered on four areas: (1) information's relevancy and accuracy, (2) format and organization of the Trainer's Guide, (3) specific training needs, and (4) possible barriers to conducting training.
Of the 313 requesting materials, 215 (68.7%) reviewers returned feedback forms. Twenty-one percent (N = 45) of the respondents were members of a Smart Start partnership (North Carolina initiative), 19% (N = 40) worked in Head Start agencies, and 11% (N = 24) worked in family resource centers. Others included representatives from state agencies, school personnel, and university faculty. A majority (89%) of the respondents indicated that they are actively involved in a community partnership.
Final Follow-up with Select Reviewer Sites. Of the original 215 organizations/individuals who reviewed the Framework materials, 80 indicated an interest in conducting the training in its entirety and requested a complete set of transparencies. (The original materials included one sample color transparency, and the REL offered a complete set of Framework transparencies to all organizations making the request.) Approximately one year after receiving the materials, interviews were conducted with representatives who received transparencies. The purpose of these follow-up telephone calls was to determine if the materials had been used and the degree to which outside support or assistance might be needed to conduct the training.
Phase Two: Pilot Training
During the second phase of the field testing, the training was piloted in eight collaborative partnerships from across the nation (see Table 1). These sites were recruited through the LNP and selected based on their interest in the project. To assist with logistical details, a liaison, identified at each site, coordinated training dates and assisted with data collection. Sites varied according to demographics, partnership maturity, and sponsoring or lead agency.
| Site Location | Community Type | Sponsor/Lead Agency |
| Beaufort, SC | Rural | Success by 6 |
| Dothan, AL | Urban | Family Resource Center |
| Walnut Cove, NC | Rural | Smart Start |
| Valdosta, GA | Rural | Family Connections/County Commission |
| Wheeling, WV | Rural | Head Start |
| Troy, NC | Rural | Smart Start |
| Concord, WV | Rural | Family Resource Center |
| Bovill, ID | Rural | Public school system |
Five of the partnerships described themselves as existing collaboratives (two years or more), while the remaining three indicated that they were in the planning stages of building a collaborative partnership. Sponsors of the partnerships included Smart Start (2); Head Start; family resource centers (2); Success by 6; a public school system; and a county task force.
Across the eight sites, a total of 160 individuals participated in the training. Approximately 64% of the attendees were White, 27% were African American, and the remainder were either Hispanic, American Indian/Alaskan Native, or multiracial.
Several of the partnerships invited persons who were not part of the collaborative partnership to attend the training. As a result, slightly more than half (54%) of the participants reported that they were current members of the partnership. The majority of these had been members less than one year (53%). Early childhood specialists represented the largest group attending the training (29%), followed by program administrators (18%), teachers/caregivers (14%), and parents (10%). Other groups represented included policy makers, members of the business community, and university faculty.
Each of the sites conducted the entire training course in the fall; however, there was some variability in delivery of training. For example, some partnerships conducted the training as described in the Trainer's Guide--two complete, consecutive days of training. Other partnerships modified the training schedule to meet the needs of its members and used other formats such as one day of training followed two weeks later by a second day of training.
At the conclusion of training, participants were asked to provide feedback on specific elements of the training, including organization, training content, and materials/resources. In addition, participants were asked to comment on their satisfaction with the training and the overall usefulness of the training materials. This information, along with information gathered from the review sites, was used to revise the Trainer's Guide.
In the six months following the training, partnership activities were studied to determine the degree to which the collaboratives incorporated content from the Framework into their regular activities. Materials studied included a record of stakeholder attendance and meeting minutes documenting partnership activities. At the end of this period, a follow-up survey was sent to participants at each pilot site. Survey questions focused on three major areas: (1) impact of the training, (2) impact of the Framework materials, and (3) overall familiarity with Framework materials.
In addition to the final survey with individuals who participated in the training, a final interview was conducted with seven site liaisons (one liaison was unavailable for interview). Interview questions focused on the original goal of the partnership, reasons for participating in the field study, and impact of the training and Framework materials.
The data were analyzed to determine general response patterns and to identify logical changes or improvements to the Trainer's Guide. Both quantitative and qualitative techniques were used to analyze data from the review sites and the pilot sites.
Phase One: Document Review
Analyses of data from reviewer sites were conducted on 215 surveys. Table 2 summarizes reviewers' ratings; most reviewers rated the Trainer's Guide as easy to understand, relevant to their work, accurate, and up-to-date.
| Survey Statement | Agreed or Strongly Agreed with Statement |
| Information is accurate and up to date. | 94.9% (4.54) |
| Format is easy to understand and follow. | 93.9% (4.49) |
| Training materials were easy to understand and follow. | 92.5% (4.46) |
| Information is relevant to my work. | 89.3% (4.41) |
| I would be comfortable using the materials. | 83.3% (4.29) |

*Note: According to the scale, 1 = strongly disagree and 5 = strongly agree. Mean scores are presented in parentheses.*
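As an aside on how entries like these are computed: the percentage is the share of respondents choosing 4 ("agree") or 5 ("strongly agree") on the five-point scale, and the parenthetical value is the mean of all ratings. The sketch below illustrates that arithmetic with hypothetical responses; it is not the authors' data or analysis code.

```python
# Illustration of the Table 2 summary arithmetic using hypothetical
# 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree).
# This is not the study's actual data or analysis code.

responses = [5, 4, 5, 3, 4, 5, 2, 4, 5, 4]

# Share of respondents who agreed (4) or strongly agreed (5)
pct_agree = 100 * sum(r >= 4 for r in responses) / len(responses)

# Mean rating, reported in parentheses in the table
mean_score = sum(responses) / len(responses)

print(f"Agreed or strongly agreed: {pct_agree:.1f}%")  # 80.0%
print(f"Mean rating: {mean_score:.2f}")                # 4.10
```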
A series of open-ended questions provided respondents with an opportunity to provide more specific information and feedback. When asked what parts of the training were most useful, of those who responded, approximately 30% reported that the materials were the most useful part of the training. Reviewers specifically mentioned handouts, transparencies, and checklists. Another 22% reported that the information focusing on the need to include families and share leadership responsibilities was most useful.
Reviewers also were asked to identify the greatest training need within their partnerships. Of those who responded, more than one-third (34%) reported that they often need assistance identifying and including community stakeholders. Reviewers cited family members and members of the business community as groups that often are poorly represented at partnership meetings. Other topics representing challenges to partnerships included developing the team, sharing leadership responsibilities, and involving families in meaningful ways.
In terms of barriers or factors that would influence the use of training, most of the respondents (75%) cited time as the greatest barrier to conducting training. This factor was followed by a lack of funding (68%), the unavailability of a trainer (45%), and lack of interest of collaborative partners (39%).
Final Follow-up with Select Reviewer Sites. Of the 80 individuals/organizations who requested a complete set of transparencies, 68 were located for follow-up interviews (85%). For the remaining 12, attempts to contact the site were unsuccessful; either the person requesting the transparencies was no longer there, or the materials were never received.
Interviews revealed that 23 of the respondents had conducted training using the Framework and accompanying materials. Of those who stated that they had conducted the training, only two (less than 10%) had used the training in its entirety. Most had conducted at least one part of the training, selecting the portions most useful for their work. "Families as Partners," "Shared Leadership," and "Comprehensive and Responsive Services" were the elements from the Framework most often used for training.
An additional 17% said that although they had not conducted the training as designed, they had adapted the materials or used them in other circumstances. Examples of how they had adapted the materials included using the exercises, overheads, major concepts, and other information in training activities.
Head Start agencies were the primary sponsors for half of the training events. Public schools, area education associations, state departments of education, local partnerships, child development centers, and related-type centers were listed as sponsors or lead agencies for the remaining training activities.
Training participants included staff and administrators at Head Start agencies, preschool and child care providers, local education agencies, schools, school improvement teams, state departments of education staff, local family service agencies and boards of directors, and parents.
All who said they had used the training materials were asked to comment on the usefulness of the training. The majority of respondents rated the training as "very useful" or "useful," and all said they would recommend the training to others. Particular aspects of the training that respondents liked included:
- professional quality, clarity of materials, and sequencing of content of the Framework;
- handouts, activities, and overheads;
- content and the ability to present the material at multiple skill levels; and
- ease of use of the Framework.
There were suggestions for improving the training. Four respondents said the course was "too long," especially if used in school systems or with parents. Others maintained a need for greater emphasis on action planning and implementation, "more written support materials (research, position support, background), and additional copies of key pieces of materials that helped shape the Framework."
Phase Two: Pilot Training
In terms of the training quality and overall effectiveness, most of the participants rated the training sessions as either "good" or "excellent." Participants tended to rate the second day of training as higher in quality and more effective than the first day of training (M = 4.39 and M = 4.17, respectively, based on a 5-point scale).
Participants also evaluated the effects of the training and estimated its impact on future partnership practices. Using a four-point Likert-type scale, participants rated the extent to which they agreed with each statement. Table 3 summarizes participants' appraisal of the training and reinforces the focus of the original training objectives.
Objective 1: To enhance the collaborative's knowledge and understanding of continuity

| Survey Statement | M | SD |
| As a result of the training, I believe that I am motivated to build and strengthen continuity efforts in my community. | 3.44 | .65 |
| As a result of the training, I believe that I have a better understanding of continuity and why it is important. | 3.41 | .65 |
| I believe that this training will have an impact on increasing awareness of new skills and knowledge for our team. | 3.31 | .63 |

Objective 2: To strengthen and support collaborative groups in their efforts to work as partners

| Survey Statement | M | SD |
| As a result of the training, I believe that I am better able to participate as a member of a home, school, and community partnership. | 3.40 | .65 |
| I believe that this training will have an impact on how decisions are made and the planning we do for services. | 3.25 | .59 |
| I believe that this training will have an impact on changing/enhancing the quality of community practices. | 3.23 | .58 |

Objective 3: To maximize the benefit the collaborative might receive from using the Framework

| Survey Statement | M | SD |
| As a result of the training, I believe that I am better able to use the Framework as a tool for exploring continuity and transition. | 3.26 | .63 |
| I believe that this training will have an impact on positively affecting outcomes for children and families. | 3.31 | .63 |

*Note: According to the scale, 1 = strongly disagree and 4 = strongly agree.*
In addition to participant ratings immediately following the training, data were collected on regular partnership activities after the training. Analysis of materials such as meeting minutes revealed that during the six months following completion of the training, five of the eight sites reported that they continued to use the Framework materials. Exactly how the materials were used varied from site to site. Two of the sites selected specific elements of the Framework as their priority concerns for the coming year. They then organized subcommittees to review the partnerships' practices with respect to those elements and make recommendations for improving existing services. Another partnership used the materials to provide training to other agencies and organizations not directly involved with the partnership. The remaining two partnerships used the Framework as a resource for improving transition practices with their communities.
At the end of the six months, a final survey was distributed to participants at the last partnership meeting of the year, and surveys were mailed to those not in attendance at the final meeting. Approximately half of the individuals who participated in the training (81 of 160) responded to the survey. Participants were asked to rate the extent to which the Framework materials had had an impact on partnership practices. On a four-point scale (4 = "a great deal," 3 = "some," 2 = "very little," and 1 = "not at all"), the majority of respondents (88.6%) reported that the training had "impacted" their knowledge and skill development "some" or a "great deal." Respondents also thought that the Framework had at least "some" impact on the knowledge and skills development of their partnership (83%) and community (72%). The majority (97.4%) speculated that the Framework would have at least some future impact.
Finally, participants were asked to indicate the single greatest impact they experienced as a result of the training. Approximately 41% reported that as a result of the training they felt more motivated to build or strengthen efforts to support continuity of services for children in their communities. Thirty-five percent of the respondents said they had a better understanding of continuity and its importance; 17% felt that the training prepared them to be better members of their partnership; and 7% said that the training gave them a greater understanding of the Framework as a tool.
Stokes County Partnership for Children, King, NC
An ongoing goal of the Stokes County Partnership for Children is to create a system that encourages service providers to work together and promotes continuity for children and their families. Members of the partnership began by using the Framework to build their own knowledge and skills about continuity; however, they soon recognized the need to inform others of the importance of continuity in children's lives. As a result, the Partnership conducted a series of focus groups and meetings among parents and family members within the community. They used information from Elements 3 (Comprehensive/Responsive Services) and 7 (Developmentally Appropriate Care/Education) to explain what was needed to support continuity and its potential benefits for children. These meetings were also an opportunity to inform families of the various resources and supports available within the community. Later, the focus groups were expanded to include all stakeholders (e.g., child care, kindergarten, Head Start, school administrators, special needs coordinators, etc.). The information gathered from these meetings has been used to guide the development and implementation of policies and practices that promote continuity.
Final Interview with Liaisons. In the final interview conducted with site liaisons, five of the seven liaisons reported that the overall goal of their partnership is to improve services for children and their families by connecting agencies and strengthening the collaborative bonds between those agencies. Three of the liaisons specifically mentioned the need to improve transitions and create a system of responsive and comprehensive services.
In addition, liaisons were asked to talk about their reasons for participating in the field-test process. At least three of the liaisons cited low levels of collaboration across agencies and indicated that partnership meetings were used primarily as a time for sharing information. Others saw the training as an opportunity to invite additional partners to the table and begin a discussion of how they could better work together.
Finally, liaisons were asked to rate the extent to which the Framework materials had been helpful in accomplishing their overall partnership goal. Using a five-point scale, five of the liaisons rated the Framework materials as either "helpful" (4) or "very helpful" (5). The remaining two liaisons rated the Framework materials as at least "somewhat helpful" (3).
Developing and maintaining a community collaborative is hard work, and it is a challenge that requires a great deal of commitment and cooperation from those involved. Training and resource materials available to help community partnerships build a more responsive system must address such issues as time constraints, communication gaps, differences in professional training, and funding limitations. Given these challenges, the Continuity Framework and its Trainer's Guide seem to be important and useful tools for helping partnerships increase collaboration and involvement.
Data gathered from participant ratings and key-informant interviews indicated that the training was helpful in a number of ways. A feature of the training mentioned by many of the participants was the fact that the experience helped "level the playing field." That is, it provided stakeholders with a common language to use as they worked together. As illustrated in the following example, stakeholders often come from a variety of agencies and backgrounds, which can be a major impediment when a community must begin to work together and coordinate its efforts.
The case studies in the sidebars highlight the work of four collaborative partnerships that took part in the field study. These case studies discuss some of the problems they encountered, how they used the Framework materials to address those problems, and where they are today.
Bovill, Idaho, Collaborative
Bovill is a small town (population 310) located in the north central part of the state. Bovill has no resident doctor or dentist. At the time, there also was no child care center or preschool available to children. (The closest one was 35 miles away.)
In 1998, various members of the community decided that they wanted to do something to help improve the situation for children. This group of citizens brought together parents and virtually every local organization to work on a plan that would support the learning needs of children and their families. Part of this effort was a proposal submitted to the J.A. and Kathryn Albertson Foundation that would help fund an early learning center. In 1999, they were awarded a grant, and they began the work to open the Bovill Early Childhood Community Learning Center.
However, once the work began, members of the partnership found that they did not have a common vocabulary to talk about the issues of early childhood education. There were also difficulties associated with establishing a partnership, such as "Who else should be included?" and "How do you get started?" In an effort to "get started" and begin the planning process, the partnership elected to participate in the field testing of the Framework materials.
Framework training was provided over two consecutive days and built into the inservice training schedule of the elementary school. In addition to staff and faculty from the elementary school, representatives from other agencies and organizations participated, including the health department, the Idaho Department of Disabilities, news media, schools, early childhood education, Even Start, parents, university students, attorneys, community leaders, and businesses.
According to the site liaison, the Framework materials were used:
- To improve awareness of key issues in providing high-quality services. The Framework provides direction to help develop a program that really works.
- To provide a common language and for internal communication enhancement. Now everyone "speaks the same language."
- As an external communication tool. According to the liaison, "it is so much easier to talk with funding sources when you use the structure of the elements as a base."
- To validate their progress toward providing the best practices in early childhood education.
- As a piece of the Bovill Elementary School improvement plan.
Positive impact on individual partnership members was cited as another basis for success of the training. Many indicated they had a better understanding of continuity and were more motivated to continue to work on the difficult issues that often arise as part of the collaborative process. An added value of the training was the opportunity to spend time together and develop relationships with persons from other agencies. Often, these individual relationships help form the basis for collaborative work within the partnership.
Based on the sites that continued to use the materials, the Continuity Framework and its Trainer's Guide seem to be equally useful to both existing and newly established partnerships. A common experience in the maturation of partnerships is that they are prone to lose initial momentum, often stagnating into "easy" roles such as simple information sharing. A serendipitous discovery of this study is that such partnerships evidenced rejuvenation of their efforts after participating in the training (see the Valdosta, Georgia, example).
Valdosta, Georgia, Collaborative
The Lowndes County/Valdosta Commission for Children and Youth has been in existence for more than a decade, and during this time, the partnership has experienced various "ups and downs." According to site liaison Vickie Elliott, cycles are a normal part of the collaborative process: "They may be the result of staff turnover or changes in the board chair and/or board members." She reports that participation in the training provided members with practical, research-based information. This information served as a reminder to members that they were doing good work and that their work was important.
Since the training, the partnership has continued to use Framework materials as a reference and resource. For example, during a recent meeting, members began a discussion regarding the evaluation of partnership activities. They used Element 8: Evaluation of Partnership Success to help shape and guide this discussion. In addition, the partnership has applied for and received a 21st Century Learning Community grant. Because of the knowledge and understanding they gained during the training, members requested funds for a case manager to be based at each school and conduct home visits. It is hoped that this strategy will facilitate communication and create greater continuity of services for students and families.
Finally, the data indicate that change takes place slowly. Participants reported that the training had had some impact on their community but felt that the greatest impact was yet to come. Bringing everyone to the table is not enough. True collaboration that produces continuity in services for children takes place over a long period of time, as agencies that have not previously worked together begin to get to know each other and slowly modify procedures and practices.
Marshall County Tadpole Team, Wheeling, WV
Efforts to collaborate are often driven by the realization that single agencies cannot solve problems alone. Partners must be willing to jointly plan and implement new ventures, as well as pool resources such as money and personnel. Nowhere is this need to collaborate and pool resources more crucial than in Marshall County, WV. Located in the northern part of West Virginia, Marshall County remains a predominantly rural county. With a population of approximately 36,000, Marshall County has seen a decline in the number of residents over the past two to three years, largely attributed to the economic hardships of the area. This part of West Virginia relies heavily on the coal and steel industries, and as these industries have fallen on hard times, so too have many families. As a result, many families have moved away to find other employment; however, many others have sought support from social services agencies within the community. In order to make the most of the limited resources and support available within the county, many of the local agencies (e.g., Northern Panhandle Head Start, Starting Points Center, Tadpoles Team) came together to form a community collaborative. Although their collaborative meetings began more as a time for sharing information, members soon realized that to be a true "working group," they would need to broaden the meeting agendas and formalize the collaborative relationships. Using the Framework materials as an assessment tool, members worked through each element identifying the gaps in services and generating ideas for possible programs and procedures to address those gaps. This shift encouraged members to devote meeting times to discussing specific issues facing the community. Moreover, it encouraged members to formalize the partnership with written agreements. These agreements have allowed members to make a solid commitment to the collaborative, as well as clarify specific roles and responsibilities for services.
Beyond the content of the training and issues related to the collaborative process, the field study underscored the importance of training structure and design. Many study participants praised the Framework materials for flexibility and relevance to a variety of contexts. The training materials were designed so that particular attention was devoted to issues such as target audience attributes (e.g., varied educational and professional development backgrounds), which dictate the appropriate level of sophistication as well as the need for course module structure (i.e., overall organization and scripting) to be highly adaptable to local training needs.
The field studies indicate that community partnerships benefit from training and technical assistance that help with the process of getting started, as well as recapturing momentum and focus. Additional research is needed to document the ongoing efforts of these communities and explore whether the Framework materials continue to have an impact on community practices and outcomes, as many of the participants predicted. Further study also is needed to determine what other kinds of training or technical assistance might be useful to these partnerships as they work to build capacity and expand or grow new programs.
Bronfenbrenner, Urie. (1979). The ecology of human development. Cambridge, MA: Harvard University Press.
Bruner, Charles; Kunesh, Linda; & Knuth, Randy. (1992). What does research say about interagency collaboration? [Online]. Oak Brook, IL: North Central Regional Educational Laboratory. Available: http://www.ncrel.org/sdrs/areas/stw_esys/8agcycol.htm [2002, October 22]. (Editor's Note: this URL is no longer active.)
Family Support America. (1996). Making the case for family support [Online]. Chicago: Author. Available: http://www.familysupportamerica.org/content/pub_proddef.htm [2002, October 22]. (Editor's Note: this URL is no longer active.)
Hoffman, Stevie (Ed.). (1991). Educational partnerships: Home-school-community [Special issue]. Elementary School Journal, 91(3).
Kagan, Sharon Lynn. (1992). The strategic importance of linkages and the transition between early childhood programs and early elementary school. In Sticking together: Strengthening linkages and the transition between early childhood education and early elementary school (Summary of a National Policy Forum). Washington, DC: U.S. Department of Education. ED 351 152.
Kunesh, Linda. (1994). Integrating community services for children, youth, and families. Oak Brook, IL: North Central Regional Educational Laboratory.
Melaville, Atelia; Blank, Martin; & Asayesh, Gelareh. (1996). Together we can: A guide for crafting a profamily system of education and human services (Rev. ed.). Washington, DC: U.S. Department of Education. Available: http://eric-web.tc.columbia.edu/families/TWC/ [2002, October 22]. (Editor's Note: this URL is no longer active.) ED 443 164.
North Central Regional Educational Laboratory. (1993). NCREL's policy briefs: Integrating community services for young children and their families. Oak Brook, IL: Author. Available: http://www.ncrel.org/sdrs/areas/issues/envrnmnt/go/93-3toc.htm [2002, October 22].
U.S. Department of Education and U.S. Department of Health and Human Services. (1995). Continuity in early childhood: A framework for home, school, and community linkages [Online]. Washington, DC: Author. Available: http://www.sedl.org/prep/hsclinkages.pdf [2002, October 22]. ED 395 664.
Wheatley, Margaret J. (1992). Leadership and the new science. San Francisco: Berrett-Koehler.
Dr. Glyn Brown is a senior program specialist with SERVE Regional Educational Laboratory. She studied at the University of Alabama (B.S.), the University of Southern Mississippi (M.S.), and completed her Ph.D. in Family and Child Development at Auburn University. Prior to coming to SERVE, Dr. Brown worked as a children's therapist in a community mental health program. As a program specialist with SERVE, Dr. Brown provides training and direct consultation to school personnel, child care providers, and community partnerships.
SERVE Regional Educational Laboratory
1203 Governor's Square Blvd., Suite 400
Tallahassee, FL 32301
Carolynn Amwake, a program specialist at the SERVE Regional Educational Laboratory, has extensive experience working with families, child care providers, teachers, administrators, and community partners. She received her B.S. from Radford University in early childhood education and special education and has taught children with special needs in elementary schools, children's homes, and child care centers. Her experiences as an educator and parent led to an interest in improving the quality and continuity of early childhood transitions for both children and families.
SERVE Regional Educational Laboratory
1203 Governor's Square Blvd., Suite 400
Tallahassee, FL 32301
Timothy Speth is a research associate at Northwest Regional Educational Laboratory (NWREL). He received his B.S. in psychology from South Dakota State University and his M.A. from San Diego State University. He has extensive training and experience in research design, statistics, and program evaluation. Mr. Speth is currently involved with several research and evaluation projects throughout the Northwest, as a Research Associate of NWREL's Child and Family Program. He is the primary external evaluator for six Alaska schools participating in the Comprehensive School Reform Demonstration Project (CSRD) and assists in CSRD-related activities throughout the Northwest.
Northwest Regional Educational Laboratory
101 S.W. Main Street, Suite 500
Portland, OR 97204-3297
Catherine Scott-Little, Ph.D., is director of the Expanded Learning Opportunities Project for SERVE. Dr. Little completed her graduate work in human development at the University of Maryland, College Park. Her undergraduate degree in child development and family relations is from the University of North Carolina at Greensboro. Prior to joining SERVE, Dr. Little was deputy director of a large Head Start program in Fort Worth, Texas, and she has also served as director for a child development center serving homeless families in the Washington, DC, area.
SERVE Regional Educational Laboratory
P.O. Box 5367
Greensboro, NC 27435 | <urn:uuid:5796c026-a8b2-4a00-ac9d-d935eecfa46f> | CC-MAIN-2013-20 | http://ecrp.uiuc.edu/v4n2/brown.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961456 | 8,499 | 3.296875 | 3 |
The significance of Alabama Unionists during the Civil War and Reconstruction has long been a subject of study among scholars. Largely centered in northern Alabama and to a lesser degree in the southeast region and in Montgomery and Mobile, Unionists were important both militarily and politically. Until recently, however, the details of this phenomenon have remained less well known, largely because the term Unionist (both then and now) has been used to refer to a range of different individuals and positions.
In the broadest sense, Unionist has meant any white person who opposed secession (including those who later supported the Confederacy) and those who came to support the Union during the war despite having originally supported the Confederacy. This broad definition includes a very wide range of Alabamians—from the most well-to-do planters who ultimately became officers in the Confederate Army to the subsistence farmer who deserted the southern cause midway through the war. It is also possible to define Unionism more narrowly, confining the label to those individuals who resisted both secession and the Confederacy during the war. Such unconditional loyalists probably represented no more than 15 percent of Alabama's adult white population. They were mostly nonslaveholding farmers (though a small minority owned slaves) living in the northern third of the state. A few Unionists also lived in the piney woods and coastal plain further south. In many respects, these men and women were very much like their neighbors who supported the Confederate cause. The reasons they remained loyal to the Union were also quite diverse. Many saw secession as illegal, whereas others felt that it would dishonor the American Revolution and their own ancestors. Still others were certain that secession would end in political or military disaster. Many were influenced by the respected figures in their families or neighborhoods.
Unionism in Alabama arose under the pressures of the presidential election of 1860. Nine months before, the state legislature had directed that, in the event of a Republican's election, a state secession convention would be called. By directly linking the presidential election to secession, the legislature fostered a political atmosphere that was particularly hostile to Unionists. Newspaper editorials and participants at community meetings condemned as traitors those who canvassed for Illinois senator Stephen Douglas, the nominee of the regular Democratic Party, rather than the southern-rights Democratic nominee, John Breckinridge. In the election, fully 80 percent of Alabama's eligible voters participated, giving Breckinridge a substantial victory, with 54 percent of the vote. John Bell, the Constitutional Union candidate who was supported by a number of Alabamians hostile to secession, received 31 percent of the vote. Douglas, the candidate most associated with a strongly Unionist position, polled slightly more than 15 percent. Republican Abraham Lincoln was not even on the ballot in Alabama.
As promised, Alabama secessionists called a convention in the wake of Lincoln's election. The campaign for convention delegates provoked heated and sometimes violent debates among neighbors, forcing many to defend their positions in public. Of the 100 delegates elected, 53 were secessionists and 47 were cooperationists, a term that refers to the delegates' desire to secede only in "cooperation" with other southern states. In fact, the men elected on this platform represented a wide range of ideas about if, when, and under what circumstances to cooperate with secession and included a minority faction—probably less than one-third (the vast majority of them from the northern third of the state)—of unconditional Unionists who opposed secession outright.
These delegates convened in Montgomery on January 7, 1861, and debated secession for four days. On January 11, 1861, the convention passed Alabama's Ordinance of Secession by a vote of 61 to 39. Many of those who voted against the ordinance, however, ultimately did support secession, and four immediately reversed themselves and signed with the majority. Among the opposition, 33 delegates subsequently signed the "Address to the People of Alabama," in which they pledged to consult with their supporters and then act on their wishes. Ten signatories of the address signed the ordinance to satisfy their constituents. Other delegates who rejected the ordinance eventually took active part in the war. Only three signers—Henry C. Sanford of Cherokee County, Elliot P. Jones of Fayette County, and Robert Guttery of Walker County—never signed the ordinance and maintained their Unionism throughout the war. Only two wartime Unionists—R. S. Watkins of Franklin County and Christopher C. Sheats of Winston County—signed neither the "Address" nor the Ordinance of Secession.
Most of the men and women who supported the Union after Alabama's secession faced great difficulties. Many were ostracized and ridiculed by neighbors, called before community vigilance committees for questioning and intimidation, or actually harmed for endorsing the Union. Such treatment was most commonly meted out to those who publicly asserted their views; those who kept quiet and did not interfere with volunteering were often left alone during the first year of the war. After Confederate conscription began in April 1862, however, community tolerance of Unionists waned. Individuals who resisted the draft, for whatever reason, were subject to arrest and imprisonment. Family members who supported resisters were frequently threatened with violence or exile by conscript cavalry who hoped to pressure men to come in from the woods or mountains and surrender. In addition, it was not at all uncommon for the families of Unionists to be targeted for punitive foraging or arson by Confederate forces or local conscript cavalry.
After the Union Army invaded Alabama in early 1862, Unionists had more opportunities to flee behind Union lines for safety and the possibility of employment as soldiers, spies, or laborers. Most well known of Alabama's Union troops was the First Alabama Cavalry, U.S.A., organized in late 1862 by Brig. Gen. Grenville M. Dodge, stationed at Corinth, Mississippi. The regiment served mostly in northern Alabama, western Tennessee, and northeastern Mississippi, though it marched with Gen. William Tecumseh Sherman to Savannah in 1864. Alabama Unionists also joined other federal regiments, particularly those from Tennessee, Indiana, Illinois, and Ohio. Those who remained at home, both within Union-occupied territory and behind Confederate lines, also actively assisted Union forces as spies and guides. In some cases, they collaborated with local African Americans (most often their own slaves) to aid and abet the Union Army or pro-Union men in their neighborhoods. Moreover, African Americans from Alabama also crossed the Union lines to serve as laborers and soldiers, and after the Emancipation Proclamation went into effect in 1863, many were inducted into United States Colored Troops regiments. Almost 5,000 African Americans, or 6 percent of Alabama's black male population between the ages of 18 and 45, volunteered in the Union ranks.
As was the case throughout the South, by the midpoint of the war Alabama's original Unionists were increasingly joined in their dissent by deserters from the Confederate Army, mostly men whose families were struggling at home without their labor. Disillusioned by the realities of warfare, angered by the inequities of service under laws exempting slaveowners and selected professionals, such Alabamians generally wanted the war to end more than they desired Union victory, though some did cross lines and join the Union army rather than desert and avoid service altogether. A small peace movement also emerged at this time among men who had originally opposed secession but later supported the state.
After the war, Unionists continued to struggle politically and socially, for their wartime activities had alienated them from their now-defeated neighbors. Most eagerly joined the Union League and the Republican Party. Some wartime Unionists helped reintroduce the Methodist-Episcopal Church (as contrasted with the Methodist-Episcopal Church, South) to northern Alabama, finding there a more hospitable environment for worship. Many campaigned strenuously to convince the president and Congress to limit the political rights of former Confederates. They also sought positions of local and state authority for others who had supported the Union during the war. At this point, a number of men who had originally opposed secession but supported the state in 1861, as well as citizens who had become disillusioned with the war, also moved to the fore of political life in Alabama. These moderates were, in general, encouraged by Pres. Andrew Johnson, who appointed such men to positions of political authority in the immediate post-war provisional governments he established. The Republican Party in Alabama was populated by such individuals, as well as core Unionists who had served in the Union Army or otherwise actively resisted the Confederacy. Both groups were referred to by their Democratic opponents as scalawags.
Under Congressional Reconstruction (1867-74), wartime loyalists gained greater political power than they had under Presidential Reconstruction, taking leading roles in the constitutional convention of 1867, the Freedmen's Bureau, and the Republican-dominated state legislature. Most also supported, though sometimes reluctantly, voting rights for African Americans as a means to gain political power over former Confederates. For their continued association with northern Republicans and support for African American equality, white Unionists were targeted for intimidation and physical violence by the Ku Klux Klan and other anti-Reconstruction vigilantes. As elsewhere in the South, Alabama Unionists and their Republican allies (white and black, northern and southern) received little in the way of federal assistance to defend against the onslaught of violence. As their party was overwhelmed by the Democratic opposition, Unionists retreated from the forefront of state politics, though those in communities with substantial loyalist populations continued in positions of local political leadership well into the late nineteenth century.
Barney, William L. The Secessionist Impulse: Alabama and Mississippi in 1860. Princeton: Princeton University Press, 1974.
Fitzgerald, Michael W. The Union League Movement in the Deep South: Politics and Agricultural Change During Reconstruction. Baton Rouge: Louisiana State University Press, 1989.
Mills, Gary B. Southern Loyalists in the Civil War: The Southern Claims Commission. A Composite Directory of Case Files Created by the U.S. Commissioner of Claims, 1871-1880, including those appealed to the War Claims Committee of the U.S. House of Representatives and the U.S. Court of Claims. Baltimore: Genealogical Publishing Company, Inc. 1994.
Rogers, William Warren, Jr. The Confederate Home Front: Montgomery During the Civil War. Tuscaloosa: The University of Alabama Press, 1999.
Storey, Margaret M. Loyalty and Loss: Alabama's Unionists in the Civil War and Reconstruction. Baton Rouge: Louisiana State University Press, 2004.
Margaret M. Storey
Published December 14, 2007
Last updated October 3, 2011 | <urn:uuid:dcf6578e-71df-4e20-904c-5952df38fb9c> | CC-MAIN-2013-20 | http://encyclopediaofalabama.org/face/Article.jsp?id=h-1415 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.973099 | 2,188 | 3.859375 | 4 |
LANs to WANs: The Complete Management Guide
Authors: Muller N.J.
Published year: 2003
Depending on the situation facing network managers, bridges can be used to either extend or segment LANs. At one level, bridges can be used for segmenting LANs into smaller subnets to improve performance, control access, and facilitate fault isolation and testing without impacting the overall user population. At another level, they are used to create an extended network that greatly expands the number of devices that can be supported and the services available to each user. Bridges may even offer additional features such as data compression, which has the effect of providing greater throughput over low-speed lines. Compression ratios from 2:1 up to 6:1 may be selected by the network manager, depending on what the vendor offers with a specific product.
As noted, bridging occurs at the data link layer (see Figure 5.1), which provides physical addressing, manages access to the physical medium, controls data flow, and handles transmission errors. Bridges analyze incoming frames, make forwarding decisions based on the source and destination addresses of those frames, and then forward the frames to their destinations. Sometimes, as in source-route bridging, the frame contains the entire path to the destination. In other cases, as in transparent bridging, frames are forwarded one hop at a time toward the destination.
Figure 5.1: Bridge functionality in reference to the OSI model.
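To make the forwarding decision concrete, the following Python sketch models the learning and forwarding behavior of a transparent bridge as just described. It is an illustration only, not code from this book or from any product; the frame fields and port names are simplified assumptions.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports        # e.g., ["port1", "port2", "port3"]
        self.mac_table = {}       # learned MAC address -> outbound port

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: the source address is reachable via the port the frame arrived on.
        self.mac_table[src_mac] = in_port
        out_port = self.mac_table.get(dst_mac)
        if out_port == in_port:
            return []             # destination is on the arrival segment: filter
        if out_port is not None:
            return [out_port]     # known destination: forward one hop toward it
        # Unknown destination: flood to every port except the arrival port.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(["port1", "port2", "port3"])
print(bridge.handle_frame("aa:aa", "bb:bb", "port1"))  # unknown: flood to ports 2 and 3
print(bridge.handle_frame("bb:bb", "aa:aa", "port2"))  # learned: ["port1"]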
Bridges can be either local or remote. Local bridges provide direct connections between many LAN segments in the same area. Remote bridges connect LAN segments in different areas, usually over telecommunication lines. There are several kinds of bridging and all may be supported in the same device:
Transparent bridging —used mostly in Ethernet environments that have the same media types, these bridges keep a table of destination addresses and outbound interfaces.
Source-route bridging —used mostly in token-ring environments, these bridges only forward frames based on the routing indicator contained in the frame. End stations are responsible for determining and maintaining the table of destination addresses and routing indicators.
Translation bridging —used to bridge data between different media types, these devices typically go between Ethernet and FDDI or token ring to Ethernet.
Source-route translation bridging —this is a combination of source-route bridging and transparent bridging that allows communication in mixed Ethernet and token-ring environments. (Translation bridging without routing indicators between token ring and Ethernet is also called source-route transparent bridging.)
The engine for transparent bridging is the spanning tree algorithm (STA), which dynamically discovers a loop-free subset of the network’s topology. The STA accomplishes this by placing active bridge ports that create loops into a standby or blocked condition. A blocked port can provide redundancy in that if the primary port fails, it can be activated to take the traffic load.
The spanning tree calculation is triggered when the bridge is powered up and whenever a change in topology is detected. A topology change might occur when a forwarding port is going down (blocking) or when a port transitions to forwarding and the bridge has a designated port, which also indicates that the bridge is not standalone. Configuration messages known as bridge protocol data units (BPDUs) actually trigger the spanning tree calculation. These messages are exchanged between bridges at regular intervals set by the network manager, usually 1 to 4 seconds.
Once a change in topology is detected, this information must be shared with all bridges on the network. This is a two-step process that starts when a bridge notifies the root bridge of the spanning tree by sending it a special BPDU known as a topology change notification (TCN). The bridge sends the TCN out over its root port. The root bridge acknowledges the message by sending back a normal configuration BPDU with the topology change acknowledgment (TCA) bit set. The second step in the topology update process entails the root bridge sending out configuration BPDUs with the topology change (TC) bit set. These BPDUs are relayed by every bridge, so they can become aware of the changed topology.
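The decisions the STA reaches all flow from an ordered comparison of BPDU fields, with lower values winning at each step. The sketch below illustrates that comparison and a simple root-port election; the field set is simplified for illustration and is not a full IEEE 802.1D implementation.

from collections import namedtuple

# Simplified BPDU: lower values win, compared field by field, left to right.
BPDU = namedtuple("BPDU", "root_id root_path_cost sender_bridge_id sender_port_id")

def better(a, b):
    """True if BPDU a is superior to b (tuple order mirrors the tie-breaking rules)."""
    return a < b

def elect_root_port(heard):
    """The port that heard the best BPDU becomes the root port;
    redundant ports hearing inferior BPDUs are put into blocking."""
    return min(heard, key=heard.get)

heard = {
    "port1": BPDU(root_id=100, root_path_cost=19, sender_bridge_id=200, sender_port_id=1),
    "port2": BPDU(root_id=100, root_path_cost=38, sender_bridge_id=150, sender_port_id=2),
}
print(better(heard["port1"], heard["port2"]))  # True: lower cost to the same root
print(elect_root_port(heard))                  # "port1"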
There are some problems associated with spanning tree. The more hosts on the network, the higher the probability of topology changes. For example, a directly attached host, such as a client or server, will trigger a topology change when it is powered off and then on again (for instance, to clear an operating system problem). In a large, flat network, the point can be reached when the network is continually in topology-change status. The resulting high level of flooding can lead to an unstable STP environment. To deal with this problem, vendors have come up with ways to avoid TCN generation for certain events. For example, the network manager can configure the bridge so that it issues a TCN when a server is power cycled, but not when client devices are power cycled. If a bridge port going up or down is not deemed an important event, this event too can be programmed not to issue a TCN.
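Conceptually, such vendor features amount to a per-port flag consulted before a TCN is generated. The sketch below illustrates the idea; the option name and structure are invented for illustration and do not correspond to any particular vendor's configuration syntax.

# Hypothetical per-port control over TCN generation (names invented for illustration).
ports = {
    "server_port": {"tcn_on_change": True},   # server power cycles should be advertised
    "client_port": {"tcn_on_change": False},  # client power cycles are ignored
}

def link_state_changed(port_name):
    if ports[port_name]["tcn_on_change"]:
        return f"{port_name}: link change -> send TCN toward the root bridge"
    return f"{port_name}: link change suppressed (no TCN generated)"

print(link_state_changed("server_port"))
print(link_state_changed("client_port"))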
Source-route bridging (SRB) is used in the token-ring environment as the method by which a station establishes a route through a multiple-ring network to its destination. The first step for a station to reach another is to create a packet called an explorer. This packet is copied by all bridges in the network, with each of them adding information about itself before passing it on. The explorer packet's routing information field (RIF) records where the packet has traversed through the network; within the RIF, route descriptors store the path it has taken.
As the explorer packet is constructed on its way through the network, the destination station will start receiving copies of it from the originating station. Based on the contents of these explorer packets, the destination station will then decide which route to use to send data packets back to the originating station, or it will send its own explorer packet so that the originating station can determine its own route.
The explorer packet is limited in terms of how many rings it can hold in the routing information field. Although the RIF can hold a total of 14 rings, IBM long ago limited this to seven. Other vendors also adopted this limitation. Consequently, an explorer packet that has traversed seven rings will be dropped in the network. To control traffic in the network with more precision, parameters can be set in the bridge to decrease this number even further, so that packets that reach X number of rings (any number below seven) will be dropped.
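Enforcing the ring limit only requires counting the route descriptors already present in the RIF before a bridge appends its own. A sketch, assuming the RIF is represented as a list of (ring, bridge) descriptors:

MAX_RINGS = 7  # IBM's practical ceiling; a bridge may be configured lower

def forward_explorer(rif, this_ring, this_bridge, max_rings=MAX_RINGS):
    """Return the updated RIF if the explorer may continue, or None to drop it."""
    if len(rif) >= max_rings:      # one route descriptor per ring crossed
        return None                # hop limit reached: drop the explorer
    return rif + [(this_ring, this_bridge)]

rif = [(1, "B1"), (2, "B2"), (3, "B3"), (4, "B4"), (5, "B5"), (6, "B6"), (7, "B7")]
print(forward_explorer(rif, 8, "B8"))          # None: seven rings already traversed
print(forward_explorer([(1, "B1")], 2, "B2"))  # explorer continues, RIF grows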
While explorers are limited to traversing only seven rings, in a meshed ring environment one explorer can end up being copied by many bridges, which can produce an excessive number of explorers. Explorer storms can be prevented in redundant network topologies by setting the bridge to filter out explorers that have already been forwarded once. Since explorer traffic can be distinguished from regular source-route traffic, the network manager can issue commands that check the bridge for various parameters, such as the number of explorers that were dropped outbound on a given interface.
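Filtering previously forwarded explorers amounts to remembering which explorers the bridge has already copied. The sketch below invents a simple identity key for illustration; real bridges track this differently.

forwarded = set()  # explorer identities this bridge has already copied

def should_forward(src_mac, explorer_id):
    key = (src_mac, explorer_id)   # hypothetical identity; real bridges differ
    if key in forwarded:
        return False               # seen before: drop to prevent an explorer storm
    forwarded.add(key)
    return True

print(should_forward("aa:aa", 42))  # True: first copy is forwarded
print(should_forward("aa:aa", 42))  # False: the duplicate is filtered out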
While Ethernet has become the network of choice for new installations, there is still a good amount of token ring in use, making it necessary to mix the two environments for data exchange. Doing so is complicated because some very fundamental differences between Ethernet and token ring must be reconciled. Token ring has functional addresses, while Ethernet primarily relies on broadcasts.
Furthermore, MAC addresses on the Ethernet are different from MAC addresses on the token ring. Ethernet does not have a source-route bridging capability and token ring has a routing information field. Finally, token ring and Ethernet use different methods to read the bits into their adapters.
To unify the two environments, vendors have come up with various methods such as translation bridging. This is a type of bridging that is implemented on networks that use different MAC sublayer protocols, providing a method of resolving differences in header formats and protocol specifications. Since there are no real standards in how communication between two media types should occur, however, no single translation implementation can be called correct. The only consideration for network managers is to select a method of translation and implement it uniformly throughout the network.
Essentially, the bridges reorder source and destination address bits when translating between Ethernet and token-ring frame formats. The problem of embedded MAC-addresses can be resolved by programming the bridge to look for various types of MAC addresses. Some translation-bridges simply check for the most popular embedded addresses. If others are used, the bridge must be programmed to look for them as well. But if translation-bridging software runs in a multi-protocol router, which is very common today, these protocols can be routed and the problem avoided entirely.
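The address-bit reordering mentioned above is mechanical: Ethernet transmits each address byte least-significant bit first, while token ring transmits most-significant bit first, so translating an address between the two domains reverses the bit order within every byte. A self-contained sketch of that conversion, offered as an illustration rather than any vendor's implementation:

def reverse_bits(byte):
    """Reverse the bit order within one byte (MSB-first <-> LSB-first)."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)
        byte >>= 1
    return result

def translate_mac(mac):
    """Translate a 6-byte MAC address between Ethernet and token-ring bit order."""
    return bytes(reverse_bits(b) for b in mac)

eth = bytes.fromhex("400000000001")
print(translate_mac(eth).hex())  # "020000000080": the same address, token-ring order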
Token ring’s RIF field has a component that indicates the largest frame size that can be accepted by a particular source-route bridging implementation. Translation bridges that send frames from the transparent-bridging domain to the SRB domain usually set the maximum transfer unit (MTU) field to 1,500 bytes to limit the size of token-ring frames entering the transparent-bridging domain, because this is the maximum size of Ethernet frames. Some hosts cannot process this field correctly, in which case translation bridges are forced to drop the frames that exceed Ethernet’s MTU size.
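In code terms, the bridge advertises Ethernet's limit in the largest-frame field toward the SRB side and drops anything larger that arrives anyway; a minimal sketch under those assumptions:

ETHERNET_MTU = 1500  # maximum Ethernet payload, in bytes

def largest_frame_field():
    """Value the translation bridge places in the RIF's largest-frame field."""
    return ETHERNET_MTU

def into_transparent_domain(frame):
    """Frames entering the Ethernet side must fit its MTU or be dropped."""
    return frame if len(frame) <= ETHERNET_MTU else None

print(largest_frame_field())                 # 1500
print(into_transparent_domain(b"x" * 2000))  # None: oversized frame is dropped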
Bits representing token-ring functions that are absent in Ethernet are discarded by translation bridges. For example, token ring’s priority, reservation, and monitor bits are discarded during translation. And token ring’s frame status bits are treated differently, depending on the bridge manufacturer; the products of some manufacturers may even ignore these bits.
Sometimes the bridge will have the C bit set (indicating that the frame has been copied) but not the A bit set (indicating that the destination station recognizes the address). These bits matter because a token-ring source node uses them to determine whether a frame it sent has been lost. Advocates of leaving the bits alone claim that reliability mechanisms, such as the tracking of lost frames, are better left for implementation in Layer 4 of the OSI model. Advocates of setting the C bit argue that this bit must be set to track lost frames, but that the A bit cannot be set because the bridge is not the final destination.
Translation bridges also can be used to create a software gateway between the token ring and Ethernet domains. To the SRB end stations, the translation bridge has a ring number and a bridge number associated with it, so it looks like a standard source-route bridge. In this case, the ring number reflects the entire transparent-bridging domain. To the transparent-bridging domain, the translation bridge is just another transparent bridge.
When bridging from the SRB domain to the transparent-bridging domain, SRB information is removed. Token ring’s routing information fields usually are cached for use by any subsequent return traffic. When bridging from the transparent bridging to the SRB domain, the translation bridge checks the frame to see if it has a multicast or unicast destination. If the frame has a multicast or broadcast destination, it is sent into the SRB domain as a spanning-tree explorer. If the frame has a unicast address, the translation bridge looks up the destination in the RIF cache. If a path is found, it is used and the RIF information is added to the frame; otherwise, the frame is sent as a spanning-tree explorer.
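Taken together, the forwarding decision from the transparent domain into the SRB domain reduces to a destination-type check plus a RIF-cache lookup, roughly as follows (the cache structure and function names are illustrative):

rif_cache = {}  # destination MAC -> RIF cached from earlier SRB return traffic

def is_multicast(mac):
    return bool(mac[0] & 0x01)  # group bit set in the first address byte

def bridge_to_srb(dst_mac, frame):
    if is_multicast(dst_mac):
        return ("spanning-tree explorer", frame)        # multicast/broadcast case
    rif = rif_cache.get(dst_mac)
    if rif is not None:
        return ("unicast with cached RIF", rif, frame)  # known path: attach the RIF
    return ("spanning-tree explorer", frame)            # unknown unicast: explore

rif_cache[b"\x40\x00\x00\x00\x00\x01"] = [(1, "B1"), (2, "B2")]
print(bridge_to_srb(b"\x40\x00\x00\x00\x00\x01", b"data")[0])  # cached-RIF path
print(bridge_to_srb(b"\xff\xff\xff\xff\xff\xff", b"data")[0])  # explorer path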
Another solution to unify the Ethernet and token-ring environments is source-route translation bridging (SRTLB). This entails the addition of bridge groups to the interfaces of both the token ring and Ethernet bridges to create a transparent bridge domain between the two environments. The bridges at each end are responsible for establishing the path through the network. When a bridge on a token ring receives a packet from an Ethernet, for example, path establishment is handled as follows (see Figure 5.2):
Figure 5.2: Source-route translation bridging, from token ring to Ethernet.
Bridge-1 receives a packet from the Ethernet. This is from PC-1 to the host.
Bridge-1 needs a RIF to reach the host, so it creates an explorer to learn the path to reach the host.
After Bridge-1 receives the response, it sends the response (without a RIF) to the Ethernet station.
PC-1 sends an exchange identifier (XID) to the host MAC address.
Bridge-1 gets the Ethernet packet, attaches the RIF to the host, and sends the packet on its way.
As far as the host is concerned, the Ethernet is sitting on a pseudo ring. This is configured with the source-bridge transparent command on the bridge. The pseudo ring makes the host treat the Ethernet as if it were a token ring.
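In this scheme the pseudo ring is simply a configured ring number that stands in for the entire transparent-bridging domain, so RIFs built by SRB stations can terminate on it. A minimal sketch of that bookkeeping (the ring and bridge numbers are arbitrary illustrative values):

PSEUDO_RING = 100  # ring number standing in for the transparent (Ethernet) domain
BRIDGE_NUM = 1     # bridge number the translation bridge presents to SRB stations

def terminate_rif(path_so_far):
    """Complete an explorer's RIF on the pseudo ring, so token-ring hosts treat
    Ethernet stations as if they sat on an ordinary ring."""
    return path_so_far + [(PSEUDO_RING, BRIDGE_NUM)]

explorer_path = [(5, 2), (7, 3)]     # rings and bridges crossed in the SRB domain
print(terminate_rif(explorer_path))  # host sees a normal-looking source route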
Cancer Fighting Foods/Spices
The National Cancer Institute estimates that roughly one-third of all cancer deaths may be diet related. What you eat can hurt you, but it can also help you. Many of the common foods found in grocery stores or organic markets contain cancer-fighting properties, from the antioxidants that neutralize the damage caused by free radicals to the powerful phytochemicals that scientists are just beginning to explore. There isn’t a single element in a particular food that does all the work: The best thing to do is eat a variety of foods.
The following foods have the ability to help stave off cancer and some can even help inhibit cancer cell growth or reduce tumor size.
Avocados are rich in glutathione, a powerful antioxidant that attacks free radicals in the body by blocking intestinal absorption of certain fats. They also supply even more potassium than bananas and are a strong source of beta-carotene. Scientists also believe that avocados may be useful in treating viral hepatitis (a cause of liver cancer), as well as other sources of liver damage.
Broccoli, cabbage, and cauliflower have a chemical component called indole-3-carbinol that can combat breast cancer by converting a cancer-promoting estrogen into a more protective variety. Broccoli, especially sprouts, also has the phytochemical sulforaphane, a product of glucoraphanin – believed to aid in preventing some types of cancer, like colon and rectal cancer. Sulforaphane induces the production of certain enzymes that can deactivate free radicals and carcinogens. The enzymes have been shown to inhibit the growth of tumors in laboratory animals. However, be aware that the Agriculture Department studied 71 types of broccoli plants and found a 30-fold difference in the amounts of glucoraphanin. It appears that the more bitter the broccoli is, the more glucoraphanin it has. Broccoli sprouts with a consistent level of sulforaphane – as much as 20 times higher than the levels found in mature heads of broccoli – have been developed under the trade name BroccoSprouts.
Carrots contain a lot of beta carotene, which may help reduce a wide range of cancers including lung, mouth, throat, stomach, intestine, bladder, prostate and breast. Some research indicated beta carotene may actually cause cancer, but it has not been proven that eating carrots can cause cancer, except perhaps in very large quantities (2 to 3 kilos a day). In fact, a substance called falcarinol that is found in carrots has been found to reduce the risk of cancer, according to researchers at the Danish Institute of Agricultural Sciences (DIAS). Kirsten Brandt, head of the research department, explained that isolated cancer cells grow more slowly when exposed to falcarinol. This substance is a polyacetylene, however, so it is important not to cook the carrots.
Chili peppers and jalapenos contain a chemical, capsaicin, which may neutralize certain cancer-causing substances (nitrosamines) and may help prevent cancers such as stomach cancer.
November 20, 2008 at 3:27 pm
Maybe you should be eating more beets, left, or chopped cabbage. (Credit: Evan Sung for The New York Times)
Nutritionist and author Jonny Bowden has created several lists of healthful foods people should be eating but aren’t. But some of his favorites, like purslane, guava and goji berries, aren’t always available at regular grocery stores. I asked Dr. Bowden, author of “The 150 Healthiest Foods on Earth,” to update his list with some favorite foods that are easy to find but don’t always find their way into our shopping carts. Here’s his advice.
- Beets: Think of beets as red spinach, Dr. Bowden said, because they are a rich source of folate as well as natural red pigments that may be cancer fighters.
How to eat: Fresh, raw and grated to make a salad. Heating decreases the antioxidant power.
- Cabbage: Loaded with nutrients like sulforaphane, a chemical said to boost cancer-fighting enzymes.
How to eat: Asian-style slaw or as a crunchy topping on burgers and sandwiches.
- Swiss chard: A leafy green vegetable packed with carotenoids that protect aging eyes.
How to eat it: Chop and saute in olive oil.
- Cinnamon: Helps control blood sugar and cholesterol.
How to eat it: Sprinkle on coffee or oatmeal.
- Pomegranate juice: Appears to lower blood pressure and loaded with antioxidants.
How to eat: Just drink it.
- Dried plums: Okay, so they are really prunes, but packed with cancer-fighting antioxidants.
How to eat: Wrapped in prosciutto and baked.
- Pumpkin seeds: The most nutritious part of the pumpkin and packed with magnesium; high levels of the mineral are associated with lower risk for early death.
How to eat: Roasted as a snack, or sprinkled on salad.
- Sardines: Dr. Bowden calls them "health food in a can." They are high in omega-3's, contain virtually no mercury and are loaded with calcium. They also contain iron, magnesium, phosphorus, potassium, zinc, copper and manganese as well as a full complement of B vitamins.
How to eat: Choose sardines packed in olive or sardine oil. Eat plain, mixed with salad, on toast, or mashed with dijon mustard and onions as a spread.
- Turmeric: The "superstar of spices," it has anti-inflammatory and anti-cancer properties.
How to eat: Mix with scrambled eggs or in any vegetable dish.
- Frozen blueberries: Even though freezing can degrade some of the nutrients in fruits and vegetables, frozen blueberries are available year-round and don’t spoil; associated with better memory in animal studies.
How to eat: Blended with yogurt or chocolate soy milk and sprinkled with crushed almonds.
- Canned pumpkin: A low-calorie vegetable that is high in fiber and immune-stimulating vitamin A; fills you up on very few calories.
How to eat: Mix with a little butter, cinnamon and nutmeg.
You can find more details and recipes on the Men’s Health Web site, which published the original version of the list last year.
In my own house, I only have two of these items — pumpkin seeds, which I often roast and put on salads, and frozen blueberries, which I mix with milk, yogurt and other fruits for morning smoothies. How about you? Have any of these foods found their way into your shopping cart?
Courtesy: New York Times
July 1, 2008 at 9:06 am | <urn:uuid:055624c0-62af-41df-8f11-60150520d344> | CC-MAIN-2013-20 | http://funinlife.wordpress.com/tag/health/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945379 | 1,450 | 3.15625 | 3 |
The basics of heat stress
When the thermometer rises, it can, and often does, create a multitude of problems. Anyone, given the right (or wrong) conditions, can get heat stress. Some are lucky enough to suffer only from heat cramps, while those who are less fortunate may be laid up by heat exhaustion or devastated by heat stroke. As the long, hot days of summer approach, it is helpful to review the effects of warm weather on the human body, the illnesses that may result and what you can do.
How the body stays cool
Unknowingly, you constantly engage your body in the life-and-death struggle to disperse the heat it produces. If allowed to accumulate, this heat would quickly increase your body temperature beyond its comfortable 98.6°F. This does not normally happen because your body is able to lose enough heat to maintain a steady temperature. You become aware of this struggle for heat balance during hard labor or exercise in hot environments, when your body produces heat faster than it can lose it. Under certain conditions, your body may build up too much heat, your temperature may rise to life-threatening levels, and you may become delirious or lose consciousness. This is called heat stroke, and it is a serious medical emergency. If you do not rid your body of excess heat fast enough, it cooks the brain and other vital organs. It often is fatal, and those who survive may have permanent damage to their vital organs. Before your temperature reaches heat-stroke levels, however, you may suffer heat exhaustion with its flu-like symptoms, and treating those symptoms promptly helps you avoid heat stroke.
How does your body dispose of excess heat? Humans lose heat largely through their skin, similar to how a car loses heat through its radiator. Exercising muscles warms the blood, just as a car's hot engine warms its radiator fluid. Warm blood travels through the skin's dilated blood vessels losing heat by evaporating sweat to the surrounding air, just like a car loses engine heat through its radiator.
When blood delivers heat to the skin, two of the most important ways the body loses heat are radiation and evaporation (vaporization of sweat). When the temperature is 70°F or less, the body releases its heat by radiation. As environmental temperatures approach your body temperature, you lose less heat through radiation. In fact, people working on hot summer days actually gain heat through radiation from the sun. This leaves evaporation as the only way to effectively control body temperature.
Water loss
Your body is about half water. You lose about 2 quarts every day (breathing, urinating, bowel movements and sweat). A working adult can produce 2 quarts of sweat per hour for short periods and up to 15 quarts per day. Because the body's water absorption rate of 1.5 quarts per hour is less than the body's 2 quarts per hour sweat rate, dehydration results. This happens because you cannot drink enough water to keep up with your sweat losses.
If you drink only when you are thirsty, you are dehydrated already. Thirst is not a good guide for when to drink water. In fact, in hot and humid conditions, you may be so dehydrated by the time you become thirsty that you will have trouble catching up with your fluid losses. One guideline regarding your water intake is to monitor your urine. You are getting enough water if you produce clear urine at least five times a day. Cloudy or dark urine, or urinating less than five times a day, means you should drink more.
In the Gulf War, American armed forces followed the practice of the Israeli army: drinking a minimum of 1 quart of fluid per hour. This tactic resulted in zero deaths from heat illness. In contrast, during the Six Day War of 1967, more than 20,000 Egyptian soldiers died, with no visible wounds, most likely from dehydration and heat illness because they were restricted to 3 quarts daily.
While working in hot weather, drink 8 ounces of water every 20 minutes (about 24 ounces, or three-quarters of a quart, per hour). Generally, 16 ounces is the most a person can comfortably drink at once. You cannot "catch up" by drinking extra water later because only about 1 quart of water per hour can pass out of the stomach. Therefore, if possible, workers should begin drinking water before they start work.
Cool water (50°F) is easier for the stomach to absorb than warm water, and a little flavoring may make the water more tasty. The best fluids are those that leave the stomach fast and contain little sodium and some sugar (less than 8 percent). You should avoid coffee and tea because they contain caffeine, which is a diuretic that increases water loss through urination. Alcoholic beverages also dehydrate by increasing urination. Soda pop contains about 10 percent sugar and, therefore, your body does not absorb it as well as water or commercial sports drinks. The sugar content of fruit juices ranges from 11 to 18 percent and has an even longer absorption time. Commercial sports drinks contain about 5 to 8 percent sugar.
Electrolyte loss
Sweat and urine contain potassium and sodium, which are essential electrolytes that control the movement of water in and out of the body's cells. Many everyday foods contain these electrolytes. Bananas and nuts are rich with potassium, and most American diets have up to 10 times as much sodium as the body needs. Getting enough salt is rarely a problem in the typical American diet. In fact, most Americans consume an excessive amount of sodium, averaging 5 to 10 grams of sodium per day, although we probably require only 1 to 3 grams. Therefore, sodium loss is seldom a problem, unless a person is sweating profusely for long periods and drinking large amounts of water.
Commercial sports drinks can be useful if you are participating in vigorous physical activity for longer than 1 hour (some experts say longer than 4 hours). Most of the time, however, people merely require water to remain hydrated. The truth is that excessive sodium can draw water out of the body cells, accentuating the dehydration. In addition, drinking large amounts of water (more than 1 quart an hour) can cause water intoxication, a condition that flushes electrolytes from the body. Frequent urination and behavior changes (irrationality, combativeness, coma, seizures, etc.) are signs of water intoxication.
Effects of humidity
Sweat can only cool the body if it evaporates. In dry air, you will not notice sweat evaporating. However, sweat cannot evaporate in high-humidity conditions; it just drips off the skin. At about 70-percent humidity, sweating is ineffective in cooling the body.
Because humidity can significantly reduce evaporative cooling, a highly humid but mildly warm day can be more stressful than a hot, dry one. Therefore, the higher the humidity, the lower the temperature at which heat risk begins, especially for those who are generating heat with vigorous work.
Who is at risk?
Everyone is susceptible to heat illness if environmental conditions overwhelm the body's temperature-regulating mechanisms. Heat waves can set the stage for a rash of heat-stroke victims. For example, during the 1995 summer heat wave in Chicago, the death toll reached 590.
People who are obese, chronically ill or alcoholics have an increased risk. The elderly are at higher risk because of impaired cardiac output and decreased ability to sweat. Infants and young children also are susceptible to heat stroke, as well.
The fluid loss and dehydration resulting from physical activity puts outdoor laborers at particular risk. Certain medications predispose individuals to heat stroke, such as drugs that alter sweat production (antihistamines, antipsychotics, antidepressants) or interfere with thermoregulation.
Heat illnesses
Several disorders exist along the spectrum of heat illnesses. Heat cramps, heat exhaustion and heat stroke are on the more serious side of the scale, whereas heat syncope, heat edema and prickly heat are less serious. Only heat stroke is life-threatening. Untreated heat-stroke victims always die.
* Heat cramps are painful muscular spasms that occur suddenly. They usually involve the muscles in the back of the leg or the abdominal muscles. They tend to occur immediately after exertion and are caused by salt depletion. Victims may be drinking water without adequate salt content. However, some experts disagree because the typical American diet is heavy with salt.
* Heat exhaustion is characterized by heavy perspiration with normal or slightly above-normal body temperatures. A depletion of water or salt (or both) causes this condition. Some experts believe severe dehydration is a better term because it happens to workers who do not drink enough fluids while working in hot environments. Symptoms include severe thirst, fatigue, headache, nausea, vomiting and diarrhea. The affected person often mistakenly believes he or she has the flu. Uncontrolled heat exhaustion can evolve into heat stroke.
* Heat stroke is classified in two ways: classic and exertional. Classic heat stroke, also known as the "slow cooker," may take days to develop. This condition is prevalent during summer heat waves and typically affects poor, elderly, chronically ill, alcoholic or obese persons. Because the elderly often have medical problems, heat stroke exacerbates the problem, and more than 50 percent of elderly heat-stroke victims die, even with medical care. Death results from a combination of a hot environment and dehydration. Exertional heat stroke also is more common in the summer. You see it frequently in athletes, laborers and military personnel who sweat profusely. Known as the "fast cooker," this condition affects healthy, active individuals who strenuously work or play in a warm environment. Exertional heat-stroke victims usually are sweating when stricken, while the classic victims are not sweating. Its rapid onset does not allow enough time for severe dehydration to occur.
Because uncontrolled heat exhaustion can evolve into heat stroke, you should know how to tell the difference between them. If the victim feels extremely hot when touched, suspect heat stroke. Another mark of heat stroke is that the victim's mental status (behavior) changes drastically, ranging from being slightly confused and disoriented to falling into a coma. In between these conditions, victims usually become irrational, agitated or even aggressive and may have seizures. In severe cases, the victim can go into a coma in less than 1 hour. The longer a coma lasts, the lower the chance for survival, so rescuers must be quick.
A third way of distinguishing heat stroke from heat exhaustion is by rectal temperature. Obviously, this is not very practical because conscious heat-stroke victims may not cooperate. Taking a rectal temperature can be embarrassing to both victim and rescuer. Moreover, rectal thermometers are seldom available, and the whole procedure of finding the appropriate thermometer and then using it wastes time and distracts from important emergency care. In most cases, an ambulance arrives within 10 to 20 minutes.
* Heat syncope, in which a person becomes dizzy or faints after exposure to high temperatures, is a self-limiting condition. Victims should lie down in a cool place when it occurs. Victims who are not nauseated can drink water.
* Heat edema, which is also a self-limiting condition, causes ankles and feet to swell from heat exposure. It is more common in women unacclimated to a hot climate. It is related to salt and water retention and tends to disappear after acclimation. Wearing support stockings and elevating the legs often helps reduce swelling.
* Prickly heat, also known as a heat rash, is an itchy rash that develops on skin that is wet from sweating. Dry and cool the skin.
Cooling methods
Sometimes the only way to stop possible damage is to cool the victim as quickly as possible. However, it is important to pay attention to both the cooling methods and cautions.
* Ice baths cool a victim quickly but require a great deal of ice (at least 80 pounds) to be effective. Needing a big enough tub also limits this method. Cool-water baths (less than 60°F) can be successful if you stir the water to prevent a warm layer from forming around the body. This is the most effective method in highly humid conditions (greater than 75-percent humidity).
* Spraying the victim with water combined with fanning is another method for cooling the body. The water droplets act as artificial sweat and cool the body through evaporation. However, this method is not effective in high humidity (greater than 75 percent).
* Ice bags wrapped in wet towels and placed against the large veins in the groin, armpits and sides of the neck also cool the body, though not nearly as quickly as immersion.
Cautions to remember when employing any cooling method include:
* Do not delay the onset of cooling while waiting for an ambulance. Doing so increases the risk of tissue damage and prolonged hospitalization.
* Stop cooling when the victim's mental status improves to avoid hypothermia.
* Do not use rubbing alcohol to cool the skin. It can be absorbed into the blood, causing alcohol poisoning. Its vapors are a potential fire hazard.
* Do not use aspirin or acetaminophen. They are not effective because the brain's control-center temperature is not elevated as it is with fever caused by diseases.
Adjusting to heat
Most heat illnesses occur during the first days of working in the heat. Therefore, acclimation (adjusting to the heat) is the main preventive measure. To better handle the heat, the body adjusts by decreasing the salt content in sweat and increasing the sweating rate. Year-round exercise can help workers prepare for hot weather. Such activity raises the body's core temperature so it becomes accustomed to heat. Full acclimation, however, requires exercise in hot weather. You can do this by exercising a minimum of 60 to 90 minutes in the heat each day for 1 to 2 weeks.
The acclimated heart pumps more blood with each stroke than a heart unused to working in the heat. The acclimated body also begins sweating earlier and doubles the amount of sweat per hour, from 1.5 quarts to 3 quarts or more.
When new workers are exposed to hot weather, team them with veterans of the heat who know how much water to drink. Heat illnesses are avoidable. With knowledge, preparation, fluid replacement and prompt emergency care, heat casualties need not be a factor for those working in warm weather.
Dr. Alton Thygerson is a professor of health science at Brigham Young University, Provo, Utah. He also serves as the technical consultant for the National Safety Council's First Aid Institute.
Contemporary world politics make it necessary for nations to integrate into international unions in the interest of their own national security and economy. In these international unions, which are usually based upon geographic location, such factors as natural resources, trading blocs, and even cultural values play an important role. Many neighboring countries combine their resources under the auspices of such organizations, create defensive alliances, and cooperate on a wide array of issues. The goal of such unions is to preserve peace, control the arms race, resolve disputes through diplomacy, promote socioeconomic development, and protect fundamental human rights and democracy. At the present time, NATO, the OSCE, the EU, NAFTA, OPEC, ASEAN, the G-8, the D-8, and APEC are the foremost international political, military, and economic unions.
These institutions are subject to organizational reforms because of new members or a widening of scope. All of these organizations, formed in the aftermath of the Second World War, have contributed to creating stability and order in the world and have played a major role in global socioeconomic development. Member nations protect their economic and military interests, and also acquire a stronger regional and international position. Even the developed world perceives the necessity of such partnerships. The creation of free trade zones, regional trade agreements, abolished customs controls, and even a common currency (as in the EU) safeguard the future of member states. Defensive pacts enable member states to reduce military expenditures and to divert those resources to cultural and educational fields.
A similar organization will provide considerable benefits to Muslim nations. For those that are desperate for technological as well as economic development, the foremost step toward stability is the creation of a central organization or, in other words, a unified Islamic world under the auspices of the Islamic Union.
Economic Development and Increasing Prosperity
Economic cooperation is necessary on two counts: stability and development. Muslim nations must bring stability and solidity to their economies. Developing industries and making the required investments is vital, as is the need for a comprehensive development plan and the simultaneous development of education, economy, culture, science, and technology. While various sectors are developed technologically, the labor force's educational levels and standards must be raised accordingly. Society must be motivated to become more productive, and the resulting economic cooperation will play a major role in eradicating poverty, illiteracy, the unjust distribution of wealth, and other socioeconomic problems rampant in Muslim countries. This partnership can be formed only by the creation of free trade zones, customs unions, and common economic areas.
Most Muslim countries have geostrategic importance as well as rich natural resources (e.g., natural gas and crude oil). These resources and strategic opportunities, however, are not being used effectively. In the Islamic world, 86% of the population's living standards fall below $2,000, 76% under $1,000, and 67% under $500 per year. When the Islamic world's total resources are considered,(1) this is quite a paradox: Roughly half of the petrol consumed in the West is exported from the Islamic world, as is 40% of the world's agricultural production.(2) Many economists and strategists freely admit that the world economy depends upon the Islamic world's oil and gas exports, in particular those of the Persian Gulf.(3)
The Persian Gulf holds two-thirds of the planet's discovered crude oil reserves. Data obtained from research concludes that Saudi Arabia alone holds 25.4% of the world's oil reserves, or 262 billion barrels. A further 11% is found in Iraq, 9.6% in the UAE, 9.2% in Kuwait, 8.6% in Iran, and 13% in other OPEC member states. The rest is distributed across the remainder of the world.(4) Research commissioned by the U.S. Department of Energy shows that between 2000 and 2020, oil exports from the area will increase by 125%.(5) This means that the world will continue to meet most of its energy needs by imports from the Gulf region. Moreover, the Middle East has 40% of the global natural gas reserves; 35% of these reserves are in the Gulf region.(6) Algeria, Libya, and other North African countries have 3.7% of the world's reserves.
The Caucasus and Central Asia are also rich in oil, natural gas, and other natural resources. For instance, Kazakhstan has between 10 and 17.6 billion barrels of proven oil reserves, and its natural gas reserves are estimated at between 53 and 83 trillion cubic feet.
Turkmenistan has between 98 and 155 trillion cubic feet of natural gas reserves, making it the fourth largest producer.(7) Some other Muslim countries have valuable mineral resources. For instance, Uzbekistan and Kyrgyzstan are two of the world's leading gold producers. Turkey has one of the world's richest boron reserves, only recently discovered to be very important, and Tajikistan has the world's largest aluminum producing facilities.
These advantages will become more important in the twenty-first century, which some have already christened the "energy century." Energy is an essential element of modern society in terms of the military, industry, urbanization, and transport. Given that economic activity and manufacturing depend primarily upon energy, nations will do their best to achieve control over these energy resources. The Islamic world is not using its resources effectively, for many of its members lack the infrastructure and technology to increase the production and use their natural resources to develop their industries. Therefore, the resources' contributions to the country's economy are limited to export earnings. These countries do not have the means to process their own crude oil, use it in their industrial complexes, or to develop their industries. Worse still, some Muslim nations do not even have the necessary means to explore and research their natural resources or to discover and extract them. Explorations undertaken by foreign companies reveal that other Muslim nations have oil and gas reserves, but they cannot benefit from their resources.
Naturally, the ineffective use of natural resources is not the Islamic world's only economic problem. However, solving this problem can begin the process of solving many other problems. The economies of Muslim nations contain differences in structure and functioning. Some nations' economies depend upon mineral resources, such as the members of OPEC, while other nations' depend upon agriculture. These differences are also reflected, to some extent, in their social structures, such as the widely varying degrees of rural and urban populations. Developing complementary relationships and helping each other in their respective areas of expertise can turn these differences into a source of riches. All of this will be possible with the Islamic Union.
Joint ventures and project partnerships will be an important step in the right direction, for they will enable countries to benefit from one another's experiences and the income earned from investment projects will benefit all of the participating countries. Such mutual financial support is compatible with Islamic morality, for helping the needy and having a sense of social responsibility are important characteristics that Muslims strive to acquire. Many verses in the Qur’an remind Muslims to watch over the needy.
Society's internal cohesion must be extended to international relations. As international cooperation within a partnership cannot be one-sided, employment and income levels will rise in both countries. For example, one country will produce oil and another one will process it, and agriculturally dependent countries will be able to import the food they need from agriculturally developed countries. A manpower-poor country’s need will be met by another Islamic country, while rich countries will be able to invest in and help out a manpower-rich country that does not have enough jobs for its people. This will be to the benefit of both. Sharing know-how and experience will increase prosperity, and all Muslims will benefit from technological developments.
Joint ventures that pool the Islamic world's opportunities and means will enable Muslims to produce high-tech products. The Islamic common market will enable Muslim-made products to be marketed in other Muslim countries without the hindrance of customs, quotas, and other cross-border obstacles. The marketplace will grow, the market share and exports of all Muslim nations will rise, industrialization will speed up, and economic development will bring progress in technology. The living standards and wealth of Muslim nations will increase, and their existing inequalities will disappear. Some free trade agreements are already in place between countries in the Gulf, the Pacific Rim, and North Africa. Trade agreements signed by Turkey are already operational in the Islamic world. Bilateral cooperation exists in some regions; however, its scope must be widened. Such cooperation will safeguard the rights and interests of all Muslim nations and lead to all of them becoming developed—a result from which all of them will derive a far greater benefit than if they do not cooperate with each other.
All of these can be realized only under a central authority's leadership and coordination. Achieving this will be possible if Muslim nations adopt the Qur'an's values and the Prophet's (May God bless him and grant him peace) Sunnah, or, in other words, if they adopt Islamic culture. The Islamic Union must lead the way to this cultural awakening, as well as the resulting political and economic cooperation.
Mutual cooperation among Muslims, part of the Islamic code, must be adhered to by all Muslims, for God commands people to refrain from avarice and to guard the needy and support one another. In fact, destitute people have a due share of the believers' wealth (Qur'an, 51:19).
Our Lord also reveals that believers are one another's guardians (Qur'an, 9:71). The word "guardian" conveys such meanings as friend, helper, mentor, and protector. It also expresses the importance of cooperation and solidarity between Muslim nations. The cooperation that will arise from this fraternal awareness between Muslim nations will bring prosperity and wealth to Muslims and eradicate poverty, an important problem of the Islamic world. Societies that follow the Qur'an's values will not experience famine, destitution, and poverty. Muslims will develop their nations by following rational and long-term policies, establishing good relations with other nations and people, valuing trade and development, and learning from other cultures' experiences. This was so in history and, God willing, under the Islamic Union's leadership it will be so once again.
1- Demetrios Yiokaris, Islamic League Study Guide-1997, United Nations: Study Guides. Online at: www.vaxxine.com/cowac/islmclg1.htm.
2- “Islamic Countries have the resources to match the west, scientist”, ArabicNews.com, 28 May 2000. Online at: www.arabicnews.com/ansub/Daily/Day/000628/2000062848.html.
3- Anthony H. Cordesman and Arleigh A. Burke, “The Gulf and Transition: Executive Summary and Major Policy Recommendations” (October 30, 2000).
4- Anthony H. Cordesman and Arleigh A. Burke, “The US Military and the Evolving Challenges in the Middle East” (March 9, 2002), 3.
5- Anthony H. Cordesman and Arleigh A. Burke, “The US Military and the Evolving Challenges in the Middle East” (March 9, 2002), 3.
6- Anthony H. Cordesman and Arleigh A. Burke, “The US Military and the Evolving Challenges in the Middle East” (March 9, 2002), 4.
7- Jim Nichol, “Central Asia’s New States: Political Developments and Implications for U.S. Interests,” CRS (Congressional Research Service) Issue Brief for Congress (June 13, 2003). Online at: www.ncseonline.org/NLE/CRS/abstract.cfm?NLEid=16833. | <urn:uuid:294ae327-cc7d-4b2a-962f-3f40d8a01c9b> | CC-MAIN-2013-20 | http://harunyahya.com/en/Makaleler/4324/How-Islamic-Union-will-affect-the-economic-development | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.937494 | 2,451 | 2.78125 | 3 |
In the early 1900s, a dispute arose over who controlled Greenland—Norway or Denmark. The case was submitted to the Permanent Court of International Justice in 1933. The court ruled in Denmark’s favor.
After WWII, the United States developed a geopolitical interest in Greenland. In 1946, it offered to buy the country from Denmark for $100 million. Denmark refused to sell. It did, however, allow the US to reopen Thule Air Base in 1950. From 1951 to 1953, the base was greatly expanded as part of a NATO Cold War defense strategy. It is still the US Air Force's northernmost base, located inside the Arctic Circle.
Though Xerxes did not found the Achaemenid Persian Empire, he ruled it at its greatest size, and made it the global force that it was at the time. His failed invasion of Greece has secured him a legendary place in not just Asian, but also Western culture.
If once a man indulges himself in murder, very soon he comes to think little of robbing; and from robbing he next comes to drinking and Sabbath-breaking, and from that to incivility and procrastination.
—Thomas De Quincey (1785-1859)
A tritone is a musical interval that spans three whole tones. This interval, the gap between two notes played in succession or simultaneously, was branded Diabolus in Musica, or the Devil’s Interval, by medieval musicians.
One historian said, on the tritone: “It apparently was the sound used to call up the beast. There is something very sexual about the tritone. In the Middle Ages when people were ignorant and scared, when they heard something like that and felt that reaction in their body they thought ‘uh oh, here come the Devil’.”
The Devil’s Interval came back into vogue under Wagner, of all people, who used it in his operas. Since then, the tritone has been used for everything from AC/DC to The Simpsons’ theme song.
The first light portrait and first human portrait ever taken, from October or November 1839. It is a self-portrait by Robert Cornelius.
A caricature of Europe right before WWI.
Around 300 BCE, the Maya began adopting a hierarchical system of government with rule by nobles and kings. This civilization developed into highly structured kingdoms during the Classic Period, around 200-900 CE. Their society consisted of many independent states, each with a rural farming community and large urban sites built around ceremonial centers. It started to decline around 900 CE when - for reasons which are still debated - the southern Maya abandoned their cities. When the northern Maya were integrated into the Toltec society by 1200 CE, the Maya civilization finally came to a close, although some peripheral centers continued to thrive until the Spanish Conquest in the early sixteenth century. Even today, many in Guatemala and Mexico identify first as Maya and second as their nationality.
Fort Sumter, in Charleston, South Carolina, at the time of the American Civil War.
In 98 AD, the Roman historian Tacitus wrote a detailed description of the Fenni, a people to the north. This is probably the earliest written reference to the Finnish people. According to him, these poor, savage Fenni lived somewhere in the northeast Baltic region, which at the time was inhabited by many other peoples, and the description also fits the Sami, another group still living near the Arctic Circle today. Given the name’s closeness to the modern Finns, historians think it was probably them, though they can never be certain exactly who Tacitus was referring to. Welcome to history class, guys!
In the mid-1950s, Sammy Davis Jr was involved with Kim Novak, who was a valuable star under contract to Columbia Studios.
The head of the studio, Harry Cohn, called one of the mob bosses. He paid the mob to threaten Sammy into ending the affair.
Great Britain finished repaying the United States’ lend-lease aid from World War II in 2006.
August 12, 1944: a band of battle-hardened nurses take a break to get their picture taken in a field close to the front lines in France.
Successor of the unfortunate Pope Formosus, Pope Boniface VI joins the league of forgotten Popes. Very little is known about him, and what is known, he probably wishes we’d forget. Pope for just 15 days, Boniface died from gout. This nasty disease comes from eating too much red meat and other rich foods, which causes a build-up of uric acid (gross) leading to swollen joints and purplish skin. Two years after his death, John IX declared Boniface VI’s election null and void, but he is still included in the official list of Popes.
This is the remarkable Lady Malcolm Douglas-Hamilton. In 1940 she was Natalie Latham, a former debutante and fixture at New York society balls, now 30, twice divorced with two children and still so beautiful that Vogue printed items about her.
All this changed when German U-boats began their devastating attacks on the North Atlantic convoys supplying Britain. Although America had not entered the war, Natalie Latham decided to do something to help, and established Bundles for Britain, which began as little more than a “knitting bee” — albeit one convened by Natalie Latham and some of the grandest dames of the New York social scene. The group quickly expanded to over 1.5 million volunteers, with branches all over the country. Bundles for Britain started shipping over not just clothing but also blankets, children’s cots, ambulances, X-ray machines, hospital beds, oxygen tents, surgical instruments, blood transfusion kits, and tinned food. Every item was labelled “From your American friends.”
In Britain, she secured the support of Winston Churchill’s wife, Clementine, and of Janet Murrow, wife of the CBS reporter Ed Murrow, whose live radio broadcasts to America during the Blitz began with the words: “This is London.” When Bundles for Britain held a raffle, Queen Elizabeth donated items, including a piece of shrapnel that had hit Buckingham Palace. King George VI later appointed Natalie Latham an honorary CBE; she was the first non-British woman thus honored.
After her fourth husband’s death in 1951, she arrived in London to promote Common Cause, an anti-communist organization she had founded, and met the third son of the 13th Duke of Hamilton, Lord Malcolm Douglas-Hamilton, MP for Inverness-shire and an ardent anti-communist. They eventually moved to the US, and she died on January 14, 2013. | <urn:uuid:a1f038cb-890b-4c07-8d0a-10c8339ec271> | CC-MAIN-2013-20 | http://historical-nonfiction.tumblr.com/page/5 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.966188 | 1,431 | 2.671875 | 3 |
Business Language Learning
As part of International Education Week 2010, APEC has expanded on several themes of the seminar on "Language Education: An Essential for a Global Economy," to provide a guide for students and instructors interested in the critical importance of business language for strengthening business relations in a global context. These themes include Business in the 21st Century; Cross Cultural Awareness for 21st Century Business; Language for 21st Century Business; Business Language Learning; and Business Language Policy.
In Business Language Instruction, we learn that different economies use different methodologies by which to teach and learn the subject of business. We find that conflict may arise when these differing methodologies come together in a single classroom.
Another application of advanced communications technologies may be found in the classroom, where traditional textbooks may be supplemented with electronic media such as video clips, as well as live information from Internet newsfeeds, essentially making textbook materials come alive. Students today may not learn history, geography, and science as these subjects were taught a few years ago. They may actually view and experience events via the Internet as if they were present during the moment in which they took place. Video conferencing in the classroom may have other applications, such as providing students access to language teachers in foreign countries and to subject matter experts thousands of miles away, who can appear in the classroom and guest lecture as if they were actually there. These powerful new communications technologies have enhanced business language instruction in schools and universities, as evidenced by the scenario presented below.
- Technology provides web-based content to expand, complement, and supplement textbooks and teacher instruction.
- Online educational materials blend face-to-face learning with digital teaching and curricula.
- Technology such as virtual classroom fosters peer-to-peer and instructor-peer relationship building, collaboration, and social networking.
- When designing lesson plans for international students, educators must consider how cultural values affect the way students respond to specific assignments.
- Technology contributes to a green environment by saving paper and reducing travel.
In the fictional scenario below, teaching and learning methodologies from different economies clash as they are brought together into a single classroom, made possible only by advancements in telecommunications technologies.
A prestigious university located in collectivist Economy A invited a Marketing professor from a renowned university in individualistic Economy B to teach a year-long course on the Fundamentals of Marketing to first-year business students. The professor had recently published a book on McBurger, the hamburger chain, and its success in Economy A. The students in Economy A viewed his book as a premier marketing book in the field of international business. Conducted virtually over Internet video stream, the course was among the first at the university to integrate traditional methods of teaching with new technologies. The professor would present a traditional lecture from the university's video conferencing room in Economy B, and the students in Economy A would view the lecture and participate in discussion as if the professor were in their classroom. Students would submit all assignments and exams to the professor through a "digital drop box," and the professor would return graded materials to students via this medium. Using advanced technology in the classroom allowed students to learn from a renowned professor while enrolling in a "green course," one in which the professor did not need to travel to the economy and no paper would be used for assignments.
To prepare for the course, the professor chose various marketing, advertising, and strategy cases from around the world. On the first day of class, he presented a case study on Boca Rola and its entry into Economy C. He gave the students 30 minutes to read the case study, and then encouraged the students to share their views about: (1) Boca Rola’s strategy to enter the market in Economy C, (2) the barriers Boca Rola faced in entering the market, (3) perceptions of foreign products previously unavailable in a particular economy, and (4) consumers' reaction to the new product. He found the students reluctant to share their individual views in class. Thus, he presented his own views from the perspective of an outsider to Economy C, and shared his views about how Boca Rola’s business culture might differ from the culture of Economy C in which it was operating. At the end of class, the professor gave the students a list of questions about the case study. He asked the students to form small groups of 3-4 students and discuss the answers to the questions. After they discussed the questions, he asked each team to submit a 5-6 page summary of the responses in three days. Additionally, he assigned another case study for the students to read – one that focused on a large multinational company’s entry into the beauty care segment in Economy D – for future discussion.
When the professor reviewed the students’ responses to the Boca Rola case study, he discovered that the 20 students had submitted 5 separate sets of case study responses, as required. However, each group provided the same responses to the same questions, with no variation. He knew that this could not be a blatant instance of cheating. When the class reconvened, he asked the students why they had turned in identical sets of answers. The students looked surprised, believing that they had followed his instructions, but had perhaps misinterpreted them. Finally, one student raised his hand and stated that the class had formed groups of 3-4 students, but that each group tackled one question, and then shared the answers with the other groups. The students believed that it was not time efficient to discuss each question. Rather, they decided that each group would respond to just one question, and then share the response with the other groups, who would do the same. The professor smiled in exasperation, and, frustrated by his inability to engage the students in an open discussion, began discussing the beauty company’s entry into Economy D.
Points to Consider
- How has technology enhanced international educational opportunities for both students and instructors? Other than the examples cited, what other ways can technology facilitate international educational opportunities?
- To what extent did the professor understand the students’ motivation to learn, the context in which they learn, and their willingness to experiment and use different approaches to demonstrate what they can do and what they know?
- Why is open classroom discussion widely popular in Economy B as a strategy for introducing opposing views and encouraging critical thinking?
- To what extent can strategies such as lesson study encourage students in Economy A to demonstrate problem solving skills, critical thinking, and creativity?
- What could the professor do to model how each group could engage in separate discussions to understand the various perceptions about Boca Rola’s strategy to enter the market in Economy C?
- Individualistic cultures are those cultures in which the opinion of the individual is greatly sought after and deeply valued, even though it may differ from the views of the group. These cultures believe that it is a variety of individual opinions that produce the best solutions to problems and that promote success, whether in social relationships or in the workplace.
- Collectivist cultures, on the other hand, value group consensus and harmony. These cultures believe that an environment conducive for business and personal success can only be created when members of the group align in sync with one another. Members of groups will first debate the merits of a question among themselves, and then choose the opinion that they deem most valuable before presenting it to a higher authority.
- The Professor from Economy B was used to receiving individual responses to his case discussion questions, responses that varied greatly from one another. Although not all responses he received were correct, he enjoyed reading the individual opinions present in them before discussing the correct answers with the class during the following lecture. Economy A students were, however, from a collectivist culture and valued sharing their responses with their group first before reaching a consensus on a particular answer choice.
- The professor noticed that, although he had received only one response per question, it was more or less correct, although there was not a way for him to ascertain which of his students had provided the response, how the learning had occurred, and what the viewpoints of those who disagreed might be.
- Teaching Tips for IEW 2010 provided by TESOL
- Teaching Tips for IEW 2009 submitted by teachers throughout the Asia-Pacific region
- Videos from the APEC-RELC International Language Seminar presentation "Creating Prosperity: Using the Internet to Revolutionize Language Learning"
- New paths of communication through:
- Technology providing access to content beyond books
- Video from the APEC-RELC International Language Seminar presentation "Changes in Our Field: Where are We Going?"
- E-Language Learning for Students - a collection of online language learning resources from various APEC members
- Related Tips for Teaching 21st Century Workplace Skills
More content from International Education Week 2010 | <urn:uuid:a3ee253d-4ba5-4daa-a3ab-576294344497> | CC-MAIN-2013-20 | http://hrd.apec.org/index.php/Business_Language_Learning | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.964194 | 1,852 | 2.90625 | 3 |
This is an old lecture by linguist and political activist Noam Chomsky (professor at MIT) given at UC Berkeley in 2003. For that evening in the Charles M. and Martha Hitchcock Lecture series, Chomsky examined biolinguistics - the study of relations between physiology and speech.
A second video of Chomsky is featured below, which is the second half of this talk. Fair warning - this is not easy material - Chomsky is speaking to people who are well-versed in this field.
Chomsky has been one of the most influential scholars over the last three or four decades - between 1980 and 1992, he was cited as a source more than any other living scholar, and ranked eighth overall.
As background for this lecture, Wikipedia offers a good summary of his influence in linguistics (below the video).
Chomskyan Linguistics

Chomskyan linguistics, beginning with his Syntactic Structures, a distillation of his Logical Structure of Linguistic Theory (1955/75), challenges structural linguistics and introduces transformational grammar. This approach takes utterances (sequences of words) to have a syntax characterized by a formal grammar; in particular, a context-free grammar extended with transformational rules.
Perhaps his most influential and time-tested contribution to the field is the claim that modeling knowledge of language using a formal grammar accounts for the "productivity" or "creativity" of language. In other words, a formal grammar of a language can explain the ability of a hearer-speaker to produce and interpret an infinite number of utterances, including novel ones, with a limited set of grammatical rules and a finite set of terms. He has always acknowledged his debt to Pāṇini for his modern notion of an explicit generative grammar, although it is also related to rationalist ideas of a priori knowledge.
It is a popular misconception that Chomsky proved that language is entirely innate and discovered a "universal grammar" (UG). In fact, Chomsky simply observed that while a human baby and a kitten are both capable of inductive reasoning, if they are exposed to exactly the same linguistic data, the human child will always acquire the ability to understand and produce language, while the kitten will never acquire either ability. Chomsky labeled whatever the relevant capacity the human has which the cat lacks the "language acquisition device" (LAD) and suggested that one of the tasks for linguistics should be to figure out what the LAD is and what constraints it puts on the range of possible human languages. The universal features that would result from these constraints are often termed "universal grammar" or UG.
The Principles and Parameters approach (P&P)—developed in his Pisa 1979 Lectures, later published as Lectures on Government and Binding (LGB)—makes strong claims regarding universal grammar: that the grammatical principles underlying languages are innate and fixed, and the differences among the world's languages can be characterized in terms of parameter settings in the brain (such as the pro-drop parameter, which indicates whether an explicit subject is always required, as in English, or can be optionally dropped, as in Spanish), which are often likened to switches. (Hence the term principles and parameters, often given to this approach.) In this view, a child learning a language need only acquire the necessary lexical items (words, grammatical morphemes, and idioms), and determine the appropriate parameter settings, which can be done based on a few key examples.
Proponents of this view argue that the pace at which children learn languages is inexplicably rapid, unless children have an innate ability to learn languages. The similar steps followed by children all across the world when learning languages, and the fact that children make certain characteristic errors as they learn their first language, whereas other seemingly logical kinds of errors never occur (and, according to Chomsky, should be attested if a purely general, rather than language-specific, learning mechanism were being employed), are also pointed to as motivation for innateness.
More recently, in his Minimalist Program (1995), while retaining the core concept of "principles and parameters," Chomsky attempts a major overhaul of the linguistic machinery involved in the LGB model, stripping from it all but the barest necessary elements, while advocating a general approach to the architecture of the human language faculty that emphasizes principles of economy and optimal design, reverting to a derivational approach to generation, in contrast with the largely representational approach of classic P&P.
Chomsky's ideas have had a strong influence on researchers of language acquisition in children, though many researchers in this area such as Elizabeth Bates and Michael Tomasello argue very strongly against Chomsky's theories, and instead advocate emergentist or connectionist theories, explaining language with a number of general processing mechanisms in the brain that interact with the extensive and complex social environment in which language is used and learned.
His best-known work in phonology is The Sound Pattern of English (1968), written with Morris Halle (and often known simply as SPE). This work has had great significance for the development of the field. While phonological theory has since moved beyond "SPE phonology" in many important respects, the SPE system is considered the precursor of some of the most influential phonological theories today, including autosegmental phonology, lexical phonology and optimality theory. Chomsky no longer publishes on phonology.
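As a concrete aside on the claim above that a finite set of rules and terms can generate an infinite number of utterances: here is a minimal sketch in Python (my own toy illustration, not anything from the lecture or the Wikipedia excerpt; the grammar and vocabulary are invented) of a context-free grammar and a tiny generator that randomly expands it.

```python
import random

# Toy context-free grammar: each nonterminal maps to a list of
# alternative expansions; any symbol not in the table is a word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["lecture"], ["idea"]],
    "V":   [["discusses"], ["challenges"]],
    "P":   [["about"], ["near"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a flat list of words."""
    if symbol not in GRAMMAR:   # terminal: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part))
    return words

# Because PP can nest inside NP without limit ("a lecture about the
# idea near a linguist..."), this small rule table and vocabulary
# derive an unbounded set of distinct sentences.
for _ in range(3):
    print(" ".join(generate()))
```

The sketch only illustrates the sense of "productivity" at issue — a finite rule table plus recursion yields unbounded output. It says nothing about transformational rules, parameters, or the language acquisition device discussed above.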
The most illustrious Czars and mighty Princes, John and Peter Alexewitz, my most gracious Lords, having in their Wise Council of State resolved to send a splendid Embassy, on some important affairs, to the Great Bogdaichan, or Sovereign of the famous Kingdom of Kitai, by us Europeans commonly called China: This obliged me with a welcoming opportunity of traveling through part of the famous, but hitherto unknown, Siberian and Kitaian Countries, (never before visited by any German) and informing my self by credible witnesses of the remainder of those Lands, as well as obtaining a certain knowledge of several things with which the World hath not been hitherto acquainted.
Evert Ysbrants Ides was the first educated European to travel in Siberia and gather firsthand information about the collection of fossil ivory. Ides' opportunity to travel across Siberia was the direct result of the satisfactory settlement of a small war on the Chinese border.
The speed with which the first wave of Russian fur traders, called promyshleniki, crossed Siberia created serious supply problems for them. Men carrying small loads of goods and supplies could easily cross Siberia using a network of rivers and short portages by boat in the summer and sled in the winter. Bringing large loads of bulky goods, specifically enough grain to feed a small settlement, was a much more difficult and expensive proposition. It could take three or four years for a shipment of grain to reach a remote place like Yakutsk and, by then, the majority of the load would be inedible. Because of this, the promyshleniki were relieved and excited when they began to hear rumors of the Amur, a valley in the south filled with grain, cattle, and silver.
The first expedition to reach the Amur was a group of 132 cossacks under Vassily Poiarkov in 1643-46. The Amur natives, whom the Russians called Daurians, greeted Poiarkov with hospitality but the relationship turned sour as the Russians resorted to kidnapping, plunder, and, it is reputed, cannibalism to get what they wanted. This kind of behavior went over with the locals about as well as you might expect. Poiarkov had to fight his way out of the country and lost half of his command to native attacks and starvation. However, because he confirmed that the Amur was a land of cattle and grain (he didn't find any silver), the expedition was proclaimed a success. Several other Russians tried to map out a better route into the Amur valley. In 1651, Yerofey Khabarov fought his way down the river with even more brutality than Poiarkov had and built a fort near the site of the city that now bears his name. This is when things began to go to hell.
Khabarov knew, but chose to ignore, that the Amur was within the Chinese sphere of influence. What he might not have known was that it was also part of the homeland of the new Qing dynasty of China. The only reason he was able to occupy as much land as he did was that most of the armed Manchu horsemen were still busy conquering China. A year after Khabarov built Achansk, a Chinese military expedition arrived to drive him out of the valley. This was the beginning of more than thirty years of seesawing occupation of the Amur country. By the early eighties, with most of China finally pacified, the Kangxi emperor was ready to deal with the Russians once and for all. Now it was the turn of Moscow to get alarmed.
Moscow, in the 1680s, was infected with a bad case of "who's in charge here?" In April 1682, Tsar Fedor III died at the tender age of twenty-one without leaving an heir. The succession fell to his brothers Ivan and Peter. The elder of the two, Ivan, was severely epileptic, nearly blind, and may have suffered from a variety of other problems (diagnosing the physical and mental health of historical figures is more of a parlor game than a science among historians). Peter was strong as an ox, but only ten years old. To further complicate matters, the two boys had different mothers and the two sets of in-laws formed powerful and antagonistic factions at court. Fedor's death was followed by a week of riot and rebellion (not all of which was related to the succession). When the dust cleared, Ivan and Peter had been declared co-tsars and their sister Sophia was the de facto regent ruling in their names.
Except for a few years during the reign of Catherine the Great, historians have not been kind to Sophia. She has been reduced to a cartoonish stereotype of a scheming woman (which is bad) who was finally put back in her place by a strong male (which is good). In fact, Sophia Alexeevna Romanov was an extraordinary woman. She was intelligent, well informed, and literate in three languages. She was comfortable giving orders and appearing in public at a time when most upper-class Russian women were kept in harem-like seclusion for their entire lives. During the seven years that she served as regent for the two tsars, Sophia had successes and failures no different than any other rulers’. For the advance of mammoth knowledge, her most important achievement was settling the Amur conflict.
Since the beginning of the century, the tsars had recognized the potential for Siberia to become a private trade route to China, but every attempt at making official contact with the Chinese court had failed due to cultural misunderstandings. Despite that, the Kangxi emperor wanted to open trade with the Russians and hoped that a show of strength would be enough to drive the promyshleniki and Cossacks out of the Amur valley. In 1684 a large and well supplied Chinese army arrived on the lower Amur and began to move west driving the Russians before them. At Albazin, on the northern bend of the Amur, the Russians attempted to make a stand, but were soon defeated. The Chinese allowed the survivors to retreat, razed their fort, and moved down river to their base of operations. When word of the defeat on the Amur reached Sophia and her advisors, they quickly dispatched an envoy to make peace with the Chinese.
This should have been the end of the crisis, but, before the envoy could arrive, the Siberian Russians returned to Albazin and built a new fort provoking the Chinese army to return and start a new siege. They were only saved by the arrival in Beijing of advance messengers from the embassy. The Kangxi emperor ordered his army to lift the siege and prepared his own diplomatic mission to meet the Russians. Further complications--and there are always further complications in diplomacy--delayed the meeting of the two missions until the summer of 1689. The negotiation took place at the Russian outpost of Nerchinsk on a tributary of the Amur almost 300 miles west of Albazin. Amid elaborate ceremonies by the official heads of the missions, the real negotiations were carried out in Latin by a Polish cavalry officer (for the Russians) and a French Jesuit (for the Chinese). The agreement, signed on August 27, the first formal treaty signed between China and a Western power, required the Russians to evacuate the entire Amur valley, but established formal trade through Nerchinsk.
Sophia did not get to celebrate the Treaty of Nerchinsk. At the same time that the negotiations were wrapping up in the East, Sophia's regency was coming to an abrupt and unanticipated end in Moscow. Sophia's position had been dramatically weakened by two disastrous campaigns in the Crimea and by her half brother Peter turning seventeen in June. Amid rumors that Sophia was planning to murder Peter and rule in her own name, supporters of the two Romanovs engaged in a month of dramatic maneuvers that resulted in Peter taking control and Sophia retiring to a convent. Peter's half brother Ivan stayed on as co-tsar until his natural death seven years later.
When word of the treaty reached Peter, he accepted the terms and began planning a trade mission to Beijing. Russia had a severe shortage of literate agents who were competent to make their way through foreign cultures, which explains the necessity of hiring Latin speaking Polish cavalry officers to conduct delicate diplomatic negotiations. For his first official trade mission to China, Peter hired a German, Dutch, or possibly Danish merchant named Evert Ysbrants Ides*. Ides had been in Russia since 1677, operating his own merchant house, first in Archangel and later in Moscow. In the spring of 1692, Ides left Moscow at the head of a 400 man caravan with instructions to exchange ratifications of the treaty, determine the best items for trade, feel out official attitudes toward the treaty, and request that a Chinese envoy be sent to Moscow.
The most direct route from Moscow to China is the same one that the Trans-Siberian Railway follows today, around the southern end of the Ural Mountains, across the steppe lands at the center of Eurasia, across Lake Baikal, and on to the Amur. Unfortunately, the steppe lands were controlled by Kirghiz nomads and unsafe for Russian merchants. For this reason, Ides' caravan had to take a much more roundabout path to Baikal that took them across the Urals on the same path as Ermak a century before, down the Irtysh River to its junction with the Ob, up the Ob and its tributary the Ket, to a portage into the Yenisei basin, and up the Angara River to Baikal. By October, the mission had only reached the way station of Makofskoi on the Ket portage. It was here that Ides had his encounter with fossil mammoths.
Amongst the hills, which are situate North-East of [Makofskoi], and not far from hence, the Mammuts Tongues and Legs are found; as they are also particularly on the Shores of the Rivers Jenize, Trugan [Lower Tunguska], Mongamsea [Taz], Lena, and near Jakutskoi [Yakutsk], even as far as the Frozen Sea. ... I had a Person with me to China, who had annually went out in search of these Bones; he told me, as a certain truth, that he and his Companions found the Head of one of these Animals, which was discovered by the fall of such a frozen piece of Earth. As soon as he opened it, he found the greatest part of the Flesh rotten, but it was not without difficulty that they broke out his Teeth, which were placed before his Mouth, as those of the Elephants are; they also took some Bones out of his head, and afterwards came to his Fore-foot, which they cut off, and, carried part of it to the City of Trugan [Turukhansk], the Circumference of it being as large as that of the wast of an ordinary Man. The Bones of the Head appeared somewhat red, as tho' they were tinctured with Blood.
This account by Ides is the first Western description of a frozen mammoth and the beginning of a scientific and popular fascination that hasn't ended over three hundred years later.
Locating the mammoth to which Ides' unnamed traveling companion referred is a little tricky. Makofskoi was, and still is, a small town on the western end of the portage between the Ob and Yenisei Rivers. Ides gave no indication of how far he meant when he said mammoth remains were found in the hills to the Northeast. My conclusion, based on Ides' phrase "not far from hence," is that the find must have been close to Makofskoi. The explorer Adolf Nordenskiold, who traveled along the Arctic coast in the late nineteenth century, thought, because the hunter took the mammoth's foot to Turukhansk, that the find must have been close to that place. Turukhansk is 450 miles north of Makofskoi, which is not "not far from hence." In Ides' day there were two major towns on the Yenisei where his companion might have sold the ivory, Turukhansk and Yeniseisk, which is only eighty miles from Makofskoi. That argues in Nordenskiold's favor. If the find was closer to Yeniseisk, the only reasons the hunter would have had for going all the way to Turukhansk would have been if Turukhansk was offering a better price for ivory or if he had other business there. Without more evidence there's no way to settle the matter. If we split the difference between Makofskoi and Turukhansk we arrive at the Stony Tunguska River. Maybe the site was blown up in 1908 by the Tunguska meteorite.
Ides goes on to report what the locals believed about the remains.
Concerning this Animal there are very different reports. The Heathens of Jakuti, Tungusi, and Ostiacki, say that they continually, or at least, by reason of the very hard Frosts, mostly live under ground, where they go backwards and forwards; to confirm which they tell us, That they have often seen the Earth heaved up when one of these Beasts was on the March, and after he was past, the place sink in, and thereby make a deep Pit. They further believe, that if this Animal comes so near to the surface of the frozen Earth as to smell, or discern the Air, he immediately dies, which they say is the reason that several of them are found dead, on the high Banks of the River, where they unawares came out of the Ground. This is the opinion of the Infidels concerning these Beasts, which are never seen.
But the old Siberian Russians affirm, that the Mammuth is very like the Elephant, with this only difference, that the Teeth of the former are firmer, and not so straight as those of the latter. They also are of Opinion, that there were Elephants in this Country before the Deluge, when this Climate was warmer, and that their drowned bodies floating on the Surface of the Water of that Flood, were at last wash'd and forced into Subterranean Cavities...
The description of the mammoth as a subterranean animal that dies on exposure to surface air is almost identical to that given by the Chinese writer Tung-fang So in the second century BC.
The three "heathen" tribes that Ides mentions are names given by the Russian conquerors and used to lump together all of the peoples of the Lower Irtysh, Ob, Yenisei, and Lena river basins. That is to say, he was ascribing the belief in the mammoth as a giant mole to most of the people of Western and Central Siberia. Later travelers ascribed different beliefs to many of these peoples. Still other travelers confirmed Ides' observations. When Ides traveled across Siberia, most of these peoples had been under Russian rule for a century, giving them plenty of time to have heard about the ideas of tribes with which they had had very little contact and to have learned the Biblical stories of Noah and Behemoth. Today, it is virtually impossible to sort out which tribes believed what before their contact with the Russians.
While Ides was the first educated European to travel in Siberia and report firsthand information on the collection of fossil ivory, he wouldn't be the last. Peter the Great's diplomacy, wars, economic needs, and personal curiosity would send a constant stream of educated Europeans into his Eastern realms. They in turn would send back a constant stream of information that would be eagerly consumed by a Europe that was looking at the world through an increasingly scientific lens.
Hmmm. I still seem to be having trouble with that "keep your blog posts under a thousand words" thing. Oh well...
* Ides' nationality and name have been the source of much confusion over the years. Accounts of his journey describe him variously as Dutch, German, and Danish. In the opening quote he implies that he considers himself to be German, but the first edition of his book was published in Dutch. The confusion comes from the fact that his parents were Dutch immigrants to Holstein, a German-speaking province that is the home of many cows and was then ruled by the King of Denmark. It's likely that Ides was fluent in both German and Dutch.
The possible spellings given for his first and middle names are even more varied than his nationality. Because his middle name is sometimes spelled Ysbrand, some writers have assumed that he and the mission's secretary, Adam Brand, were one person. Adding to that confusion was the fact that both of them published memoirs of the journey, which the same writers who thought they were the same person assumed were merely different editions of the same book. They weren't, it wasn't, and that's that. | <urn:uuid:53e8bab9-3a15-4496-a8db-400a8f03146a> | CC-MAIN-2013-20 | http://johnmckay.blogspot.com/2009/03/fragments-of-my-research-viii.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.984575 | 3,506 | 2.828125 | 3 |
It's normal for parents to disagree and argue from time to time. Parents might disagree about money, home chores, or how to spend time. They might disagree about big things — like important decisions they need to make for the family. They might even disagree about little things that don't seem important at all — like what's for dinner or what time someone gets home.
Sometimes parents can disagree with each other and still manage to talk about it in a calm way, where both people get a chance to listen and to talk. But many times when parents disagree, they argue. An argument is a fight using words.
Most kids worry when their parents argue. Loud voices and angry words parents might use can make kids feel scared, sad, or upset. Even arguments that use silence — like when parents act angry and don't talk to each other at all — can be upsetting for kids.
If the argument has anything to do with the kids, kids might think they have caused their parents to argue and fight. If kids think it's their fault, they might feel guilty or even more upset. But parents' behavior is never the fault of kids.
What Does It Mean When Parents Fight?
Kids often worry about what it means when parents fight. They might jump to conclusions and think arguments mean their parents don't love each other anymore. They might think it means their parents will get a divorce.
But parents' arguments usually don't mean that they don't love each other or that they're getting a divorce. Most of the time the arguments are just a way to let off steam when parents have a bad day or feel stressed out over other things. Most people lose their cool now and then.
Just like kids, when parents get upset they might cry, yell, or say things they don't really mean. Sometimes an argument might not mean anything except that one parent or both just lost their temper. Just like kids, parents might argue more if they're not feeling their best or are under a lot of stress from a job or other worries.
Kids usually feel upset when they see or hear parents arguing. It's hard to hear the yelling and the unkind words. Seeing parents upset and out of control can make kids feel unprotected and scared.
Kids might worry about one parent or the other during an argument. They might worry that one parent may feel especially sad or hurt because of being yelled at by the other parent. They might worry that one parent seems angry enough to lose control. They might worry that their parent might be angry with them, too, or that someone might get hurt.
Sometimes parents' arguments make kids cry or give them a stomachache. Worry from arguments can even make it hard for a kid to go to sleep or go to school.
What to Do When Parents Fight
It's important to remember that the parents are arguing or fighting, not the kids. So the best thing to do is to stay out of the argument and go somewhere else in the house to get away from the fighting or arguing. Go to your room, close the door, and find something else to do until it is over. It's not the kid's job to be a referee.
When Parents' Fighting Goes Too Far
When parents argue, there can be too much yelling and screaming, name calling, and too many unkind things said. Even though many parents may do this, it's never OK to treat people in your family with disrespect, use unkind words, or yell and scream at them.
Sometimes parents' fighting may go too far, and include pushing and shoving, throwing things, or hitting. These things are never OK. When parents' fights get physical in these ways, the parents need to learn to get their anger under control. They might need the help of another adult to do this.
Kids who live in families where the fighting goes too far can let someone know what's going on. Talking to other relatives, a teacher, a school counselor, or any adult you trust about the fighting can be important.
Sometimes parents who fight can get so out of control that they hurt each other, and sometimes kids can get hurt, too. If this happens, kids can let an adult know, so that the family can be helped and protected from fighting in a way that hurts people.
If fighting is out of control in a family, if people are getting hurt from fighting, or if people in the family are tired of too much fighting, there is help. Family counselors and therapists know how to help families work on problems, including fighting.
They can help by teaching family members to listen to each other and talk about feelings without yelling and screaming. Though it may take some work, time, and practice, people in families can always learn to get along better.
Is It OK for Parents to Argue Sometimes?
Having arguments once in a while can be healthy if it helps people get feelings out in the open instead of bottling them up inside. It's important for people in a family to be able to tell each other how they feel and what they think, even when they disagree. The good news about disagreeing is that afterward people usually understand each other better and feel closer.
Parents fight for different reasons. Maybe they had a bad day at work, or they're not feeling well, or they're really tired. Just like kids, when parents aren't feeling their best, they can get upset and might be more likely to argue. Most of the time, arguments are over quickly, parents apologize and make up, and everyone feels better again.
Happy, Healthy Families
No family is perfect. Even in the happiest home, problems pop up and people argue from time to time. Usually, the family members involved get what's bothering them out in the open and talk about it. Everyone feels better, and life can get back to normal.
Being part of a family means everyone pitches in and tries to make life better for each other. Arguments happen and that's OK, but with love, understanding, and some work, families can solve almost any problem. | <urn:uuid:12d027d4-f1ad-47d0-baaa-eea0edc16cc9> | CC-MAIN-2013-20 | http://kidshealth.org/PageManager.jsp?dn=RadyChildrensHospital&lic=102&cat_id=20068&article_set=22653&ps=304 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.971338 | 1,240 | 3.28125 | 3 |
This “The Best…” list is a companion to The Best Sites To Learn About The U.S. Financial Crisis. Those sites tried to explain how we got into this mess. The resources on this list share what is happening to us as a result. These sites try to give a picture of the recession’s effects throughout the world.
These sites, all relatively accessible to English Language Learners, are divided into three sections. The first contains narrative reports on what is occurring. The second offers interactive charts or graphs that show "the numbers." The third presents multimedia pieces giving a human face to the recession (of course, most of my students are experiencing that human face directly in their own lives).
Here are my picks for The Best Sites To Learn About The Recession:

NARRATIVE REPORTS:
Voice of America’s Special English has a report (with audio support for the text) titled Trying To Live With A Recession In The World’s Largest Economy.
Breaking News English has a lesson (again, with audio support for the text) called Huge U.S. Job Losses Spark Recession Fears.
ESL Podcast Blog has an engaging report on ways a recession affects society.
CBBC has a good report on the recession in the United Kingdom.
CHARTS & GRAPHS:
Where Does Your State Rank? is a map from CNN showing the recession’s effect across the United States.
Layoffs Pile-Up is a graph from the Wall Street Journal showing which economic sectors are experiencing the worst job losses.
USA Today has a very complete analysis of job loss and growth in the United States.
The National Conference of State Legislatures also has an interactive map on the effects of the recession in all fifty states.
These would require some teacher explanation, but are intriguing nevertheless. There are two infographics showing how the proposed economic stimulus would be used — one from the Washington Post and the other from Credit Loan. CNN has a new interactive on the compromise that the Senate and House just agreed to.
The Obamameter is a regularly updated visual representation of different aspects of the U.S. economy. It would be accessible to Intermediate English Language Learners with some explanation.
FinViz shows the stock market in a vivid color-code.
The Economy Tracker from CNN shows the latest economic data on a map, and combines that with personal stories of those affected.
The Geography Of A Recession comes from The New York Times and shows, in detail, unemployment rates throughout the United States.
Maplibs has a color-coded world map that shows international financial centers. The key is the color — if it’s shown in red then it’s down, if it’s shown in green then it’s up.
The Sacramento Bee has a scary map of unemployment in California.
Economic Reality Check is from CNN and provides short facts about different aspects of the recession.
The Sacramento Bee has just published an Income Gap Interactive Graphic. It’s based on Sacramento data, but I suspect the information is similar across the United States. It vividly, and in a way that’s accessible to English Language Learners, shows how long it takes for different people (by occupation, ethnicity, and educational background) to earn $100,000.
MSNBC has developed what they call an Adversity Index. It’s an animated map that “measures the economic health of 381 metro areas and all 50 states.” It’s pretty intriguing, though it would probably require some initial explanation before English Language Learners could fully decipher it. Right below the Adversity Index, you can also find a “Map: Recession-resistant areas” that highlights communities in the U.S. that have escaped the recession’s effects.
The San Francisco Chronicle published a simple and very accessible chart today titled Unemployment Characteristics. It “breaks down” unemployment data by race, gender, and educational background.
Great Depression Comparison is an excellent interactive comparing the Depression to our present Recession.
Here’s a very accessible infographic that shows the change in unemployment in major US cities over the past year.
The Associated Press has an Economic Stress Index which shows, in an interactive graphic form, what is happening to every county in the United States economically. It measures bankruptcies, home foreclosures, and unemployment, and then interprets it into what they call a “stress index.”
The New York Times has published an interactive graphic titled Broad Unemployment Across the U.S. It shows both the official unemployment rate, and what the rate would be if it included “part-time workers who want to work full time, as well as some people who want to work but have not looked for a job in the last four weeks.”
Moody’s has put together an impressive and accessible Global Recession Map showing how economies around the world are faring.
“Food Assistance” is a very simple and visual infographic from GOOD Magazine tracking the rise in food stamp use over the past year.
Times Of Crisis is an extraordinary interactive timeline showing the critical events of the economic recession over the past 365 days.
The Geography of Jobs is an excellent animated map demonstrating the loss of jobs in different parts of the United States during the recession.
Flowing Data has some maps that very visually show where unemployment has increased over the past few years.
The Unemployed States of America is a nice infographic (in terms of accessibility, not because it shares good news).
How the Great Recession Reshaped the U.S. Job Market is an informative (and a bit “busy” looking) interactive from The Wall Street Journal.
“America’s 35 Hardest-Hit Cities” is a very accessible infographic showing the communities around the U.S. with the highest unemployment rates. Quite a few of them are located right here in California’s Central Valley.
Comparing This Recession to Previous Ones: Job Changes is a New York Times graphic that very clearly shows we’re not doing so great right now.
“How The Great Recession Has Changed Life In America” is an interactive from The Pew Center.
Who’s Hurting? is a Wall Street Journal interactive showing which economic sectors are losing or gaining jobs.
How Do Americans Feel About The Recession? is an infographic from MINT. It has some interesting information, and a teacher could ask similar questions of their students.
“Decline and fall of the California job market” is a very good interactive from The Sacramento Bee showing the chronological progress of the monthly unemployment rate for each county in the state over the past three years.
Visual Economics has published two good infographics in one place: “Cities That Have Missed The Recovery” and “Cities That Are Having A Great Recovery.”
“How The Recession Has Changed Us” is what I think is a pretty amazing infographic from The Atlantic.
Where Are The Jobs? is a very good interactive infographic from The Washington Post showing which economic sectors are increasing jobs and which are not doing so well.
GOOD has just published a very good series of infographics explaining the economy.
It’s called All About The Benjamins.
VIDEOS & SLIDESHOWS:
Boomtown To Bust is a New York Times slideshow on the recession’s effect in Florida.
The Sacramento Bee has a series of photos Chronicling The Economic Downturn.
Long Lines Of Job Seekers Continue is a slideshow from The Washington Post.
Downturn Leaves More Families Homeless is another slideshow from The Washington Post.
The Wall Street Journal has excerpts from recent songs that have been written about the recession.
Following A Closing, The Struggle To Find Work is another slideshow from The New York Times.
A Community Facing Hunger is a video from The New York Times.
Out Of Work In China is a video showing the effects of the recession in that country.
A Painful Return is a slideshow discussing the recession’s effects in China.
Tough Times For Summitville Tiles is a Wall Street Journal slideshow about the closing of a factory.
Black Thursday In France is a Wall Street Journal slideshow about protests in that country demanding that the government do more to stop the recession.
Ohio Town Faces Economic Collapse is a slideshow from Pixcetra.
The American Economy: Down and Out is a slideshow from TIME Magazine.
Tough Times In Cleveland is another TIME slideshow.
An audio slideshow from The New York Times called In Economic Vise, Pontiac Struggles.
There Goes Retirement is an online video from The Wall Street Journal.
The progressive magazine The Nation has a useful slideshow called The Great Recession. It’s a bit ideological, but provides a different kind of analysis and response to the recession. It also includes links to articles that would not be accessible to ELL’s. However, the images, teacher modifications of the articles, and lesson ideas provided by them could offer some good opportunities for student discussion and higher order thinking.
The Faces Of The Unemployed is a slideshow from The New York Times.
Searching For A Job is a series of photos from the Sacramento Bee.
Looking For Work is an audio slideshow from Reuters.
Desperately Seeking A Salary is another audio slideshow from Reuters.
Job Seekers Flood Local Job Fair is a slideshow from The Sacramento Bee.
Recession Hits The Saddle is a slideshow from The New York Times.
Auto Town Struggles With Unemployment is a slideshow from The New York Times.
Dark Stores from TIME Magazine.
The New York Times has an audio slideshow about people looking for work in the state of Tennessee.
Inside California’s Tent Cities is the newest addition to this list. It’s a New York Times slideshow on the growing number of homeless encampments around the United States, particularly here in Sacramento (which was recently featured on Oprah Winfrey’s show) and in Fresno.
The Death of the American Mall is a slideshow from The Wall Street Journal.
Stimulus Watch is a site that doesn’t really fit into any of the categories on this list, but it’s intriguing. It supposedly lists all the projects different governmental projects have proposed to do with stimulus money, and then people can vote which ones they think are best. They’re categorized by community, so they’re very accessible. The only drawback to it is since it’s a wiki, even though all the projects are listed, many don’t have detailed information yet on what the project entails. Nevertheless, its interactivity could offer some good possibilities for student engagement.
How Do You Feel About The Economy? is a great interactive graphic — especially for English Language Learners — from The New York Times. You’re supposed to be able to enter a word that indicates how you’re filling, and you’re given many choices. It’s a good opportunity for vocabulary development.
Picturing The Recession is yet another exceptional interactive from The New York Times. It’s composed of photos contributed by readers, including captions, divided by topic or location.
Adapting To Job Loss is a slideshow from The Washington Post.
Survival Strategies is a new interactive feature from The New York Times. People offer brief ideas on how they’re saving money now in the recession. Readers can vote on which ones they think are best. You have to register in order to vote, offer suggestions, or contribute your own.
Forced From Home is a slideshow from The Wall Street Journal.
Ghost Factories is a slideshow from The New York Times.
“The Long-Term Unemployed” is a multimedia interactive from The Wall Street Journal.
“America Out Of Work” is ongoing series of video interviews the Los Angeles Times is doing with the unemployed.
America at Work is slideshow from The Atlantic.
As always, feedback is welcome. | <urn:uuid:96f462cf-90d3-4dee-acd5-263d2ee58f50> | CC-MAIN-2013-20 | http://larryferlazzo.edublogs.org/2009/02/12/the-best-sites-to-learn-about-the-recession/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.917049 | 2,508 | 2.640625 | 3 |
WE have in this chapter to consider why the females of many birds have not acquired the same ornaments as the male; and why, on the other hand, both sexes of many other birds are equally, or almost equally, ornamented? In the following chapter we shall consider the few cases in which the female is more conspicuously coloured than the male.
In my Origin of Species* I briefly suggested that the long tail of the peacock would be inconvenient and the conspicuous black colour of the male capercailzie dangerous, to the female during the period of incubation: and consequently that the transmission of these characters from the male to the female offspring had been checked through natural selection. I still think that this may have occurred in some few instances: but after mature reflection on all the facts which I have been able to collect, I am now inclined to believe that when the sexes differ, the successive variations have generally been from the first limited in their transmission to the same sex in which they first arose. Since my remarks appeared, the subject of sexual colouration has been discussed in some very interesting papers by Mr. Wallace,*(2) who believes that in almost all cases the successive variations tended at first to be transmitted equally to both sexes; but that the female was saved, through natural selection, from acquiring the conspicuous colours of the male, owing to the danger which she would thus have incurred during incubation.
* Fourth edition, 1866, p. 241.
*(2) Westminster Review, July, 1867. Journal of Travel, vol. i., 1868, p. 73.
This view necessitates a tedious discussion on a difficult point, namely, whether the transmission of a character, which is at first inherited by both sexes can be subsequently limited in its transmission to one sex alone by means of natural selection. We must bear in mind, as shewn in the preliminary chapter on sexual selection, that characters which are limited in their development to one sex are always latent in the other. An imaginary illustration will best aid us in seeing the difficulty of the case; we may suppose that a fancier wished to make a breed of pigeons, in which the males alone should be coloured of a pale blue, whilst the females retained their former slaty tint. As with pigeons characters of all kinds are usually transmitted to both sexes equally, the fancier would have to try to convert this latter form of inheritance into sexually-limited transmission. All that he could do would be to persevere in selecting every male pigeon which was in the least degree of a paler blue; and the natural result of this process, if steadily carried on for a long time, and if the pale variations were strongly inherited or often recurred, would be to make his whole stock of a lighter blue. But our fancier would be compelled to match, generation after generation, his pale blue males with slaty females, for he wishes to keep the latter of this colour. The result would generally be the production either of a mongrel piebald lot, or more probably the speedy and complete loss of the pale-blue tint; for the primordial slaty colour would be transmitted with prepotent force. Supposing, however, that some pale-blue males and slaty females were produced during each successive generation, and were always crossed together, then the slaty females would have, if I may use the expression, much blue blood in their veins, for their fathers, grandfathers, &c., will all have been blue birds. Under these circumstances it is conceivable (though I know of no distinct facts rendering it probable) that the slaty females might acquire so strong a latent tendency to pale-blueness, that they would not destroy this colour in their male offspring, their female offspring still inheriting the slaty tint. If so, the desired end of making a breed with the two sexes permanently different in colour might be gained.
The extreme importance, or rather necessity in the above case of the desired character, namely, pale-blueness, being present though in a latent state in the female, so that the male offspring should not be deteriorated, will be best appreciated as follows: the male of Soemmerring's pheasant has a tail thirty-seven inches in length, whilst that of the female is only eight inches; the tail of the male common pheasant is about twenty inches, and that of the female twelve inches long. Now if the female Soemmerring pheasant with her short tail were crossed with the male common pheasant, there can be no doubt that the male hybrid offspring would have a much longer tail than that of the pure offspring of the common pheasant. On the other hand, if the female common pheasant, with a tail much longer than that of the female Soemmerring pheasant, were crossed with the male of the latter, the male hybrid offspring would have a much shorter tail than that of the pure offspring of Soemmerring's pheasant.*
* Temminck says that the tail of the female Phasianus Soemmerringii is only six inches long, Planches coloriees, vol. v., 1838, pp. 487 and 488: the measurements above given were made for me by Mr. Sclater. For the common pheasant, see Macgillivray, History of British Birds, vol. i., pp. 118-121.
Our fancier, in order to make his new breed with the males of a pale-blue tint, and the females unchanged, would have to continue selecting the males during many generations; and each stage of paleness would have to be fixed in the males, and rendered latent in the females. The task would be an extremely difficult one, and has never been tried, but might possibly be successfully carried out. The chief obstacle would be the early and complete loss of the pale-blue tint, from the necessity of reiterated crosses with the slaty female, the latter not having at first any latent tendency to produce pale-blue offspring.
On the other hand, if one or two males were to vary ever so slightly in paleness, and the variations were from the first limited in their transmission to the male sex, the task of making a new breed of the desired kind would be easy, for such males would simply have to be selected and matched with ordinary females. An analogous case has actually occurred, for there are breeds of the pigeon in Belgium* in which the males alone are marked with black striae. So again Mr. Tegetmeier has recently shewn*(2) that dragons not rarely produce silver-coloured birds, which are almost always hens; and he himself has bred ten such females. It is on the other hand a very unusual event when a silver male is produced; so that nothing would be easier, if desired, than to make a breed of dragons with blue males and silver females. This tendency is indeed so strong that when Mr. Tegetmeier at last got a silver male and matched him with one of the silver females, he expected to get a breed with both sexes thus coloured; he was however disappointed, for the young male reverted to the blue colour of his grandfather, the young female alone being silver. No doubt with patience this tendency to reversion in the males, reared from an occasional silver male matched with a silver hen, might be eliminated, and then both sexes would be coloured alike; and this very process has been followed with success by Mr. Esquilant in the case of silver turbits.
* Dr. Chapius, Le Pigeon Voyageur Belge, 1865, p. 87.
*(2) The Field, Sept., 1872.
With fowls, variations of colour, limited in their transmission to the male sex, habitually occur. When this form of inheritance prevails, it might well happen that some of the successive variations would be transferred to the female, who would then slightly resemble the male, as actually occurs in some breeds. Or again, the greater number, but not all, of the successive steps might be transferred to both sexes, and the female would then closely resemble the male. There can hardly be a doubt that this is the cause of the male pouter pigeon having a somewhat larger crop, and of the male carrier pigeon having somewhat larger wattles, than their respective females; for fanciers have not selected one sex more than the other, and have had no wish that these characters should be more strongly displayed in the male than in the female, yet this is the case with both breeds.
The same process would have to be followed, and the same difficulties encountered, if it were desired to make a breed with the females alone of some new colour.
Lastly, our fancier might wish to make a breed with the two sexes differing from each other, and both from the parent species. Here the difficulty would be extreme, unless the successive variations were from the first sexually limited on both sides, and then there would be no difficulty. We see this with the fowl; thus the two sexes of the pencilled Hamburghs differ greatly from each other, and from the two sexes of the aboriginal Gallus bankiva; and both are now kept constant to their standard of excellence by continued selection, which would be impossible unless the distinctive characters of both were limited in their transmission.
The Spanish fowl offers a more curious case; the male has an immense comb, but some of the successive variations, by the accumulation of which it was acquired, appear to have been transferred to the female; for she has a comb many times larger than that of the females of the parent species. But the comb of the female differs in one respect from that of the male, for it is apt to lop over; and within a recent period it has been ordered by the fancy that this should always be the case, and success has quickly followed the order. Now the lopping of the comb must be sexually limited in its transmission, otherwise it would prevent the comb of the male from being perfectly upright, which would be abhorrent to every fancier. On the other hand, the uprightness of the comb in the male must likewise be a sexually-limited character, otherwise it would prevent the comb of the female from lopping over.
From the foregoing illustrations, we see that even with almost unlimited time at command, it would be an extremely difficult and complex, perhaps an impossible process, to change one form of transmission into the other through selection. Therefore, without distinct evidence in each case, I am unwilling to admit that this has been effected in natural species. On the other hand, by means of successive variations, which were from the first sexually limited in their transmission, there would not be the least difficulty in rendering a male bird widely different in colour or in any other character from the female; the latter being left unaltered, or slightly altered, or specially modified for the sake of protection.
As bright colours are of service to the males in their rivalry with other males, such colours would be selected whether or not they were transmitted exclusively to the same sex. Consequently the females might be expected often to partake of the brightness of the males to a greater or less degree; and this occurs with a host of species. If all the successive variations were transmitted equally to both sexes, the females would be indistinguishable from the males; and this likewise occurs with many birds. If, however, dull colours were of high importance for the safety of the female during incubation, as with many ground birds, the females which varied in brightness, or which received through inheritance from the males any marked accession of brightness, would sooner or later be destroyed. But the tendency in the males to continue for an indefinite period transmitting to their female offspring their own brightness, would have to be eliminated by a change in the form of inheritance; and this, as shewn by our previous illustration, would be extremely difficult. The more probable result of the long-continued destruction of the more brightly-coloured females, supposing the equal form of transmission to prevail would be the lessening or annihilation of the bright colours of the males, owing to their continual crossing with the duller females. It would be tedious to follow out all the other possible results; but I may remind the reader that if sexually limited variations in brightness occurred in the females, even if they were not in the least injurious to them and consequently were not eliminated, yet they would not be favoured or selected, for the male usually accepts any female, and does not select the more attractive individuals; consequently these variations would be liable to be lost, and would have little influence on the character of the race; and this will aid in accounting for the females being commonly duller-coloured than the males.
In the eighth chapter instances were given, to which many might here be added, of variations occurring at various ages, and inherited at the corresponding age. It was also shewn that variations which occur late in life are commonly transmitted to the same sex in which they first appear; whilst variations occurring early in life are apt to be transmitted to both sexes; not that all the cases of sexually-limited transmission can thus be accounted for. It was further shewn that if a male bird varied by becoming brighter whilst young, such variations would be of no service until the age for reproduction had arrived, and there was competition between rival males. But in the case of birds living on the ground and commonly in need of the protection of dull colours, bright tints would be far more dangerous to the young and inexperienced than to the adult males. Consequently the males which varied in brightness whilst young would suffer much destruction and be eliminated through natural selection; on the other hand, the males which varied in this manner when nearly mature, notwithstanding that they were exposed to some additional danger, might survive, and from being favoured through sexual selection, would procreate their kind. As a relation often exists between the period of variation and the form of transmission, if the bright-coloured young males were destroyed and the mature ones were successful in their courtship, the males alone would acquire brilliant colours and would transmit them exclusively to their male offspring. But I by no means wish to maintain that the influence of age on the form of transmission, is the sole cause of the great difference in brilliancy between the sexes of many birds.
When the sexes of birds differ in colour, it is interesting to determine whether the males alone have been modified by sexual selection, the females having been left unchanged, or only partially and indirectly thus changed; or whether the females have been specially modified through natural selection for the sake of protection. I will therefore discuss this question at some length, even more fully than its intrinsic importance deserves; for various curious collateral points may thus be conveniently considered.
Before we enter on the subject of colour, more especially in reference to Mr. Wallace's conclusions, it may be useful to discuss some other sexual differences under a similar point of view. A breed of fowls formerly existed in Germany* in which the hens were furnished with spurs; they were good layers, but they so greatly disturbed their nests with their spurs that they could not be allowed to sit on their own eggs. Hence at one time it appeared to me probable that with the females of the wild Gallinaceae the development of spurs had been checked through natural selection, from the injury thus caused to their nests. This seemed all the more probable, as wing-spurs, which would not be injurious during incubation, are often as well developed in the female as in the male; though in not a few cases they are rather larger in the male. When the male is furnished with leg-spurs the female almost always exhibits rudiments of them,- the rudiment sometimes consisting of a mere scale, as in Gallus. Hence it might be argued that the females had aboriginally been furnished with well-developed spurs, but that these had subsequently been lost through disuse or natural selection. But if this view be admitted, it would have to be extended to innumerable other cases; and it implies that the female progenitors of the existing spur-bearing species were once encumbered with an injurious appendage.
* Bechstein, Naturgeschichte Deutschlands, 1793, B. iii., 339.
In some few genera and species, as in Galloperdix, Acomus, and the Javan peacock (Pavo muticus), the females, as well as the males, possess well-developed leg-spurs. Are we to infer from this fact that they construct a different sort of nest from that made by their nearest allies, and not liable to be injured by their spurs; so that the spurs have not been removed? Or are we to suppose that the females of these several species especially require spurs for their defence? It is a more probable conclusion that both the presence and absence of spurs in the females result from different laws of inheritance having prevailed, independently of natural selection. With the many females in which spurs appear as rudiments, we may conclude that some few of the successive variations, through which they were developed in the males, occurred very early in life, and were consequently transferred to the females. In the other and much rarer cases, in which the females possess fully developed spurs, we may conclude that all the successive variations were transferred to them; and that they gradually acquired and inherited the habit of not disturbing their nests.
The vocal organs and the feathers variously modified for producing sound, as well as the proper instincts for using them, often differ in the two sexes, but are sometimes the same in both. Can such differences be accounted for by the males having acquired these organs and instincts, whilst the females have been saved from inheriting them, on account of the danger to which they would have been exposed by attracting the attention of birds or beasts of prey? This does not seem to me probable, when we think of the multitude of birds which with impunity gladden the country with their voices during the spring.* It is a safer conclusion that, as vocal and instrumental organs are of special service only to the males during their courtship, these organs were developed through sexual selection and their constant use in that sex alone- the successive variations and the effects of use having been from the first more or less limited in transmission to the male offspring.
* Daines Barrington, however, thought it probable (Philosophical Transactions, 1773, p. 164) that few female birds sing, because the talent would have been dangerous to them during incubation. He adds, that a similar view may possibly account for the inferiority of the female to the male in plumage.
Many analogous cases could be adduced; those for instance of the plumes on the head being generally longer in the male than in the female, sometimes of equal length in both sexes, and occasionally absent in the female,- these several cases occurring in the same group of birds. It would be difficult to account for such a difference between the sexes by the female having been benefited by possessing a slightly shorter crest than the male, and its consequent diminution or complete suppression through natural selection. But I will take a more favourable case, namely the length of the tail. The long train of the peacock would have been not only inconvenient but dangerous to the peahen during the period of incubation and whilst accompanying her young. Hence there is not the least a priori improbability in the development of her tail having been checked through natural selection. But the females of various pheasants, which apparently are exposed on their open nests to as much danger as the peahen, have tails of considerable length. The females as well as the males of the Menura superba have long tails, and they build a domed nest, which is a great anomaly in so large a bird. Naturalists have wondered how the female Menura could manage her tail during incubation; but it is now known* that she "enters the nest head first, and then turns round with her tail sometimes over her back, but more often bent round by her side. Thus in time the tail becomes quite askew, and is a tolerable guide to the length of time the bird has been sitting." Both sexes of an Australian kingfisher (Tanysiptera sylvia) have the middle tail-feathers greatly lengthened, and the female makes her nest in a hole; and as I am informed by Mr. R. B. Sharpe these feathers become much crumpled during incubation.
* Mr. Ramsay, in Proc. Zoolog. Soc., 1868, p. 50.
In these two latter cases the great length of the tail-feathers must be in some degree inconvenient to the female; and as in both species the tail-feathers of the female are somewhat shorter than those of the male, it might be argued that their full development had been prevented through natural selection. But if the development of the tail of the peahen had been checked only when it became inconveniently or dangerously great, she would have retained a much longer tail than she actually possesses; for her tail is not nearly so long, relatively to the size of her body, as that of many female pheasants, nor longer than that of the female turkey. It must also be borne in mind that, in accordance with this view, as soon as the tail of the peahen became dangerously long, and its development was consequently checked, she would have continually reacted on her male progeny, and thus have prevented the peacock from acquiring his present magnificent train. We may therefore infer that the length of the tail in the peacock and its shortness in the peahen are the result of the requisite variations in the male having been from the first transmitted to the male offspring alone.
We are led to a nearly similar conclusion with respect to the length of the tail in the various species of pheasants. In the Eared pheasant (Crossoptilon auritum) the tail is of equal length in both sexes, namely sixteen or seventeen inches; in the common pheasant it is about twenty inches long in the male and twelve in the female; in Soemmerring's pheasant, thirty-seven inches in the male and only eight in the female; and lastly in Reeve's pheasant it is sometimes actually seventy-two inches long in the male and sixteen in the female. Thus in the several species, the tail of the female differs much in length, irrespectively of that of the male; and this can be accounted for, as it seems to me, with much more probability, by the laws of inheritance,- that is by the successive variations having been from the first more or less closely limited in their transmission to the male sex than by the agency of natural selection, resulting from the length of tail being more or less injurious to the females of these several allied species.
We may now consider Mr. Wallace's arguments in regard to the sexual colouration of birds. He believes that the bright tints originally acquired through sexual selection by the males would in all, or almost all cases, have been transmitted to the females, unless the transference had been checked through natural selection. I may here remind the reader that various facts opposed to this view have already been given under reptiles, amphibians, fishes and lepidoptera. Mr. Wallace rests his belief chiefly, but not exclusively, as we shall see in the next chapter, on the following statement,* that when both sexes are coloured in a very conspicuous manner, the nest is of such a nature as to conceal the sitting bird; but when there is a marked contrast of colour between the sexes, the male being gay and the female dull-coloured, the nest is open and exposes the sitting bird to view. This coincidence, as far as it goes, certainly seems to favour the belief that the females which sit on open nests have been specially modified for the sake of protection; but we shall presently see that there is another and more probable explanation, namely, that conspicuous females have acquired the instinct of building domed nests oftener than dull-coloured birds. Mr. Wallace admits that there are, as might have been expected, some exceptions to his two rules, but it is a question whether the exceptions are not so numerous as seriously to invalidate them.
* Journal of Travel, edited by A. Murray, vol. i., 1868, p. 78.
There is in the first place much truth in the Duke of Argyll's remark* that a large domed nest is more conspicuous to an enemy, especially to all tree-haunting carnivorous animals, than a smaller open nest. Nor must we forget that with many birds which build open nests, the male sits on the eggs and aids the female in feeding the young: this is the case, for instance, with Pyranga aestiva,*(2) one of the most splendid birds in the United States, the male being vermilion, and the female light brownish-green. Now if brilliant colours had been extremely dangerous to birds whilst sitting on their open nests, the males in these cases would have suffered greatly. It might, however, be of such paramount importance to the male to be brilliantly coloured, in order to beat his rivals, that this may have more than compensated some additional danger.
* Journal of Travel, edited by A. Murray, vol. i., 1868, p. 281.
*(2) Audubon, Ornithological Biography, vol. i., p. 233.
Mr. Wallace admits that with the king-crows (Dicrurus), orioles, and Pittidae, the females are conspicuously coloured, yet build open nests; but he urges that the birds of the first group are highly pugnacious and could defend themselves; that those of the second group take extreme care in concealing their open nests, but this does not invariably hold good;* and that with the birds of the third group the females are brightly coloured chiefly on the under surface. Besides these cases, pigeons which are sometimes brightly, and almost always conspicuously coloured, and which are notoriously liable to the attacks of birds of prey, offer a serious exception to the rule, for they almost always build open and exposed nests. In another large family, that of the humming-birds, all the species build open nests, yet with some of the most gorgeous species the sexes are alike; and in the majority, the females, though less brilliant than the males, are brightly coloured. Nor can it be maintained that all female humming-birds, which are brightly coloured, escape detection by their tints being green, for some display on their upper surfaces red, blue, and other colours.*(2)
* Jerdon, Birds of India, vol. ii., p. 108. Gould's Handbook of the Birds of Australia, vol. i., p. 463.
*(2) For instance, the female Eupetomena macroura has the head and tail dark blue with reddish loins; the female Lampornis porphyrurus is blackish-green on the upper surface, with the lores and sides of the throat crimson; the female Eulampis jugularis has the top of the head and back green, but the loins and the tail are crimson. Many other instances of highly conspicuous females could be given. See Mr. Gould's magnificent work on this family.
In regard to birds which build in holes or construct domed nests, other advantages, as Mr. Wallace remarks, besides concealment are gained, such as shelter from the rain, greater warmth, and in hot countries protection from the sun;* so that it is no valid objection to his view that many birds having both sexes obscurely coloured build concealed nests.*(2) The female horn-bill (Buceros), for instance, of India and Africa is protected during incubation with extraordinary care, for she plasters up with her own excrement the orifice of the hole in which she sits on her eggs, leaving only a small orifice through which the male feeds her; she is thus kept a close prisoner during the whole period of incubation;*(3) yet female horn-bills are not more conspicuously coloured than many other birds of equal size which build open nests. It is a more serious objection to Mr. Wallace's view, as is admitted by him, that in some few groups the males are brilliantly coloured and the females obscure, and yet the latter hatch their eggs in domed nests. This is the case with the Grallinae of Australia, the superb warblers (Maluridae) of the same country, the sun-birds (Nectariniae), and with several of the Australian honey-suckers or Meliphagidae.*(4)
* Mr. Salvin noticed in Guatemala (Ibis, 1864, p. 375) that humming-birds were much more unwilling to leave their nests during very hot weather, when the sun was shining brightly, as if their eggs would be thus injured, than during cool, cloudy, or rainy weather.
*(2) I may specify, as instances of dull-coloured birds building concealed nests, the species belonging to eight Australian genera described in Gould's Handbook of the Birds of Australia, vol. i., pp. 340, 362, 365, 383, 387, 389, 391, 414.
*(3) Mr. C. Horne, Proc. Zoolog. Soc., 1869. p. 243.
*(4) On the nidification and colours of these latter species, see Gould's Handbook of the Birds of Australia, vol. i., pp. 504, 527.
If we look to the birds of England we shall see that there is no close and general relation between the colours of the female and the nature of the nest which is constructed. About forty of our British birds (excluding those of large size which could defend themselves) build in holes in banks, rocks, or trees, or construct domed nests. If we take the colours of the female goldfinch, bullfinch, or black-bird, as a standard of the degree of conspicuousness, which is not highly dangerous to the sitting female, then out of the above forty birds the females of only twelve can be considered as conspicuous to a dangerous degree, the remaining twenty-eight being inconspicuous.* Nor is there any close relation within the same genus between a well-pronounced difference in colour between the sexes, and the nature of the nest constructed. Thus the male house sparrow (Passer domesticus) differs much from the female, the male tree-sparrow (P. montanus) hardly at all, and yet both build well-concealed nests. The two sexes of the common fly-catcher (Muscicapa grisola) can hardly be distinguished, whilst the sexes of the pied fly-catcher (M. luctuosa) differ considerably, and both species build in holes or conceal their nests. The female blackbird (Turdus merula) differs much, the female ring-ouzel (T. torquatus) differs less, and the female common thrush (T. musicus) hardly at all from their respective males; yet all build open nests. On the other hand, the not very distantly-allied water-ouzel (Cinclus aquaticus) builds a domed nest, and the sexes differ about as much as in the ring-ouzel. The black and red grouse (Tetrao tetrix and T. scoticus) build open nests in equally well-concealed spots, but in the one species the sexes differ greatly, and in the other very little.
* I have consulted, on this subject, Macgillivray's British Birds, and though doubts may be entertained in some cases in regard to the degree of concealment of the nest, and to the degree of conspicuousness of the female, yet the following birds, which all lay their eggs in holes or in domed nests, can hardly be considered, by the above standard, as conspicuous: Passer, 2 species; Sturnus, of which the female is considerably less brilliant than the male; Cinclus; Motallica boarula (?); Erithacus (?); Fruticola, 2 sp.; Saxicola; Ruticilla, 2 sp.; Sylvia, 3 sp.; Parus, 3 sp.; Mecistura anorthura; Certhia; Sitta; Yunx; Muscicapa, 2 sp.; Hirundo, 3 sp.; and Cypselus. The females of the following 12 birds may be considered as conspicuous according to the same standard, viz., Pastor, Motacilla alba, Parus major and P. caeruleus, Upupa, Picus, 4 sp., Coracias, Alcedo, and Merops.
Notwithstanding the foregoing objections, I cannot doubt, after reading Mr. Wallace's excellent essay, that looking to the birds of the world, a large majority of the species in which the females are conspicuously coloured (and in this case the males with rare exceptions are equally conspicuous), build concealed nests for the sake of protection. Mr. Wallace enumerates* a long series of groups in which this rule bolds good; but it will suffice here to give, as instances, the more familiar groups of kingfishers, toucans, trogons, puff-birds (Capitonidae), plantain-eaters (Musophagae, woodpeckers, and parrots. Mr. Wallace believes that in these groups, as the males gradually acquired through sexual selection their brilliant colours, these were transferred to the females and were not eliminated by natural selection, owing to the protection which they already enjoyed from their manner of nidification. According to this view, their present manner of nesting was acquired before their present colours. But it seems to me much more probable that in most cases, as the females were gradually rendered more and more brilliant from partaking of the colours of the male, they were gradually led to change their instincts (supposing that they originally built open nests), and to seek protection by building domed or concealed nests. No one who studies, for instance, Audubon's account of the differences in the nests of the same species in the northern and southern United States,*(2) will feel any great difficulty in admitting that birds, either by a change (in the strict sense of the word) of their habits, or through the natural selection of so-called spontaneous variations of instinct, might readily be led to modify their manner of nesting.
* Journal of Travel, edited by A. Murray, vol. i., p. 78.
*(2) See many statements in the Ornithological Biography. See also some curious observations on the nests of Italian birds by Eugenio Bettoni, in the Atti della Societa Italiana, vol. xi., 1869, p. 487.
This way of viewing the relation, as far as it holds good, between the bright colours of female birds and their manner of nesting, receives some support from certain cases occurring in the Sahara Desert. Here, as in most other deserts, various birds, and many other animals, have had their colours adapted in a wonderful manner to the tints of the surrounding surface. Nevertheless there are, as I am informed by the Rev. Mr. Tristram, some curious exceptions to the rule; thus the male of the Monticola cyanea is conspicuous from his bright blue colour, and the female almost equally conspicuous from her mottled brown and white plumage; both sexes of two species of Dromolaea are of a lustrous black; so that these three species are far from receiving protection from their colours, yet they are able to survive, for they have acquired the habit of taking refuge from danger in holes or crevices in the rocks.
With respect to the above groups in which the females are conspicuously coloured and build concealed nests, it is not necessary to suppose that each separate species had its nidifying instinct specially modified; but only that the early progenitors of each group were gradually led to build domed or concealed nests, and afterwards transmitted this instinct, together with their bright colours, to their modified descendants. As far as it can be trusted, the conclusion is interesting, that sexual selection together with equal or nearly equal inheritance by both sexes, have indirectly determined the manner of nidification of whole groups of birds.
According to Mr. Wallace, even in the groups in which the females, from being protected in domed nests during incubation, have not had their bright colours eliminated through natural selection, the males often differ in a slight, and occasionally in a considerable degree from the females. This is a significant fact, for such differences in colour must be accounted for by some of the variations in the males having been from the first limited in transmission to the same sex; as it can hardly be maintained that these differences, especially when very slight, serve as a protection to the female. Thus all the species in the splendid group of the trogons build in holes; and Mr. Gould gives figures* of both sexes of twenty-five species, in all of which, with one partial exception, the sexes differ sometimes slightly, sometimes conspicuously, in colour,- the males being always finer than the females, though the latter are likewise beautiful. All the species of kingfishers build in holes, and with most of the species the sexes are equally brilliant, and thus far Mr. Wallace's rule holds good; but in some of the Australian species the colours of the females are rather less vivid than those of the male; and in one splendidly-coloured species, the sexes differ so much that they were at first thought to be specifically distinct.*(2) Mr. R. B. Sharpe, who has especially studied this group, has shewn me some American species (Ceryle) in which the breast of the male is belted with black. Again, in Carcineutes, the difference between the sexes is conspicuous: in the male the upper surface is dull-blue banded with black, the lower surface being partly fawn-coloured, and there is much red about the head; in the female the upper surface is reddish-brown banded with black, and the lower surface white with black markings It is an interesting fact, as shewing how the same peculiar style of sexual colouring often characterises allied forms, that in three species of Dacelo the male differs from the female only in the tail being dull-blue banded with black, whilst that of the female is brown with blackish bars; so that here the tail differs in colour in the two sexes in exactly the same manner as the whole upper surface in the two sexes of Carcineutes.
* See his Monograph of the Trogonidae, 1st edition.
*(2) Namely, Cyanalcyon. Gould's Handbook of the Birds of Australia, vol. i., p. 133; see, also, pp. 130, 136.
With parrots, which likewise build in holes, we find analogous cases: in most of the species, both sexes are brilliantly coloured and indistinguishable, but in not a few species the males are coloured rather more vividly than the females, or even very differently from them. Thus, besides other strongly-marked differences, the whole under surface of the male king lory (Aprosmictus scapulatus) is scarlet, whilst the throat and chest of the female is green tinged with red: in the Euphema splendida there is a similar difference, the face and wing coverts moreover of the female being of a paler blue than in the male.* In the family of the tits (Parinae), which build concealed nests, the female of our common blue tomtit (Parus caeruleus), is "much less brightly coloured" than the male: and in the magnificent sultan yellow tit of India the difference is greater.*(2)
* Every gradation of difference between the sexes may be followed in the parrots of Australia. See Gould, op. cit., vol. ii., pp. 14-102.
*(2) Macgillivray's British Birds, vol. ii., p. 433. Jerdon, Birds of India, vol. ii., p. 282.
Again, in the great group of the woodpeckers,* the sexes are generally nearly alike, but in the Megapicus validus all those parts of the head, neck, and breast, which are crimson in the male are pale brown in the female. As in several woodpeckers the head of the male is bright crimson, whilst that of the female is plain, it occurred to me that this colour might possibly make the female dangerously conspicuous, whenever she put her head out of the hole containing her nest, and consequently that this colour, in accordance with Mr. Wallace's belief, had been eliminated. This view is strengthened by what Malherbe states with respect to Indopicus carlotta; namely, that the young females, like the young males, have some crimson about their heads, but that this colour disappears in the adult female, whilst it is intensified in the adult male. Nevertheless the following considerations render this view extremely doubtful: the male takes a fair share in incubation,*(2) and would be thus almost equally exposed to danger; both sexes of many species have their heads of an equally bright crimson; in other species the difference between the sexes in the amount of scarlet is so slight that it can hardly make any appreciable difference in the danger incurred; and lastly, the colouring of the head in the two sexes often differs slightly in other ways.
* All the following facts are taken from M. Malherbe's magnificent Monographie des Picidees, 1861.
*(2) Audubon's Ornithological Biography, vol. ii., p. 75; see also the Ibis, vol. i., p. 268.
The cases, as yet given, of slight and graduated differences in colour between the males and females in the groups, in which as a general rule the sexes resemble each other, all relate to species which build domed or concealed nests. But similar gradations may likewise be observed in groups in which the sexes as a general rule resemble each other, but which build open nests.
As I have before instanced the Australian parrots, so I may here instance, without giving any details, the Australian pigeons.* It deserves especial notice that in all these cases the slight differences in plumage between the sexes are of the same general nature as the occasionally greater differences. A good illustration of this fact has already been afforded by those kingfishers in which either the tail alone or the whole upper surface of the plumage differs in the same manner in the two sexes. Similar cases may be observed with parrots and pigeons. The differences in colour between the sexes of the same species are, also, of the same general nature as the differences in colour between the distinct species of the same group. For when in a group in which the sexes are usually alike, the male differs considerably from the female, he is not coloured in a quite new style. Hence we may infer that within the same group the special colours of both sexes when they are alike, and the colours of the male, when he differs slightly or even considerably from the female, have been in most cases determined by the same general cause; this being sexual selection.
* Gould's Handbook of the Birds of Australia, vol. ii., pp. 109-149.
It is not probable, as has already been remarked, that differences in colour between the sexes, when very slight, can be of service to the female as a protection. Assuming, however, that they are of service, they might be thought to be cases of transition; but we have no reason to believe that many species at any one time are undergoing change. Therefore we can hardly admit that the numerous females which differ very slightly in colour from their males are now all commencing to become obscure for the sake of protection. Even if we consider somewhat more marked sexual differences, is it probable, for instance, that the head of the female chaffinch,- the crimson on the breast of the female bullfinch,- the green of the female greenfinch,- the crest of the female golden-crested wren, have all been rendered less bright by the slow process of selection for the sake of protection? I cannot think so; and still less with the slight differences between the sexes of those birds which build concealed nests. On the other hand, the differences in colour between the sexes, whether great or small, may to a large extent be explained on the principle of the successive variations, acquired by the males through sexual selection, having been from the first more or less limited in their transmission to the females. That the degree of limitation should differ in different species of the same group will not surprise any one who has studied the laws of inheritance, for they are so complex that they appear to us in our ignorance to be capricious in their action.*
* See remarks to this effect in Variation of Animals and Plants under Domestication, vol. ii., chap. xii.
As far as I can discover there are few large groups of birds in which all the species have both sexes alike and brilliantly coloured, but I hear from Mr. Sclater, that this appears to be the case with the Musophagae or plantain-eaters. Nor do I believe that any large group exists in which the sexes of all the species are widely dissimilar in colour: Mr. Wallace informs me that the chatterers of S. America (Cotingidae) offer one of the best instances; but with some of the species, in which the male has a splendid red breast, the female exhibits some red on her breast; and the females of other species shew traces of the green and other colours of the males. Nevertheless we have a near approach to close sexual similarity or dissimilarity throughout several groups: and this, from what has just been said of the fluctuating nature of inheritance, is a somewhat surprising circumstance. But that the same laws should largely prevail with allied animals is not surprising. The domestic fowl has produced a great number of breeds and sub-breeds, and in these the sexes generally differ in plumage; so that it has been noticed as an unusual circumstance when in certain sub-breeds they resemble each other. On the other hand, the domestic pigeon has likewise produced a vast number of distinct breeds and sub-breeds, and in these, with rare exceptions, the two sexes are identically alike.
Therefore if other species of Gallus and Columba were domesticated and varied, it would not be rash to predict that similar rules of sexual similarity and dissimilarity, depending on the form of transmission, would hold good in both cases. In like manner the same form of transmission has generally prevailed under nature throughout the same groups, although marked exceptions to this rule occur. Thus within the same family or even genus, the sexes may be identically alike, or very different in colour. Instances have already been given in the same genus, as with sparrows, flycatchers, thrushes and grouse. In the family of pheasants the sexes of almost all the species are wonderfully dissimilar, but are quite alike in the eared pheasant or Crossoptilon auritum. In two species of Chloephaga, a genus of geese, the male cannot be distinguished from the females, except by size; whilst in two others, the sexes are so unlike that they might easily be mistaken for distinct species.*
* The Ibis, vol. vi., 1864, p. 122.
The laws of inheritance can alone account for the following cases, in which the female acquires, late in life, certain characters proper to the male, and ultimately comes to resemble him more or less completely. Here protection can hardly have come into play. Mr. Blyth informs me that the females of Oriolus melanocephalus and of some allied species, when sufficiently mature to breed, differ considerably in plumage from the adult males; but after the second or third moults they differ only in their beaks having a slight greenish tinge. In the dwarf bitterns (Ardetta), according to the same authority, "the male acquires his final livery at the first moult, the female not before the third or fourth moult; in the meanwhile she presents an intermediate garb, which is ultimately exchanged for the same livery as that of the male." So again the female Falco peregrinus acquires her blue plumage more slowly than the male. Mr. Swinhoe states that with one of the drongo shrikes (Dicrurus macrocercus) the male, whilst almost a nestling, moults his soft brown plumage and becomes of a uniform glossy greenish-black; but the female retains for a long time the white striae and spots on the axillary feathers; and does not completely assume the uniform black colour of the male for three years. The same excellent observer remarks that in the spring of the second year the female spoon-bill (Platalea) of China resembles the male of the first year, and that apparently it is not until the third spring that she acquires the same adult plumage as that possessed by the male at a much earlier age. The female Bombycilla carolinensis differs very little from the male, but the appendages, which like beads of red sealing-wax ornament the wing-feathers,* are not developed in her so early in life as in the male. In the male of an Indian parrakeet (Paloeornis javanicus) the upper mandible is coral-red from his earliest youth, but in the female, as Mr. Blyth has observed with caged and wild birds, it is at first black and does not become red until the bird is at least a year old, at which age the sexes resemble each other in all respects. Both sexes of the wild turkey are ultimately furnished with a tuft of bristles on the breast, but in two-year-old birds the tuft is about four inches long in the male and hardly apparent in the female; when, however, the latter has reached her fourth year, it is from four to five inches in length.*(2)
* When the male courts the female, these ornaments are vibrated, and "are shewn off to great advantage," on the outstretched wings: A. Leith Adams, Field and Forest Rambles, 1873, p. 153.
*(2) On Ardetta, Translation of Cuvier's Regne Animal, by Mr. Blyth, footnote, p. 159. On the peregrine falcon, Mr. Blyth, in Charlesworth's Mag. of Nat. Hist., vol. i., 1837, p. 304. On Dicrurus, Ibis, 1863, p. 44. On the Platalea, Ibis, vol. vi., 1864, p. 366. On the Bombycilla, Audubon's Ornitholog. Biography, vol. i., p. 229. On the Palaeornis, see, also, Jerdon, Birds of India, vol. i., p. 263. On the wild turkey, Audubon, ibid., vol. i., p. 15; but I hear from Judge Caton that in Illinois the female very rarely acquires a tuft. Analogous cases with the females of Petrcocssyphus are given by Mr. R. Sharpe, Proeedings of the Zoological Society, 1872, p. 496.
These cases must not be confounded with those where diseased or old females abnormally assume masculine characters, nor with those where fertile females, whilst young, acquire the characters of the male, through variation or some unknown cause.* But all these cases have so much in common that they depend, according to the hypothesis of pangenesis, on gemmules derived from each part of the male being present, though latent, in the female; their development following on some slight change in the elective affinities of her constituent tissues.
* Of these latter cases Mr. Blyth has recorded (Translation of Cuvier's Regne Animal, p. 158) various instances with Lanius, Ruticilla, Linaria, and Anas. Audubon has also recorded a similar case (Ornitholog. Biography, vol. v., p. 519) with Pyranga aestiva.
A few words must be added on changes of plumage in relation to the season of the year. From reasons formerly assigned there can be little doubt that the elegant plumes, long pendant feathers, crests, &c., of egrets, herons, and many other birds, which are developed and retained only during the summer, serve for ornamental and nuptial purposes, though common to both sexes. The female is thus rendered more conspicuous during the period of incubation than during the winter; but such birds as herons and egrets would be able to defend themselves. As, however, plumes would probably be inconvenient and certainly of no use during the winter, it is possible that the habit of moulting twice in the year may have been gradually acquired through natural selection for the sake of casting off inconvenient ornaments during the winter. But this view cannot be extended to the many waders, whose summer and winter plumages differ very little in colour. With defenceless species, in which both sexes, or the males alone, become extremely conspicuous during the breeding-season,- or when the males acquire at this season such long wing or tail-feathers as to impede their flight, as with Cosmetornis and Vidua,- it certainly at first appears highly probable that the second moult has been gained for the special purpose of throwing off these ornaments. We must, however, remember that many birds, such as some of the birds of paradise, the Argus pheasant and peacock, do not cast their plumes during the winter; and it can hardly be maintained that the constitution of these birds, at least of the Gallinaceae, renders a double moult impossible, for the ptarmigan moults thrice in the year.* Hence it must be considered as doubtful whether the many species which moult their ornamental plumes or lose their bright colours during the winter, have acquired this habit on account of the inconvenience or danger which they would otherwise have suffered.
* See Gould's Birds of Great Britain.
I conclude, therefore, that the habit of moulting twice in the year was in most or all cases first acquired for some distinct purpose, perhaps for gaining a warmer winter covering; and that variations in the plumage occurring during the summer were accumulated through sexual selection, and transmitted to the offspring at the same season of the year; that such variations were inherited either by both sexes or by the males alone, according to the form of inheritance which prevailed. This appears more probable than that the species in all cases originally tended to retain their ornamental plumage during the winter, but were saved from this through natural selection, resulting from the inconvenience or danger thus caused.
I have endeavoured in this chapter to shew that the arguments are not trustworthy in favour of the view that weapons, bright colours, and various ornaments, are now confined to the males owing to the conversion, by natural selection, of the equal transmission of characters to both sexes, into transmission to the male sex alone. It is also doubtful whether the colours of many female birds are due to the preservation, for the sake of protection, of variations which were from the first limited in their transmission to the female sex. But it will be convenient to defer any further discussion on this subject until I treat, in the following chapter, of the differences in plumage between the young and old. | <urn:uuid:429cf924-5712-4f80-a526-9d8fbe64f107> | CC-MAIN-2013-20 | http://literature.org/authors/darwin-charles/the-descent-of-man/chapter-15.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.971138 | 11,677 | 3.265625 | 3 |
How much fun is it to be a child in your home? Do you ever stop to think about how the house looks from their point of view? My task for this week is to conduct a Child’s Eye Audit of our living space, to try and make the rooms more child- and play-friendly. The audit need only take a few minutes and might suggest simple changes to make to improve the play space.
To conduct a child’s eye audit, sit or kneel down so you’re at your child’s eye-level and consider the following things.
1. Safety first. Most importantly, the room needs to be safe and it's useful to review this aspect of your home from time to time as children grow taller, become more mobile or more adventurous. Think about what your child can reach, what you don't want them to reach and make any necessary adjustments.
2. Child’s eye view. Sit back for a minute on the floor and scan the room. What’s visible to your child at their height, and what’s not? You might display all their lovely paintings on the wall and fridge door – but are they too high for your child to actually see? Is their view just of empty walls? Hang some art work at a lower level or set up a low shelf or table with a display of things they can enjoy.
3. Within reach. Consider how accessible your toys are. Do you have an enabling environment where your child can independently help themselves to toys and resources to use in their play or is everything out of reach? Try to find a balance so you can keep the space tidy whilst still allowing free access. Open shelving and low baskets work well for us with some materials such as paint stored higher up.
4. Ring the changes. Do you always have the same toys out? Sometimes putting away familiar toys and bringing out some forgotten ones can spark new creativity and fun. Don't have a complete change of resources though, as children do like to know where favourite toys are. With Christmas on the way now is a good time to have a toy audit, donating ones your child has grown out of to the charity shop and getting ideas for their Christmas list.
5. Invitation to play. Do you have any toys that never get played with, or activities that your child rarely takes part in? What can you change to make things more inviting? If you'd like to encourage some more reading, perhaps you could set up a cosy reading corner or story tent – with comfy cushions, a basket of tempting books and a favourite teddy to share with? If your toy kitchen has been ignored for a while, add some new resources to catch your child's eye: a muffin tin and paper cake cases, some jars of real dried pasta, a recipe book from your shelf or lay the table for a birthday tea and surprise your child with a new play possibility.
Do you sometimes review things from your child's point of view? What changes have you made to make your space more child- and play-friendly? Leave a comment and share an idea with us.
I’m writing this at one o’clock in the afternoon and the sky is grey and the rain is tumbling down. It’s making me think about how the weather affects our play, and particularly I’m thinking about how much time we spend outdoors in autumn and winter. I don’t think there’s any question that playing outside is wonderful for children: the fresh air, the feeling of space, the sensory benefits of being in nature. I certainly know with my own two girls, and all the children I’ve looked after, that if we’re having a grumpy sort of day, getting outside – in the garden, park or just for a walk – most often is all that’s needed to lighten everyone’s mood.
But it’s getting colder now, and windy and rainy and dark. If you’re the type who is happy to be outside all the time in all weathers, I really do salute you. I however am naturally inclined to prefer a hot cup of coffee and a warm blanket inside! We do play outside everyday, whatever the weather, but there’s no denying we play outdoors less in winter – which I’m guessing is the same for lots of you? So, I’m resolving to put more thought into getting out there and planning on bringing you some posts over the next few months that inspire us to venture out. I’d also like to invite you to share your ideas too. The Play Academy carnival on Friday is open to any of your posts and I’d also love to hear from you if you’d like to write a guest post here. (On any play subject in fact, not just on playing outside. You can e-mail me cathy (at) nurturestore (dot) co (dot) uk if you have an idea you’d like to write about).
To start us off, my top three tips for getting outside, whatever the weather, are…
- Keep yourself warm. If you’re wearing the right clothes, you’re much more likely to enjoy your time outside. Pretty much all the children I know don’t care if it’s cold, windy or raining – they are active kids and just love being outside. So, to help everyone enjoy themselves outside, and to stop you cutting short the children’s outdoor fun because you’ve had enough, my first tip is to make sure you are wearing the right clothes. Layer up, don’t forget your hat and gloves and make sure you are cosy.
- Get active. We're going to shift our outdoor play away from fairy gardens and dinosaur worlds and include lots more active games. Hopscotch, skipping and What's the Time, Mr. Wolf? are great fun and will keep everyone on the move.
- Audit your outdoor space. Now is a good time to review your garden and get it ready for the colder months. Think about what you play outside and re-locate things or make changes to suit the weather. We’ll move the sandpit and den to under our covered area and make sure there are lots of props outside ready to spark active play (bikes, balls, kites, hula hoops). We’re not likely to do as much water play outside, so I’ll be thinking of ways to bring this inside.
What about you – are you an all weather family? How do you promote lots of outdoor play, whatever the weather?
Back in January I resolved to make 2010 our Year of Play. I’ve been thinking about this again this month as L has started at school. In last week’s Play Academy link-up I talked about wanting to make sure the girls still have lots of opportunity for playing, as well as schooling. So this weeks Twitter Tips are dedicated to having a playful return to school. The Twitter Tips get tweeted on a Friday at 8.30pm and in previous weeks they’ve started great twitter conversations, with people swapping ideas. The main thing I love about blogging is it being a forum to get inspiration and encouragement from others, so please feel free to add your own ideas in the comments or on our Facebook page. Join in, swap ideas, go play!
How to have a playful Back to School
#goplay Twitter Tip #1 If you're using after school clubs check how playful they are: do they offer free play after a structured school day?
#goplay Twitter Tip #2 Make the school run fun: cycle, scoot or play i-spy. Leave a little earlier to let the kids play a bit before class
#goplay Twitter Tip #3 Set up a play invitation in the morning to entice the kids to play before they switch on the TV
#goplay Twitter Tip #4 Rediscover some old school favourites such as conkers or fortune tellers
#goplay Twitter Tip #5 Consider how many clubs to join so after school play time isn’t lost in a busy schedule.
#goplay Twitter Tip #6 Encourage playground fun by packing a skipping rope in the book bag. Ready for Ten has a great skipping tutorial
#goplay Twitter Tip #7 Plan family time for the weekend: it doesn’t have to be expensive or extravagant but do make sure it happens.
#goplay Twitter Tip #8 Consider screen time. Could your kids live without TV for an hour, a day, a week? What could they play instead?
#goplay Twitter Tip #9 Locate the park nearest your school and stop off any day you can on the way home. Enjoy some #playoutdoors
#goplay Twitter Tip #10 Instead of only setting up a homework area, set up a play area too. Add untoys & let them #goplay
How do you feel about the balance between school and play time? How do you manage homework at the weekend? Do your kids attend a playful school?
Happily shared with Top Ten Tuesday.
Use the linky below to add your post to the Play Academy
Our summer holidays are drawing to a close and my Little is starting school on Monday (oh my!). I feel very strongly that our play should keep going. B is moving up to the Juniors and although her school offers a great curriculum including play, art, music, drama and experiments, I think it's inevitable that her lessons will become more and more about schooling. September always feels like the start of the year to me, so I'm keeping in mind my resolution to make 2010 our Year of Play, and we'll certainly be limiting our after school clubs and weekend commitments to allow plenty of time for playing. How do you feel about finding a balance between schooling (or home educating) and play?
I’m looking forward to getting even more inspiration from your Play Academy ideas this week – hope you’ll add a link.
1. Add your post to the Linky below. Remember to link to the individual post rather than your homepage. If you are not a blogger please visit the NurtureStore Facebook page and share your photo there.
2. Go and visit some of the other blogs on the Linky. Leave a comment and say hi. Get ideas. Tell them you’re visiting from the Play Academy.
3. Add a link back from your own post to this Play Academy – your readers can then come and get ideas too. You can use the Play Academy badge if you like. (Grab the code from the column on the left.)
4. Come back next Friday and swap some more play ideas. The next Play Academy linky will be Friday 10th September. | <urn:uuid:f1607610-8673-4722-9f25-b0d9f0cf0ce3> | CC-MAIN-2013-20 | http://nurturestore.co.uk/category/mind/2010-the-year-of-play | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946466 | 2,287 | 2.671875 | 3 |
Topics covered: Ideal solutions
Instructor/speaker: Moungi Bawendi, Keith Nelson
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: So. In the meantime, you've started looking at two phase equilibrium. So now we're starting to look at mixtures. And so now we have more than one constituent. And we have more than one phase present. Right? So you've started to look at things that look like this, where you've got, let's say, two components. Both in the gas phase. And now to try to figure out what the phase equilibria look like. Of course it's now a little bit more complicated than what you went through before, where you can get pressure temperature phase diagrams with just a single component. Now we want to worry about what's the composition. Of each of the components. In each of the phases. And what's the temperature and the pressure. Total and partial pressures and all of that. So you can really figure out everything about both phases. And there are all sorts of important reasons to do that, obviously lots of chemistry happens in liquid mixtures. Some in gas mixtures. Some where they're in equilibrium. All sorts of chemical processes. Distillation, for example, takes advantage of the properties of liquid and gas mixtures. Where one of them might be richer, will be richer, in the more volatile of the components. That can be used as a basis for purification. You mix ethanol and water together so you've got a liquid with a certain composition of each. The gas is going to be richer in the more volatile of the two, the ethanol. So in a distillation, where you put things up in the gas, more of the ethanol comes up. You could then collect that gas, right? And re-condense it, and make a new liquid. Which is much richer in ethanol than the original liquid was. Then you could make, then you could put some of them up into the gas phase. Where it will be still richer in ethanol. And then you could collect that and repeat the process. So the point is that properties of liquid gas, two-component or multi-component mixtures like this can be exploited. Basically, the different volatilities of the different components can be exploited for things like purification.
Also if you want to calculate chemical equilibria in the liquid and gas phase, of course, now you've seen chemical equilibrium, so the amount of reaction depends on the composition. So of course if you want reactions to go, then this also can be exploited by looking at which phase might be richer in one reactant or another. And thereby pushing the equilibrium toward one direction or the other. OK. So. We've got some total temperature and pressure. And we have compositions. So in the gas phase, we've got mole fractions yA and yB. In the liquid phase we've got mole fractions xA and xB. So that's our system. One of the things that you established last time is that, so there are the total number of variables including the temperature and the pressure. And let's say the mole fraction of A in each of the liquid and gas phases, right? But then there are constraints. Because the chemical potentials have to be equal, right? Chemical potential of A has to be equal in the liquid and gas. Same with B. Those two constraints reduce the number of independent variables. So there'll be two in this case rather than four independent variables. If you control those, then everything else will follow. What that means is if you've got a, if you control, if you fix the temperature and the total pressure, everything else should be determinable. No more free variables.
And then, what you saw is that in simple or ideal liquid mixtures, a result called Raoult's law would hold. Which just says that the partial pressure of A is equal to the mole fraction of A in the liquid times the pressure of pure A over the liquid. And so what this gives you is a diagram that looks like this. If we plot this versus xB, this is mole fraction of B in the liquid going from zero to one. Then we could construct a diagram of this sort. So this is the total pressure of A and B. The partial pressures are given by these lines. So this is our pA star and pB star. The pressures over the pure liquid A and B at the limits of mole fraction of B being zero and one. So in this situation, for example, A is the more volatile of the components. So its partial pressure over its pure liquid. At this temperature. Is higher than the partial pressure of B over its pure liquid. A would be the ethanol, for example and B the water in that mixture. OK. Then you started looking at both the gas and the liquid phase in the same diagram. So this is the mole fraction of the liquid. If you look and see, well, OK now we should be able to determine the mole fraction in the gas as well. Again, if we note total temperature and pressure, everything else must follow.
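[A small numerical sketch of Raoult's law as just stated. This is an editorial addition, not part of the lecture; the pure-liquid vapor pressures pA_star and pB_star are invented values chosen only so that A is the more volatile component.]

```python
# Raoult's law for an ideal two-component liquid mixture: each partial
# pressure is the liquid mole fraction times the pure-liquid vapor
# pressure, so the total pressure is linear in xB.

pA_star = 100.0  # assumed vapor pressure over pure liquid A (more volatile)
pB_star = 40.0   # assumed vapor pressure over pure liquid B

def pressures(xB):
    xA = 1.0 - xB
    pA = xA * pA_star           # Raoult's law for component A
    pB = xB * pB_star           # Raoult's law for component B
    return pA, pB, pA + pB      # total p = pA_star + (pB_star - pA_star)*xB

for xB in (0.0, 0.25, 0.5, 0.75, 1.0):
    pA, pB, p = pressures(xB)
    print(f"xB = {xB:4.2f}:  pA = {pA:6.1f}  pB = {pB:5.1f}  total p = {p:6.1f}")
```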
And so, you saw this worked out. Relation between p and yA, for example. The result was p is pA star times pB star over pA star plus pB star minus pA star times yA. And the point here is that unlike this case, where you have a linear relationship, the relationship between the pressure and the liquid mole fraction isn't linear. We can still plot it, of course. So if we do that, then we end up with a diagram that looks like the following. Now I'm going to keep both mole fractions, xB and yB, I've got some total pressure. I still have my linear relationship. And then I have a non-linear relationship between the pressure and the mole fraction in the gas phase. So let's just fill this in. Here is pA star still. Here's pB star. Of course, at the limits they're still, both mole fractions they're zero and one.
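[Another editorial sketch, not lecture material: the spoken formula above is p = pA* pB* / (pA* + (pB* - pA*) yA). The check below confirms it agrees with the linear liquid-side relation, using the same invented vapor pressures.]

```python
# Total pressure against the GAS mole fraction yA: the non-linear curve.
# It follows from pA = yA*p = xA*pA_star and pB = yB*p = xB*pB_star,
# together with xA + xB = 1.

pA_star, pB_star = 100.0, 40.0      # same invented values as above

def p_of_yA(yA):
    return pA_star * pB_star / (pA_star + (pB_star - pA_star) * yA)

# Consistency check: start from a liquid composition, get p and yA from
# Raoult's and Dalton's laws, and confirm p_of_yA reproduces p.
xA = 0.3
p = xA * pA_star + (1 - xA) * pB_star    # linear liquid-side relation
yA = xA * pA_star / p                    # Dalton's law: yA = pA / p
assert abs(p_of_yA(yA) - p) < 1e-9
print(f"xA = {xA}:  p = {p:.1f}, yA = {yA:.3f}, p(yA) = {p_of_yA(yA):.1f}")
```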
OK. I believe this is where you ended up at the end of the last lecture. But it's probably not so clear exactly how you read something like this. And use it. It's extremely useful. You just have to kind of learn how to follow what happens in a diagram like this. And that's what I want to spend some of today doing. Is just, walking through what's happening physically, with a container with a mixture of the two. And how does that correspond to what gets read off the diagram under different conditions. So. Let's just start somewhere on a phase diagram like this.
Let's start up here at some point one, so we're in the pure - well, not pure, you're in the all liquid phase. It's still a mixture. It's not a pure substance. pA star, pB star. There's the gas phase. So, if we start at one, and now there's some total pressure. And now we're going to reduce it. What happens? We start with a pure - with an all-liquid mixture. No gas. And now we're going to bring down the pressure. Allowing some of the liquid to go up into the gas phase. So, we can do that. And once we reach point two, then we find a coexistence curve. Now the liquid and gas are going to coexist. So this is the liquid phase. And that means that this must be xB. And it's xB at one, but it's also xB at two, and I want to emphasize that. So let's put our pressure for two. And if we go over here, this is telling us about the mole fraction in the gas phase. That's what these curves are, remember. So this is the one that's showing us the mole fraction in the liquid phase. This nonlinear one in the gas phase. So that means just reading off it, this is xB, that's the liquid mole fraction. Here's yB. The gas mole fraction. They're not the same, right, because of course the components have different volatility. A's more volatile.
So that means that the mole fraction of B in the liquid phase is higher than the mole fraction of B in the gas phase. Because A is the more volatile component. So more, relatively more, of A, the mole fraction of A is going to be higher up in the gas phase. Which means the mole fraction of B is lower in the gas phase. So, yB less than xB if A is more volatile. OK, so now what's happening physically? Well, we started at a point where we only had the liquid present. So at our initial pressure, we just have all liquid. There's some xB at one. That's all there is, there isn't any gas yet. Now, what happened here? Well, now we lowered the pressure. So you could imagine, well, we made the box bigger. Now, if the liquid was under pressure, being squeezed by the box, right then you could make the box a little bit bigger. And there's still no gas. That's moving down like this. But then you get to a point where there's just barely any pressure on top of the liquid. And then you keep expanding the box. Now some gas is going to form.
So now we're going to go to our case two. We've got a bigger box. And now, right around where this was, this is going to be liquid. And there's gas up here. So up here is yB at pressure two. Here's xB at pressure two. Liquid and gas. So that's where we are at point two here.
Now, what happens if we keep going? Let's lower the pressure some more. Well, we can lower it and do this. But really if we want to see what's happening in each of the phases, we have to stay on the coexistence curves. Those are what tell us what the pressures are. What the partial pressure are going to be in each of the phases. In each of the two, in the liquid and the gas phases. So let's say we lower the pressure a little more. What's going to happen is, then we'll end up somewhere over here. In the liquid, and that'll correspond to something over here in the gas. So here's three.
So now we're going to have, that's going to be xB at pressure three. And over here is going to be yB at pressure three. And all we've done, of course, is we've just expanded this further. So now we've got a still taller box. And the liquid is going to be a little lower because some of it has evaporated, formed the gas phase. So here's xB at three. Here's yB at three, here's our gas phase. Now we could decrease even further. And this is the sort of thing that you maybe can't do in real life. But I can do on a blackboard. I'm going to give myself more room on this curve, to finish this illustration. There. Beautiful. So now we can lower a little bit further, and what I want to illustrate is, if we keep going down, eventually we get to a pressure where now if we look over in the gas phase, we're at the same pressure, mole fraction that we had originally in the liquid phase. So let's make four even lower pressure. What does that mean? What it means is, we're running out of liquid. So what's supposed to happen is A is the more volatile component. So as we start opening up some room for gas to form, you get more of A in the gas phase. But of course, and the liquid is richer in B. But of course, eventually you run out of liquid. You make the box pretty big, and you run out, or you have the very last drop of liquid. So what's the mole fraction of B in the gas phase? It has to be the same as what it started in in the liquid phase. Because after all the total number of moles of A and B hasn't changed any. So if you take them all from the liquid and put them all up into the gas phase, it must be the same. So yB of four. Once you just have the last drop. So then yB of four is basically equal to xB of one. Because everything's now up in the gas phase. So in principle, there's still a tiny, tiny bit of xB at pressure four.
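[Editorial sketch of "point four": when the last drop of liquid is about to disappear, essentially the whole sample is gas, so the gas composition equals the starting overall composition and the pressure sits on the gas-side curve. The starting mole fraction and vapor pressures below are invented.]

```python
# Pressure at which the last drop of liquid vanishes in an ideal mixture.

pA_star, pB_star = 100.0, 40.0   # invented pure-liquid vapor pressures
xB_start = 0.6                   # assumed overall mole fraction of B

yA = 1.0 - xB_start              # at point four, all moles are in the gas
p4 = pA_star * pB_star / (pA_star + (pB_star - pA_star) * yA)

# Equivalent reciprocal form of the same condition: 1/p = yA/pA* + yB/pB*
assert abs(1.0 / p4 - (yA / pA_star + xB_start / pB_star)) < 1e-12
print(f"the last drop of liquid disappears near p = {p4:.1f}")
```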
Well, we could keep lowering the pressure. We could make the box a little bigger. Then the very last of the liquid is going to be gone. And what'll happen then is, we're all here. There's no more liquid. We're not going down on the coexistence curve any more. We don't have a liquid gas coexistence any more. We just have a gas phase. Of course, we can continue to lower the pressure. And then what we're doing is just going down here. So there's five. And five is the same as this only bigger. And so forth.
OK, any questions about how this works? It's really important to just gain facility in reading these things and seeing, OK, what is it that this is telling you. And you can see it's not complicated to do it, but it takes a little bit of practice. OK.
Now, of course, we could do exactly the same thing starting from the gas phase. And raising the pressure. And although you may anticipate that it's kind of pedantic, I really do want to illustrate something by it. So let me just imagine that we're going to do that. Let's start all in the gas phase. Up here's the liquid. pA star, pB star. And now let's start somewhere here. So we're down somewhere in the gas phase with some composition. So it's the same story, except now we're starting here. It's all gas. And we're going to start squeezing. We're increasing the pressure. And eventually here's one, will reach two, so of course here's our yB. We started with all gas, no liquid. So this is yB of one. It's the same as yB of two, I'm just raising the pressure enough to just reach the coexistence curve. And of course, out here tells us xB of two, right? So what is it saying? We've squeezed and started to form some liquid. And the liquid is richer in component B. Maybe it's ethanol water again. And we squeeze, and now we've got more water in the liquid phase than in the gas phase. Because water's the less volatile component. It's what's going to condense first.
So the liquid is rich in the less volatile of the components. Now, obviously, we can continue in doing exactly the reverse of what I showed you. But all I want to really illustrate is, this is a strategy for purification of the less volatile component. Once you've done this, well now you've got some liquid. Now you could collect that liquid in a separate vessel.
So let's collect the liquid mixture with xB of two. So it's got some mole fraction of B. So we've purified that. But now we're going to start, we've got pure liquid. Now let's make the vessel big. So it all goes into the gas phase. Then lower p. All gas. So we start with yB of three, which equals xB of two. In other words, it's the same mole fraction. So let's reconstruct that. So here's p of two. And now we're going to go to some new pressure. And the point is, now we're going to start, since the mole fraction in the gas phase that we're starting from is the same number as this was. So it's around here somewhere. That's yB of three equals xB of two. And we're down here. In other words, all we've done is make the container big enough so the pressure's low and it's all in the gas phase. That's all we have, is the gas. But the composition is whatever the composition is that we extracted here from the liquid. So this xB, which is the liquid mole fraction, is now yB, the gas mole fraction. Of course, the pressure is different. Lower than it was before.
Great. Now let's increase. So here's three. And now let's increase the pressure to four. And of course what happens, now we've got coexistence. So here's liquid. Here's gas. So, now we're over here again. There's xB at pressure four. Purer still in component B. We can repeat the same procedure. Collect it. All liquid, put it in a new vessel. Expand it, lower the pressure, all goes back into the gas phase. Do it all again. And the point is, what you're doing is walking along here. Here to here. Then you start down here, and go from here to here. From here to here. And you can purify. Now, of course, the optimal procedure, you have to think a little bit. Because if you really do precisely what I said, you're going to have a mighty little bit of material each time you do that. So yes it'll be the little bit you've gotten at the end is going to be really pure, but there's not a whole lot of it. Because, remember, what we said is let's raise the pressure until we just start being on the coexistence curve. So we've still got mostly gas. Little bit of liquid. Now, I could raise the pressure a bit higher. So that in the interest of having more of the liquid, when I do that, though, the liquid that I have at this higher pressure won't be as enriched as it was down here. Now, I could still do this procedure. I could just do more of them. So it takes a little bit of judiciousness to figure out how to optimize that. In the end, though, you can continue to walk your way down through these coexistence curves and purify repeatedly the component B, the less volatile of them, and end up with some amount of it. And there'll be some balance between the amount that you feel like you need to end up with and how pure you need it to be. Any questions about how this works?
So purification of less volatile components. Now, how much of each of these quantities in each of these phases? So, pertinent to this discussion, of course we need to know that. If you want to try to optimize a procedure like that, of course it's going to be crucial to be able to understand and calculate for any pressure that you decide to raise to, just how many moles do you have in each of the phases? So at the end of the day, you can figure out, OK, now when I reach a certain degree of purification, here's how much of the stuff I end up with. Well, that turns out to be reasonably straightforward to do. And so what I'll go through is a simple mathematical derivation. And it turns out that it allows you to just read right off the diagram how much of each material you're going to end up with.
So, here's what happens. This is something called the lever rule. How much of each component is there in each phase? So let's consider a case like this. Let me draw yet once again, just to get the numbering consistent. With how we'll treat this. So we're going to start here. And I want to draw it right in the middle, so I've got plenty of room. And we're going to go up to some pressure. And somewhere out there, now I can go to my coexistence curves. Liquid. And gas. And I can read off my values. So this is the liquid xB. So I'm going to go up to some point two, here's xB of two. Here's yB of two. Great. Now let's get these written in.
So let's just define terms a little bit. nA, nB. Or just our total number of moles. ng and n liquid, of course, total number of moles. In the gas and liquid phases. So let's just do the calculation for each of these two cases. We'll start with one. That's the easier case. Because then we have only the gas. So at one, all gas. It says pure gas in the notes, but of course that isn't the pure gas. It's the mixture of the two components. So. How many moles of A? Well it's the mole fraction of A in the gas. Times the total number of moles in the gas. Let me put one in here. Just to be clear. And since we have all gas, the number of moles in the gas is just the total number of moles. So this is just yA at one times n total. Let's just write that in. And of course n total is equal to nA plus nB.
So now let's look at condition two. Now we have to look a little more carefully. Because we have a liquid gas mixture. So nA is equal to yA at pressure two. Times the number of moles of gas at pressure two. Plus xA, at pressure two, times the number of moles of liquid at pressure two.
Now, of course, these things have to be equal. The total number of moles of A didn't change, right? So those are equal. Then yA of two times ng of two. Plus xA of two times n liquid of two, that's equal to yA of one times n total. Which is of course equal to yA of one times n gas at two plus n liquid at two. I suppose I could be, add that equality. Of course, it's an obvious one. But let me do it anyway. The total number of moles is equal to nA plus nB. But it's also equal to n liquid plus n gas. And that's all I'm taking advantage of here.
And now I'm just going to rearrange the terms. So I'm going to write yA at one minus yA at two, times ng at two, is equal to, and I'm going to take the other terms, the xA term. xA of two minus yA of one times n liquid at two. So I've just rearranged the terms. And I've done that because now, I think I omitted something here. yA of one times ng. No, I forgot a bracket, is what I did. yA of one there. And I did this because now what I want to do is look at the ratio of liquid to gas at pressure two. So, ratio of, I'll put it, gas to liquid, that's ng of two over n liquid at two. And that's just equal to xA of two minus yA of one, over yA of one minus yA of two.
So what does it mean? It's the ratio of these lever arms. That's what it's telling me. I can look, so I raise the pressure up to two. And so here's xB at two, here's yB at two. And I'm here somewhere. And this little amount and this little amount, that's that difference. And it's just telling me that ratio of those arms is the ratio of the total number of moles of gas to liquid. And that's great. Because now when I go back to the problem that we were just looking at, where I say, well I'm going to purify the less volatile component by raising the pressure until I'm at coexistence starting in the gas phase. Raise the pressure, I've got some liquid. But I also want some finite amount of liquid. But I don't want to just, when I get the very, very first drop of liquid now collected, of course it's enriched in the less volatile component. But there may be a minuscule amount, right? So I'll raise the pressure a bit more. I'll go up in pressure. And now, of course, when I do that the amount of enrichment of the liquid isn't as big as it was if I just raised it up enough to barely have any liquid. Then I'd be out here. But I've got more material in the liquid phase to collect. And that's what this allows me to calculate. Is how much do I get in the end. So it's very handy. You can also see, if I go all the way to the limit where the mole fraction in the liquid at the end is equal to what it was in the gas when I started, what that says is that there's no more gas left any more. In other words, these two things are equal. If I go all the way to the point where I've got all the, this is the amount I started with, in the pure gas phase, now I keep raising it all the way. Until I've got the same mole fraction in the liquid. Of course, we know what that really means. That means that I've gone all the way from pure gas to pure liquid. And the mole fraction in that case has to be the same. And what this is just telling us mathematically is, when that happens this is zero. That means I don't have any gas left. Yeah.
PROFESSOR: No. Because, so it's the mole fraction in the gas phase. But you've started with some amount that it's only going to go down from there.
PROFESSOR: Yeah. Yeah. Any other questions? OK.
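[Editorial sketch of the lever rule just derived, n_gas/n_liquid = (xA(2) - yA(1)) / (yA(1) - yA(2)), with a mole-balance check. The overall composition and the pressure p2 are invented, chosen to lie inside the two-phase region for the same invented vapor pressures.]

```python
# Numerical check of the lever rule for an ideal two-component mixture.

pA_star, pB_star = 100.0, 40.0
zA = 0.5      # overall mole fraction of A (the yA(1) of the derivation)
p2 = 65.0     # assumed pressure inside the two-phase region

xA2 = (p2 - pB_star) / (pA_star - pB_star)  # liquid line: p = pB* + (pA*-pB*)*xA
yA2 = xA2 * pA_star / p2                    # Dalton's law for the gas phase
ratio = (xA2 - zA) / (zA - yA2)             # lever rule: n_gas / n_liquid

# Splitting one total mole this way must conserve the moles of A.
n_gas = ratio / (1 + ratio)
n_liq = 1 / (1 + ratio)
assert abs(yA2 * n_gas + xA2 * n_liq - zA) < 1e-12
print(f"xA2 = {xA2:.3f}, yA2 = {yA2:.3f}, n_gas/n_liq = {ratio:.3f}")
```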
Well, now what I want to do is just put up a slightly different kind of diagram, but different in an important way. Namely, instead of showing the mole fractions as a function of the pressure. And I haven't written it in, but all of these are at constant temperature, right? I've assumed the temperature is constant in all these things. Now let's consider the other possibility, the other simple possibility, which is, let's hold the pressure constant and vary the temperature. Of course, you know in the lab, that's usually what's easiest to do. Now, unfortunately, the arithmetic gets more complicated. It's not monumentally complicated, but here in this case, where you have one linear relationship, which is very convenient. From Raoult's law. And then you have one non-linear relationship there for the mole fraction of the gas. In the case of temperature, they're both, neither one is linear. Nevertheless, we can just sketch what the diagram looks like. And of course it's very useful to do that, and see how to read off it. And I should say the derivation of the curves isn't particularly complicated. It's not particularly more complicated than what I think you saw last time to derive this. There's no complicated math involved. But the point is, the derivation doesn't yield a linear relationship for either the gas or the liquid part of the coexistence curve.
OK, so we're going to look at temperature and mole fraction phase diagrams. Again, a little more complicated mathematically but more practical in real use. And this is T. And here is the, sort of, form that these things take. So again, neither one is linear. Up here, now, of course if you raise the temperatures, that's where you end up with gas. If you lower the temperature, you condense and get the liquid. So, this is TA star. TB star. So now I want to stick with A as the more volatile component. At constant temperature, that meant that pA star is bigger than pB star. In other words, the vapor pressure over pure liquid A is higher than the vapor pressure over pure liquid B. Similarly, now I've got constant pressure and really what I'm looking at, let's say I'm at the limit where I've got the pure liquid. Or the pure A. And now I'm going to, let's say, raise the temperature until I'm at the liquid-gas equilibrium. That's just the boiling point. So if A is the more volatile component, it has the lower boiling point. And that's what this reflects. So higher pA star corresponds to lower TA star. Which is just the boiling point of pure A.
So, this is called the bubble line. That's called the dew line. All that means is, let's say I'm at high temperature. I've got all gas. Right? No coexistence, no liquid yet. And I start to cool things off. Just to where I just barely start to get liquid. What you see that as is, dew starts forming. A little bit of condensation. If you're outside, it means on the grass a little bit of dew is forming. Similarly, if I start at low temperature, all liquid now I start raising the temperature until I just start to boil. I just start to see the first bubbles forming. And so that's why these things have those names.
So now let's just follow along what happens when I do the same sort of thing that I illustrated there. I want to start at one point in this phase diagram. And then start changing the conditions. So let's start here. So I'm going to start all in the liquid phase. That is, the temperature is low. Here's xB. And my original temperature. Now I'm going to raise it. So if I raise it a little bit, I reach a point at which I first start to boil. Start to find some gas above the liquid. And if I look right here, that'll be my composition. Let me raise it a little farther, now that we've already seen the lever rule and so forth. I'll raise it up to here. And that means that out here, I suppose I should do here.
So, here is the liquid mole fraction at temperature two. xB at temperature two. This is yB at temperature two. The gas mole fraction. So as you should expect, what's going to happen here is that the gas is going to be lower in B. That means that the mole fraction of A must be higher in the gas phase. That's one minus yB. So yA, which is one minus yB, is higher in the gas phase than xA, which is one minus xB. In other words, the more volatile component is enriched in the gas phase.
Now, what does that mean? That means I could follow the same sort of procedure that I indicated before when we looked at the pressure mole fraction phase diagram. Namely, I could do this and now I could take the gas phase. Which has less of B. It has more of A. And I can collect it. And then I can reduce the temperature. So it liquefies. So I can condense it, in other words. So now I'm going to start with, let's say I lower the temperature enough so I've got basically pure liquid. But its composition is the same as the gas here. Because of course that's what that liquid is formed from. I collected the gas and separated it. So now I could start all over again. Except instead of being here, I'll be down here. And then I can raise the temperature again. To some place where I choose. I could choose here, and go all the way to here. A great amount of enrichment. But I know from the lever rule that if I do that, I'm going to have precious little material over here. So I might prefer to raise the temperature a little more. Still get a substantial amount of enrichment. And now I've got, in the gas phase, something still further enriched in component A. And again I can collect the gas. Condense it. Now I'm out here somewhere, I've got all liquid and I'll raise the temperature again. And I can again keep walking my way over.
And that's what happens during an ordinary distillation. Each step of the distillation walks along in the phase diagram at some selected point. And of course what you're doing is, you're always condensing the gas. And starting with fresh liquid that now is enriched in the more volatile of the components. So of course if you're really purifying, say, ethanol from an ethanol water mixture, that's how you do it. Ethanol is the more volatile component. So a still is set up. It will boil the stuff and collect the gas and condense it. And boil it again, and so forth. And the whole thing can be set up in a very efficient way. So you have essentially continuous distillation. Where you have a whole sequence of collection and condensation and reheating and so forth events. So then, in a practical way, it's possible to walk quite far along the distillation, the coexistence curve, and distill to really a high degree of purification. Any questions about how that works? OK.
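[Editorial sketch of the repeated boil-collect-condense cycle described above. The lecture gives no numbers here; this toy model assumes a constant relative volatility alpha, with equilibrium vapor y = alpha*x / (1 + (alpha - 1)*x), a textbook idealization rather than real ethanol-water behaviour, so the figures are purely illustrative.]

```python
# Each distillation stage: boil the liquid, collect the equilibrium
# vapor, condense it, and use the condensate as the next liquid.

alpha = 2.5   # assumed relative volatility of A (more volatile) over B
x = 0.10      # starting liquid mole fraction of A

for stage in range(1, 6):
    y = alpha * x / (1 + (alpha - 1) * x)   # equilibrium vapor composition
    print(f"stage {stage}: liquid xA = {x:.3f} -> vapor yA = {y:.3f}")
    x = y    # condense the vapor; it becomes the next stage's liquid
```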
I'll leave till next time the discussion of the chemical potentials. But what we'll do, just to foreshadow a little bit, what I'll do at the beginning of the next lecture is what's at the end of your notes here. Which is just to say OK, now if we look at Raoult's law, it's straightforward to say what is the chemical potential for each of the substances in the liquid and the gas phase. Of course, it has to be equal. Given that, that's for an ideal solution. We can gain some insight from that. And then look at real solutions, non-ideal solutions, and understand a lot of their behavior as well. Just from starting from our understanding of what the chemical potential does even in a simple ideal mixture. So we'll look at the chemical potentials. And then we'll look at non-ideal solution mixtures next time. See you then. | <urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857> | CC-MAIN-2013-20 | http://ocw.mit.edu/courses/chemistry/5-60-thermodynamics-kinetics-spring-2008/video-lectures/lecture-21-ideal-solutions/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.963655 | 7,164 | 3.921875 | 4 |
Phantom Phone Calls
Alleged contact with the dead has occurred universally throughout history, taking various forms: as dreams, waking visions and auditory hallucinations, either spontaneous or induced through trance. In many cultures, the spirits of the dead have been sought for their wisdom, advice and knowledge of the future. The dead also seem to initiate their own communication, using whatever means seem to be most effective.
With the advent of electromagnetic technology, mysterious messages have been communicated by telegraph, wireless, phonographs and radio. A curious phenomenon of modern times is the communication via the telephone. Phone calls from the dead seem to be random and occasional
occurrences that happen without explanation. The great majority are exchanges between persons who shared a close emotional tie while both were living: spouses, parents and children, siblings, and occasionally friends and other relatives.
Most communications are "intention" calls, initiated by the deceased to impart a message, such as farewell upon death, a warning of impending danger, or information the living needs to carry out a task. For example, actress Ida Lupino's father, Stanley, who died intestate in London during World War II, called Lupino six months after his death to relate information concerning his estate, the location of some unknown but important papers.
Some calls appear to have no other purpose than to make contact with the living; many of these occur on emotionally charged "anniversary" days, such as Mother's Day or Father's Day, a birthday or holiday. In a typical "anniversary" call, the dead may do nothing more than repeat a phrase over and over, such as "Hello, Mom, is that you?"
Persons who have received phone calls from the dead report that the voices are exactly the same as when the deceased was living; furthermore, the voice often uses pet names and words. The telephone usually rings normally, although some recipients say that the ring sounded flat and abnormal. In many cases, the connection is bad, with a great deal of static and line noise, and occasionally the faint voices of other persons are heard, as though lines have been crossed. In many cases, the voice of the dead one is difficult to hear and grows fainter as the call goes on. Sometimes, the voice just fades away but the line remains open, and the recipient hangs up after giving up on further communication. Sometimes the call is terminated by the dead and the recipient hears the click of disengagement; other times, the line simply goes dead.
The phantom phone calls typically occur when the recipient is in a passive state of mind. If the recipient knows the caller is dead, the shock is great and the phone call very brief; invariably, the caller terminates the call after a few seconds or minutes, or the line goes dead. If the recipient does not know the caller is dead, a lengthy conversation of up to 30 minutes or so may take place, during which the recipient is not aware of anything amiss. In a minority of cases, the call is placed person-to-person, long-distance with the assistance of a mysterious operator. Checks with the telephone company later turn up no evidence of a call being placed.
Similar to phone calls from the dead are "intention" phone calls occurring between two living persons. Such calls are much rarer than calls from the dead. In a typical "intention" call, the caller thinks about making the call but never does; the recipient nevertheless receives a call. In some cases, emergencies precipitate phantom calls: a surgeon is summoned by a nurse to the hospital to perform an emergency operation, a priest is called by a "relative" to give last rites to a dying man, and so forth.
Some persons who claim to have had UFO encounters report receiving harassing phantom phone calls. The calls are received soon after the witness returns home, or within a day or two of the encounter; in many cases, the calls come before the witness has shared the experience with anyone. Stranger still, they are often placed to unlisted phone numbers. The unidentified caller warns the witness not to talk and to "forget" what he or she saw.
Phone calls allegedly may be placed to the dead as well. The caller does not find out until sometime after the call that the person on the other end was dead. In one such case, a woman dreamed of a female friend she had not seen for several years. In the disturbing dream, she witnessed the friend sliding down into a pool of blood. Upon awakening, she worried that the dream was a portent of trouble, and called the friend. She was relieved when the friend answered. The friend explained that she had been in the hospital, had been released and was due to be readmitted in a few days. She demurred when the woman offered to visit, saying she would call later. The return call never came. The woman called her friend again, to be told by a relative that the friend had been dead for six months at the time the conversation took place.
In several cases studied by researchers, the deceased callers make reference to an anonymous "they" and caution that there is little time to talk. The remarks imply that communication between the living and the dead is not only difficult, but not necessarily desirable.
Most phone calls from the dead occur within 24 hours of the death of the caller. Most short calls come from those who have been dead seven days or less; most lengthy calls come from those who have been dead several months. One of the longest death-intervals on record is two years.
In a small number of cases, the callers are strangers who say they are calling on behalf of a third party, whom the recipient later discovered is dead.
Several theories exist as to the origin of phantom phone calls: (1) they are indeed placed by the dead, who somehow manipulate the telephone mechanisms and circuitry; (2) they are deceptions of elemental-type spirits who enjoy playing tricks on the living; (3) they are psychokinetic acts caused subconsciously by the recipient, whose intense desire to communicate with the dead creates a type of hallucinatory experience; (4) they are entirely a fantasy created by the recipient.
For the most part, phantom phone calls are not seriously regarded by parapsychologists. In the early 20th century, numerous devices were built by investigators in hopes of capturing ghostly voices; many of them were modifications of the telegraph and wireless. Thomas Alva Edison, whose parents were Spiritualists, believed that a telephone could be invented that would connect the living to the dead. He verified that he was working on such a device, but apparently it never was completed before his death.
"Psychic telephone" experiments were conducted in the 1940's in England and America. Interest in the phenomenon waned until the 1960’s, following the findings of Konstantin Raudive that ghostly voices could be captured on electromagnetic tape. | <urn:uuid:c6d4bada-2535-41ff-b2c6-f0357a52e392> | CC-MAIN-2013-20 | http://phantomuniverse.blogspot.com/2010/02/phone-calls-from-beyond.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.975881 | 1,417 | 2.59375 | 3 |
Why is it important for scientists to contribute to science education?
Our nation has failed to meet important educational challenges, and our children are ill prepared to respond to the demands of today's world. Results of the Third International Mathematics and Science Study (TIMSS)--and its successor, TIMSS-R--show that the relatively strong international performance of U.S. 4th graders successively deteriorates across 8th- and 12th-grade cohorts. Related studies indicate that U.S. PreK-12 curricula lack coherence, depth, and continuity and cover too many topics superficially. By high school, unacceptably low numbers of students show motivation or interest in enrolling in physics (only one-quarter of all students) or chemistry (only one-half).
We are rapidly approaching universal participation at the postsecondary level, but we still have critical science, technology, engineering, and mathematics (STEM) workforce needs and too few teachers who have studied science or mathematics. Science and engineering degrees as a percentage of the degrees conferred each year have remained relatively constant at about 5%. In this group, women and minorities are gravely underrepresented.
The consequences of these conditions are serious. The U.S. Department of Labor estimates that 60% of the new jobs being created in our economy today will require technological literacy, yet only 22% of the young people entering the job market now actually possess those skills. By 2010, all jobs will require some form of technological literacy, and 80% of those jobs haven't even been created yet. We must prepare our students for a world that we ourselves cannot completely anticipate. This will require the active involvement of scientists and engineers.
How is NSF seeking to encourage scientists to work on educational issues?
The NSF Strategic Plan includes two relevant goals: to develop "a diverse, internationally competitive, and globally engaged workforce of scientists, engineers, and well-prepared citizens" and to support "discovery across the frontiers of science and engineering, connected to learning, innovation, and service to society." To realize both of these goals, our nation's scientists and engineers must care about the educational implications of their work and explore educational issues as seriously and knowledgeably as they do their research questions. The phrase "integration of research and education" conveys two ideas. First, good research generates an educational asset, and we must effectively use that asset. Second, we need to encourage more scientists and engineers to pursue research careers that focus on teaching and learning within their own disciplines.
All proposals submitted to NSF for funding must address two merit criteria: intellectual merit and broader impacts.
In everyday terms, our approach to evaluating the broader impact of proposals is built on the philosophy that scientists and engineers should pay attention to teaching and value it, and that their institutions should recognize, support, and reward faculty, as well as researchers in government and industry, who take their role as educators seriously and approach instruction as a scholarly act. We think of education very broadly, including formal education (K-graduate and postdoctoral study) and informal education (efforts to promote public understanding of science and research outside the traditional educational environment).
What does it mean to take education seriously and explore it knowledgeably?
Any scholarly approach to education must be intentional, be based on a valid body of knowledge, and be rigorously assessed. That is, our approach to educational questions must be a scholarly act. NSF actively invests in educational reform and models that encourage scientists and engineers to improve curriculum, teaching, and learning in science and mathematics at all levels of the educational system from elementary school to graduate study and postdoctoral work.
We recognize that to interest faculty and practicing scientists and engineers in education, we must support research that generates convincing evidence that changing how we approach the teaching of science and mathematics will pay off in better learning and deeper interest in these fields.
Here are a few of the most recent efforts to stimulate interest in education that might be of interest to Next Wave readers. (For more information, go to the NSF Education and Human Resources directorate's Web site .)
The GK-12 program supports fellowships and training to enable STEM graduate students and advanced undergraduates to serve in K-12 schools as resources in STEM content and applications. Outcomes include improved communication and teaching skills for the Fellows, increased content knowledge for preK-12 teachers, enriched preK-12 student learning, and stronger partnerships between higher education and local schools.
The Centers for Learning and Teaching (CLT) program is a "comprehensive, research-based effort that addresses critical issues and national needs of the STEM instructional workforce across the entire spectrum of formal and informal education." The goal of the CLT program is to support the development of new approaches to the assessment of learning, research on learning within the disciplines, the design and development of effective curricular materials, and research-based approaches to instruction--and through this work to increase the number of people who do research on education in the STEM fields. This year (FY 02) we are launching some prototype higher education centers to reform teaching and learning in our nation's colleges and universities through a mix of research, faculty development and exploration of instructional practices that can promote learning. Like other NSF efforts, the Centers incorporate a balanced strategy of attention to people, ideas and tools. We hope to encourage more science and engineering faculty to work on educational issues in both K-12 and in postsecondary education.
If you are interested in these issues and want to pursue graduate or postdoctoral study, or want to develop a research agenda on learning in STEM fields, find the location and goals of the currently funded centers and also check later this summer to find out which higher education CLT prototypes are funded.
The following solicitations all involve the integration of research and education as well as attention to broadening participation in STEM careers:
The Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP) program seeks to increase the number of students (U.S. citizens or permanent residents) pursuing and receiving associate or baccalaureate degrees in established or emerging fields within STEM.
The Faculty Early Career Development (CAREER) program recognizes and supports the early career development activities of those teacher-scholars who are most likely to become the academic leaders of the 21st century.
The Course, Curriculum, and Laboratory Improvement (CCLI) program seeks to improve the quality of STEM education for all students and targets activities affecting learning environments, course content, curricula, and educational practices. CCLI offers three tracks: educational materials development, national dissemination, and adaptation and implementation.
The Integrative Graduate Education and Research Training (IGERT) program addresses the challenges of preparing Ph.D. scientists and engineers with the multidisciplinary backgrounds and the technical, professional, and personal skills needed for the career demands of the future.
The Vertical Integration of Research and Education in the Mathematical Sciences (VIGRE) program supports institutions with Ph.D.-granting departments in the mathematical sciences in carrying out innovative educational programs, at all levels, that are integrated with the department's research activities.
The Increasing the Participation and Advancement of Women in Academic Science and Engineering Careers (ADVANCE) program seeks to increase the participation of women in the scientific and engineering workforce through the increased representation and advancement of women in academic science and engineering careers.
The Science, Technology, Engineering and Mathematics Teacher Preparation ( STEMTP ) program involves partnerships among STEM and education faculty working with preK-12 schools to develop exemplary preK-12 teacher education models that will improve the science and mathematics preparation of future teachers.
The Noyce Scholarship Supplements program supports scholarships and stipends for STEM majors and STEM professionals seeking to become preK-12 teachers.
The views expressed are those of the authors and do not necessarily reflect those of the National Science Foundation. | <urn:uuid:21bc3a09-e45d-497b-add1-4880324aff25> | CC-MAIN-2013-20 | http://sciencecareers.sciencemag.org/print/career_magazine/previous_issues/articles/2002_07_12/nodoi.4298361476632626608 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950222 | 1,620 | 3.03125 | 3 |
Excerpts for Thames : The Biography
The River as Fact
It has a length of 215 miles, and is navigable for 191 miles. It is the longest river in England but not in Britain, where the Severn is longer by approximately 5 miles. Nevertheless it must be the shortest river in the world to acquire such a famous history. The Amazon and the Mississippi cover almost 4,000 miles, and the Yangtze almost 3,500 miles; but none of them has arrested the attention of the world in the manner of the Thames.
It runs along the borders of nine English counties, thus reaffirming its identity as a boundary and as a defence. It divides Wiltshire from Gloucestershire, and Oxfordshire from Berkshire; as it pursues its way it divides Surrey from Middlesex (or Greater London as it is inelegantly known) and Kent from Essex. It is also a border of Buckinghamshire. It guarded these once tribal lands in the distant past, and will preserve them into the imaginable future.
There are 134 bridges along the length of the Thames, and forty-four locks above Teddington. There are approximately twenty major tributaries still flowing into the main river, while others such as the Fleet have now disappeared under the ground. Its "basin," the area from which it derives its water from rain and other natural forces, covers an area of some 5,264 square miles. And then there are the springs, many of them in the woods or close to the streams beside the Thames. There is one in the wood below Sinodun Hills in Oxfordshire, for example, which has been described as an "everlasting spring" always fresh and always renewed.
The average flow of the river at Teddington, chosen because it marks the place where the tidal and non-tidal waters touch, has been calculated at 1,145 millions of gallons (5,205 millions of litres) each day or approximately 2,000 cubic feet (56.6 cubic metres) per second. The current moves at a velocity between 1Ú2 and 23Ú4 miles per hour. The main thrust of the river flow is known to hydrologists as the "thalweg"; it does not move in a straight and forward line but, mingling with the inner flow and the variegated flow of the surface and bottom waters, takes the form of a spiral or helix. More than 95 per cent of the river's energy is lost in turbulence and friction.
The direction of the flow of the Thames is therefore quixotic. It might be assumed that it would move eastwards, but it defies any simple prediction. It flows north-west above Henley and at Teddington, west above Abingdon, south from Cookham and north above Marlow and Kingston. This has to do with the variegated curves of the river. It does not meander like the Euphrates, where according to Herodotus the voyager came upon the same village three times on three separate days, but it is circuitous. It specialises in loops. It will take the riparian traveller two or three times as long to cover the same distance as a companion on the high road. So the Thames teaches you to take time, and to view the world from a different vantage.
The average "fall" or decline of the river from its beginning to its end is approximately 17 to 21 inches (432 to 533 mm) per mile. It follows gravity, and seeks out perpetually the simplest way to the sea. It falls some 600 feet (183 m) from source to sea, with a relatively precipitous decline of 300 feet (91.5 m) in the first 9 miles; it falls 100 (30.4 m) more in the next 11 miles, with a lower average for the rest of its course. Yet averages may not be so important. They mask the changeability and idiosyncrasy of the Thames. The mean width of the river is given as 1,000 feet (305 m), and a mean depth of 30 feet (9 m); but the width varies from 1 or 2 feet (0.3 to 0.6 m) at Trewsbury to 51Ú2 miles at the Nore.
The tide, in the words of Tennyson, is that which "moving seems asleep, too full for sound and foam." On its flood inward it can promise benefit or danger; on its ebb seaward it suggests separation or adventure. It is one general movement but it comprises a thousand different streams and eddies; there are opposing streams, and high water is not necessarily the same thing as high tide. The water will sometimes begin to fall before the tide is over. The average speed of the tide lies between 1 and 3 knots (1.15 and 3.45 miles per hour), but at times of very high flow it can reach 7 knots (8 miles per hour). At London Bridge the flood tide runs for almost six hours, while the ebb tide endures for six hours and thirty minutes. The tides are much higher now than at other times in the history of the Thames. There can now be a difference of some 24 feet (7.3 m) between high and low tides, although the average rise in the area of London Bridge is between 15 and 22 feet (4.5 and 6.7 m). In the period of the Roman occupation, it was a little over 3 feet (0.9 m). The high tide, in other words, has risen greatly over a period of two thousand years.
The reason is simple. The south-east of England is sinking slowly into the water at the rate of approximately 12 inches (305 mm) per century. In 4000 BC the land beside the Thames was 46 feet (14 m) higher than it is now, and in 3000 BC it was some 31 feet (9.4 m) higher. When this is combined with the water issuing from the dissolution of the polar ice-caps, the tides moving up the lower reaches of the Thames are increasing at a rate of 2 feet (0.6 m) per century. That is why the recently erected Thames Barrier will not provide protection enough, and another barrier is being proposed.
The tide of course changes in relation to the alignment of earth, moon and sun. Every two weeks the high "spring" tides reach their maximum two days after a full moon, while the low "neap" tides occur at the time of the half-moon. The highest tides occur at the times of equinox; this is the period of maximum danger for those who live and work by the river. The spring tides of late autumn and early spring are also hazardous. It is no wonder that the earliest people by the Thames venerated and propitiated the river.
The general riverscape of the Thames is varied without being in any sense spectacular, the paraphernalia of life ancient and modern clustering around its banks. It is in large part now a domesticated river, having been tamed and controlled by many generations. It is in that sense a piece of artifice, with some of its landscape deliberately planned to blend with the course of the water. It would be possible to write the history of the Thames as a history of a work of art.
It is a work still in slow progress. The Thames has taken the same course for ten thousand years, after it had been nudged southward by the glaciation of the last ice age. The British and Roman earthworks by the Sinodun Hills still border the river, as they did two thousand years before. Given the destructive power of the moving waters, this is a remarkable fact. Its level has varied over the millennia--there is a sudden and unexpected rise at the time of the Anglo-Saxon settlement, for example--and the discovery of submerged forests testifies to incidents of overwhelming flood. Its appearance has of course also altered, having only recently taken the form of a relatively deep and narrow channel, but its persistence and identity through time are an aspect of its power.
Yet of course every stretch has its own character and atmosphere, and every zone has its own history. Out of oppositions comes energy, out of contrasts beauty. There is the overwhelming difference of water within it, varying from the pure freshwater of the source through the brackish zone of estuarial water to the salty water in proximity to the sea. Given the eddies of the current, in fact, there is rather more salt by the Essex shore than by the Kentish shore. There are manifest differences between the riverine landscapes of Lechlade and of Battersea, of Henley and of Gravesend; the upriver calm is in marked contrast to the turbulence of the long stretches known as River of London and then London River. After New Bridge the river becomes wider and deeper, in anticipation of its change.
The rural landscape itself changes from flat to wooded in rapid succession, and there is a great alteration in the nature of the river from the cultivated fields of Dorchester to the thick woods of Cliveden. From Godstow the river becomes a place of recreation, breezy and jaunty with the skiffs and the punts, the sports in Port Meadow and the picnic parties on the banks by Binsey. But then by some change of light it becomes dark green, surrounded by vegetation like a jungle river; and then the traveller begins to see the dwellings of Oxford, and the river changes again. Oxford is a pivotal point. From there you can look upward and consider the quiet source; or you can look downstream and contemplate the coming immensity of London.
In the reaches before Lechlade the water makes its way through isolated pastures; at Wapping and Rotherhithe the dwellings seem to drop into it, as if overwhelmed by numbers. The elements of rusticity and urbanity are nourished equally by the Thames. That is why parts of the river induce calm and forgetfulness, and others provoke anxiety and despair. It is the river of dreams, but it is also the river of suicide. It has been called liquid history because within itself it dissolves and carries all epochs and generations. They ebb and flow like water.
The River as Metaphor
The river runs through the language, and we speak of its influence in every conceivable context. It is employed to characterise life and death, time and destiny; it is used as a metaphor for continuity and dissolution, for intimacy and transitoriness, for art and history, for poetry itself. In The Principles of Psychology (1890) William James first coined the phrase "stream of consciousness" in which "every definite image of the mind is steeped . . . in the free water that flows around it." Thus "it flows" like the river itself. Yet the river is also a token of the unconscious, with its suggestion of depth and invisible life.
The river is a symbol of eternity, in its unending cycle of movement and change. It is one of the few such symbols that can readily be understood, or appreciated, and in the continuing stream the mind or soul can begin to contemplate its own possible immortality.
In the poetry of John Denham's "Cooper's Hill" (1642), the Thames is a metaphor for human life. How slight its beginning, how confident its continuing course, how ineluctable its destination within the great ocean:
Hasting to pay his tribute to the sea,
Like mortal life to meet eternity.
The poetry of the Thames has always emphasised its affiliations with human purpose and with human realities. So the personality of the river changes in the course of its journey from the purity of its origins to the broad reaches of the commercial world. The river in its infancy is undefiled, innocent and clear. By the time it is closely pent in by the city, it has become dank and foul, defiled by greed and speculation. In this regress it is the paradigm of human life and of human history. Yet the river has one great advantage over its metaphoric companions. It returns to its source, and its corruption can be reversed. That is why baptism was once instinctively associated with the river. The Thames has been an emblem of redemption and of renewal, of the hope of escaping from time itself.
When Wordsworth observed the river at low tide, with the vista of the "mighty heart" of London "lying still," he used the imagery of human circulation. It is the image of the river as blood, pulsing through the veins and arteries of its terrain, without which the life of London would seize up. Sir Walter Raleigh, contemplating the Thames from the walk by his cell in the Tower, remarked that the "blood which disperseth itself by the branches or veins through all the body, may be resembled to these waters which are carried by brooks and rivers overall the earth." He wrote his History of the World (1610) from his prison cell, and was deeply imbued with the current of the Thames as a model of human destiny. It has been used as the symbol for the unfolding of events in time, and carries the burden of past events upon its back. For Raleigh the freight of time grew ever more complex and wearisome as it proceeded from its source; human life had become darker and deeper, less pure and more susceptible to the tides of affairs. There was one difference Raleigh noticed in his history, when he declared that "for this tide of man's life, after it once turneth and declineth, ever runneth with a perpetual ebb and falling stream, but never floweth again."
The Thames has also been understood as a mirror of morality. The bending rushes and the yielding willows afford lessons in humility and forbearance; the humble weeds along its banks have been praised for their lowliness and absence of ostentation. And who has ventured upon the river without learning the value of patience, of endurance, and of vigilance? John Denham makes the Thames the subject of native discourse in a further sense:
Though deep, yet clear; though gentle, yet not dull;
Strong without rage; without o'erflowing, full.
This suggests that the river represents an English measure, an aesthetic harmony to be sought or wished for, but in the same breath Denham seems to be adverting to some emblem of Englishness itself. The Thames is a metaphor for the country through which it runs. It is modest and moderate, calm and resourceful; it is powerful without being fierce. It is not flamboyantly impressive. It is large without being too vast. It eschews extremes. It weaves its own course without artificial diversions or interventions. It is useful for all manner of purposes. It is a practical river.
When Robert Menzies, an erstwhile Australian prime minister, was taken to Runnymede he was moved to comment upon the "secret springs" of the "slow English character." This identification of the land with the people, the characteristics of the earth and water with the temperament of their inhabitants, remains a poignant one. There is an inward and intimate association between the river and those who live beside it, even if that association cannot readily be understood.
From the Hardcover edition. | <urn:uuid:c8589dab-6a33-4d56-9c69-99faf059b9e4> | CC-MAIN-2013-20 | http://sherloc.imcpl.org/enhancedContent.pl?contentType=ExcerptDetail&isbn=9780385528474 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962477 | 3,140 | 3.390625 | 3 |
Groundhogs, as a species, have a large range in size. There are the medium-sized rodents I grew up with, averaging around 4 kg, and groundhogs—like a certain Phil—that are probably more like 14 kg. This is the likely source of my earlier confusion, as that's a huge discrepancy in size. Evidently, it's all in the diet, much like humans.
Where I grew up, in rural Northern Minnesota, we called the groundhog a woodchuck; I thought that the groundhog was some fat cat, East Coast, liberal rodent. As it would turn out, they are actually one in the same creature—Marmota monax, a member of the squirrel family. Woodchucks spend a lot of their time in burrows. It is their safe haven from their many predators, and they are quick to flee to it at the first sign of danger. They will sometimes emit a loud whistle on their way to alert others in the area that something is awry. Groundhogs enjoy raiding our gardens and digging up sod, thereby destroying what we've spent countless hours toiling upon.
Look for groundhog signs. You might not even know there is a groundhog around until your garden has been devoured or your tractor damaged by a collapsed groundhog den. Things to look for are large nibble marks on your prized veggies, gnaw marks on the bark of young fruit trees, root vegetables pulled up (or their tops trimmed off), groundhog-sized holes (25–30 cm) anywhere near your garden, or mounds of dirt near said holes. If you see these signs, take action. Don't wait or it will be too late! If you know it will be a problem and do nothing, you can't blame the animal.
Set groundhog traps. This technique takes some skill as you need to be able to pick a spot in the path of the animal, camouflage it, and mask your strong human scent. Setting a spring trap, whether coil or long-spring, is usually just a matter of compressing the springs and setting a pin that keeps the jaws open into the pan or trigger. Make sure your trap is anchored securely with a stake. Check your traps often, and dispatch the animal quickly and humanely. Shooting them in the head or a hearty whack to the head with club will do the trick. If you can't deal with this, you have no business setting traps. Call a professional.
Guns kill groundhogs. I have never shot a groundhog. I rarely have had problems with them, and they move so damned fast it is difficult to get a shot off. If I had to, I know how I would do it. First, be sure it is legal in your area, and be sure to follow gun safety protocols. After that, it's just a matter of learning where your target is going to be comfortable and let their guard down. I would follow their tracks back to their den, find a spot downwind to sit with a clear shooting lane, and make sure nothing you shouldn't hit with a bullet is down range. Then, I would wait, my sights set on the den, until the groundhog stuck its head up—quick and easy.
Demolish the groundhog burrows. If you find a couple holes around your yard, they are likely the entrances to an elaborate tunnel maze carved into the earth beneath you. About all you can do, short of digging the whole mess up, is to try and fill it in from the top side. First, fill it with a bunch of rocks and then soil—make sure to really pack it in. This will make it difficult for the groundhog to reclaim its hole without a lot of work. You probably want to do this in tandem with other control methods such as trapping, shooting, or fumigating to prevent the groundhog from just digging a new hole.
Do some landscaping and build barriers. As with the control of many pests, it is advisable to keep a yard free of brush, undercover, and dead trees. These types of features are attractive to groundhogs as cover, and without it, they are less likely to want to spend time there. If you want to keep a groundhog out of an area, consider a partially buried fence. This will require a lot of work, but it is going to help a lot. Make sure it extends up at least a meter, and that it is buried somewhere around 30 cm deep. Angle the fencing outward 90 degrees when you bury it, and it will make digging under it a very daunting task for your furry friend.
Try using fumigants to kill groundhogs. What is nice about this product is that you can kill the animal and bury it all in one stroke. The best time to do this is in the spring when the mother will be in the den with her still helpless young. Also, the soil will likely be damp, which helps a lot. You should definitely follow the directions on the package, but the way they usually work is that you cover all but one exit, set off the smoke bomb, shove it down the hole, and quickly cover it up. Check back in a day or two to see if there is any sign of activity, and if so, do it again or consider a different control method. It is important that you don't do this if the hole is next to your house or if there is any risk of a fire.
Poisons are a last resort. I am not a fan of poisons because it is difficult to target what will eat said poison in the wild. Also, you are left with the issue of where the groundhog will die and how bad it will smell if it is somewhere under your house. Or, if it is outside somewhere, who will be affected by eating the dead animal? Where does it end? If you want to use poison, you're on your own.
Use live traps. This is a good option for those of you not too keen on killing things. Try jamming the door open and leaving bait inside for the taking a couple of times so they get used to it. Then, set it normally and you've got your groundhog (or a neighborhood cat). Now what? The relocation is just as important; you need to choose a place that is far away from other humans and can likely support a groundhog. Good luck.
Predator urine. The idea is simple: form a perimeter around an area you want to protect. If the groundhog doesn't recognize the smell as a natural predator, it is probably not going to work too well. Look for brands that have wolf and bobcat urine. Apply regularly, or as the manufacturer recommends. Remember, if it rains, the urine has probably washed away.
Repellents. Another popular method involves pepper-based repellents. These deter groundhogs by tasting horrible and burning their mucous membranes. You can do a perimeter with powdered cayenne pepper or just apply it to the things you want spared in your garden. Be sure to wash your vegetables off before using them (which you should be doing anyway). | <urn:uuid:077623b0-183e-4d40-bf5d-168228829785> | CC-MAIN-2013-20 | http://simplepestcontrol.com/groundhog-control.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.967829 | 1,467 | 3.03125 | 3 |
The Operations Layer defines the operational processes and procedures necessary to deliver Information Technology (IT) as a Service. This layer leverages IT Service Management concepts that can be found in prevailing best practices such as ITIL and MOF. The main focus of the Operations Layer is to execute the business requirements defined at the Service Delivery Layer. Cloud-like service attributes cannot be achieved through technology alone and require a high level of IT Service Management maturity.
Change Management process is responsible for controlling the life cycle of all changes. The primary objective of Change Management is to eliminate or at least minimize disruption while desired changes are made to services. Change Management focuses on understanding and balancing the cost and risk of making the change versus the benefit of the change to either the business or the service. Driving predictability and minimizing human involvement are the core principles for achieving a mature Service Management process and ensuring changes can be made without impacting the perception of continuous availability.
Standard (Automated) Change
Non-Standard (Mechanized) Change
It is important to note that a record of all changes must be maintained, including Standard Changes that have been automated. The automated process for Standard Changes should include the creation and population of the change record per standard policy in order to make sure auditability.
Automating changes also enables other key principles such as:
The Service Asset and Configuration Management process is responsible for maintaining information on the assets, components, and infrastructure needed to provide a service. Critical configuration data for each component, and its relationship to other components, must be accurately captured and maintained. This configuration data should include past and current states and future-state forecasts, and be easily available to those who need it. Mature Service Asset and Configuration Management processes are necessary for achieving predictability.
A virtualized infrastructure adds complexity to the management of Configuration Items (CIs) due to the transient nature of the relationship between guests and hosts in the infrastructure. How is the relationship between CIs maintained in an environment that is potentially changing very frequently?
A service comprises software, platform, and infrastructure layers. Each layer provides a level of abstraction that is dependent on the layer beneath it. This abstraction hides the implementation and composition details of the layer. Access to the layer is provided through an interface and as long as the fabric is available, the actual physical location of a hosted VM is irrelevant. To provide Infrastructure as a Service (IaaS), the configuration and relationship of the components within the fabric must be understood, whereas the details of the configuration within the VMs hosted by the fabric are irrelevant.
The Configuration Management System (CMS) will need to be partitioned, at a minimum, into physical and logical CI layers. Two Configuration Management Databases (CMDBs) might be used; one to manage the physical CIs of the fabric (facilities, network, storage, hardware, and hypervisor) and the other to manage the logical CIs (everything else). The CMS can be further partitioned by layer, with separate management of the infrastructure, platform, and software layers. The benefits and trade-offs of each approach are summarized below.
CMS Partitioned by Layer
CMS Partitioned into Physical and Logical
Table 2: Configuration Management System Options
Partitioning logical and physical CI information allows for greater stability within the CMS, because CIs will need to be changed less frequently. This means less effort will need to be expended to accurately maintain the information. During normal operations, mapping a VM to its physical host is irrelevant. If historical records of a VM’s location are needed, (for example, for auditing or Root Cause Analysis) they can be traced through change logs.
The physical or fabric CMDB will need to include a mapping of fault domains, upgrade domains, and Live Migration domains. The relationship of these patterns to the infrastructure CIs will provide critical information to the Fabric Management System.
The Release and Deployment Management processes are responsible for making sure that approved changes to a service can be built, tested, and deployed to meet specifications with minimal disruption to the service and production environment. Where Change Management is based on the approval mechanism (determining what will be changed and why), Release and Deployment Management will determine how those changes will be implemented.
The primary focus of Release and Deployment Management is to protect the production environment. The less variation is found in the environment, the greater the level of predictability – and, therefore, the lower the risk of causing harm when new elements are introduced. The concept of homogenization of physical infrastructure is derived from this predictability principle. If the physical infrastructure is completely homogenized, there is much greater predictability in the release and deployment process.
While complete homogenization is the ideal, it may not be achievable in the real world. Homogenization is a continuum. The closer an environment gets to complete homogeneity, the more predictable it becomes and the fewer the risks. Full homogeneity means not only that identical hardware models are used, but all hardware configuration is identical as well. When complete hardware homogeneity is not feasible, strive for configuration homogeneity wherever possible.
Figure 2: Homogenization Continuum
The Scale Unit concept drives predictability in Capacity Planning and agility in the release and deployment of physical infrastructure. The hardware specifications and configurations have been pre-defined and tested, allowing for a more rapid deployment cycle than in a traditional data center. Similarly, known quantities of resources are added to the data center when the Capacity Plan is triggered. However, when the Scale Unit itself must change (for example, when a vendor retires a hardware model), a new risk is introduced to the private cloud.
There will likely be a period where both n and n-1 versions of the Scale Unit exist in the infrastructure, but steps can be taken to minimize the risk this creates. Work with hardware vendors to understand the life cycle of their products and coordinate changes from multiple vendors to minimize iterations of the Scale Unit change. Also, upgrading to the new version of the Scale Unit should take place one Fault Domain at a time wherever possible. This will make sure that if an incident occurs with the new version, it can be isolated to a single Fault Domain.
Homogenization of the physical infrastructure means consistency and predictability for the VMs regardless of which physical host they reside on. This concept can be extended beyond the production environment. The fabric can be partitioned into development, test, and pre-production environments as well. Eliminating variability between environments enables developers to more easily optimize applications for a private cloud and gives testers more confidence that the results reflect the realities of production, which in turn should greatly improve testing efficiency.
The virtualized infrastructure enables workloads to be transferred more easily between environments. All VMs should be built from a common set of component templates housed in a library, which is used across all environments. This shared library includes templates for all components approved for production, such as VM images, the gold OS image, server role templates, and platform templates. These component templates are downloaded from the shared library and become the building blocks of the development environment. From development, these components are packaged together to create a test candidate package (in the form of a virtual hard disk (VHD) that is uploaded to the library. This test candidate package can then be deployed by booting the VHD in the test environment. When testing is complete, the package can again be uploaded to the library as a release candidate package – for deployment into the pre-production environment, and ultimately into the production environment.
Since workloads are deployed by booting a VM from a VHD, the Release Management process occurs very quickly through the transfer of VHD packages to different environments. This also allows for rapid rollback should the deployment fail; the current release can be deleted and the VM can be booted off the previous VHD.
Virtualization and the use of standard VM templates allow us to rethink software updates and patch management. As there is minimal variation in the production environment and all services in production are built with a common set of component templates, patches need not be applied in production. Instead, they should be applied to the templates in the shared library. Any services in production using that template will require a new version release. The release package is then rebuilt, tested, and redeployed, as shown below.
Figure 3: The Release Process
This may seem counter-intuitive for a critical patch scenario, such as when an exploitable vulnerability is exposed. But with virtualization technologies and automated test scripts, a new version of a service can be built, tested, and deployed quite rapidly.
Variation can also be reduced through standardized, automated test scenarios. While not every test scenario can or should be automated, tests that are automated will improve predictability and facilitate more rapid test and deployment timelines. Test scenarios that are common for all applications, or the ones that might be shared by certain application patterns, are key candidates for automation. These automated test scripts may be required for all release candidates prior to deployment and would make sure further reduction in variation in the production environment.
Knowledge Management is the process of gathering, analyzing, storing, and sharing knowledge and information within an organization. The goal of Knowledge Management is to make sure that the right people have access to the information they need to maintain a private cloud. As operational knowledge expands and matures, the ability to intelligently automate operational tasks improves, providing for an increasingly dynamic environment.
An immature approach to Knowledge Management costs organizations in terms of slower, less-efficient problem solving. Every problem or new situation that arises becomes a crisis that must be solved. A few people may have the prior experience to resolve the problem quickly and calmly, but their knowledge is not shared. Immature knowledge management creates greater stress for the operations staff and usually results in user dissatisfaction with frequent and lengthy unexpected outages. Mature Knowledge Management processes are necessary for achieving a service provider’s approach to delivering infrastructure. Past knowledge and experience is documented, communicated, and readily available when needed. Operating teams are no longer crisis-driven as service-impacting events grow less frequent and are quickly resolves when they do occur.
When designing a private cloud, development of the Health Model will drive much of the information needed for Knowledge Management. The Health Model defines the ideal states for each infrastructure component and the daily, weekly, monthly, and as-needed tasks required to maintain this state. The Health Model also defines unhealthy states for each infrastructure component and actions to be taken to restore their health. This information will form the foundation of the Knowledge Management database.
Aligning the Health Model with alerts allows these alerts to contain links to the Knowledge Management database describing the specific steps to be taken in response to the alert. This will help drive predictability as a consistent, proven set of actions will be taken in response to each alert.
The final step toward achieving a private cloud is the automation of responses to each alert as defined in the Knowledge Management database. Once these responses are proven successful, they should be automated to the fullest extent possible. It is important to note, though, that automating responses to alerts does not make them invisible and forgotten. Even when alerts generate a fully automated response they must be captured in the Service Management system. If the alert indicates the need for a change, the change record should be logged. Similarly, if the alert is in response to an incident, an incident record should be created. These automated workflows must be reviewed regularly by Operations staff to make sure the automated action achieves the expected result. Finally, as the environment changes over time, or as new knowledge is gained, the Knowledge Management database must be updated along with the automated workflows that are based on that knowledge.
The goal of Incident Management is to resolve events that are impacting, or threaten to impact, services as quickly as possible with minimal disruption. The goal of Problem Management is to identify and resolve root causes of incidents that have occurred as well as identify and prevent or minimize the impact of incidents that may occur.
Pinpointing the root cause of an incident can become more challenging when workloads are abstracted from the infrastructure and their physical location changes frequently. Additionally, incident response teams may be unfamiliar with virtualization technologies (at least initially) which could also lead to delays in incident resolution. Finally, applications may have neither a robust Health Model nor expose all of the health information required for a proactive response. All of this may lead to an increase in reactive (user initiated) incidents which will likely increase the Mean-Time-to-Restore-Service (MTRS) and customer dissatisfaction.
This may seem to go against the resiliency principle, but note that virtualization alone will not achieve the desired resiliency unless accompanied by highly mature IT Service Management (ITSM) maturity and a robust automated health monitoring system.
The drive for resiliency requires a different approach to troubleshooting incidents. Extensive troubleshooting of incidents in production negatively impacts resiliency. Therefore, if an incident cannot be quickly resolved, the service can be rolled back to the previous version, as described under Release and Deployment. Further troubleshooting can be done in a test environment without impacting the production environment. Troubleshooting in the production environment may be limited to moving the service to different hosts (ruling out infrastructure as the cause) and rebooting the VMs. If these steps do not resolve the issue, the rollback scenario could be initiated.
Minimizing human involvement in incident management is critical for achieving resiliency. The troubleshooting scenarios described earlier could be automated, which will allow for identification and possible resolution of the root much more quickly than non-automated processes. But automation may mask the root cause of the incident. Careful consideration should be given to determining which troubleshooting steps should be automated and which require human analysis.
Human Analysis of Troubleshooting
If a compute resource fails, it is no longer necessary to treat the failure as an incident that must be fixed immediately. It may be more efficient and cost effective to treat the failure as part of the decay of the Resource Pool. Rather than treat a failed server as an incident that requires immediate resolution, treat it as a natural candidate for replacement on a regular maintenance schedule, or when the Resource Pool reaches a certain threshold of decay. Each organization must balance cost, efficiency, and risk as it determines an acceptable decay threshold – and choose among these courses of action:
The benefits and trade-off of each of the options are listed below:
Option 4 is the least desirable, as it does not take advantage of the resiliency and cost reduction benefits of a private cloud. A well-planned Resource Pool and Reserve Capacity strategy will account for Resource Decay.
Option 1 is the most recommended approach. A predictable maintenance schedule allows for better procurement planning and can help avoid conflicts with other maintenance activities, such as software upgrades. Again, a well-planned Resource Pool and Reserve Capacity strategy will account for Resource Decay and minimize the risk of exceeding critical thresholds before the scheduled maintenance.
Option 3 will likely be the only option for self-contained Scale Unit scenarios, as the container must be replaced as a single Scale Unit when the decay threshold is reached.
The goal of Request Fulfillment is to manage requests for service from users. Users should have a clear understanding of the process they need to initiate to request service and IT should have a consistent approach for managing these requests.
Much like any service provider, IT should clearly define the types of requests available to users in the service catalog. The service catalog should include an SLA on when the request will be completed, as well as the cost of fulfilling the request, if any.
The types of requests available and their associated costs should reflect the actual cost of completing the request and this cost should be easily understood. For example, if a user requests an additional VM, its daily cost should be noted on the request form, which should also be exposed to the organization or person responsible for paying the bill.
It is relatively easy to see the need for adding resources, but more difficult to see when a resource is no longer needed. A process for identifying and removing unused VMs should be put into place. There are a number of strategies to do this, depending on the needs of a given organization, such as:
The benefits and trade-offs of each of these approaches are detailed below:
Option 4 affords the greatest flexibility, while still working to minimize server sprawl. When a user requests a VM, they have the option of setting an expiration date with no reminder (for example, if they know they will only be using the workload for one week). They could set an expiration deadline with a reminder (for example, a reminder that the VM will expire after 90 days unless they wish to renew). Lastly, the user may request no expiration date if they expect the workload will always be needed. If the last option is chosen, it is likely that underutilized VMs will still be monitored and owners notified.
Finally, self-provisioning should be considered, if appropriate, when evaluating request fulfillment options to drive towards minimal human involvement. Self-provisioning allows great agility and user empowerment, but it can also introduce risks depending on the nature of the environment in which these VMs are introduced.
For an enterprise organization, the risk of bypassing formal build, stabilize, and deploy processes may or may not outweigh the agility benefits gained from the self-provisioning option. Without strong governance to make sure each VM has an end-of-life strategy, the fabric may become congested with VM server sprawl. The pros and cons of self-provisioning options are listed in the next diagram:
The primary decision point for determining whether to use self-provisioning is the nature of the environment. Allowing developers to self-provision into the development environment greatly facilitates agile development, and allows the enterprise to maintain release management controls as these workloads are moved out of development and into test and production environments.
A user-led community environment isolated from enterprise mission-critical applications may also be a good candidate for self-provisioning. As long as user actions are isolated and cannot impact mission critical applications, the agility and user empowerment may justify the risk of giving up control of release management. Again, it is essential that in such a scenario, expiration timers are included to prevent server sprawl.
The goal of Access Management is to make sure authorized users have access to the services they need while preventing access by unauthorized users. Access Management is the implementation of security policies defined by Information Security Management at the Service Delivery Layer.
Maintaining access for authorized users is critical for achieving the perception of continuous availability. Besides allowing access, Access Management defines users who are allowed to use, configure, or administer objects in the Management Layer. From a provider’s perspective, it answers questions like:
From a consumer’s perspective, it answers questions such as:
Access Management is implemented at several levels and can include physical barriers to systems such as requiring access smartcards at the data center, or virtual barriers such as network and Virtual Local Area Network (VLAN) separation, firewalling, and access to storage and applications.
Taking a service provider’s approach to Access Management will also make sure that resource segmentation and multi-tenancy is addressed.
Resource Pools may need to be segmented to address security concerns around confidentiality, integrity, and availability. Some tenants may not wish to share infrastructure resources to keep their environment isolated from others. Access Management of shared infrastructure requires logical access control mechanisms such as encryption, access control rights, user groupings, and permissions. Dedicated infrastructure also relies on physical access control mechanisms, where infrastructure is not physically connected, but is effectively isolated through a firewall or other mechanisms.
The goal of systems administration is to make sure that the daily, weekly, monthly, and as-needed tasks required to keep a system healthy are being performed.
Regularly performing ongoing systems administration tasks is critical for achieving predictability. As the organization matures and the Knowledge Management database becomes more robust and increasingly automated, systems administration tasks is no longer part of the job role function. It is important to keep this in mind as an organization moves to a private cloud. Staff once responsible for systems administration should refocus on automation and scripting skills – and on monitoring the fabric to identify patterns that indicate possibilities for ongoing improvement of existing automated workflows. | <urn:uuid:809862ce-ee94-40a5-8400-9b6e42bd25fc> | CC-MAIN-2013-20 | http://social.technet.microsoft.com/wiki/contents/articles/4518.private-cloud-planning-guide-for-operations.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.926484 | 4,173 | 2.640625 | 3 |
Published May 2008
Properly located digital signage in high traffic areas on school campuses provides students and faculty with a convenient resource to stay up to date about the latest school news and activities.
Signage in Education
By Anthony D. Coppedge
Technology gets high marks.
Digital media and communications have come to play a vital role in people’s everyday lives, and a visit to the local K-12 school, college or university campus quickly illustrates the many ways in which individuals rely on audio and visual technologies each day. The shift from analog media to digital, represented by milestones ranging from the replacement of the Walkman by the MP3 player to the DTV transition currently enabling broadcasts beyond the home to mobile devices, has redefined the options that larger institutions, including those in our educational system, have for sharing information across the campus and facilities.
Flexible And Efficient
Digital signage, in particular, is proving to be a flexible and efficient tool for delivering specific and up-to-date information within the educational environment. As a high-resolution, high-impact medium, it lives up to the now-widespread expectation that visual media be crisp and clear, displayed on a large screen. Although the appeal of implementing digital signage networks does stem, in part, from plummeting screen prices and sophisticated content delivery systems, what’s equally or more important is that digital signage provides valuable information to the people who need it, when and where they need it. On school campuses—whether preschool, elementary, high school or post-secondary institutions—it does so effectively, for both educational purposes and for the security and safety of staff, administration and the student body as a whole.
School campuses have begun leveraging digital signage technology in addition to, or in place of, printed material, such as course schedules, content and location; time-sensitive school news and updates; maps and directions; welcome messages for visitors and applicants; and event schedules. Digital signage simplifies creation and delivery of multiple channels of targeted content to different displays on the network. Although a display in the college admissions office might provide prospective students with a glimpse into student life, for example, another display outside a lab or seminar room might present the courses or lectures scheduled for that space throughout the day.
This model of a distribution concept illustrates a school distributing educational content over a public TV broadcast network.
At the K-12 level, digital signage makes it easy to deliver information such as team or band practice schedules, or to post the cafeteria menu and give students information encouraging sound food choices. Digital signage in the preschool and daycare setting makes it easy for teachers and caregivers to share targeted educational programming with their classes.
Among the most striking benefits of communicating through digital signage is the quality of the pictures and the flexibility with which images, text and video can be combined in one or more windows to convey information. Studies have shown that dynamic signage is noticed significantly more often than are static displays and, furthermore, that viewers are more likely to remember that dynamic content.
Though most regularly updated digital signage content tends to be text-based, digital signage networks also have the capacity to enable the live campus-wide broadcast of key events: a speech by a visiting dignitary, the basketball team’s first trip to the state or national tournament, or even the proceedings at commencement and graduation. When time is short, it’s impractical to gather the entire student body in one place or there simply isn’t the time or means to deliver the live message in any other way.
The ability to share critical information to the entire school community, clearly and without delay, has made digital signage valuable as a tool for emergency response and communications. Parents, administrators, teachers and students today can’t help but be concerned about the school’s ability to respond quickly and effectively to a dangerous situation, whether the threat be from another person, an environmental hazard, an unpredictable weather system or some other menace.
Digital signage screens installed across a school campus can be updated immediately to warn students and staff of the danger, and to provide unambiguous instructions for seeking shelter or safety: where to go and what to do.
Although early digital signage systems relied on IP-based networks and point-to-point connections between a player and each display, current solutions operate on far less costly and much more scalable platforms. Broadcast-based digital signage models allow content to be distributed remotely from a single data source via transport media, such as digital television broadcast, satellite, broadband and WiMAX.
The staff member responsible for maintaining the digital signage network can use popular content creation toolsets to populate both dynamic and static displays. This content is uploaded to a server that, in turn, feeds the digital signage network via broadcast, much like datacasting, to the receive site for playout. By slotting specific content into predefined display templates, each section with its own playlist, the administrator can schedule display of multiple elements simultaneously or a single-window static, video or animated display.
The playlist enables delivery of the correct elements to the targeted display both at the scheduled time and in the appropriate layout. In networks with multicast-enabled routers, the administrator can schedule unique content for displays in different locations.
In the case of delivering emergency preparedness or response information across a campus, content can be created through the same back-office software used for day-to-day digital signage displays. Within the broadcast-based model, three components ensure the smooth delivery of content to each display.
A transmission component serves as a content hub, allocating bandwidth and inserting content into the broadcast stream based on the schedule dictated by the network’s content management component. Content is encapsulated into IP packets that, in turn, are encapsulated into MPEG2 packets for delivery.
Generic content distribution model for digital signage solution.
The content management component of the digital signage network provides for organization and scheduling of content, as well as targeting of that content to specific receivers. Flexibility in managing the digital signage system enables distribution of the same emergency message across all receivers and associated displays, or the delivery of select messages to particular displays within the larger network.
With tight control over the message being distributed, school administrators can immediately provide the information that students and staff in different parts of the campus need to maintain the safest possible environment. Receivers can be set to confirm receipt of content, in turn assuring administrative and emergency personnel that their communications are, in fact, being conveyed as intended. On the receiving end, the third component of the system, content, is extracted from the digital broadcast stream and fed to the display screen.
The relationships that many colleges and universities share with public TV stations provide an excellent opportunity for establishing a digital signage network. Today, the deployed base of broadcast-based content distribution systems in public TV stations is capable of reaching 50% of the US population. These stations’ DTV bandwidth is used not only for television programming, but also to generate new revenues and aggressively support public charters by providing efficient delivery of multimedia content for education, homeland security and other public services.
Educational institutions affiliated with such broadcasters already have the technology, and much of the necessary infrastructure, in place to launch a digital signage network. In taking advantage of the public broadcaster’s content delivery system, the college or university also can tap into the station’s existing links with area emergency response agencies.
As digital signage technology continues to evolve, educational institutions will be able to extend both urgent alerts and more mundane daily communications over text and email messaging. Smart content distribution systems will push consistent information to screens of all sizes, providing messages not only to displays, but also to the cell phones and PDAs so ubiquitous in US schools.
The continued evolution of MPH technology will support this enhancement in delivery of messages directly to each student. MPH in-band mobile DTV technology leverages ATSC DTV broadcasts to enable extensions of digital signage and broadcast content directly to personal devices, whether stationary or on the move. Rather than rely on numerous unrelated systems, such as ringing bells, written memos and intercom announcements, schools can unify messaging and its delivery, in turn reducing the redundancy involved in maintaining communications with the student body.
An effective digital signage network provides day-to-day benefits for an elementary school, high school, college or university while providing invaluable emergency communications capabilities that increasingly are considered a necessity, irrespective of whether they get put to the test. The selection of an appropriate digital signage model depends, of course, on the needs of the organization.
Educational institutions share many of the same concerns held by counterparts in the corporate world, and key among those concerns is the simple matter of getting long-term value and use out of their technical investments. However, before even addressing the type of content the school wishes to create and distribute, the systems integrator, consultant or other AV and media professional should work with the eventual operators of the digital signage network to identify and map out the existing workflow. Once the system designer, integrator or installer has evaluated how staff currently work in an emergency to distribute information, he then can adjust established processes and adapt them to the digital signage model.
The administrative staff who will be expected to update or import schedules to the digital signage system will have a much lower threshold of acceptance for a workflow that is completely unfamiliar or at odds with all their previous experience. An intuitive, easy-to-use system is more likely to be used in an emergency if it has become familiar in everyday practice.
Turnkey digital signage solutions provide end-to-end functionality without forcing users and integrators to work with multiple systems and interfaces. The key in selecting a vendor lies in ensuring that they share the same vision and are moving in the same direction as the end user.
In addition to providing ease of use, digital signage solutions for the education market also must provide a high level of built-in security, preventing abuse or misuse by hackers, or by those without the knowledge, experience or authority to distribute content over the network. Because the network is a conduit for emergency messaging, its integrity must be protected. So, the installer must not only identify the number of screens to be used and where, but also determine who gets access to the system and how that access remains secure.
Scalable systems that can grow in number of displays or accommodate infrastructure improvements and distribution of higher-bandwidth content will provide the long-term utility that makes the investment worthwhile. By going into the project with an understanding of existing infrastructure, such as cabling, firewalls, etc., and the client’s goals, the professional is equipped to advise the customer as to the necessity, options and costs for enhancing or improving on that infrastructure. As with any other significant deployment of AV technology, the installation of a digital signage network also requires knowledge of the site, local building codes, the availability of power and so forth.
Ralph Bachofen, senior director of Product Management and Marketing, Triveni Digital, has more than 15 years of experience in voice and multimedia over Internet Protocol (IP), telecommunications and the semiconductor business.
The infrastructure requirements of a school in deploying a digital signage network will vary, depending on the type of content being delivered through the system. HD and streaming content clearly are bandwidth hogs, whereas tickers and other text-based messages put a low demand on bandwidth. Most facilities today are equipped with Gigabit Ethernet networks that can handle the demands of live video delivery and lighter content.
However, even bandwidth-heavy video can be delivered by less robust networks, as larger clips can be “trickled” over time to the site, as long as storage on the unit is adequate. There is no set standard for the bandwidth required, just as there is no single way to use a digital signage solution. It all depends on how the system will be used, and that’s an important detail to address up front.
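To make that sizing conversation concrete, here is a minimal back-of-the-envelope sketch. The clip size and rate caps below are invented for illustration; they are not figures from any particular vendor or installation.

```python
# Rough arithmetic for "trickled" content delivery.
# All numbers are illustrative assumptions, not vendor specifications.

def trickle_hours(clip_gb: float, rate_mbps: float) -> float:
    """Hours needed to move a clip at a capped background rate."""
    clip_megabits = clip_gb * 8 * 1000  # GB -> megabits (decimal units)
    return clip_megabits / rate_mbps / 3600

# A 4 GB HD clip with 10 Mb/s of reserved background bandwidth:
print(f"{trickle_hours(4.0, 10.0):.1f} h")  # ~0.9 h

# The same clip over a 1 Mb/s share of a constrained branch link:
print(f"{trickle_hours(4.0, 1.0):.1f} h")   # ~8.9 h -- schedule overnight
```

Either way, the player needs at least the clip's size in local storage, which is why the up-front questions about how the system will be used matter as much as raw bandwidth.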
Most digital signage solutions feature built-in content-creation tools and accept content from third-party applications, as well. Staff members who oversee the system thus can use familiar applications to create up-to-date content for the school’s digital signage network. This continuity in workflow adds to the value and efficiency of the network in everyday use, reducing the administrative burden while serving as a safeguard in the event of an emergency.
For educational institutions, the enormous potential of the digital signage network can open new doors for communicating with students and staff, but only if it is put to use effectively. Comprehensive digital signage solutions offer ease of use to administration, deliver clear and useful messaging on ordinary days and during crises, and feature robust design and underlying technology that supports continual use well into the future. | <urn:uuid:4f104f5b-67cc-4de8-87a7-62cb466de5d1> | CC-MAIN-2013-20 | http://soundandcommunications.com/archive_site/video/2008_05_video.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.922634 | 2,596 | 2.5625 | 3 |
How We Found the Missing Memristor
The memristor, the functional equivalent of a synapse, could revolutionize circuit design
[Illustration: Bryan Christie Design] THINKING MACHINE: This artist's conception of a memristor shows a stack of multiple crossbar arrays, the fundamental structure of R. Stanley Williams's device. Because memristors behave functionally like synapses, replacing a few transistors in a circuit with memristors could lead to analog circuits that can think like a human brain.
It’s time to stop shrinking. Moore’s Law, the semiconductor industry’s obsession with the shrinking of transistors and their commensurate steady doubling on a chip about every two years, has been the source of a 50-year technical and economic revolution. Whether this scaling paradigm lasts for five more years or 15, it will eventually come to an end. The emphasis in electronics design will have to shift to devices that are not just increasingly infinitesimal but increasingly capable.
Earlier this year, I and my colleagues at Hewlett-Packard Labs, in Palo Alto, Calif., surprised the electronics community with a fascinating candidate for such a device: the memristor. It had been theorized nearly 40 years ago, but because no one had managed to build one, it had long since become an esoteric curiosity. That all changed on 1 May, when my group published the details of the memristor in Nature.
Combined with transistors in a hybrid chip, memristors could radically improve the performance of digital circuits without shrinking transistors. Using transistors more efficiently could in turn give us another decade, at least, of Moore’s Law performance improvement, without requiring the costly and increasingly difficult doublings of transistor density on chips. In the end, memristors might even become the cornerstone of new analog circuits that compute using an architecture much like that of the brain.
For nearly 150 years, the known fundamental passive circuit elements were limited to the capacitor (discovered in 1745), the resistor (1827), and the inductor (1831). Then, in a brilliant but underappreciated 1971 paper, Leon Chua, a professor of electrical engineering at the University of California, Berkeley, predicted the existence of a fourth fundamental device, which he called a memristor. He proved that memristor behavior could not be duplicated by any circuit built using only the other three elements, which is why the memristor is truly fundamental.
Memristor is a contraction of “memory resistor,” because that is exactly its function: to remember its history. A memristor is a two-terminal device whose resistance depends on the magnitude and polarity of the voltage applied to it and the length of time that voltage has been applied. When you turn off the voltage, the memristor remembers its most recent resistance until the next time you turn it on, whether that happens a day later or a year later.
Think of a resistor as a pipe through which water flows. The water is electric charge. The resistor’s obstruction of the flow of charge is comparable to the diameter of the pipe: the narrower the pipe, the greater the resistance. For the history of circuit design, resistors have had a fixed pipe diameter. But a memristor is a pipe that changes diameter with the amount and direction of water that flows through it. If water flows through this pipe in one direction, it expands (becoming less resistive). But send the water in the opposite direction and the pipe shrinks (becoming more resistive). Further, the memristor remembers its diameter when water last went through. Turn off the flow and the diameter of the pipe ”freezes” until the water is turned back on.
That freezing property suits memristors brilliantly for computer memory. The ability to indefinitely store resistance values means that a memristor can be used as a nonvolatile memory. That might not sound like very much, but go ahead and pop the battery out of your laptop, right now—no saving, no quitting, nothing. You’d lose your work, of course. But if your laptop were built using a memory based on memristors, when you popped the battery back in, your screen would return to life with everything exactly as you left it: no lengthy reboot, no half-dozen auto-recovered files.
But the memristor’s potential goes far beyond instant-on computers to embrace one of the grandest technology challenges: mimicking the functions of a brain. Within a decade, memristors could let us emulate, instead of merely simulate, networks of neurons and synapses. Many research groups have been working toward a brain in silico: IBM’s Blue Brain project, Howard Hughes Medical Institute’s Janelia Farm, and Harvard’s Center for Brain Science are just three. However, even a mouse brain simulation in real time involves solving an astronomical number of coupled partial differential equations. A digital computer capable of coping with this staggering workload would need to be the size of a small city, and powering it would require several dedicated nuclear power plants.
Memristors can be made extremely small, and they function like synapses. Using them, we will be able to build analog electronic circuits that could fit in a shoebox and function according to the same physical principles as a brain.
A hybrid circuit—containing many connected memristors and transistors—could help us research actual brain function and disorders. Such a circuit might even lead to machines that can recognize patterns the way humans can, in those critical ways computers can’t—for example, picking a particular face out of a crowd even if it has changed significantly since our last memory of it.
The story of the memristor is truly one for the history books. When Leon Chua, now an IEEE Fellow, wrote his seminal paper predicting the memristor, he was a newly minted and rapidly rising professor at UC Berkeley. Chua had been fighting for years against what he considered the arbitrary restriction of electronic circuit theory to linear systems. He was convinced that nonlinear electronics had much more potential than the linear circuits that dominate electronics technology to this day.
Chua discovered a missing link in the pairwise mathematical equations that relate the four circuit quantities—charge, current, voltage, and magnetic flux—to one another. These can be related in six ways. Two are connected through the basic physical laws of electricity and magnetism, and three are related by the known circuit elements: resistors connect voltage and current, inductors connect flux and current, and capacitors connect voltage and charge. But one equation is missing from this group: the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit—or more subtly, a mathematical doppelgänger defined by Faraday’s Law as the time integral of the voltage across the circuit. This distinction is the crux of a raging Internet debate about the legitimacy of our memristor [see sidebar, “Resistance to Memristance”].
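In symbols, using the standard textbook definitions rather than anything specific to the HP device, each passive element ties together one pair of the four circuit variables (charge q, current i = dq/dt, voltage v, and flux φ = ∫v dt):

```latex
R = \frac{dv}{di} \ \text{(resistor)}, \qquad
C = \frac{dq}{dv} \ \text{(capacitor)}, \qquad
L = \frac{d\varphi}{di} \ \text{(inductor)}, \qquad
M = \frac{d\varphi}{dq} \ \text{(memristor)}.
```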
Chua’s memristor was a purely mathematical construct that had more than one physical realization. What does that mean? Consider a battery and a transformer. Both provide identical voltages—for example, 12 volts of direct current—but they do so by entirely different mechanisms: the battery by a chemical reaction going on inside the cell and the transformer by taking a 110-V ac input, stepping that down to 12 V ac, and then transforming that into 12 V dc. The end result is mathematically identical—both will run an electric shaver or a cellphone, but the physical source of that 12 V is completely different.
Conceptually, it was easy to grasp how electric charge could couple to magnetic flux, but there was no obvious physical interaction between charge and the integral over the voltage.
Chua demonstrated mathematically that his hypothetical device would provide a relationship between flux and charge similar to what a nonlinear resistor provides between voltage and current. In practice, that would mean the device’s resistance would vary according to the amount of charge that passed through it. And it would remember that resistance value even after the current was turned off.
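A one-line chain rule, standard memristor theory rather than anything particular to any one device, shows why a flux-charge relationship behaves as a charge-controlled resistance:

```latex
\varphi = f(q) \;\Rightarrow\;
v = \frac{d\varphi}{dt} = \frac{df}{dq}\,\frac{dq}{dt} = M(q)\,i,
\qquad M(q) \equiv \frac{df}{dq}.
```

Because M depends on q = ∫i dt, the instantaneous resistance encodes the entire history of the charge that has flowed through the device.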
He also noticed something else—that this behavior reminded him of the way synapses function in a brain.
Even before Chua had his eureka moment, however, many researchers were reporting what they called “anomalous” current-voltage behavior in the micrometer-scale devices they had built out of unconventional materials, like polymers and metal oxides. But the idiosyncrasies were usually ascribed to some mystery electrochemical reaction, electrical breakdown, or other spurious phenomenon attributed to the high voltages that researchers were applying to their devices.
As it turns out, a great many of these reports were unrecognized examples of memristance. After Chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at HP Labs, and we only really understood the device about two years ago. So what took us so long?
It’s all about scale. We now know that memristance is an intrinsic property of any electronic circuit. Its existence could have been deduced by Gustav Kirchhoff or by James Clerk Maxwell, if either had considered nonlinear circuits in the 1800s. But the scales at which electronic devices have been built for most of the past two centuries have prevented experimental observation of the effect. It turns out that the influence of memristance obeys an inverse square law: memristance is a million times as important at the nanometer scale as it is at the micrometer scale, and it’s essentially unobservable at the millimeter scale and larger. As we build smaller and smaller devices, memristance is becoming more noticeable and in some cases dominant. That’s what accounts for all those strange results researchers have described. Memristance has been hidden in plain sight all along. But in spite of all the clues, our finding the memristor was completely serendipitous.
In 1995, I was recruited to HP Labs to start up a fundamental research group that had been proposed by David Packard. He decided that the company had become large enough to dedicate a research group to long-term projects that would be protected from the immediate needs of the business units. Packard had an altruistic vision that HP should “return knowledge to the well of fundamental science from which HP had been withdrawing for so long.” At the same time, he understood that long-term research could be the strategic basis for technologies and inventions that would directly benefit HP in the future. HP gave me a budget and four researchers. But beyond the comment that “molecular-scale electronics” would be interesting and that we should try to have something useful in about 10 years, I was given carte blanche to pursue any topic we wanted. We decided to take on Moore’s Law.
At the time, the dot-com bubble was still rapidly inflating its way toward a resounding pop, and the existing semiconductor road map didn’t extend past 2010. The critical feature size for the transistors on an integrated circuit was 350 nanometers; we had a long way to go before atomic sizes would become a limitation. And yet, the eventual end of Moore’s Law was obvious. Someday semiconductor researchers would have to confront physics-based limits to their relentless descent into the infinitesimal, if for no other reason than that a transistor cannot be smaller than an atom. (Today the smallest components of transistors on integrated circuits are roughly 45 nm wide, or about 220 silicon atoms.)
That’s when we started to hang out with Phil Kuekes, the creative force behind the Teramac (tera-operation-per-second multiarchitecture computer)—an experimental supercomputer built at HP Labs primarily from defective parts, just to show it could be done. He gave us the idea to build an architecture that would work even if a substantial number of the individual devices in the circuit were dead on arrival. We didn’t know what those devices would be, but our goal was electronics that would keep improving even after the devices got so small that defective ones would become common. We ate a lot of pizza washed down with appropriate amounts of beer and speculated about what this mystery nanodevice would be.
We were designing something that wouldn’t even be relevant for another 10 to 15 years. It was possible that by then devices would have shrunk down to the molecular scale envisioned by David Packard or perhaps even be molecules. We could think of no better way to anticipate this than by mimicking the Teramac at the nanoscale. We decided that the simplest abstraction of the Teramac architecture was the crossbar, which has since become the de facto standard for nanoscale circuits because of its simplicity, adaptability, and redundancy.
The crossbar is an array of perpendicular wires. Anywhere two wires cross, they are connected by a switch. To connect a horizontal wire to a vertical wire at any point on the grid, you must close the switch between them. Our idea was to open and close these switches by applying voltages to the ends of the wires. Note that a crossbar array is basically a storage system, with an open switch representing a zero and a closed switch representing a one. You read the data by probing the switch with a small voltage.
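As a toy software model of that abstraction (purely illustrative: real crossbars are read with sense amplifiers and must contend with sneak-path currents, none of which is modeled here):

```python
import numpy as np

class Crossbar:
    """Toy crossbar memory: a switch at every wire crossing.
    Open switch = 0, closed switch = 1 (illustrative model only)."""

    def __init__(self, rows: int, cols: int):
        self.switches = np.zeros((rows, cols), dtype=bool)  # all open

    def write(self, row: int, col: int, bit: int) -> None:
        # In hardware: apply a programming voltage across wire `row`
        # and wire `col`; here we simply set the stored state.
        self.switches[row, col] = bool(bit)

    def read(self, row: int, col: int) -> int:
        # In hardware: probe with a small voltage and compare the
        # resulting current to a threshold; here we return the state.
        return int(self.switches[row, col])

xbar = Crossbar(8, 8)
xbar.write(2, 5, 1)
assert xbar.read(2, 5) == 1
```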
Like everything else at the nanoscale, the switches and wires of a crossbar are bound to be plagued by at least some nonfunctional components. These components will be only a few atoms wide, and the second law of thermodynamics ensures that we will not be able to completely specify the position of every atom. However, a crossbar architecture builds in redundancy by allowing you to route around any parts of the circuit that don’t work. Because of their simplicity, crossbar arrays have a much higher density of switches than a comparable integrated circuit based on transistors.
But implementing such a storage system was easier said than done. Many research groups were working on such a cross-point memory—and had been since the 1950s. Even after 40 years of research, they had no product on the market. Still, that didn’t stop them from trying. That’s because the potential for a truly nanoscale crossbar memory is staggering; picture carrying around the entire Library of Congress on a thumb drive.
One of the major impediments for prior crossbar memory research was the small off-to-on resistance ratio of the switches (40 years of research had never produced anything surpassing a factor of 2 or 3). By comparison, modern transistors have an off-to-on resistance ratio of 10 000 to 1. We calculated that to get a high-performance memory, we had to make switches with a resistance ratio of at least 1000 to 1. In other words, in its off state, a switch had to be 1000 times as resistive to the flow of current as it was in its on state. What mechanism could possibly give a nanometer-scale device a three-orders-of-magnitude resistance ratio?
We found the answer in scanning tunneling microscopy (STM), an area of research I had been pursuing for a decade. A tunneling microscope generates atomic-resolution images by scanning a very sharp needle across a surface and measuring the electric current that flows between the atoms at the tip of the needle and the surface the needle is probing. The general rule of thumb in STM is that moving that tip 0.1 nm closer to a surface increases the tunneling current by one order of magnitude.
We needed some similar mechanism by which we could change the effective spacing between two wires in our crossbar by 0.3 nm. If we could do that, we would have the 1000:1 electrical switching ratio we needed.
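The arithmetic behind that target follows from the standard exponential form of the tunneling current, with the decay constant inferred from the one-decade-per-0.1-nm rule of thumb quoted above:

```latex
I(d) \propto e^{-2\kappa d}, \qquad
e^{2\kappa(0.1\,\mathrm{nm})} = 10 \;\Rightarrow\; 2\kappa \approx 23\,\mathrm{nm}^{-1},
\qquad
\frac{I(d)}{I(d + 0.3\,\mathrm{nm})} = e^{2\kappa(0.3\,\mathrm{nm})} \approx 10^{3}.
```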
Our constraints were getting ridiculous. Where would we find a material that could change its physical dimensions like that? That is how we found ourselves in the realm of molecular electronics.
Conceptually, our device was like a tiny sandwich. Two platinum electrodes (the intersecting wires of the crossbar junction) functioned as the “bread” on either end of the device. We oxidized the surface of the bottom platinum wire to make an extremely thin layer of platinum dioxide, which is highly conducting. Next, we assembled a dense film, only one molecule thick, of specially designed switching molecules. Over this “monolayer” we deposited a 2- to 3-nm layer of titanium metal, which bonds strongly to the molecules and was intended to glue them together. The final layer was the top platinum electrode.
The molecules were supposed to be the actual switches. We built an enormous number of these devices, experimenting with a wide variety of exotic molecules and configurations, including rotaxanes, special switching molecules designed by James Heath and Fraser Stoddart at the University of California, Los Angeles. The rotaxane is like a bead on a string, and with the right voltage, the bead slides from one end of the string to the other, causing the electrical resistance of the molecule to rise or fall, depending on the direction it moves. Heath and Stoddart’s devices used silicon electrodes, and they worked, but not well enough for technological applications: the off-to-on resistance ratio was only a factor of 10, the switching was slow, and the devices tended to switch themselves off after 15 minutes.
Our platinum devices yielded results that were nothing less than frustrating. When a switch worked, it was spectacular: our off-to-on resistance ratios shot past the 1000 mark, the devices switched too fast for us to even measure, and having switched, the device’s resistance state remained stable for years (we still have some early devices we test every now and then, and we have never seen a significant change in resistance). But our fantastic results were inconsistent. Worse yet, the success or failure of a device never seemed to depend on the same thing.
We had no physical model for how these devices worked. Instead of rational engineering, we were reduced to performing huge numbers of Edisonian experiments, varying one parameter at a time and attempting to hold all the rest constant. Even our switching molecules were betraying us; it seemed like we could use anything at all. In our desperation, we even turned to long-chain fatty acids—essentially soap—as the molecules in our devices. There’s nothing in soap that should switch, and yet some of the soap devices switched phenomenally. We also made control devices with no molecule monolayers at all. None of them switched.
We were frustrated and burned out. Here we were, in late 2002, six years into our research. We had something that worked, but we couldn’t figure out why, we couldn’t model it, and we sure couldn’t engineer it. That’s when Greg Snider, who had worked with Kuekes on the Teramac, brought me the Chua memristor paper from the September 1971 IEEE Transactions on Circuit Theory. “I don’t know what you guys are building,” he told me, “but this is what I want.”
To this day, I have no idea how Greg happened to come across that paper. Few people had read it, fewer had understood it, and fewer still had cited it. At that point, the paper was 31 years old and apparently headed for the proverbial dustbin of history. I wish I could say I took one look and yelled, “Eureka!” But in fact, the paper sat on my desk for months before I even tried to read it. When I did study it, I found the concepts and the equations unfamiliar and hard to follow. But I kept at it because something had caught my eye, as it had Greg’s: Chua had included a graph that looked suspiciously similar to the experimental data we were collecting.
The graph described the current-voltage (I-V) characteristics that Chua had plotted for his memristor. Chua had called them “pinched-hysteresis loops”; we called our I-V characteristics “bow ties.” A pinched hysteresis loop looks like a diagonal infinity symbol with the center at the zero axis, when plotted on a graph of current against voltage. The voltage is first increased from zero to a positive maximum value, then decreased to a minimum negative value and finally returned to zero. The bow ties on our graphs were nearly identical [see graphic, “Bow Ties”].
That’s not all. The total change in the resistance we had measured in our devices also depended on how long we applied the voltage: the longer we applied a positive voltage, the lower the resistance until it reached a minimum value. And the longer we applied a negative voltage, the higher the resistance became until it reached a maximum limiting value. When we stopped applying the voltage, whatever resistance characterized the device was frozen in place, until we reset it by once again applying a voltage. The loop in the I-V curve is called hysteresis, and this behavior is startlingly similar to how synapses operate: synaptic connections between neurons can be made stronger or weaker depending on the polarity, strength, and length of a chemical or electrical signal. That’s not the kind of behavior you find in today’s circuits.
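The bow-tie shape is easy to reproduce numerically. The sketch below integrates the simplest linear ion-drift memristor model under a sinusoidal drive; the parameter values are round illustrative numbers, not measured values for the HP device.

```python
import numpy as np

# Linear ion-drift memristor model (the simplest textbook form).
# Parameter values are illustrative round numbers, not device data.
R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped / fully undoped film
D = 10e-9                      # film thickness, m
MU = 1e-14                     # dopant mobility, m^2 V^-1 s^-1
V0, F = 1.0, 1.0               # sinusoidal drive: amplitude (V), frequency (Hz)

dt = 1e-4
times = np.arange(0.0, 2.0 / F, dt)  # simulate two drive periods
w = D / 2                            # doped-region width, start halfway
v_hist, i_hist = [], []

for t in times:
    v = V0 * np.sin(2 * np.pi * F * t)
    m = R_ON * (w / D) + R_OFF * (1 - w / D)  # state-dependent resistance
    i = v / m
    w = np.clip(w + MU * (R_ON / D) * i * dt, 0.0, D)  # dopant drift
    v_hist.append(v)
    i_hist.append(i)

# Plotting i_hist against v_hist traces a loop pinched at the origin:
# current and voltage always cross zero together -- Chua's signature,
# and the "bow ties" that kept appearing in the HP team's data.
```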
Looking at Chua’s graphs was maddening. We now had a big clue that memristance had something to do with our switches. But how? Why should our molecular junctions have anything to do with the relationship between charge and magnetic flux? I couldn’t make the connection.
Two years went by. Every once in a while I would idly pick up Chua’s paper, read it, and each time I understood the concepts a little more. But our experiments were still pretty much trial and error. The best we could do was to make a lot of devices and find the ones that worked.
But our frustration wasn’t for nothing: by 2004, we had figured out how to do a little surgery on our little sandwiches. We built a gadget that ripped the tiny devices open so that we could peer inside them and do some forensics. When we pried them apart, the little sandwiches separated at their weakest point: the molecule layer. For the first time, we could get a good look at what was going on inside. We were in for a shock.
What we had was not what we had built. Recall that we had built a sandwich with two platinum electrodes as the bread and filled with three layers: the platinum dioxide, the monolayer film of switching molecules, and the film of titanium.
But that’s not what we found. Under the molecular layer, instead of platinum dioxide, there was only pure platinum. Above the molecular layer, instead of titanium, we found an unexpected and unusual layer of titanium dioxide. The titanium had sucked the oxygen right out of the platinum dioxide! The oxygen atoms had somehow migrated through the molecules and been consumed by the titanium. This was especially surprising because the switching molecules had not been significantly perturbed by this event—they were intact and well ordered, which convinced us that they must be doing something important in the device.
The chemical structure of our devices was not at all what we had thought it was. The titanium dioxide—a stable compound found in sunscreen and white paint—was not just regular titanium dioxide. It had split itself up into two chemically different layers. Adjacent to the molecules, the oxide was stoichiometric TiO2, meaning the ratio of oxygen to titanium was perfect, exactly 2 to 1. But closer to the top platinum electrode, the titanium dioxide was missing a tiny amount of its oxygen, between 2 and 3 percent. We called this oxygen-deficient titanium dioxide TiO2-x, where x is about 0.05.
Because of this misunderstanding, we had been performing the experiment backward. Every time I had tried to create a switching model, I had reversed the switching polarity. In other words, I had predicted that a positive voltage would switch the device off and a negative voltage would switch it on. In fact, exactly the opposite was true.
It was time to get to know titanium dioxide a lot better. They say three weeks in the lab will save you a day in the library every time. In August of 2006 I did a literature search and found about 300 relevant papers on titanium dioxide. I saw that each of the many different communities researching titanium dioxide had its own way of describing the compound. By the end of the month, the pieces had fallen into place. I finally knew how our device worked. I knew why we had a memristor.
The exotic molecule monolayer in the middle of our sandwich had nothing to do with the actual switching. Instead, what it did was control the flow of oxygen from the platinum dioxide into the titanium to produce the fairly uniform layers of TiO2 and TiO2-x. The key to the switching was this bilayer of the two different titanium dioxide species [see diagram, “How Memristance Works”]. The TiO2 is electrically insulating (actually a semiconductor), but the TiO2-x is conductive, because its oxygen vacancies are donors of electrons, which makes the vacancies themselves positively charged. The vacancies can be thought of like bubbles in a glass of beer, except that they don’t pop—they can be pushed up and down at will in the titanium dioxide material because they are electrically charged.
Now I was able to predict the switching polarity of the device. If a positive voltage is applied to the top electrode of the device, it will repel the (also positive) oxygen vacancies in the TiO2-x layer down into the pure TiO2 layer. That turns the TiO2 layer into TiO2-x and makes it conductive, thus turning the device on. A negative voltage has the opposite effect: the vacancies are attracted upward and back out of the TiO2, and thus the thickness of the TiO2 layer increases and the device turns off. This switching polarity is what we had been seeing for years but had been unable to explain.
On 20 August 2006, I solved the two most important equations of my career—one equation detailing the relationship between current and voltage for this equivalent circuit, and another equation describing how the application of the voltage causes the vacancies to move—thereby writing down, for the first time, an equation for memristance in terms of the physical properties of a material. This provided a unique insight. Memristance arises in a semiconductor when both electrons and charged dopants are forced to move simultaneously by applying a voltage to the system. The memristance did not actually involve magnetism in this case; the integral over the voltage reflected how far the dopants had moved and thus how much the resistance of the device had changed.
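In the linear-drift idealization usually quoted from the Nature paper, with w(t) the width of the conductive TiO2-x region and D the film thickness, the coupled pair takes this form:

```latex
v(t) = \Bigl( R_{\mathrm{ON}}\,\frac{w(t)}{D}
      + R_{\mathrm{OFF}}\Bigl(1 - \frac{w(t)}{D}\Bigr) \Bigr)\, i(t),
\qquad
\frac{dw}{dt} = \mu_{v}\,\frac{R_{\mathrm{ON}}}{D}\; i(t),
```

and integrating the second equation (so that w tracks the delivered charge q) gives

```latex
M(q) \approx R_{\mathrm{OFF}} \Bigl( 1 - \frac{\mu_{v}\, R_{\mathrm{ON}}}{D^{2}}\, q(t) \Bigr)
\qquad (R_{\mathrm{ON}} \ll R_{\mathrm{OFF}}).
```

The 1/D² factor is the inverse square law mentioned earlier: halving the film thickness quadruples the memristive term, which is why the effect only becomes unmistakable at the nanoscale.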
We finally had a model we could use to engineer our switches, which we had by now positively identified as memristors. Now we could use all the theoretical machinery Chua had created to help us design new circuits with our devices.
Triumphantly, I showed the group my results and immediately declared that we had to take the molecule monolayers out of our devices. Skeptical after years of false starts and failed hypotheses, my team reminded me that we had run control samples without molecule layers for every device we had ever made and that those devices had never switched. By our new model, though, that was because without the molecules those controls had never formed the crucial TiO2/TiO2-x bilayer. And getting the recipe right turned out to be tricky indeed. We needed to find the exact amounts of titanium and oxygen to get the two layers to do their respective jobs. By that point we were all getting impatient. In fact, it took so long to get the first working device that in my discouragement I nearly decided to put the molecule layers back in.
A month later, it worked. We not only had working devices, but we were also able to improve and change their characteristics at will.
But here is the real triumph. The resistance of these devices stayed constant whether we turned off the voltage or just read their states (interrogating them with a voltage so small it left the resistance unchanged). The oxygen vacancies didn’t roam around; they remained absolutely immobile until we again applied a positive or negative voltage. That’s memristance: the devices remembered their current history. We had coaxed Chua’s mythical memristor off the page and into being.
Emulating the behavior of a single memristor, Chua showed, requires a circuit with at least 15 transistors and other passive elements. The implications are extraordinary: just imagine how many kinds of circuits could be supercharged by replacing a handful of transistors with one single memristor.
The most obvious benefit is to memories. In its initial state, a crossbar memory has only open switches, and no information is stored. But once you start closing switches, you can store vast amounts of information compactly and efficiently. Because memristors remember their state, they can store data indefinitely, using energy only when you toggle or read the state of a switch, unlike the capacitors in conventional DRAM, which will lose their stored charge if the power to the chip is turned off. Furthermore, the wires and switches can be made very small: we should eventually get down to a width of around 4 nm, and then multiple crossbars could be stacked on top of each other to create a ridiculously high density of stored bits.
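For a feel for those densities, treat 4 nm as the wire half-pitch and assume the classic 4F² crossbar cell; both are assumptions for the sake of arithmetic, not figures from the article.

```python
F = 4e-7                      # half-pitch in cm (4 nm)
cell_area = 4 * F ** 2        # classic 4F^2 crossbar cell, in cm^2
bits_per_cm2 = 1 / cell_area

print(f"{bits_per_cm2:.2e} bits/cm^2")          # ~1.56e+12
print(f"{bits_per_cm2 / 8 / 1e9:.0f} GB/cm^2")  # ~195 GB per layer
```

Stack a few such layers and the thumb-drive Library of Congress stops sounding far-fetched.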
Greg Snider and I published a paper last year showing that memristors could vastly improve one type of processing circuit, called a field-programmable gate array, or FPGA. By replacing several specific transistors with a crossbar of memristors, we showed that the circuit could be shrunk by nearly a factor of 10 in area and improved in terms of its speed relative to power-consumption performance. Right now, we are testing a prototype of this circuit in our lab.
And memristors are by no means hard to fabricate. The titanium dioxide structure can be made in any semiconductor fab currently in existence. (In fact, our hybrid circuit was built in an HP fab used for making inkjet cartridges.) The primary limitation to manufacturing hybrid chips with memristors is that today only a small number of people on Earth have any idea of how to design circuits containing memristors. I must emphasize here that memristors will never eliminate the need for transistors: passive devices and circuits require active devices like transistors to supply energy.
The potential of the memristor goes far beyond juicing a few FPGAs. I have referred several times to the similarity of memristor behavior to that of synapses. Right now, Greg is designing new circuits that mimic aspects of the brain. The neurons are implemented with transistors, the axons are the nanowires in the crossbar, and the synapses are the memristors at the cross points. A circuit like this could perform real-time data analysis for multiple sensors. Think about it: an intelligent physical infrastructure that could provide structural assessment monitoring for bridges. How much money—and how many lives—could be saved?
I’m convinced that eventually the memristor will change circuit design in the 21st century as radically as the transistor changed it in the 20th. Don’t forget that the transistor was lounging around as a mainly academic curiosity for a decade until 1956, when a killer app—the hearing aid—brought it into the marketplace. My guess is that the real killer app for memristors will be invented by a curious student who is now just deciding what EE courses to take next year.
About the Author
R. STANLEY WILLIAMS, a senior fellow at Hewlett-Packard Labs, wrote this month’s cover story, “How We Found the Missing Memristor.” Earlier this year, he and his colleagues shook up the electrical engineering community by introducing a fourth fundamental circuit design element. The existence of this element, the memristor, was first predicted in 1971 by IEEE Fellow Leon Chua, of the University of California, Berkeley, but it took Williams 12 years to build an actual device.
In 1962 President John F. Kennedy’s administration narrowly averted possible nuclear war with the USSR, when CIA operatives spotted Soviet surface-to-surface missiles in Cuba, after a six-week gap in intelligence-gathering flights.
In their forthcoming book Blind over Cuba: The Photo Gap and the Missile Crisis, co-authors David Barrett and Max Holland make the case that the affair was a close call stemming directly from a decision made in a climate of deep distrust between key administration officials and the intelligence community.
Using recently declassified documents, secondary materials, and interviews with several key participants, the authors weave a story of intra-agency conflict, suspicion, and discord that undermined intelligence-gathering, adversely affected internal postmortems conducted after the crisis peaked, and resulted in keeping Congress and the public in the dark about what really happened.
We asked Barrett, a professor of political science at Villanova University, to discuss the actual series of events and what might have happened had the CIA not detected Soviet missiles on Cuba.
The Actual Sequence of Events . . .
“Some months after the Cuban Missile Crisis, an angry member of the Armed Services Committee of the House of Representatives criticized leaders of the Kennedy administration for having let weeks go by in September and early October 1962, without detecting Soviet construction of missile sites in Cuba. It was an intelligence failure as serious as the U.S. ignorance that preceded the Japanese attack on Pearl Harbor in 1941, he said.
Secretary of Defense Robert McNamara aggressively denied that there had been an American intelligence failure or ineptitude with regard to Cuba in late summer 1962. McNamara and others persuaded most observers the administration’s performance in the lead-up to the Crisis had been almost flawless, but the legislator was right: The CIA had not sent a U-2 spy aircraft over western Cuba for about a six week period.
There were varying reasons for this, but the most important was that the Kennedy administration did not wish to have a U-2 “incident.” Sending that aircraft over Cuba raised the possibility that Soviet surface-to-air missiles might shoot one down. Since it was arguably against international law for the U.S. to send spy aircraft over another country, should one be shot down, there would probably be the same sort of uproar as happened in May 1960, when the Soviet Union shot down an American U-2 flying over its territory.
Furthermore, most State Department and CIA authorities did not believe that the USSR would put nuclear-armed missiles into Cuba that could strike the U.S. Therefore, the CIA was told, in effect, not even to request permission to send U-2s over western Cuba. This, at a time when there were growing numbers of reports from Cuban exiles and other sources about suspicious Soviet equipment being brought into the country.

As we now know, the Soviets WERE constructing missile sites on what CIA deputy director Richard Helms would call “the business end of Cuba,” i.e., the western end, in the summer/autumn of 1962. Fortunately, by mid-October, the CIA’s director, John McCone, succeeded in persuading President John F. Kennedy to authorize one U-2 flight over that part of Cuba and so it was that Agency representatives could authoritatively inform JFK on October 16th that the construction was underway.

The CIA had faced White House and State Department resistance for many weeks about this U-2 matter."
What Could Have Happened . . .
“What if McCone had not succeeded in persuading the President that the U.S. needed to step up aerial surveillance of Cuba in mid-October? What if a few more weeks had passed without that crucial October 14 U-2 flight and its definitive photography of Soviet missile site construction?
If McCone had been told “no” in the second week of October, perhaps it would have taken more human intelligence, trickling in from Cuba, about such Soviet activity before the President would have approved a risky U-2 flight.

The problem JFK would have faced then is that there would have been a significant number of operational medium-range missile launch sites. Those nuclear-equipped missiles could have hit the southern part of the U.S. Meanwhile, the Soviets would also have progressed further in construction of intermediate missile sites; such missiles could have hit most of the continental United States.

If JFK had not learned about Soviet nuclear-armed missiles until, say, November 1st, what would the U.S. have done? There is no definitive answer to that question, but I think it’s fair to say that the President would have been under enormous pressure to authorize, quickly, a huge U.S. air strike against Cuba, followed by an American invasion. One thing which discovery of the missile sites in mid-October gave JFK was some time to negotiate effectively with the Soviet Union during the “Thirteen Days” of the crisis. I don’t think there would have been such a luxury if numerous operational missiles were discovered a couple weeks later.

No wonder President Kennedy felt great admiration and gratitude toward those at the CIA (with its photo interpreters) and the Air Force (which piloted the key U-2 flight). The intelligence he received on October 16th was invaluable. I think he knew that if that intelligence had not come until some weeks later, there would have been a much greater chance of nuclear war between the U.S. and the Soviet Union.”

Remember to check out Blind over Cuba: The Photo Gap and the Missile Crisis, which is being published this fall!
Books Yellow, Red, and Green and Blue,
All true, or just as good as true,
And here's the Blue Book just for YOU!
Hard is the path from A to Z,
And puzzling to a curly head,
Yet leads to Books—Green, Yellow and Red.
For every child should understand
That letters from the first were planned
To guide us into Fairy Land
So labour at your Alphabet,
For by that learning shall you get
To lands where Fairies may be met.
And going where this pathway goes,
You too, at last, may find, who knows?
The Garden of the Singing Rose.
As to whether there are really any fairies or not, that is a difficult question. The Editor never saw any himself, but he knew several people who have seen them—in the Highlands—and heard their music.
If ever you are in Nether Lochaber, go to the Fairy Hill, and you may hear the music yourself, as grown-up people have done, but you must go on a fine day.
This book has been especially re-published to raise funds for:
The Great Ormond Street Hospital Children’s Charity
By buying this book you will be donating to this great charity that does so much good for ill children and which also enables families to stay together in times of crisis. And what better way to help children than to buy a book of fairy tales. Some have not been seen in print or heard for over a century. 33% of the Publisher’s profit from the sale of this book will be donated to the GOSH Children’s Charity.
YESTERDAYS BOOKS for TODAYS CHARITIES
LITTLE RED RIDING HOOD
Once upon a time there lived in a certain village a little country girl, the prettiest creature was ever seen. Her mother was excessively fond of her; and her grandmother doted on her still more. This good woman had made for her a little red riding-hood; which became the girl so extremely well that everybody called her Little Red Riding-Hood.
One day her mother, having made some custards, said to her:
"Go, my dear, and see how thy grandmamma does, for I hear she has been very ill; carry her a custard, and this little pot of butter."
Little Red Riding-Hood set out immediately to go to her grandmother, who lived in another village.
As she was going through the wood, she met with Gaffer Wolf, who had a very great mind to eat her up, but he dared not, because of some faggot-makers hard by in the forest. He asked her whither she was going. The poor child, who did not know that it was dangerous to stay and hear a wolf talk, said to him:
"I am going to see my grandmamma and carry her a custard and a little pot of butter from my mamma."
"Does she live far off?" said the Wolf.
"Oh! aye," answered Little Red Riding-Hood; "it is beyond that mill you see there, at the first house in the village."
"Well," said the Wolf, "and I'll go and see her too. I'll go this way and you go that, and we shall see who will be there soonest."
The Wolf began to run as fast as he could, taking the nearest way, and the little girl went by that farthest about, diverting herself in gathering nuts, running after butterflies, and making nosegays of such little flowers as she met with. The Wolf was not long before he got to the old woman's house. He knocked at the door—tap, tap.

"Who's there?"
"Your grandchild, Little Red Riding-Hood," replied the Wolf, counterfeiting her voice; "who has brought you a custard and a little pot of butter sent you by mamma."
The good grandmother, who was in bed, because she was somewhat ill, cried out:
"Pull the bobbin, and the latch will go up."The Wolf pulled the bobbin, and the door opened, and then presently he fell upon the good woman and ate her up in a moment, for it was above three days that he had not touched a bit. He then shut the door and went into the grandmother's bed, expecting Little Red Riding-Hood, who came some time afterward and knocked at the door—tap, tap.
Little Red Riding-Hood, hearing the big voice of the Wolf, was at first afraid; but believing her grandmother had got a cold and was hoarse, answered:
"’Tis your grandchild, Little Red Riding-Hood, who has brought you a custard and a little pot of butter mamma sends you."
The Wolf cried out to her, softening his voice as much as he could:
"Pull the bobbin, and the latch will go up."
Little Red Riding-Hood pulled the bobbin, and the door opened.
The Wolf, seeing her come in, said to her, hiding himself under the bed-clothes:
"Put the custard and the little pot of butter upon the stool, and come and lie down with me."
Little Red Riding-Hood undressed herself and went into bed, where, being greatly amazed to see how her grandmother looked in her night-clothes, she said to her:
"Grandmamma, what great arms you have got!"
"That is the better to hug thee, my dear."
"Grandmamma, what great legs you have got!"
"That is to run the better, my child."
"Grandmamma, what great ears you have got!"
"That is to hear the better, my child."
"Grandmamma, what great eyes you have got!"
"It is to see the better, my child."
"Grandmamma, what great teeth you have got!"
"That is to eat thee up."
And, saying these words, this wicked wolf fell upon Little Red Riding-Hood and tried to eat her up. Red Riding Hood screamed, “Someone help me!” over and over again.
The woodcutter, who was felling trees nearby, heard Red Riding Hood’s screams for help and ran to the cottage. He burst in to find the wolf trying to eat Red Riding Hood.
He swung his axe, and with one blow killed the bad wolf for which Red Riding Hood was ever so grateful.
Dr. Carl Auer von Welsbach (1858-1929) had a rare double talent: he understood how to pursue fundamental science and, at the same time, how to succeed commercially as an inventor and discoverer.
He discovered 4 elements (Neodymium, Praseodymium, Ytterbium, and Lutetium).
He invented the incandescent mantle, which brought gas lighting to a renaissance at the end of the 19th century.
He developed ferrocerium, which is still used as the flint in every disposable lighter.
He was an eminent authority and a great expert in the field of the rare earths (lanthanides).
He invented the electric metal-filament light bulb, billions of which are in use today.
Additionally, all his life he took an active part in fields ranging from photography to ornithology. The people of Althofen remember him warmly: he not only had an excellent mind but also a big heart. These qualities ensured him a prominent and lasting place in Austria's scientific and industrial history.
9th of Sept. 1858: Born in Vienna, son of Therese and Alois Ritter Auer von Welsbach (his father was director of the Imperial printing office, the "Staatsdruckerei").
1869-73: Attended secondary school in Mariahilf.
1873-77: Attended secondary school in Josefstadt; graduated.
1877-78: Military service; became a second lieutenant.
1878-80: Enrolled at the Technical University of Vienna; studies in mathematics, general organic and inorganic chemistry, technical physics and thermodynamics with Professors Winkler, Bauer, Reitlinger and Pierre.
1880-82: Moved to the University of Heidelberg; lectures on inorganic experimental chemistry and laboratory work with Prof. Bunsen, introduction to spectral analysis, and the history of chemistry, mineralogy and physics.
5th of Feb. 1882: Promotion to Doctor of Philosophy at the Ruperta-Carola-University in Heidelberg.
1882: Return to Vienna as an unpaid assistant in Prof. Lieben's laboratory; work on chemical separation methods for investigations of the rare earth elements.
1882-1884: Publications: "Ueber die Erden des Gadolinits von Ytterby" and "Ueber die Seltenen Erden".
1885: The first separation of the element "Didymium", using a separation method he had newly developed himself, based on the fractional crystallization of a didymium ammonium nitrate solution. After their characteristic colouring, Auer named the green component Praseodymium and the pink component Neodidymium. In time the latter element became more commonly known as Neodymium.
1885-1892: Work on the gas mantle for incandescent lighting.
Development of a method to produce gas mantles ("Auerlicht") based on impregnating cotton tissue with a solution of rare-earth salts and ashing the material in a subsequent glow process.
Production of the first incandescent mantle out of lanthanum oxide, in which the gas flame is surrounded by a stocking; a definite improvement in light emission, but lacking stability in humid conditions.
Continuous improvements in the chemical composition of the "Auerlicht" incandescent mantle; experiments with lanthanum oxide/magnesium oxide variations.
18th of Sept. 1885: The patenting of a gas burner with an "Actinophor" incandescent mantle made up of 60% magnesium oxide, 20% lanthanum oxide and 20% yttrium oxide; in the same year, the magnesium oxide component was replaced with zirconium oxide, and a second patent was filed covering the additional use of the light body in a spirit flame.
9th of April 1886: Introduction of the name "Gasgluehlicht" by the journalist Moriz Szeps after the successful presentation of the Actinophors at the Lower Austrian Trade Association; regular production of the impregnation liquid, called "Fluid", at the Chemical Institute.
1887: The acquisition of the factory Würth & Co. for chemical-pharmaceutical products in Atzgersdorf and the industrial production of the light bodies.
1889: The beginning of sales problems because of the defects of the earlier incandescent mantle: its fragility, its short service life, its unpleasantly cold, green-coloured light, and its relatively high price. The factory in Atzgersdorf closes.
The development of fractional crystallization methods for the preparation of pure thorium oxide from abundant, and therefore cheap, monazite sand.
The analysis of the connection between the purity of Thorium oxide and its light emission. The ascertainment of the optimal composition of the incandescent mantle in a long series of tests.
1891: Patenting of the incandescent mantle made of 99% thorium oxide and 1% cerium oxide; because of its light emission, it was at that time a direct competitor to the electric carbon-filament lamp. Production resumed in Atzgersdorf near Vienna, and the incandescent mantle spread quickly because of its long service life. The beginning of competition with electric lighting.
Work with high-melting heavy metals to raise the filament temperature, and with it the light emission.
The development of the production of thin filaments.
The making of filaments from platinum threads covered with high-melting thorium oxide, which made it possible to operate the lamps above the melting temperature of platinum.
This variation was discarded because, when the platinum threads melted, the coating would either burst or, on solidifying, rip apart.
The taking out of a patent for two manufacturing methods for filaments.
In the patent specification Carl Auer von Welsbach described the manufacture of filaments by deposition of the high-melting element osmium onto a metallic filament.
The development and testing of further production methods, such as the paste method for manufacturing suitably high-melting metal filaments. In this method, osmium powder is mixed with a rubber or sugar binder and kneaded into a paste. The paste is then pressed through a fine nozzle from a discharge cylinder, and the resulting filament is dried and sintered. This was the first commercial and industrial powder-metallurgy process for very high-melting metals.
1898: The acquisition of an industrial property in Treibach and the beginning of the experimental and development work at this location. The taking out of a patent for the metallic-filament lamp with an osmium filament.
1899: Married Marie Nimpfer in Helgoland.
1902: Market introduction of the "Auer-Oslight", the first industrially produced osmium metal-filament lamp, made using the paste method.
The advantages of this metal-filament lamp over the then widely used carbon-filament lamp were:
57% less electricity consumption; less blackening of the glass; a "whiter" light because of the higher filament temperature; and a longer life span, making it more economical.
The beginning of the investigation of spark-giving metals, aimed at ignition mechanisms for lighters, gas lighters and gas lamps, as well as projectile and mine ignition.
Carl Auer von Welsbach knew from his teacher Prof. Bunsen of the possibility of producing sparks from cerium by mechanical means.
The determination of the optimal composition of cerium-iron alloys for spark production.
1903: The taking out of a patent for his pyrophoric alloys (scratching them with a hard, sharp surface detaches splinters that ignite spontaneously). In the patent specification, 70% cerium and 30% iron was given as the optimal composition.
Further development of a method to produce the latter alloy cheaply.
The optimization of Bunsen, Hillebrand and Norton's procedure, used at that time mainly for producing cerium, which was based on the fused-salt electrolysis of molten rare-earth chlorides. The problem at that time lay in conducting the electrolysis so as to deposit a pore-free and durable metal.
This was the first industrial process and commercial utilization of the rare earth metals.
30th of March 1905: A report to the "Akademie der Wissenschaften" in Vienna that the results of spectroscopic analysis showed that Ytterbium is made up of two elements. Auer named the elements Aldebaranium and Cassiopeium, after the star Aldebaran and the constellation Cassiopeia. He omitted to publish the spectra obtained and the atomic weights determined.
1907: The founding of the "Treibacher Chemische Werke GesmbH" in Treibach-Althofen for the production of ferrocerium lighter flints under the trade name "Original Auermetall".
The publication of the spectra and the atomic weights of the two new elements separated from Ytterbium, completing his report to the Akademie der Wissenschaften.
Priority dispute with the French chemist Urbain concerning the analysis of Ytterbium.
1908: The solution of the fused-salt electrolysis problem (with cerium chloride), using the minerals cerite and allanite as source materials.
1909: The adaption of the procedure, from his collaborator, Dr.Fattinger, to be able to use the Monazitsand residue out of the incandescent mantle production, for the production of cerium metal for the lighter flints.
The production of three different pyrophoric alloys:
"Cer" or Auermetall I : Alloy out of fairly pure Cerium and Iron. Used for igniting purposes.
"Lanthan" or Auermetall II : The Cerium-Iron alloy enriched with the element Lanthan. Used for light signals because of its particularly bright sparking power.
Erdmetall or Auermetall III : Alloy out of Iron and "natural" Cermischmetall; a rare earth metal alloy of corresponding natural deposits.
Both of the first alloys could not win its way through the market. only the easy to produce
Erdmetall, after the renaming it Auermetall I, obtained world wide status as the flint in the lighter industry.
1909: The International Atomic weight Commission decided in favour of Urbain´s publication instead of Auer´s because Urbain handed it in earlier. The Commission of the term from Urbain Neoytterbium- known today as Ytterbium and Lutetium for the new elements.
The carrying-out of large scale chemical separations in the field of radioactive substances.
The production of different preparations of Uran, Ionium (known today as Th230 isotop), a disintegration product in the Uranium-Radium-line, Polonium and Aktinium, that Auer made available, for research use, to such renowned Institutions and scientists as F.W.Aston and Ernest Rutherford at the Cavendish Laboratory in Cambridge (1921) and the "Radiuminstitut der Akademie der Wissenschaften" in Vienna.
1922: A report on his spectroscopic discoveries to the "Akademie der Wissenschaften" in Vienna.
1929:World-wide production of ligther flints reached 100,000 kg.
8th of April 1929: Carl Auer von Welsbach died at the age of 70. | <urn:uuid:f684139c-4f94-4f1f-821a-2847edc6ba5b> | CC-MAIN-2013-20 | http://www.althofen.at/AvW-Museum/Englisch/biographie_e.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.903716 | 2,533 | 3.046875 | 3 |
Ethics of dementia research
What are clinical trials and how are they controlled/governed?
A clinical trial is a biomedical/health-related study into the effects on humans of a new medical treatment (medicine/drug, medical device, vaccine or new therapy), sometimes called an investigational medicinal product (IMP). Before a new drug is authorised and can be marketed, it must pass through several phases of development including trial phases in which its safety, efficacy, risks, optimal use and/or benefits are tested on human beings. Existing drugs must also undergo clinical testing before they can be used to treat other conditions than that for which they were originally intended.
Organisations conducting clinical trials in the European Union must, if they wish to obtain marketing authorisation, respect the requirements for the conduct of clinical trials. These can be found in the Clinical Trials Directive (“Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations and administrative provisions of the Member States relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use”).
There are also guidelines to ensure that clinical trials are carried out in accordance with good clinical practice. These are contained in the “Commission Directive 2005/28/EC of 8 April 2005 laying down principles and detailed guidelines for good clinical practice as regards investigational medicinal products for human use, as well as the requirements for authorisation of the manufacturing or importation of such products” (also known as the Good Clinical Practice or GCP for short). This document provides more concrete guidelines and lends further support to the Clinical Trials Directive.
The London-based European Medicines Agency (EMA) has published additional, more specific guidelines which must also be respected. These include guidelines on inspection procedures and requirements related to quality, safety and efficacy.
Copies of the above-mentioned documents in 22 languages can be found at: http://ec.europa.eu/enterprise/pharmaceuticals/clinicaltrials/clinicaltrials_en.htm
The protection of people participating in clinical trials (and in most cases in other types of research) is further promoted by provisions of:
- the European Convention on Human Rights and Biomedicine (Oviedo Convention, Act 2619/1998),
- the Additional protocol to the Oviedo Convention concerning Biomedical Research
- the Nuremberg Code of 1949,
- the revised Helsinki Declaration of the World Medical Association regarding Ethical Principles for Medical Research Involving Human Subjects,
- The Belmont Report of 18 April 1979 on the Ethical Principles and Guidelines for the Protection of Human Subjects of Research.
What are the different phases of trials?
Testing an experimental drug or medical procedure is usually an extremely lengthy process, sometimes lasting several years. The overall procedure is divided into a series of stages (known as phases) which are described below.
Clinical testing on humans can only begin after a pre-clinical phase, involving laboratory studies (in vitro) and tests on animals, which has shown that the experimental drug is considered safe and effective.
Whilst a certain amount of testing can be carried out by means of computer modelling and by isolating cells and tissue, it becomes necessary at some point in time to test the drug on a living creature. Animal testing is an obligatory stage in the process of obtaining regulatory approval for new drugs and medicines, and hence a legal requirement (EU Directive 2001/83/EC relating to Medicinal Products for Human Use). The necessity of carrying out prior testing on animals is also stated in the World Medical Association’s “Ethical Principles for Medical Research Involving Human Subjects.
In order to protect the well-being of research animals, researchers are guided by three principles which are called the 3Rs:
Reduce the number of animals used to a minimum
Refine the way that experiments are carried out so that the effect on the animal is minimised and animal welfare is improved
Replace animal experiments with alternative (non-animal) techniques wherever possible.
In addition, most countries will have official regulatory bodies which control animal research. Most animals involved in research are mice. However, no animal is sufficiently similar to humans (even genetically modified ones) to make human testing unnecessary. For this reason, the experimental drug must also be tested on humans.
The main phases of clinical trials
Clinical trials on humans can be divided into three main phases (literally, phase I, II and III). Each phase has specific objectives (please see below) and the number of people involved increases as the trial progresses from one phase to the next.
Phase I trials
Phase 1 trials are usually the first step in testing a new drug or treatment on humans after successful laboratory and animal testing. They are usually quite small scale and usually involve healthy subjects or sub-groups of patients who share a particular characteristic. The aims of these trials are:
- to assess the safety of experimental drugs,
- to evaluate any possible side effects,
- to determine a safe dose range,
- to see how the body reacts to the drug (how it is absorbed, distributed and eliminated from the body, the effects that it has on the body and the effects it has on biomarkers).
Dose ranging, sometimes called dose escalation, studies may be used as a means to determine the most appropriate dosage, but the doses administered to the subjects should only be a fraction of those which were found to cause harm to animals in the pre-clinical studies.
The process of determining an optimal dose in phase I involves quite a high degree of risk because this is the first time that the experimental treatment or drug has been administered to humans. Moreover, healthy people’s reactions to drugs may be different to those of the target patient group. For this reason, drugs which are considered to have a potentially high toxicity are usually tested on people from the target patient group.
There are a few sequential approaches to phase I trials e.g. single ascending dose studies, multiple ascending dose studies and food effect.
In single ascending dose studies (SAD), a small group of subjects receive a very low dose of the experimental drug and are then observed in order to see whether that dose results in side effects. For this reason, trials are usually conducted in hospital settings. If no adverse side effects are observed, a second group of subjects are given a slightly higher dose of the same drug and also monitored for side-effects. This process is repeated until a dose is reached which results in intolerable side effects. This is defined as the maximum tolerated dose (MTD).
Multiple ascending dose studies (MAD) are designed to test the pharmacokinetics and pharmacodynamics of multiple doses of the experimental drug. A group of subjects receives multiple doses of the drug, starting at the lowest dose and working up to a pre-determined level. At various times during the period of administration of the drug, and particularly whenever the dose is increased, samples of blood and other bodily fluids are taken. These samples are analysed in order to determine how the drug is processed within the body and how well it is tolerated by the body.
Food effect studies are investigations into the effect of food intake on the absorption of the drug into the body. This involves two groups of subjects being given the same dose of the experimental drug but for one of the groups when fasting and for the other after a meal. Alternatively, this could be done in a cross-over design whereby both groups receive the experimental drug in both conditions in sequence (e.g. when fasting and on another occasion after a meal). Food effect studies allow researchers to see whether eating before the drug is given has any effect on the absorption of the drug by the body.
Phase II trials
Having demonstrated the initial safety of the drug (often on a relatively small sample of healthy individuals), phase II clinical trials can begin. Phase II studies are designed to explore the therapeutic efficacy of a treatment or drug in people who have the condition that the drug is intended to treat. They are sometimes called therapeutic exploratory trials and tend to be larger scale than Phase I trials.
Phase II trials can be divided into Phase IIA and Phase IIB although sometimes they are combined.
Phase IIA is designed to assess dosing requirements i.e. how much of the drug should patients receive and up to what dose is considered safe? The safety assessments carried out in Phase I can be repeated on a larger subject group. As more subjects are involved, some may experience side effects which none of the subjects in the Phase I experienced. The researchers aim to find out more about safety, side effects and how to manage them.
Phase IIB studies focus on the efficacy of the drug i.e. how well it works at the prescribed doses. Researchers may also be interested in finding out which types of a specific disease or condition would be most suitable for treatment.
Phase II trials can be randomised clinical trials which involve one group of subjects being given the experimental drug and others receiving a placebo and/or standard treatment. Alternatively, they may be case series which means that the drug’s safety and efficacy is tested in a selected group of patients. If the researchers have adequately demonstrated that the experimental drug (or device) is effective against the condition for which it is being tested, they can proceed to Phase III.
Phase III trials
Phase III trials are the last stage before clinical approval for a new drug or device. By this stage, there will be convincing evidence of the safety of the drug or device and its efficacy in treating people who have the condition for which it was developed. Such studies are carried out on a much larger scale than for the two previous phases and are often multinational. Several years may have passed since the original laboratory and animal testing.
The main aims of Phase III trials are:
to demonstrate that the treatment or drug is safe and effective for use in patients in the target group (i.e. in people for whom it is intended)
to monitor side effects
to test different doses or different ways of administering the drug
to determine whether the drug could be used at different stages of the disease.
to provide sufficient information as a basis for marketing approval
Researchers may also be interested in showing that the experimental drug works for additional groups of people with conditions other than that for which the drug was initially developed. For example, they may be interested in testing a drug for inflammation on people with Alzheimer’s disease. The drug would have already have proven safe and obtained marketing approval but for a different condition, hence the need for additional clinical testing.
Open label extension trails
Open label extension studies are often carried out immediately after a double blind randomised clinical trial of an unlicensed drug. The aim of the extended study is to determine the safety and tolerability of the experimental drug over a longer period of time, which is generally longer than the initial trial and may extend up until the drug is licensed. Participants all receive the experimental drug irrespective of which arm of the previous trial they were in. Consequently, the study is no longer blind in that everybody knows that each participant is receiving the experimental drug but the participants and researchers still do not know which group participants were in during the initial trial.
Post-marketing surveillance studies (phase IV)
After the three phases of clinical testing and after the treatment has been approved for marketing, there may be a fourth phase to study the long-term effects of drugs or treatment or to study the impact of another factor in combination with the treatment (e.g. whether a particular drug reduces agitation).
Usually, such trials are sponsored by pharmaceutical companies and described as pharmacovigilance. They are not as common as the other types of trials (as they are not necessary for marketing permission). However, in some cases, the EMA grants restricted or provisional marketing authorisation, which is dependent on additional phase IV trails being conducted.
Expanded access to a trial
Sometimes, a person might be likely to benefit from a drug which is at various stages of testing but does not fulfil the conditions necessary for participation in the trial (e.g. s/he may have other health problems). In such cases and if the person has a life-threatening or serious condition for which there is no effective treatment, s/he may benefit from “expanded access” use of the drug. There must, however, be evidence that the drug under investigation has some likelihood of being effective for that patient and that taking it would not constitute an unreasonable risk.
The use of placebo and other forms of comparison
The main purpose of clinical drug studies is to distinguish the effect of the trial drug from other influences such as spontaneous change in the course of the disease, placebo effect, or biased observation. A valid comparison must be made with a control. The American Food and Drugs Administration recognises different types of control namely,
- active treatment with a known effective therapy or
- no treatment,
- historical treatment (which could be an adequately documented natural history of the disease or condition, or the results of active treatment in comparable patients or populations).
The EMA considers three-armed trials (including the experimental medicine, a placebo and an active control) as a scientific gold standard and that there are multiple reasons to support their use in drug development .
Participants in clinical trials are usually divided into two or more groups. One group receives the active treatment with the experimental substance and the other group receives a placebo, a different drug or another intervention. The active treatment is expected to have a positive curative effect whereas the placebo is expected to have zero effect. With regard to the aim to develop more effective treatments, there are two possibilities:
1. the experimental substance is more effective than the current treatment or
2. it is more effective than no treatment at all.
According to article 11 of the International Ethical Guidelines for Biomedical Research (IEGBR) of 2002, participants allocated to the control group in a trial for a diagnostic, therapeutic or preventive intervention should receive an established effective intervention but it may in some circumstances be considered ethically acceptable to use a placebo (i.e. no treatment). In article 11 of the IEGBR, reasons for the use of placebo are:
1. that there is no established intervention
2. that withholding an established effective intervention would expose subjects to, at most, temporary discomfort or delay in relief of symptoms
3. that use of an established effective intervention as comparator would not yield scientifically reliable results and use of placebo would not add any risk of serious or irreversible harm to the subjects.
November 2010, EMA/759784/2010 Committee for Medicinal Products for Human Use
The use of placebo and the issue of irreversible harm
It has been suggested that clinical trials are only acceptable in ethical terms if there is uncertainty within the medical community as to which treatment is most suitable to cure or treat a disease (National Bioethics Commission of Greece, 2005). In the case of dementia, whilst there is no cure, there are a few drugs for the symptomatic treatment of dementia. Consequently, one could ask whether it is ethical to deprive a group of participants of treatment which would have most likely improved their condition for the purpose of testing a potentially better drug (National Bioethics Commission of Greece, 2005). Can they be expected to sacrifice their own best interests for those of other people in the future? It is also important to ask whether not taking an established effective intervention is likely to result in serious or irreversible harm.
In the 2008 amended version of the Helsinki Declaration (World Medical Association, 1964), the possible legitimate use of placebo and the need to protect subjects from harm are addressed.
“32. The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention, except in the following circumstances:
The use of placebo, or no treatment, is acceptable in studies where no current proven intervention exists; or
Where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy or safety of an intervention and the patients who receive placebo or no treatment will not be subject to any risk of serious or irreversible harm. Extreme care must be taken to avoid abuse of this option.” (WMA, 1964 with amendments up to 2008)
The above is also quite similar to the position supported by the Presidential Commission for the Study of Bioethical Issues (PCSBI) (2011). In its recently published report entitled “Moral science: protecting participants in human subjects research ”, the Presidential Commission argues largely in favour of a “middle ground” for ethical research, citing the work of Emanuel and Miller (2001) who state:
“A placebo-controlled trial can sometimes be considered ethical if certain methodological and ethical standards are met. It these standards cannot be met, then the use of placebos in a clinical trial is unethical.” (Emanuel and Miller, 2001 cited in PCSBI, 2011, p. 89).
One of the standards mentioned is the condition that withholding proven effective treatment will not cause more than minimal harm.
The importance of placebo groups for drug development
The ethical necessity to include a placebo arm in a clinical trial may differ depending on the type of drug being developed and whether other comparable drugs exist. For example, a placebo arm would be absolutely necessary in the testing of a new compound for which no drug has yet been developed. This would be combined with comparative arms involving other alternative drugs which have already been proven effective. For studies involving the development of a drug based on an existing compound, a comparative trial would be necessary but not necessarily with a placebo arm, or at least with a smaller placebo arm Nevertheless, the EMA emphasises the value of placebo-controlled trials in the development of new medicinal products even in cases where a proven effective drug exists:
“forbiddingplacebo-controlled trials in therapeutic areas where there are proven, therapeutic methods would preclude obtaining reliable scientific evidence for the evaluation of new medicinal products, and be contrary to public health interest as there is a need for both new products and alternatives to existing medicinal products.” (EMA, 2001).
In 2001, concerns were raised about the interpretation of paragraph 29 of the 2000 version of the Helsinki Declaration in which prudence was called for in the use of placebo in research trials and it was advised that placebo should only be used in cases where there was no proven therapy for the condition under investigation. A document clarifying the position of the WMA regarding the use of placebo was issued by the WMA in 2001 in which it was made clear that the use of placebo might be ethically acceptable even if proven therapy was available. The current version of this statement is article 32 of the 2008 revised Helsinki Declaration (quoted in sub-section 7.2.1).
The PCSBI (2011) highlight the importance of ensuring that the design of clinical trials enables the researchers to resolve controversy and uncertainty over the merits of the trial drug and whether the trial drug is better than an existing drug if there is one. They suggest that studies which cannot resolve such questions or uncertainty are likely to be ignored by the scientific community and this would be unethical as it would mean that people had been unnecessarily exposed to risk without there being any social benefit.
Reasons for participation
People with dementia who take part in clinical trials may do so for a variety of reasons. One possible reason is that they hope to receive some form of treatment that will improve their condition or even result in a cure. This is sometimes called the “therapeutic misconception”. In such cases, clinical trials may seem unethical in that advantage is being taken of the vulnerability of some of the participants. On the other hand, the possibility of participating in such a trial may help foster hope which may even enable a person to maintain their morale.
A review of 61 studies on attitudes to trials has shed some light on why people participate in clinical trials (Edwards, Lilford and Hewison, 1998). In this review, it was found that over 60% of participants in seven studies stated that they did or would participate in clinical trials for altruistic reasons. However, in 4 studies, over 70% of people stated that they participated out of self-interest and in two studies over 50% of people stated that they would participate in such a study out of self-interest. As far as informed consent is concerned, in two studies (which were also part of this review) 47% of responding doctors thought that few patients were actually aware that they were taking part in a clinical trial. On the other hand, an audit of four further studies revealed that at least 80% of participants felt that they had made an autonomous decision. There is no proof whether such perceptions were accurate or not. The authors conclude that self-interest was more common than altruism amongst the reasons given for participating in clinical trials but draw attention to the poor quality of some of the studies reviewed thereby suggesting the need for further research. It should not be necessary for people to justify why they are willing to participate in clinical trials. Reasons for participating in research are further discussed in section 3.2.4 insofar as they relate to end-of-life research.
In a series of focus groups organised in 8 European countries plus Israel and covering six conditions including dementia, helping others was seen as the main reason why people wanted to take part in clinical trials (Bartlam et al., 2010). In a US trial of anti-inflammatory medication in Alzheimer’s disease in which 402 people were considered eligible, of the 359 who accepted, their main reasons for wanting to participate were altruism, personal benefit and family history of Alzheimer’s disease.
Random assignment to study groups
As people are randomly assigned to the placebo or the active treatment group, everyone has an equal chance of receiving the active ingredient or whichever other control groups are included in the study. There are possible advantages and drawbacks to being in each group and people are likely to have preferences for being a particular study group but randomization means that allocation is not in any way linked to the best interests of each participant from a medical perspective. This is not an ethical issue provided that each participant fully understands that the purpose of research is not to provide a tailor-made response to an individual’s medical condition and that while some participants benefit from participation, others do not.
There are, however, medical issues to consider. In the case in double-blind studies, neither the participant nor the investigator knows to which groups a participant has been allocated. Consequently, if a participant encounters medical problems during the study, it is not immediately known whether this is linked to the trial drug or another unrelated factor, but the problems must be addressed and possible contraindications avoided, which may necessitate “de-blinding” (DuBois, 2008).
Although many people would perhaps like to benefit from a new drug which is more effective than existing drugs, people have different ideas about what is an acceptable risk and different reasons for taking part in clinical trials. People who receive the placebo are not exposed to the same potential risks as those given the experimental drug. On the other hand, they have no possibility to benefit from the advantages the drug may offer. Those receiving a drug commonly considered as the standard therapy are not necessarily better off than those receiving a placebo as some participants may already know that they do not respond well to the accepted treatment (DuBois, 2008).
If people who participate in a clinical trial are not informed which arm of the trial they were in, valuable information is lost which might have otherwise contributed towards to treatment decisions made after the clinical trial. Taylor and Wainwright (2005) suggest that “unblinding” should occur at the end of all studies and so as not to interfere with the analysis of data, this could be done by a person who is totally independent of the analysis. This would, however, have implications for open label extended trials as in that case participants, whilst better equipped to give informed consent would have more information than the researchers and this might be conveyed to researchers in anad hocmanner.
Open label extension trails
Open label extension studies (mentioned in sub-section 7.1.8) seem quite fair as they give each participant the opportunity to freely consent to continuing with the study in the full knowledge that s/he will receive the experimental drug. However, Taylor and Wainwright (2005) have highlighted a couple of ethical concerns linked to the consent process, the scientific value of such studies and issues linked to access to drugs at the end of the prior study.
With regard to consent, they argue that people may have had a positive or negative experience of the trial but do not know whether this was due to the experimental drug, another drug or a placebo. They may nevertheless base their decision whether to continue on their experience so far. For those who were not taking the experimental drug, their experience in the follow-up trial may turn out to be very different. Also, if they are told about the possibility of the open label extension trial when deciding whether or not to take part in the initial trial (i.e. with the implication that whatever group they are ascribed to, in the follow-up study they will be guaranteed the experimental drug), this might induce them to participate in the initial study which could be considered as a form of subtle coercion. Finally, researchers may be under pressure to recruit as they can only recruit people in an open label extended trial who took part in the initial study. This may lead them in turn to put pressure (even inadvertently) on participants to continue with the study.
The scientific validity of open label extension trials is questioned by Taylor and Wainwright (2005) on the grounds that people from the experimental arm of the first study who did not tolerate the drug would be unlikely to participate in the extension trial and this would lead to bias in the results. In addition, open-label trials often lack a precise duration other than “until the drug is licensed” which casts doubt on there being a valid research purpose.
The above authors suggest that open label extension studies are dressed up marketing activities which lack the ethical justification for biomedical research which is the prospect of finding new ways of benefiting people’s health. However, it could be argued that the aim of assessing long-term tolerability of a new drug is a worthwhile pursuit and if conducted in a scientific manner could be considered as research. Moreover, not all open label extension trials are open-ended with regard to their duration. The main problem in interpreting open label extension studies is that little is known about the natural course of the disease.
Protecting participants’ well-being at the end of the clinical trial
Some people who participate in a clinical trial and who receive the experimental drug experience an improvement in their condition. This is to be hoped even if benefit to the health of individuals is not the aim of the study. However, at the end of the study, the drug is not yet licenced and there is no legal right to continue taking it. This could be psychologically disturbing to the participants in the trial and also to their families who may have seen a marked improvement in their condition.
Taylor and Wainwright (2005) suggest that the open label trials may serve the purpose of prescribing an unlicensed drug on compassionate grounds, which whilst laudable, should not be camouflaged as scientific research. Rather governments should take responsibility and set up the appropriate legal mechanisms to make it possible for participants whose medical condition merits prolonged treatment with the experimental drug to have access to it.
Minimising pain and discomfort
Certain procedures to which people with dementia or their representatives consent may by burdensome or painful or simply worrying but in accordance with the principles of autonomy or justice/equity, people with dementia have the right to participate. The fact that they have made an informed decision to participate and are willing to tolerate such pain or burden does not release researchers from the obligation to try to minimise it. For example, if repeated blood samples are going to be necessary, an indwelling catheter could be inserted under local anaesthetic to make it easier or medical staff should provide reassurance about the use of various scanning equipment which might be worrying or enable the person’s carer to be present. In order to minimize fear, trained personnel are needed who have experience dealing with people with dementia. The advice of the carer, if there is one, could also be sought.
Drug trials in countries with less developed safeguards
Clinical trials are sometimes carried out in countries where safeguards are not well developed and where the participants and even the general population are likely to have less possibility to benefit from the results of successful trials. For example, some countries have not signed the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine (1997) (referred to in section 188.8.131.52). The participants in those countries may be exposed to possible risks but have little chance of future medical benefit if the trial is successful. Yet people in countries with stricter safeguards for participants (which are often richer countries) stand to benefit from their efforts and from the risks they take, as they are more likely to be able to afford the drugs once developed. This raises ethical issues linked to voluntariness because there may be, in addition to the less developed safeguards, factors which make participation in such trials more attractive to potential participants. Such practices also represent a lack of equity in the distribution of risk, burden and possible benefit within society and could be interpreted as using people as a means to an end.
Parallels can also be drawn to the situation whereby people in countries where stem cell research is banned profit from the results of studies carried out in countries where it is permitted or to the results of studies carried out in countries where research ethics are slack or inexistent.
For a detailed discussion of the ethical issues linked to the involvement in research of people in other countries, particularly lower and middle income countries where standards of protection may by lower, please refer to the afore-mentioned report by the Presidential Commission for the Study of Bioethical Issues.
- Researchers should consider including a placebo arm in clinical trials when there are compelling and sound methodological reasons for doing so.
- Researchers should ensure that patients are aware that the aim of a randomised controlled trial is to test a hypothesis and provide generalizable knowledge leading to the development of a medical drug or procedure. They should explain how this differs from medical treatment and care which are aimed at enhancing the health and wellbeing of individual patients and where there is a reasonable expectation that this will be successful.
- Researchers should ensure that potential participants understand that they may be allocated to the placebo group.
- It should not be presumed that the treating doctor or contact person having proposed the participant for a trial has been successful in communicating the above information.
- Researchers conducting clinical trials may need training in how to ensure effective communication with people with dementia.
- Appropriate measures should be taken by researchers to minimize fear, pain and discomfort of participants.
- All participants should, when possible, preferably have the option of receiving the experimental drug (if proven safe) after completion of the study.
- Pharmaceutical companies should not be discouraged from carrying out open-label extension studies but this should not be the sole possibility for participants to access the trial drug after the end of the study if it is proving beneficial to them.
- In multi-centre clinical trials, where data is transferred to another country in which data protection laws are perhaps less severe, the data should be treated as stated in the consent form signed by the participant.
Last Updated: jeudi 29 mars 2012 | <urn:uuid:9d3f2101-f19c-4a5e-a7b4-dcd94a6d33f1> | CC-MAIN-2013-20 | http://www.alzheimer-europe.org/FR%20%20%20%20%20%20%20%20%20%20%20%20%EF%BF%BD%20%EF%BF%BD%C2%B3/Ethics/Ethical-issues-in-practice/Ethics-of-dementia-research/Clinical-trials | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95562 | 6,489 | 3.640625 | 4 |
Our opinion on ...
- Executive Summary
- Necessity for a response
- Genetic testing
- General principles
- Other considerations
The present paper constitutes the input of Alzheimer Europe and its member organisations to the ongoing discussions within Europe about genetic testing (in the context of Alzheimer's disease and other forms of dementia).
Alzheimer Europe would like to recall some general principles which guide this present response:
- Having a gene associated with Alzheimer's disease or another form of dementia does not mean that a person has the disease.
- People who have a gene linked to Alzheimer's disease or another form of dementia have the same rights as anyone else.
- Genetic testing does not only affect the person taking the test. It may also reveal information about other relatives who might not want to know.
- No genetic test is 100% accurate.
- The extent to which health cover is provided to citizens by the State social security system and/or privately contracted by individuals differs from one country to the next.
On the basis of these principles, Alzheimer Europe has developed the following position with regard to genetic testing:
- Alzheimer Europe firmly believes that the use and/or possession of genetic information by insurance companies should be prohibited.
- Alzheimer Europe strongly supports research into the genetic factors linked to dementia which might further our understanding of the cause and development of the disease and possibly contribute to future treatment.
- Based on its current information, Alzheimer Europe does not encourage the use of any genetic test for dementia UNLESS such test has a high and proven success rate either in assessing the risk of developing the disease (or not as the case may be) or in detecting the existence of it in a particular individual.
- Alzheimer Europe requests further information on the accuracy, reliability and predictive value of any genetic tests for dementia.
- Genetic testing should always be accompanied by adequate pre- and post-test counselling.
- Anonymous testing should be possible so that individuals can ensure that such information does not remain in their medical files against their will.
It is extremely important for people with dementia to be diagnosed as soon as possible. In the case of Alzheimer’s disease, an early diagnosis may enable the person concerned to benefit from medication, which treats the global symptoms of the disease and is most effective in the early to mid stages of the disease. Most forms of dementia involve the gradual deterioration of mental faculties (e.g. memory, language and thinking etc.) but in the early stages, it is still possible for the person affected to make decisions concerning his/her finances and care etc. – hence the importance of an early diagnosis.
If it were possible to detect dementia before the first symptoms became obvious, this would give people a greater opportunity to make informed decisions about their future lives. This is one of the potential benefits of genetic testing.
On the other hand, such information could clearly be used in ways which would be contrary to their personal interests, perhaps resulting in employment discrimination, loss of opportunities, stigmatisation, increased health insurance costs or even loss of health insurance to name but a few examples.
The present discussion paper outlines some of the recommendations of Alzheimer Europe and its member organisations and raises a few points which deserve further clarification and discussion.
The necessity for a response by Alzheimer Europe
In the last few years, the issue of genetic testing has been increasingly debated. In certain European countries there are already companies offering such tests. Unfortunately, the general public do not always fully understand what the results of such tests imply and there are no regulations governing how they are carried out i.e. what kind of information people receive, how the results are presented, whether there is any kind of counselling afterwards and the issue of confidentiality etc.
In order to provide information to people with dementia and other people interesting in knowing about their own state of health and in order to protect them from the unscrupulous use of the results of genetic tests, Alzheimer Europe has developed the present Position Paper.
These general principles as well as the Convention of Human Rights and Biomedicine and the Universal Declaration on the Human Genome and Human Rights dictate Alzheimer Europe’s position with regard to genetic testing.
Alzheimer Europe would like to draw a distinction between tests which detect existing Alzheimer's disease and tests which assess the risk of developing dementia Alzheimer's disease at some time in the future:
- Diagnostic testing : Familial early onset Alzheimer’s disease (FAD) is associated with 3 genes. These are the amyloid precursor protein (APP), presenilin-1 and presenilin-2. These genetic mutations can be detected by genetic testing. However, it is important to note that the test only relates to those people with FAD (i.e. about 1% of all people with Alzheimer’s disease). In the extremely limited number of families with this dominant genetic disorder, family members inherit from one of their parents the part of the DNA (the genetic make-up), which causes the disease. On average, half the children of an affected parent will develop the disease. For those who do, the age of onset tends to be relatively low, usually between 35 and 60.
- Assessment for risk testing : Whether or not members of one’s family have Alzheimer’s disease, everyone risks developing the disease at some time. However, it is now known that there is a gene, which can affect this risk. This gene is found on chromosome 19 and it is responsible for the production of a protein called apolipoprotein E (ApoE). There are three main types of this protein, one of which (ApoE4), although uncommon, makes it more likely that Alzheimer’s disease will occur. However, it does not cause the disease, but merely increases the likelihood. For example, a person of 50, would have a 2 in 1,000 chance of developing Alzheimer’s disease instead of the usual 1 in 1,000, but might never actually develop it. Only 50% of people with Alzheimer’s disease have ApoE4 and not everyone with ApoE4 suffers from it.
There is no way to accurately predict whether a particular person will develop the disease. It is possible to test for the ApoE4 gene mentioned above, but strictly speaking such a test does not predict whether a particular person will develop Alzheimer’s disease or not. It merely indicates that he or she is at greater risk. There are in fact people who have had the ApoE4 gene, lived well into old age and never developed Alzheimer’s disease, just as there are people who did not have ApoE4, who did develop the disease. Therefore taking such a test carries the risk of unduly alarming or comforting somebody.
Alzheimer Europe agrees with diagnostic genetic testing provided that pre- and post-test counselling is provided, including a full discussion of the implications of the test and that the results remain confidential.
We do not actually encourage the use of genetic testing for assessing the risk of developing Alzheimer's disease. We feel that it is somewhat unethical as it does not entail any health benefit and the results cannot actually predict whether a person will develop dementia (irrespective of the particular form of ApoE s/he may have).
We are totally opposed to insurance companies having access to results from genetic tests for the following reasons:
- This would be in clear opposition to the fundamental principle of insurance which is the mutualisation of risk through large numbers (a kind of solidarity whereby the vast majority who have relatively good health share the cost with those who are less fortunate).
- Failure to respect this principle would create an uninsurable underclass and lead to a genetically inferior group.
- This in turn could entail the further stigmatisation of people with dementia and their carers.
- In some countries, insurance companies manage to reach decisions on risk and coverage without access to genetic data.
- We therefore urge governments and the relevant European bodies to take the necessary action to prohibit the use or possession of genetic data by insurance companies.
Alzheimer Europe recognises the importance of research into the genetic determinants of Alzheimer’s disease and other forms of dementia. Consequently,
- we support the use of genetic testing for the purposes of research provided that the person concerned has given informed consent and that the data is treated with utmost confidentiality; and
- we would also welcome further discussion about the problem of data management.
In our opinion, any individual who wishes to take a genetic test should be able to choose to do so anonymously in order to ensure that such information does not remain in his/her medical file.
At its Annual General Meeting in Munich on 15 October 2000, Alzheimer Europe adopted recommendations on how to improve the legal rights and protection of adults with incapacity due to dementia. This included a section on bioethical issues. These recommendations obviously need to guide any response of the organisation regarding genetic testing for people who suspect or fear they may have dementia and also those who have taken the test and did develop dementia.
- The adult with incapacity has the right to be informed about his/her state of health.
- Information should, where appropriate, cover the following: the diagnosis, the person's general state of health, treatment possibilities, potential risks and consequences of having or not having a particular treatment, side-effects, prognosis and alternative treatments.
- Such information should not be withheld solely on the grounds that the adult is suffering from dementia and/or has communication difficulties. Attempts should be made to provide information in such a way as to maximise his/her ability to understand, making use of technology and other available techniques to enhance communication. Attention should be paid to any possible difficulty understanding, retaining information and communicating, as well as his/her level of education, reasoning capacity and cultural background. Care should be taken to avoid causing unnecessary anxiety and suffering.
- Written as well as verbal information should always be provided as a back-up. The adult should be granted access to his/her medical file(s). S/he should also have the opportunity to discuss the contents of the medical file(s) with a person of his/her choice (e.g. a doctor) and/or to appoint someone to receive information on his/her behalf.
- Information should not be given against the will of the adult with incapacity.
- The confidentiality of information should extend beyond the lifetime of the adult with incapacity. If any information is used for research or statistical purposes, the identity of the adult with incapacity should remain anonymous and the information should not be traceable back to him/her (in accordance with the provisions of national laws on respect for the confidentiality of personal information). Consideration should be given to access to information where abuse is suspected.
- A clear refusal by the adult with incapacity to grant access to information to any third party should be respected regardless of the extent of his/her incapacity, unless this would be clearly against his/her best interests e.g. carers should have provided to them information on a need to know basis to enable them to care effectively for the adult with incapacity.
- People who receive information about an adult with incapacity in connection with their work (either voluntary or paid) should be obliged to treat such information with confidentiality.
People who take genetic tests and do not receive adequate pre and post test counselling may suffer adverse effects.
Fear of discrimination based on genetic information may deter people from taking genetic tests which could be useful for research into the role of genes in the development of dementia.
Certain tests may be relevant for more than one medical condition. For example, the ApoE test is used in certain countries as part of the diagnosis and treatment of heart disease. There is therefore a risk that a person might consent for one type of medical test and have the results used for a different reason.
Last Updated: jeudi 06 août 2009 | <urn:uuid:62210bfc-b709-4c59-93ac-36ec9784506d> | CC-MAIN-2013-20 | http://www.alzheimer-europe.org/FR%C5%A0%C2%B7%C5%A0%20/Policy-in-Practice2/Our-opinion-on/Genetic-testing | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.943183 | 2,443 | 2.625 | 3 |
Acrylic A synthetic fabric often used as a wool substitute. It is warm, soft, holds colors well and often is stain and wrinkle resistant.
Angora A soft fiber knit from fur of the Angora rabbit. Angora wool is often combined with cashmere or another fiber to strengthen the delicate structure. Dry cleaning is reccommended for Angora products.
Bedford A strong material that is a raised corded fabric (similar to corduroy). Bedford fabric wears well and is usually washable.
Boucle A fabric made with boucle yarn(s) in wool, rayon and or cotton causing the surface of the fabric to appear looped.
Brocade An all-over floral, raised pattern produced in a similar fashion to embroidery.
Burnout Process of printing a design on a fabric woven of paired yarns of different fibers. One kind of yarn is burned out or destroyed leaving the ground unharmed.
Cable Knit Patterns, typically used in sweaters, where flat knit columns otherwise known as cables are overlapped vertically.
Cashmere A soft, silky, lightweight wool spun from the Kashmir goat. Cashmere must be dry-cleaned due to its delicate fibers and is commonly used in sweaters, shawls, outerwear, gloves and scarves for its warmth and soft feel.
Chiffon A common evening wear fabric made from silk, cotton, rayon or nylon. It's delicate in nature and sheer.
Chintz A printed and glazed fabric made of cotton. Chintz is known for its bright colors and bold patterns.
Corduroy Cotton fibers twisted as they are woven to create long, parallel grooves, called wales, in the fabric. This is a very durable material and depending on the width of the wales, can be extremely soft.
Cotton A natural fiber that grows in the seed pod of the cotton plant. It is an inelastic fiber.
Cotton Cashmere A blend of cotton and cashmere fibers, typically 85% to 15% respectively, this combination produces an extremely soft yarn with a matte finish.
Crepe Used as a description of surfaces of fabrics. Usually designates a fabris that is crimped or crinkled.
Crinoline A lightweight, plain weave, stiffened fabric with a low yarn count. Used to create volume beneath evening or wedding dresses.
Crochet Looping threads with a hooked needle that creates a wide, open knit. Typically used on sweaters for warm seasons.
Denim Cotton textile created with a twill weave to create a sturdy fabric. Used as the primary material of blue jeans.
Dobby Woven fabric where the weave of the fabric actually produces the garment's design.
Embroidery Detailed needlework, usually raised and created by yarn, silk, thread or embroidery floss.
Eyelet A form of lace in a thicker material that consists of cut-outs that are integrated and repeated into a pattern. Usually applied to garments for warmer seasons.
Faille A textured fabric with faint ribbing. Wears wonderfully for hours holding its shape due to the stiffness of the texture. Used in wedding dresses and women's clothes.
Fil'Coupe A small jacquard pattern on a light weight fabric, usually silk, in which the threads connecting each design are cut, creating a frayed look.
French Terry A knit cloth that contains loops and piles of yarn. The material is very soft, absorbent and has stretch.
Gabardine A tightly woven twill fabric, made of different fibers such as wool, cotton and silk.
Georgette A crinkly crepe type material usually made out of silk that consists of tightly twisted threads. Georgette is sheer and has a flowy feeling.
Gingham Two different color stripes "woven" in pattern to appear checked.
Glen Plaid Design of woven, broken checks. A form of traditional plaid.
Guipure Lace A lace without a mesh ground, the pattern in held in place by connecting threads.
Herringbone A pattern originating from masonry, consists of short rows of slanted parallel lines. The rows are formatted opposing each other to create the pattern. Herringbone patterns are used in tweeds and twills.
Hopsack A material created from cotton or woolthat is loosely woven together to form a coarse fabric.
Houndstooth A classic design containing two colors in jagged/slanted checks. Similar to Glen Plaid.
Jacquard A fabric of intricate varigated weave or pattern. Typically shown on elegant and more expensive pieces.
Jersey A type of knit material usually made from cotton and known to be flexible, stretchy, soft and very warm. It is created using tight stitches.
Knit A knit fabric is made by interlocking loops of one or more yarns either by hand with knitting needles or by machine.
Linen An exquisite material created from the fibers of the flax plant. Some linen contain slubs or small knots on the fabric. The material wrinkles very easily and is a light fabric perfect for warm weather.
Lurex A metallic fiber woven into material to give the garment shine.
LycraTM Lycra is a type of stretch fabric where the fibers are woven into cotton, silk or synthetic fiber blends. These materials are lightweight, comfortable (need trademark symbol) and breathable, and the stretch will not wear away.
Madras Originating from Madras, India, this fabric is a lightweight, cotton material used for summer clothing. Madras usually has a checked pattern but also comes in plaid or with stripes. Typically made from 100% cotton.
Marled Typically found in sweaters, marled yarn occurs when two colored yards are twisted together.
Matelasse A compound fabric made of cotton, wool or other fibers with quilted character and raised patterns.
Matte A matte finish has a lusterless surface.
Merino Wool Wool sheered from the merino sheep and spun into yarn that is fine but strong.
Modal A type of rayon that is made from natural fibers but goes through a chemical treatment to ensure it has a high threshold of breakage. Modal is soft and breathable which is why it's used as a cotton replacement.
Non-iron A treated cotton that allows our Easy Care Shirts to stay crisp throughout the day and does not need ironing after washing/drying.
Nylon A synthetic fiber that is versatile, fast drying and strong. It has a high resistance to damage.
Ombre A color technique that shades a color from light to dark.
Ottoman A firm, lustrous plain weave fabric with horizontal cords that are larger and rounder than those of the faille. Made of wool, silk, cotton and other manufactured fibers.
Paisley A pattern that consists of crooked teardrop designs in a repetitive manner
Placket The piece of fabric or cloth that is used as a concealing flap to cover buttons, fasteners or attachments. Most commonly seen in the front of button-down shirts. Also used to reinforce openings or slits in garments.
Piping Binding a seam with decoration. Piping is similar to tipping or edging where a decorative material is sewn into the seams.
Pointelle An open-work knitting pattern used on garments to add texture. Typically a cooler and general knit sweater.
Polyester A fabric made from synthetic fibers. Polyester is quick drying, easy to wash and holds its shape well.
Ponte A knit fabric where the fibers are looped in an interlock. The material is very strong and firm.
Poplin A strong woven fabric, heavier in weight, with ribbing.
Rayon A manufactured fiber developed originally as an alternative for silk. Rayon drapes well and looks luxurious.
Sateen A cotton fabric with sheen that resembles satin.
Seersucker Slack-tension weave where yarn is bunched together in certain areas and then pulled taught in others to create this summery mainstay.
Shirring Similar to ruching, shirring gathers material to create folds.
Silk One of the most luxurious fabrics, silk is soft, warm and has shine. It is created from the cocoons spun by silkworms.
Silk Shantung A rough plain weave fabric made of uneven yarns to produce a textured effect, made of fibers such as silk in which all knots and lumps are retained.
Space dyed Technique of yarn dyeing to produce a multi-color effect on the yarn itself. Also known as dip dyed yarn.
Spandex Also known as Lycra™, this material is able to expand 600% and still snap back to its original shape and form. Spandex fibers are woven with cotton and other materials to make fabrics stretch.
Tipping Similar to edging, tipping includes embellishing a garment at the edges of the piece, hems, collars etc.
Tissue Linen A type of linen that is specifically made for blouses or shirts due to its thinness and sheerness.
Tweed A loose weave of heavy wool makes up tweed, which provides warmth and comfort.
Twill A fabric woven in a diagonal weave. Commonly used for chinos and denim.
Variegated Multi-colored fabrics where colors are splotched or in patches.
Velour A stretchy knit fabric that looks similar to velvet. Very soft to the touch.
Velvet A soft, silky woven fabric that is similar to velour. Velvet is much more expensive than velour due to the amount of thread and steps it takes to manufacture the material.
Velveteen A more modern adaptation of velvet, velveteen is made from cotton and has a little give. Also known as imitation velvet.
Viscose Created from both natural materials and man-made fibers, viscose is soft and supple but can wrinkle easily.
Wale Only found in woven fabrics like corduroy, wales are the long ridges and grooves that give the garment its texture.
Windowpane Dark stripes run horizontally and vertically across a light background to mimic window panes.
Woven A woven fabric is formed by interlacing threads, yarns, strands, or strips of some material. | <urn:uuid:04a048d3-152b-45eb-ac6e-e7717919a899> | CC-MAIN-2013-20 | http://www.anneklein.com/Matte-Jersey-Sleeveless-Drape-Top/90724376,default,pd.html?variantSizeClass=&variantColor=JJQ47XX&cgid=90316517&pmin=25&pmax=50&prefn1=catalog-id&prefv1=anneklein-catalog | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.931941 | 2,146 | 2.53125 | 3 |
Demands for lower-cost manufacturing, lighter components, and recyclability are forcing manufacturers to switch from metal components to plastic. While the assembly processes are different, some of the same concerns apply. Finding a reliable assembly equipment supplier, defining part requirements, getting the supplier involved early, and choosing the right assembly process are the keys to success.
A chart (not reproduced here) outlines the different characteristics, capabilities, and requirements of a variety of plastic welding processes.
Appliance components come in all shapes and sizes, and each one has its own unique characteristics that demand an assembly process to fit. Ultrasonic, hot-plate, spin, thermal, laser, and vibration welding are the most common plastic assembly methods. Choosing the correct method can be difficult. A supplier who has technical knowledge in all of the processes is the best choice. Such a supplier will know the different process joint designs, can provide assistance in material selection, and can provide support once the process is in production.
Defining part requirements before the design of the plastic part is critical. This will save dollars in tooling costs and help assure that the correct process to achieve the requirements is chosen. All too often, the plastic molds are manufactured, the first parts are assembled, and then quality control determines that the parts will not pass a pressure test. This is too late; significant dollars will now need to be spent to correct the problem. Requirements such as a need for pressurization, exposure to extreme cold or heat, cosmetic-part status (requiring no blemishes), and parts assembled per minute are all factors in determining the correct process and plastic material.
Each process has unique plastic-joint design requirements to assure proper weld strength. Assembly equipment suppliers can help design the weld-area joint. An example of joint design requirements for ultrasonic assembly is given by the Guide to Ultrasonics from Dukane (St. Charles, IL, U.S.): "Mating surfaces should be in intimate contact around the entire joint. The joint should be in one plane, if possible. A small initial contact area should be established between mating halves. A means of alignment is recommended so that mating halves do not misalign during the weld operation." Obviously, these joint requirements should all be designed into the part prior to machining of the injection molds.
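To make those guidelines concrete, here is a minimal sketch of how a design-review script might check a proposed joint against them before the molds are cut. It is purely illustrative: the field names and pass/fail rules are assumptions made for this example, not Dukane specifications.

```python
# Illustrative sketch only: a toy checklist based on the ultrasonic
# joint-design guidelines quoted above. Field names and rules are
# assumptions for demonstration, not Dukane specifications.

def check_ultrasonic_joint(joint):
    """Return a list of guideline violations for a proposed joint design."""
    problems = []
    if not joint.get("mating_surfaces_in_intimate_contact", False):
        problems.append("Mating surfaces should be in intimate contact "
                        "around the entire joint.")
    if not joint.get("joint_in_one_plane", False):
        problems.append("The joint should be in one plane, if possible.")
    if not joint.get("small_initial_contact_area", False):
        problems.append("A small initial contact area should be established "
                        "between mating halves.")
    if not joint.get("alignment_feature", False):
        problems.append("A means of alignment is recommended so mating "
                        "halves do not misalign during the weld.")
    return problems

# Example: a design reviewed before the injection molds are machined.
design = {
    "mating_surfaces_in_intimate_contact": True,
    "joint_in_one_plane": True,
    "small_initial_contact_area": False,
    "alignment_feature": True,
}
for issue in check_ultrasonic_joint(design):
    print("Review:", issue)
```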
What assembly process is correct for a part? As stated earlier, ultrasonic, hot-plate, spin, thermal, vibration, and laser welding are the most common methods used in production today. Each method has unique advantages.
Ultrasonic assembly is a fast, repeatable, and reliable process that allows for sophisticated process control. High-volume small parts that have very tight assembly tolerances lend themselves well to ultrasonic assembly. Ultrasonic systems have the capability of exporting relevant assembly process data for SPC documentation and FDA validation. Ultrasonic welding can be easily integrated into automated systems.
Hot-plate welding can accommodate a wide range of part sizes and configurations. These machines offer high-reliability hermetic seals and strong mechanical bonds on complex part geometries. The process is fairly simple: the two parts to be joined are brought into close proximity to a heated platen until the joint area is in a molten state. The platen is removed and the parts are clamped together until the joint cools and returns to a solid state.
Spin welding is a very cost-effective method for joining large, medium, or small circular parts, such as washing machine tubs to agitator components. Water purification filters, thermal mugs, and irrigation assemblies typically are joined using the spin welding process. Careful attention to joint design is critical for parts that require a flash-free appearance.
Assemblies that require inserts at multiple points on multiple planes, like computer or vacuum cleaner housings, typically benefit from thermal insertion/staking. Thermal staking is ideal for attaching non-plastic components to the plastic housing, such as circuit boards and metal brackets. Date coding, embossing, and degating are other uses for thermal presses. Thermal welding can be a slower assembly process than ultrasonic, so, depending on the volumes of assemblies required, ultrasonic may be a better choice.
Vibration welding physically moves one of the two parts horizontally under pressure to create heat through surface friction. Compared to ultrasonic welding, vibration welding operates at much lower frequencies, much higher amplitude, and with greater clamping force. The limitation to vibration welding is simply that the joint must be in a single plane in at least one axis in order to allow the vibration motion. Like hot-plate welding, vibration welding is a highly reliable process that can handle large parts in challenging materials or multiple parts per cycle with ease. Chain saw housings, blower and pump assemblies, and large refrigerator bins are examples of potential vibration welding applications. Cycle times for vibration welds are very short, thus they are ideal for high volume and are easily automated.
Laser welding is the newest technology of the processes available today. One benefit of laser welding is that the weld joints produce no flash or particulate outside of the joint. Assemblies that require absolutely no particulate contamination, like medical filters, are good candidates. A second benefit is that the assembly is not exposed to heat or vibration. Devices that have very sensitive electronic internal components that may be damaged by vibration can now be assembled effectively. Laser welding requires one part to be transmissive and the other absorptive; what matters is how transparent each part appears to the laser beam. One material transmits the coherent laser light and the other material absorbs the light and converts it to heat. Parts that appear black to the human eye can be transparent or opaque at the wavelength of the laser light. Clear-to-clear joints and joints that are optically transparent can be readily achieved by use of special coatings. Depending on the part geometry, laser welding can be a slower process than vibration or ultrasonic welding.
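As a rough illustration of the selection logic described above, the sketch below maps a few part traits to candidate processes. The rules loosely paraphrase this article's general guidance, and the trait names are invented for the example; real process selection should be done with an experienced equipment supplier.

```python
# Illustrative sketch only: a toy rule-of-thumb shortlister that maps part
# requirements to candidate welding processes, paraphrasing the article's
# guidance. Not a substitute for supplier expertise.

def shortlist_processes(part):
    candidates = set()
    if part.get("circular"):
        candidates.add("spin welding")
    if part.get("joint_in_one_plane"):
        candidates.add("vibration welding")
    if part.get("small") and part.get("high_volume"):
        candidates.add("ultrasonic welding")
    if part.get("hermetic_seal") or part.get("complex_geometry"):
        candidates.add("hot-plate welding")
    if part.get("inserts_on_multiple_planes") or part.get("non_plastic_attachments"):
        candidates.add("thermal insertion/staking")
    if part.get("no_particulate") or part.get("vibration_sensitive_internals"):
        candidates.add("laser welding")
    return sorted(candidates)

# Example: a high-volume, small medical filter that must stay particulate-free.
print(shortlist_processes({"small": True, "high_volume": True,
                           "no_particulate": True}))
```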
Plastic appliance components are the direction of the future - they can be assembled economically and produce functional products.
This information is provided by Michael Johnston, national sales and marketing manager, Dukane (St. Charles, IL, U.S.).
Outmaneuvering Foodborne Pathogens
At various locations, ARS scientists are doing research to make leafy greens and other fresh produce safer for consumers. Produce and leafy greens in the photo are (clockwise from top): romaine lettuce, cabbage, cilantro in a bed of broccoli sprouts, spinach and other leafy greens, green onions, tomatoes, and green leaf lettuce.
If pathogens like E. coli O157:H7 or Salmonella had a motto for survival, it might be: “Find! Bind! Multiply!”
That pretty much sums up what these food-poisoning bacteria do in nature, moving through our environment to find a host they can bind to and use as a staging area for multiplying and spreading.
But ARS food-safety scientists in California are determined to find out how to stop these and other foodborne pathogenic bacteria in their tracks, before the microbes can make their way to leafy greens and other favorite salad ingredients like tomatoes and sprouts.
The research is needed to help prevent the pathogens from turning up in fresh produce that we typically eat uncooked. That’s according to Robert E. Mandrell, who leads the ARS Produce Safety and Microbiology Research Unit. His team is based at the agency’s Western Regional Research Center in Albany, California.
The team is pulling apart the lives of these microbes to uncover the secrets of their success. It’s a complex challenge, in part because the microbes seem to effortlessly switch from one persona to the next. They are perhaps best known as residents of the intestines of warm-blooded animals, including humans. For another role, the pathogens have somehow learned to find, bind, and multiply in the world of green plants.
Sometimes the pathogenic microbes need the help of other microbial species to make the jump from animal inhabitant to plant resident. Surprisingly little is known about these powerful partnerships, Mandrell says. That’s why such alliances among microbes are one of several specific aspects of the pathogens’ lifestyles that the Albany scientists are investigating. In all, knowledge gleaned from these and other laboratory, greenhouse, and outdoor studies should lead to new, effective, environmentally friendly ways to thwart the pathogens before they have a chance to make us ill.
In a greenhouse, microbiologist Maria Brandl examines cilantro that she uses as a model plant to investigate the behavior of foodborne pathogens on leaf surfaces.
A Pathogen Targets Youngest Leaves
Knowing pathogens’ preferences is essential to any well-planned counter-attack. So microbiologist Maria T. Brandl is scrutinizing the little-understood ability of E. coli O157:H7 and Salmonella enterica to contaminate the elongated, slightly sweet leaves of romaine lettuce. With a University of California-Berkeley colleague, Brandl has shown that, if given a choice, E. coli has a strong preference for the young, inner leaves. The researchers exposed romaine lettuce leaves to E. coli and found that the microbe multiplied about 10 times more on the young leaves than on the older, middle ones. One explanation: The young leaves are a better nutrition “buy” for E. coli. “These leaves exude about three times more nitrogen and about one-and-one-half times more carbon than do the middle leaves,” says Brandl.
Scientists have known for decades that plants exude compounds from their leaves and roots that bacteria and fungi can use as food. But the romaine lettuce study, published earlier this year in Applied and Environmental Microbiology, is the first to document the different exudate levels among leaves of the two age classes. It’s also the first to show that E. coli can do more than just bind to lettuce leaves: It can multiply and spread on them.
Research assistant Danielle Goudeau inoculates a lettuce leaf with E. coli O157:H7 in a biological safety cabinet to study the biology of the human pathogen on leafy greens.
Adding nitrogen to the middle leaves boosted E. coli growth, Brandl found. “In view of the key role of nitrogen in helping E. coli multiply on young leaves,” she says, “a strategy that minimizes use of nitrogen fertilizer in romaine lettuce fields may be worth investigating.”
In other studies using romaine lettuce and the popular herb cilantro as models, Brandl documented the extent to which E. coli and Salmonella are aided by Erwinia chrysanthemi, an organism that causes fresh produce to rot.
“When compared to plant pathogens, E. coli and Salmonella are not as ‘fit’ on plants,” Brandl says. But the presence of the rot-producing microbe helped E. coli and Salmonella grow on lettuce and cilantro leaves.
“Soft rot promoted formation of large aggregates, called ‘biofilms,’ of E. coli and Salmonella and increased their numbers by up to 100-fold,” she notes.
The study uncovered new details about genes that the food-poisoning pathogens kick into action when teamed up with plant pathogens such as soft rot microbes.
Brandl, in collaboration with Albany microbiologist Craig Parker, used a technique known as “microarray analysis” to spy on the genes. “The assays showed that Salmonella cells—living in soft rot lesions on lettuce and cilantro—had turned on some of the exact same genes that Salmonella uses when it infects humans or colonizes the intestines of animals,” she says. Some of these activated genes were ones that Salmonella uses to get energy from several natural compounds common to both green plants and to the animal intestines that Salmonella calls home.
Using a confocal laser scanning microscope, microbiologist Maria Brandl examines a mixed biofilm of Salmonella enterica (pink) and Erwinia chrysanthemi (green) in soft rot lesions on cilantro leaves (blue).
A One-Two Punch to Tomatoes
Salmonella also benefits from the presence of another plant pathogen, specifically, Xanthomonas campestris, the culprit in a disease known as “bacterial leaf spot of tomato.” But the relationship between Salmonella and X. campestris may be different than the relation of Salmonella to the soft rot pathogen. Notably, Salmonella benefits even if the bacterial spot pathogen is at very low levels—so low that the plant doesn’t have the disease or any visible symptoms of it.
That’s among the first-of-a-kind findings that microbiologist Jeri D. Barak found in her tests with tomato seeds exposed to the bacterial spot microbe and then planted in soil that had been irrigated with water contaminated with S. enterica.
In a recent article in PLoS ONE, Barak reported that S. enterica populations were significantly higher in tomato plants that had also been colonized by X. campestris. In some cases, Salmonella couldn’t bind to and grow on—or in—tomato plants without the presence of X. campestris, she found.
Listeria monocytogenes on this broccoli sprout shows up as green fluorescence. The bacteria are mainly associated with the root hairs.
“We think that X. campestris may disable the plant immune response—a feat that allows both it and Salmonella to multiply,” she says.
The study was the first to report that even as long as 6 weeks after soil was flooded with Salmonella-contaminated water, the microbe was capable of binding to tomato seeds planted in the tainted soil and, later, of spreading to the plant.
“These results suggest that any contamination that introduces Salmonella from any source into the environment—whether that source is irrigation water, improperly composted manure, or even insects—could lead to subsequent crop contamination,” Barak says. “That’s true even if substantial time has passed since the soil was first contaminated.”
Crop debris can also serve as a reservoir of viable Salmonella for at least a week, Barak’s study showed. For her investigation, the debris was composed of mulched, Salmonella-contaminated tomato plants mixed with uncontaminated soil.
“Replanting fields shortly after harvesting the previous crop is a common practice in farming of lettuce and tomatoes,” she says. The schedule allows only a very short time for crop debris to decompose. “Our results suggest that fields known to have been contaminated with S. enterica could benefit from an extended fallow period, perhaps of at least a few weeks.”
Ordinary Microbe Foils E. coli
While the bacterial spot and soft rot microbes make life easier for certain foodborne pathogens, other microbes may make the pathogens’ existence more difficult. Geneticist Michael B. Cooley and microbiologist William G. Miller at Albany have shown the remarkable effects of one such microbe, Enterobacter asburiae. This common, farm-and-garden-friendly microorganism lives peaceably on beans, cotton, and cucumbers.
In one experiment, E. asburiae significantly reduced levels of E. coli and Salmonella when all three species of microbes were inoculated on seeds of thale cress, a small plant often chosen for laboratory tests.
The study, published in Applied and Environmental Microbiology in 2003, led to followup experiments with green leaf lettuce. In that battle of the microbes, another rather ordinary bacterium, Wausteria paucula, turned out to be E. coli’s new best friend, enhancing the pathogen’s survival sixfold on lettuce leaves.
“It was the first clear example of a microbe’s supporting a human pathogen on a plant,” notes Cooley, who documented the findings in the Journal of Food Protection in 2006.
But E. asburiae more than evened the score, decreasing E. coli survival 20- to 30-fold on lettuce leaves exposed to those two species of microbes.
The mechanisms underlying the competition between E. asburiae and E. coli are still a mystery, says Cooley, “especially the competition that takes place on leaves or other plant surfaces.”
Nevertheless, E. asburiae shows initial promise of becoming a notable biological control agent to protect fresh salad greens or other crops from pathogen invaders. With further work, the approach could become one of several science-based solutions that will help keep our salads safe.—By Marcia Wood, Agricultural Research Service Information Staff.
This research is part of Food Safety, an ARS national program (#108) described on the World Wide Web at www.nps.ars.usda.gov.
To reach scientists mentioned in this article, contact Marcia Wood, USDA-ARS Information Staff, 5601 Sunnyside Ave., Beltsville, MD 20705-5129; phone (301) 504-1662, fax (301) 504-1486.
Listeria monocytogenes on this radish sprout shows up as green fluorescence. The bacteria are mainly associated with the root hairs.
What Genes Help Microbes Invade Leafy Greens?
When unwanted microbes form an attachment, the consequences—for us—can be serious.
That’s if the microbes happen to be human pathogens like Listeria monocytogenes or Salmonella enterica and if the target of their attentions happens to be fresh vegetables often served raw, such as cabbage or the sprouted seeds of alfalfa.
Scientists don’t yet fully understand how the malevolent microbes form colonies that cling stubbornly to and spread across plant surfaces, such as the bumpy leaves of a cabbage or the ultra-fine root hairs of a tender alfalfa sprout.
But food safety researchers at the ARS Western Regional Research Center in Albany, California, are putting together pieces of the pathogen puzzle.
A 1981 food-poisoning incident in Canada, caused by L. monocytogenes in coleslaw, led microbiologist Lisa A. Gorski to study the microbe’s interactions with cabbage. Gorski, with the center’s Produce Safety and Microbiology Research Unit, used advanced techniques not widely available at the time of the cabbage contamination.
“Very little is known about interactions between Listeria and plants,” says Gorski, whose study revealed the genes that Listeria uses during a successful cabbage-patch invasion.
The result was the first-ever documentation of Listeria genes in action on cabbage leaves. Gorski, along with coinvestigator Jeffrey D. Palumbo—now with the center’s Plant Mycotoxin Research Unit—and others, documented the investigation in a 2005 article in Applied and Environmental Microbiology.
Listeria, Behaving Badly
“People had looked at genes that Listeria turns on, or ‘expresses,’ when it’s grown on agar gel in a laboratory,” says Gorski. “But no one had looked at genes that Listeria expresses when it grows on a vegetable.
“We were surprised to find that when invading cabbage, Listeria calls into play some of the same genes routinely used by microbes that are conventionally associated with plants. Listeria is usually thought of as a pathogen of humans. We hadn’t really expected to see it behaving like a traditional, benign inhabitant of a green plant.
“It’s still a relatively new face for Listeria, and requires a whole new way of thinking about it.”
In related work, Gorski is homing in on genetic differences that may explain the widely varying ability of eight different Listeria strains to successfully colonize root hairs of alfalfa sprouts—and to resist being washed off by water.
In a 2004 article in the Journal of Food Protection, Gorski, Palumbo, and former Albany associate Kimanh D. Nguyen reported those differences. Poorly attaching strains formed fewer than 10 Listeria cells per sprout during the lab experiment, while the more adept colonizers formed more than 100,000 cells per sprout.
Salmonella’s Cling Genes
Colleague Jeri D. Barak, a microbiologist at Albany, led another sprout investigation, this time probing the ability of S. enterica to attach to alfalfa sprouts. From a pool of 6,000 genetically different Salmonella samples, Barak, Gorski, and coinvestigators found 20 that were unable to attach strongly to sprouts.
Scientists elsewhere had already identified some genes as necessary for Salmonella to successfully invade and attach to the guts of animals such as cows and chickens. In the Albany experiments, some of those same genes were disrupted in the Salmonella specimens that couldn’t cling to alfalfa sprouts.
Their 2005 article in Applied and Environmental Microbiology helped set the stage for followup studies to tease out other genes that Salmonella uses when it is living on and in plants.
A deeper understanding of those and other genes may lead to sophisticated defense strategies to protect tomorrow’s salad greens—and us.—By Marcia Wood, Agricultural Research Service Information Staff.
Geneticist Michael Cooley collects a sediment sample to test for E. coli O157:H7. The pathogen was found near fields implicated in the 2006 outbreak of E. coli O157:H7 on baby spinach.
Environmental Surveillance Exposes a Killer
It started as a manhunt for a microbe, but it became one of the nation’s most intensive farmscape searches for the rogue pathogen E. coli O157:H7.
ARS microbiologist Robert E. Mandrell and geneticist Michael B. Cooley of the Produce Safety and Microbiology Research Unit in Albany, California, had already been collaborating in their own small-scale study of potential sources of E. coli O157:H7 in the state’s produce-rich Salinas Valley when, in 2005, they were asked to join another one. The new investigation became a 19-month surveillance—by the two scientists and other federal and state experts—of E. coli in Salinas Valley watersheds.
“It may seem like an obvious concept today,” says Mandrell, “but at the time, there was little proof that E. coli contamination of produce before harvest could be a major cause of food-poisoning outbreaks.”
Mandrell and Cooley aided the California Food Emergency Response Team, as this food-detective squad was named, in tracing movement of E. coli through the fertile valley. This surveillance showed that E. coli O157:H7 can travel long distances in streamwater and floodwater.
In 2006, E. coli O157:H7 strains indistinguishable from those causing human illness associated with baby spinach were discovered in environmental samples—including water—taken from a Salinas Valley ranch.
Wild pigs were added to the list of animal carriers of the pathogen when one of the so-called “outbreak strains” of E. coli O157:H7 was discovered in their dung. The team documented its work in 2007 in PLoS ONE and Emerging Infectious Diseases.
The Albany scientists used a relatively new technique to detect E. coli O157:H7 in water. Developed at the ARS Meat Animal Research Center in Clay Center, Nebraska, for animal hides, the method was adapted by the Albany team for the outdoor reconnaissance.
Because of their colleagues’ work, says Cooley, “We had the right method at the right time.”—By Marcia Wood, Agricultural Research Service Information Staff.
"Outmaneuvering Foodborne Pathogens" was published in the July 2008 issue of Agricultural Research magazine. | <urn:uuid:b366eb1a-184b-4885-b5a6-92a6d0c0f67f> | CC-MAIN-2013-20 | http://www.ars.usda.gov/is/AR/archive/jul08/pathogen0708.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.93 | 3,794 | 3.078125 | 3 |
The CIA, the NSA, the FBI and all other three-letter, intelligence-gathering, secret-keeping agencies mimic and are modeled after secret societies. They gather and filter information by compartmentalizing the organization in a pyramid-like hierarchical structure, keeping everyone but the elite on a need-to-know basis. The CIA was born from the WWII intelligence arm, the OSS (Office of Strategic Services), and was funded into permanence by the Rockefeller and Carnegie Foundations, which donated $34 million between 1945 and 1948 alone. Nearly every person instrumental in the creation of the CIA was already a member of the CFR, including the Rockefellers and Dulles brothers.
In 1945, when the CIA was still the OSS, it began Operation Paperclip, which brought over 700 Nazi scientists directly into the forming CIA, NSA, and other high-level government organizations. Since it was illegal to even allow these Nazis into the US, let alone into top-secret government agencies, the CIA convinced the Vatican to issue American passports for these 700+ Nazi scientists under the pretense that it was to keep them out of the hands of the Russians.
“After WWII ended in 1945, victorious Russian and American intelligence teams began a treasure hunt throughout occupied Germany for military and scientific booty. They were looking for things like new rocket and aircraft designs, medicines, and electronics. But they were also hunting down the most precious ‘spoils’ of all: the scientists whose work had nearly won the war for Germany. The engineers and intelligence officers of the Nazi War Machine. Following the discovery of flying discs (foo-fighters), particle/laser beam weaponry in German military bases, the War Department decided that NASA and the CIA must control this technology, and the Nazi engineers that had worked on this technology. There was only one problem: it was illegal. U.S. law explicitly prohibited Nazi officials from immigrating to America--and as many as three-quarters of the scientists in question had been committed Nazis.” -Operation Paperclip Casefile: New World Order and Nazi Germany
Hundreds of Nazi mind-control specialists and doctors who performed horrific experiments on prisoners instantly had their atrocious German histories erased and were promoted into high-level American jobs. Kurt Blome, for instance, was a high-ranking Nazi scientist who experimented with plague vaccines on concentration camp prisoners. He was hired by the U.S. Army Chemical Corps to work on chemical warfare projects. Major General Walter Schreiber was a head doctor during Nazi concentration camp prisoner experiments in which inmates were starved and otherwise tortured. He was hired by the Air Force School of Medicine in Texas. Wernher von Braun was technical director of the Nazi Peenemunde Rocket Research Center, where the Germans developed the V2 rocket. He was hired by the U.S. Army to develop guided missiles and later made the first director of NASA's Marshall Space Flight Center!
“Military Intelligence ‘cleansed’ the files of Nazi references. By 1955, more than 760 German scientists had been granted citizenship in the U.S. and given prominent positions in the American scientific community. Many had been longtime members of the Nazi party and the Gestapo, had conducted experiments on humans at concentration camps, had used slave labor, and had committed other war crimes. In a 1985 expose in the Bulletin of the Atomic Scientists Linda Hunt wrote that she had examined more than 130 reports on Project Paperclip subjects - and every one ‘had been changed to eliminate the security threat classification.’ A good example of how these dossiers were changed is the case of Werner von Braun. A September 18, 1947, report on the German rocket scientist stated, ‘Subject is regarded as a potential security threat by the Military Governor.’ The following February, a new security evaluation of Von Braun said, ‘No derogatory information is available on the subject … It is the opinion of the Military Governor that he may not constitute a security threat to the United States.’” -Operation Paperclip Casefile: New World Order and Nazi Germany
Shortly after Operation Paperclip came Operation Mockingbird, during which the CIA trained reporters and created media outlets to disseminate their propaganda. One of Project Mockingbird’s lead roles was played by Philip Graham who would become publisher of The Washington Post. Declassified documents admit that over 25 organizations and 400 journalists became CIA assets which now include major names like ABC, NBC, CBS, AP, Reuters, Time, Newsweek and more.
In 1953 the Iranian coup, classified as Operation AJAX, was the CIA's first successful overthrow of a foreign government. In 1951 Iran's Parliament and Prime Minister Dr. Mohammed Mosaddeq voted to nationalize the country's oil industry, which upset western oil barons like the Rockefellers. On April 4th, 1953, CIA director Allen Dulles transferred $1 million to Iranian General Fazlollah Zahedi to be used "in any way that would bring about the fall of Mosaddeq." Coup leaders first planted anti-Mosaddeq propaganda throughout the Iranian press, held demonstrations, and bribed officials. Then they began committing terror attacks to blame on Mosaddeq, hoping to turn public sentiment away from their hero. They machine-gunned civilians, bombed mosques, and then passed out pamphlets saying, "Up with Mosaddeq, up with Communism, down with Allah." Zahedi's coup took place between August 15th and 19th, after which the CIA sent $5 million more to help the new government consolidate power. Soon America controlled half of Iran's oil production, and American weapons merchants moved in, making almost $20 billion off Iran in the next 20 years.
“In 1953 the Central Intelligence Agency working in tandem with MI6 overthrew the democratically-elected leader of Iran Dr. Mohammed Mosaddeq. Mosaddeq had been educated in the west, was pro-America, and had driven communist forces out of the north of his country shortly after being elected in 1951. Mosaddeq then nationalized the oil fields and denied British Petroleum a monopoly. The CIA’s own history department at cia.gov details how U.S. and British intelligence agents carried out terror attacks and then subsequently blamed them on Mosaddeq … The provocations included propaganda, demonstrations, bribery, agents of influence, and false flag operations. They bombed the home of a prominent religious leader and blamed it on Moseddeq. They attacked mosques, machine-gunned crowds, and then handed out thousands of handbills claiming that Moseddeq had done it … Dr. Mohammed Moseddeq, who was incarcerated for the duration of his life, fared better than any of his ministers who were executed just days after the successful coup for crimes that MI6 and the CIA had committed.” -Alex Jones, “Terrorstorm” DVD
In 1954 the CIA performed its second coup d'etat overthrow of a foreign democracy; this time it was Guatemala, whose popular leader, Jacobo Arbenz Guzman, had recently nationalized 1.5 million acres of land for the peasants. Before this, only 2.2% of Guatemala's land-owners owned 70% of the land, which included that of United Fruit Co., whose board of directors were friends with the Dulles brothers and wanted to keep Guatemala a banana republic. So once again the CIA sent in propagandists and mercenaries, trained militia groups, bombed the capital, and installed their puppet dictator Castillo Armas, who then gave United Fruit Co. and the other 2.2% of land-owners everything back. Military dictators ruled Guatemala for the next 30 years, killing over 100,000 citizens. Guatemalan coroners were reported saying they could not keep up with the bodies. The CIA called it Operation Success.
“The CIA has overthrown functioning democracies in over twenty countries.” -John Stockwell, former CIA official
They always follow the same strategy. First, globalist interests are threatened by a popular or democratically elected foreign leader; leaders who help their populations nationalize foreign-owned industries, protect workers, redistribute wealth/land and other such actions loved by the lower and middle-class majority, hated by the super-rich minority. Next, the CIA identifies and co-operates with opposition militia groups within the country, promising them political power in trade for American business freedom. Then they are hired, trained and funded to overthrow the current administration through propaganda, rigged elections, blackmail, infiltration/disruption of opposition parties, intimidation, torture, economic sabotage, death squads and assassinations. Eventually the CIA-backed militia group stages a coup and installs their corporate sympathizer-dictator and the former leaders are propagated as having been radicals or communists and the rest of the world is taught to shrug and view American imperialism as necessary world policing. The CIA has now evolved this whole racket into a careful science which they teach at the infamous “School of the Americas.” They also publish books like “The Freedom Fighter’s Manual” and “The Human Resource Exploitation Training Manual” teaching methods of torture, blackmail, interrogation, propaganda and sabotage to foreign military officials.
Starting in 1954 the CIA ran operations attempting to overthrow the communist North Vietnamese government, while supporting the Ngo Dinh Diem regime in South Vietnam. From 1957 to 1973 the CIA conducted what has been termed "The Secret War" in Laos, during which they carried out almost one coup per year in an effort to overthrow their democracy. After several unsuccessful attempts, the US began a bombing campaign, dropping more explosives and planting more landmines on Laos during this Secret War than during all of World War II. Untold thousands died and a quarter of the Laotian people became refugees, often living in caves. Right up to the present, Laotians are killed/maimed almost daily from unexploded landmines. In 1959 the US helped install "Papa Doc" Duvalier, the Haitian dictator whose factions killed over 100,000. In 1961 CIA Operation Mongoose attempted and failed to overthrow Fidel Castro. Also in 1961 the CIA assassinated the Dominican Republic's leader Rafael Trujillo, assassinated Zaire's democratically-elected Patrice Lumumba, and staged a coup against Ecuador's President Jose Velasco, after which US President JFK fired CIA director Allen Dulles. In 1963 the CIA was back in the Dominican Republic and Ecuador performing military coups overthrowing Juan Bosch and President Arosemena. In 1964 another CIA-funded/armed coup overthrew Brazil's democratically-elected Joao Goulart, replacing him with Dictator General Castelo Branco, CIA-trained secret police, and marauding death squads. In 1965 the CIA performed coups in Indonesia and Zaire and installed oppressive military dictators; General Suharto in Indonesia would then go on to slaughter nearly a million of his countrymen. In 1967 a CIA-backed coup overthrew the government of Greece. In 1968 they helped capture Che Guevara in Bolivia. In 1970 they overthrew Cambodia's popular Prince Sihanouk, an action that greatly strengthened the once minor opposition Khmer Rouge party, who went on to murder millions. In 1971 they backed a coup in Bolivia and installed Dictator Hugo Banzer, who went on to torture and murder over 2,000 of his political opponents. In 1973 they assassinated Chile's democratically-elected Salvador Allende and replaced him with General Augusto Pinochet, who murdered thousands of his civilians. On and on it goes. The Association for Responsible Dissent put out a report estimating that by 1987, 6 million people worldwide had died as a result of CIA covert ops. Since then there have been many untold millions more.
“Throughout the world, on any given day, a man, woman or child is likely to be displaced, tortured, killed or disappeared, at the hands of governments or armed political groups. More often than not, the United States shares the blame.” -Amnesty International annual report on U.S. Military aid and human rights, 1996
From 1979 to 1989, CIA Operation Cyclone, with joint funding from Britain's MI6, heavily armed and trained over 100,000 Afghani Mujahideen ("holy warriors") during the Soviet war in Afghanistan. With the help of the Pakistani ISI (Inter-Services Intelligence), billions of dollars were given to create this Islamic army. Selig Harrison from the Woodrow Wilson International Centre for Scholars stated, "The CIA made a historic mistake in encouraging Islamic groups from all over the world to come to Afghanistan. The US provided $3 billion [now many more billion] for building up these Islamic groups, and it accepted Pakistan's demand that they should decide how this money should be spent … Today that money and those weapons have helped build up the Taliban … [who] are now making a living out of terrorism."
“The United States has been part and parcel to supporting the Taliban all along, and still is let me add … You have a military government in Pakistan now that is arming the Taliban to the teeth … Let me note; that [US] aid has always gone to Taliban areas … And when people from the outside try to put aid into areas not controlled by the Taliban, they are thwarted by our own State Department … Pakistan [has] initiated a major resupply effort, which eventually saw the defeat, and caused the defeat, of almost all of the anti-Taliban forces in Afghanistan.” -Congressional Rep. Dana Rohrbacher, the House International Relations Committee on Global Terrorism and South Asia, 2000
British Foreign Secretary Robin Cook stated before the House of Commons that “Al Qaeda” is not actually a terrorist group, but a database of international Mujahadden and arms dealers/smugglers used by the CIA to funnel arms, money, and guerrillas. The word “Al Qaeda” itself literally translates to “the database.” Not only did the CIA create the Taliban and Al-Qaeda, they continued funding them right up to the 9/11 attacks blamed on them. For example, four months prior to 9/11, in May, 2001, Colin Powell gave another $43 million in aid to the Taliban.
“Not even the corporate US media could whitewash these facts and so explained it away by alleging that US officials had sought cooperation from Pakistan because it was the original backer of the Taliban, the hard-line Islamic leadership of Afghanistan accused by Washington of harboring Bin Laden. Then the so called ‘missing link’ came when it was revealed that the head of the ISI was the principal financier of the 9/11 hijackers ... Pakistan and the ISI is the go between of the global terror explosion. Pakistan's military-intelligence apparatus, which literally created and sponsored the Taliban and Al Qaeda, is directly upheld and funded by the CIA. These facts are not even in dispute, neither in the media nor in government. Therefore when we are told by the neocon heads of the new world order that they are doing everything in their power to dismantle the global terror network what we are hearing is the exact opposite of the truth. They assembled it, they sponsored it and they continue to fund it. As any good criminal should, they have a middleman to provide plausible deniability, that middleman is the ISI and the military dictatorship of Pakistan.” -Steve Watson, “U.S. Intel Officer: Al Qaeda Leadership Allowed to Operate Freely” (http://www.infowars.net/articles/july2007/160707ISI.htm)
In a late-1980s Newsweek article, outspoken opponent of President Bush and recently assassinated Pakistani Prime Minister Benazir Bhutto told George Bush Sr., "you are creating a Frankenstein," concerning the growing Islamist movement. She also came out in 2007 to say that Osama Bin Laden was already long dead, having been murdered by Omar Sheikh. She was murdered herself a month after the interview, only two weeks before Pakistan's 2008 general elections.
Issued from the woods of the Loess Hills a few miles east of
NATCHEZ, MISSISSIPPI, USA
April 29, 2012
CATTLE EGRETS AMONG CATTLE
As in Mexico, around here if you pass by a pasture you're likely to see Cattle Egrets standing among or on the cows, as shown at http://www.backyardnature.net/n/12/120429eg.jpg.
Cattle Egrets in their breeding plumage, like the ones in the picture, can be distinguished from other white egrets and herons by the patches of light orange-brown on their crests and chests. Nonbreeding Cattle Egrets can be all white, and then their relatively thick, yellow beaks and thicker, shorter necks separate them from similar-sized, white herons and egrets found here, such as Snowy Egrets and juvenile Little Blue Herons.
I remember the first time Cattle Egrets were spotted in the rural part of western Kentucky where I grew up, possibly in 1963. Their appearance was so unusual that a farmer not particularly interested in Nature called my parents and said that a whole flock of big white birds had appeared in his pasture, and we went up to take a look. I was in college before I learned that they were Cattle Egrets, BUBULCUS IBIS.
My ornithology teacher told how the birds were undergoing one of the fastest and most widely ranging expansions of distribution ever seen among birds. Originally Cattle Egrets were native to southern Spain and Portugal, tropical and subtropical Africa and humid tropical and subtropical Asia. In the late 1800s they began expanding their range into southern Africa, and were first sighted in the Americas, on the boundary of Guiana and Suriname, in 1877, apparently having flown across the Atlantic Ocean. They didn't get permanently established there until the 1930s, though, but then they began expanding into much of the rest of the Americas, reaching western Kentucky around the early 60s. The species appears still to be expanding northward in western North America, but in the Northeast it seems to be in decline. Though they can turn up as far north as southern Canada, coast to coast, mostly they breed in the US Southeast.
The Wikipedia expert says that Cattle Egrets eat ticks and flies from cattle. They do that, but anyone who watches our birds awhile sees that mainly as the cattle move around they stir up creatures in the grass, which the egrets prey on. The cows' fresh manure also attracts flies for them.
MATING BOX TURTLES
It's interesting to see how turtles manage it, but for many readers familiar with box turtles in other parts of North America the picture may raise the question of why the turtles shown here bear different colors and patterns than theirs. What's happening is that Box Turtles are represented by six intergrading subspecies.
Hillary's Gulf Coast location is supposed to be home to the Gulf Coast subspecies, Terrapene carolina ssp. major. However, that subspecies is described as having a brownish top shell, or carapace, sometimes with a few dull spots or rays, but nothing like these bright, yellow lines. I can't say what's going on. Apparently Box Turtle taxonomy is a bit tricky.
RESTING CRANE FLY
That looks like a mosquito but you can see from how much of the leaf he covers that he's far too large to be any mosquito species found here. Also, he lacks the hypodermic-like proboscis mosquitoes use to suck blood. No conspicuous mouthparts are visible on our crane fly because adult crane flies generally hardly eat at all, only occasionally lapping up a bit of pollen or sugar-rich flower nectar. Their maggot-like larvae feed on plant roots. Some species can damage crops.
Oosterbroek's monumental 2012 Catalogue of the Craneflies of the World -- free and online at http://ip30.eti.uva.nl/ccw/ -- recognizes 15,345 cranefly species, 1630 of them just in our Nearctic ecozone, which embraces the US, Canada, Greenland, and most of northern Mexico.
That's why when I shipped the picture to volunteer identifier Bea in Ontario it took more time than usual for her verdict to come in, and she was comfortable only with calling it the genus TIPULA.
Whatever our species, it's a pleasure to take the close-up shown at http://www.backyardnature.net/n/12/120429cg.jpg.
What are those things below the wings looking like needles with droplets of water at their ends? Those are "halteres," which commonly occur among the Fly Order of Insects, the Diptera. Though their purpose isn't known with certainty, it's assumed that they help control flight, enabling flies to make sudden mid-air changes in direction. From the evolutionary perspective, halteres are modified back wings. Most insects have two pairs, or four, wings, but not the Diptera, as the name implies -- di-ptera, as they say "two-wings" in classical Greek.
ADMIRING THE WHITE OAK
In that picture I'm holding a leaf so you can see its underside, much paler than other leaves' topsides. The tree's gray bark of narrow, vertical blocks of scaly plates is shown at http://www.backyardnature.net/n/12/120429qc.jpg.
I'm accustomed to seeing White Oaks on relatively dry upland soils so I was a little surprised when the tree in the picture showed up on a stream bank growing among Sycamores. In fact, White Oaks are fairly rare around here, completely absent in many upland forests where I'd expect them to be. Years ago I mentioned this in a Newsletter and a local reader responded that in this region White Oaks were wiped out many years ago by people cutting them as lumber and, more importantly, using them in the whisky distilling business. The online Flora of North America says that "In the past Quercus alba was considered to be the source of the finest and most durable oak lumber in America for furniture and shipbuilding."
There beside the stream, last year's crop of our White Oak's acorns had been washed away, but this season's were there in their first stages of growth, as seen at http://www.backyardnature.net/n/12/120429qb.jpg.
Traditionally early North Americans regarded the inner bark of White Oaks as highly medicinal. Extracts made from soaking the inner bark in water are astringent (puckery) and were used for gargling, and the old herbals describe the extract as tonic, stimulating and antiseptic. Other listed uses include for "putrid sore throat," diphtheria, hemorrhages, spongy or bleeding gums, and hemorrhoids. Many applications suggest adding a bit of capsicum, or hot pepper, to the extract.
Basically the notion seems to be that the bark's tannin -- the puckery element -- does the main medicinal service. Other oaks actually have more tannin than White Oak, but medicines made with them can be too harsh. White Oak extracts seem to have just the right amount.
The same tannin situation exists with regard to the edibility of acorns. The acorns of other oaks contain more tannin so they require more time and effort to make them edible. White Oak acorns have much less tannin, but even still there's enough to make them too bitter for humans to eat without treatment, which traditionally has been leaching acorn pulp in running water.
By the way, instructions for the kitchen leaching of acorn pulp appear at http://www.ehow.com/how_8427141_leach-acorns.html.
AMERICAN HOLLY FLOWERING
American Hollies are a different species from the English Holly often planted as ornamentals. American Holly bears larger leaves and produces fewer fruits. Hollies come in male or female trees (they're dioecious), and you can tell from the flowers in the upper, left of the above picture that here we have a male tree. A close-up of a male flower with its four out-thrusting stamens is at http://www.backyardnature.net/n/12/120429hp.jpg.
On a female flower the stamens would be rudimentary and there'd be an ovary -- the future fruit -- in the blossom's center.
Maybe because people are so used to seeing English Hollies planted up north, it's often assumed that they're northern trees. In fact, American Holly is mainly native to the US Southeast, though along the Coastal Plain it reaches as far north as southern Connecticut. Around here it's strictly an understory tree.
The fruits are mildly toxic but you must eat a lot of them to get sick. Birds, deer, squirrels and other animals eat the fruits, which are drupes bearing several hard "stones." No critter seems to relish them, though, saving them mostly to serve as "emergency food" when other foods run out. That might explain why we see hollies holding their red fruits deep into the winter.
"BEGGAR'S LICE" ON MY SOCKS
Several kinds of plants produce stickery little fruits like that and they all can be called Beggar's Lice. When I tracked down the plant attaching its fruits to me, it was what's shown at http://www.backyardnature.net/n/12/120429my.jpg.
Several beggar's-lice-producing plants are similar to that, so before being sure what I really had I had to "do the botany." Here are details I focused on:
Leaves and stems were hairy, and leaves were rounded toward the base, sometimes clasping the stem, as shown at http://www.backyardnature.net/n/12/120429mw.jpg.
A close-up of a "beggar's louse" is shown stuck in my arm hairs at http://www.backyardnature.net/n/12/120429mx.jpg.
That last picture is sort of tricky. For, you expect the thing stuck to you to be a fruit with hooked spines, but the thing in the picture isn't a fruit. It's actually a baglike calyx surrounding much smaller fruit-like things. I crumbled some calyxes between my fingers and part of what resulted is shown at http://www.backyardnature.net/n/12/120429mv.jpg.
The four shiny things are not seeds. Maybe you've seen that the ovary of most mint flowers is divided into four more-or-less distinct parts. Each of those parts is called a nutlet, and that's what you're seeing. But other plant families beside the Mint produce nutlets.
Our beggar's-louse-producing plant is MYOSOTIS DISCOLOR, a member of the Borage Family, the Boraginaceae, which on the phylogenetic Tree of Life is adjacent to the Mint Family. Myosotis discolor is an invasive from Europe that so far has set up residence here and there in eastern and western North America, but so far seems to be absent in the center.
The English name is often given as Changing Forget-me-not, because Myosotis is the Forget-me-not genus, and in Latin dis-color says "two-colored," apparently referring to the fact that the flowers can be white or blue, though all I've seen here are white. But, this rangy little plant you never notice until its calyxes stick to you seems to have nothing to do with Forget-me-nots, unless you look at technical features. I think some editor must have made up the name "Changing Forget-me-not." Our plant very clearly is one of several "Beggar's Lice."
OATS ALONG THE ROAD
A spikelet plucked from the panicle is shown at http://www.backyardnature.net/n/12/120429ov.jpg.
The same spikelet, opened to show the florets inside the glumes, is shown at http://www.backyardnature.net/n/12/120429ou.jpg.
This is Oat grass, AVENA SATIVA, the same species producing the oats of oatmeal. Oat spikelets differ from those of the vast majority of other grasses by the very large, boat-shaped glumes subtending the florets.
Glumes are analogous to a regular flower's calyx, so in that last picture of a spikelet, the glumes are the two large, green-and-white striped items at the left in the photograph. The vast majority of grass spikelets bear glumes much shorter than the florets above them. Also, notice that the slender, stiff, needlelike item, the awn, arises from a floret inside the spikelet and not from a glume.
Remember that you can review grass flower terminology at http://www.backyardnature.net/fl_grass.htm.
The spikelets of most Oat plants don't bear needlelike awns. You're likely to see both awned and awnless kinds growing as weeds in our area. When I first saw the awns I thought this might be one of the "Wild Oat" species, for several species reside in the Oat genus Avena, and one of those grows wild in the US Southeast. However, florets of the other species bear long, brownish hairs, and you can see that ours are hairless, or "glabrous." The other species' awns also are twisted, but regular Oat awns, when present, are rigid and straight. Both Oat species are native to Eurasia.
How did that Oat plant make its way to the side of our isolated Mississippi backroad? Near where the grass grew there was a large game farm where exotic animals are kept so hunters can pay high fees to kill them. I'm betting that the animals are fed oats. Our plant was in an often-flooded spot downstream from the farm, so maybe an oat grain had washed there.
That's a roadcut through a special kind of very fine-grained, wind-deposited sediment called loess. The word loess derives from the German Löß. A deep mantle of loess was deposited here at the end of the last Ice Age about 10,000 years ago. Deep loess deposits occur in a narrow band of upland immediately east of the Mississippi River over most of its entire course. The loess region sometimes is called the Loess Hills. Loess profoundly affects the area's ecology. For one thing, the farther east you go from the Mississippi River, the thinner the loess is, the poorer and more acidic the soil becomes, and the more pines you get instead of broadleaf deciduous trees.
Loess is so important here, and so interesting, that years ago I developed a web portal called "Loess Hills of the Lower Mississippi Valley," at http://www.backyardnature.net/loess/loess.html.
I had hoped to engage local folks in an effort to recognize the Loess Hills as a very interesting, scenic and biologically important, distinct region with ecotourism potential, but nothing ever came from it. At that site you can learn how "loess" can be pronounced, how it came to exist here, what's special about it, and much more.
One thing special about loess is that it erodes into vertical-sided roadcuts as in the picture. People such as road engineers who try to create gentle slopes are doomed to failure. I wish my farming Maya friends in the Yucatan, who must deal with very thin, rocky soil, could see the thick mantle of rich loess we have here.
NO MORE EMAILED NEWSLETTERS
From now on, to read the Newsletters you'll just have to remember to check out the most recently issued edition at http://www.backyardnature.net/n/.
Today's Newsletter is there now waiting for readers, with stories about Cattle Egrets, mating Box Turtles, craneflies, flowering holly trees, Beggar's Lice and more.
If you're on Facebook you can find the Facebook Newsletter page by searching for "Jim Conrad's Naturalist Newsletter." The weekly message left there will link to individual pages with images embedded in text. In today's message, for instance, you can click on "Cattle Egrets" and see a regular web page with text and a photo. I've configured my Facebook page to have a subscribe tab but so far one hasn't appeared. My impression is that if you "like" the Newsletter page, each week you'll receive a message with its link. Maybe not. I'm still figuring it out.
So, this is the end of eleven years of weekly delivered emails.
At first I was upset and annoyed, and thought of writing the 2,158 subscribers suggesting that complaints be made to FatCow at email@example.com. However, something interesting has happened.
Last week about a dozen subscribers accepted my invitation to check out the Newsletter's Facebook page. When they "liked" the page, I got to see their pictures, or at least their avatars. There were all kinds of folks, old and young, skinny and fat, white and brown, serious and joking, one fellow on a boat in Maine, a lady in India with a dot, or Bindi, in the middle of her forehead, someone's baby picture... What an amazing thing that all these people were interested in what I'd written!
So, in a way, FatCow.com's treatment has been a gift. It's resensitized me to my readership. Also, it's nudged me into a mental space where now I'm mentally prepared for the whole BackyardNature.net site to be removed permanently, for whatever reason they come up with. That extra sense of independence means a lot to me. Now if need be I'm ready to write Newsletters and just keep them in my computer, or write them in a notebook hidden in my trailer, or write them on leaves that I let float down the Mississippi River. I've already learned how to make ink from oak galls.
So, we're evolving here. I'm yielding when it's clear that the forces against us control critical resources, but I'm ready to experiment with new possibilities as they appear, and I continue to think, feel and write about the world around us, and share when I'm allowed to.
Good luck in your own evolutions. And thanks for these years of weekly inviting me into your lives.
Best wishes to all Newsletter readers,
To subscribe OR unsubscribe to this Newsletter, go to www.backyardnature.net/news/natnat.php.
Post your own backyard-nature observations and thoughts at http://groups.google.com/group/backyard-nature/
All previous Newsletters are archived at www.backyardnature.net/n/.
Visit Jim's backyard nature site at www.backyardnature.net | <urn:uuid:ed524a66-2327-4328-a7fc-f00a0d14d470> | CC-MAIN-2013-20 | http://www.backyardnature.net/n/12/120429.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.959242 | 4,095 | 3.59375 | 4 |
Tuscaloosa, located at the falls of the Black Warrior River in west central Alabama, is the the fifth-largest city in Alabama with a population of 90,468, and the seat of Tuscaloosa County. It is named for the Choctaw chieftain Tuskalusa (meaning Black Warrior), who battled and was defeated by Hernando de Soto in 1540 in the Battle of Mauvila.
Best known as the home of the University of Alabama, Tuscaloosa is also the center of industry, commerce, healthcare, and education for the region commonly known as West Alabama.
The area at the fall line of what would be later known as the Black Warrior River had long been well known to the various Indian tribes whose shifting fortunes brought them to West Alabama. The river shoals at Tuscaloosa represented the southernmost site on the river which could be forded under most conditions. Inevitably, a network of Indian trails converged upon the place, the same network which, in the first years of the 19th Century began to lead a few white frontiersmen to the area.
The pace of white settlement increased greatly after the War of 1812, and a small assortment of log cabins soon arose near the large Creek village at the fall line of the river, which the settlers named in honor of the legendary Chief Tuscaloosa. In 1817, Alabama became a territory, and on December 13, 1819, the territorial legislature incorporated the town of Tuscaloosa, exactly one day before the United States Congress admitted Alabama to the Union as a state.
From 1826 to 1846 Tuscaloosa was the capital of Alabama. During this period, in 1831, the University of Alabama was established. The town's population and economy grew rapidly until the departure of the capital to Montgomery caused a rapid decline in population. Establishment of the Bryce State Hospital for the Insane in Tuscaloosa in the 1850s helped restore the city's fortunes. During the Civil War following Alabama's secession from the Union, several thousand men from Tuscaloosa fought in the Confederate armies. During the last weeks of the War, a brigade of Union troops raiding the city burned the campus of the University of Alabama. Tuscaloosa, too, suffered much damage from the battle and shared fully in the South's economic sufferings which followed the defeat.
The construction of a system of locks and dams on the Black Warrior River by the U.S. Army Corps of Engineers in the 1890s opened up an inexpensive link to the Gulf seaport of Mobile, stimulating especially the mining and metallurgical industries of the region. By the advent of the 20th Century, the growth of the University of Alabama and the mental health-care facilities in the city, along with strong national economy fueled a steady growth in Tuscaloosa which continued unabated for 100 years. Manufacturing plants of large firms such as Michelin and JVC located in town during the latter half of the 20th Century. However, it was the announcement of the addition of the Mercedes-Benz US International assembly plant in 1993 that best personified the new era of economic prosperity for Tuscaloosa.
Geography and climate
According to the U.S. Census Bureau, Tuscaloosa has a total area of 66.7 square miles. 56.2 mi² of it is land and 10.5 mi² of it (15.7%) is water. Most of water within the city limits is in Lake Tuscaloosa, which is entirely in the city limits, and the Black Warrior River.
Tuscaloosa lies approximately 60 miles southwest of Birmingham, at the fall line of the Black Warrior River on the boundary between the Appalachian Highland and the Gulf Coastal Plain approximately 120 miles upriver from its confluence with the Tombigbee River in Demopolis. Consequently, the geography of the area around Tuscaloosa is quite diverse, being hilly and forested to the northeast and low-lying and marshy to the southwest.
The area experiences a typical Southern subtropical climate with four distinct seasons. The Gulf of Mexico heavily influences the climate by supplying the region with warm, moist air. During the fall, winter and spring seasons, the interaction of this warm, moist air with cooler, drier air from the North along fronts create precipitation.
Notable exceptions occur during hurricane season where storms may move from due south to due north or even from east to west during land-falling hurricanes. The interaction between low- and high-pressure air masses is most pronounced during the severe weather seasons in the spring and fall. During the summer, the jet streams flows well to the north of the southeastern U.S., and most precipitation is consequently convectional, that is, caused by the warm surface heating the air above.
Winter lasts from mid-December to late-February; temperatures range from the mid-20s to the mid-50s. On average, the low temperature falls at freezing or below about 50 days a year. While rain is abundant (an average 5.09 in. per month from Dec.-Feb.), measurable snowfall is rare; the average annual snowfall is about 0.6 inches. Spring usually lasts from late-February to mid-May; temperatures range from the mid-50s to the low-80s and monthly rainfall amounts average about 5.05 in. (128 mm) per month. Summers last from mid-May to mid-September; temperatures range from the upper-60s to the mid-90s, with temperatures above 100°F not uncommon, and average rainfall dip slightly to 3.97 in. per month. Autumn, which spans from mid-September to early-December, tends to be similar to Spring terms of temperature and precipitation.
As of the census of 2000 there were 77,906 people, 31,381 households, and 16,945 families residing in the city. The population density was 1,385.2/mi². There were 34,857 housing units at an average density of 619.8/mi². The racial makeup of the city was 54% White and 43% Black or African American. 1.40% of the population were Hispanic or Latino of any race.
There were 31,381 households out of which 23.9% had children under the age of 18 living with them, 35.0% were married couples living together, 15.7% had a female householder with no husband present, and 46.0% were non-families. 35.2% of all households were made up of individuals and 9.3% had someone living alone who was 65 years of age or older. The average household size was 2.22 and the average family size was 2.93.
In the city the population was spread out with 19.8% under the age of 18, 24.5% from 18 to 24, 25.4% from 25 to 44, 18.5% from 45 to 64, and 11.8% who were 65 years of age or older. The median age was 28 years. For every 100 females there were 90.8 males. For every 100 females age 18 and over, there were 87.9 males.
The median income for a household in the city was $27,731, and the median income for a family was $41,753. Males had a median income of $31,614 versus $24,507 for females. The per capita income for the city was $19,129. About 14.2% of families and 23.6% of the population were below the poverty line, including 25.3% of those under age 18 and 13.4% of those age 65 or over.
Government and Politics
Tuscaloosa has a strong-mayor variant, mayor-council form of government, led by a mayor and a seven-member city council. The mayor is elected by the city at-large and serves four-year terms. Council members are elected to single-member districts every four years as well. Neither the mayor nor the members of the city council is term-limited. All elected offices are nonpartisan.
The mayor administers the day-to-day operations of the city, including overseeing the various city departments, over whom he has hiring and firing power. The mayor also acts as ambassador of the city. The mayor sits in city council meetings and has a tie-breaking vote. The current Mayor of Tuscaloosa is Walter Maddox, who was elected to office is September 2005. Prior to Maddox, Alvin A. DuPont had served as mayor for 24 years.
The city council is a legislative body that considers policy and passes law. The council also passes the budget for mayoral approval. Any resolution passed by the council is binding law. The majority of work in the council is done by committee, a usually consisting of a chairman, two other council members, and relevant non-voting city employees.
|3||Cynthia Lee Almond||2005|
|7||William Tinker, III||2005|
Tuscaloosa, as the largest county seat in western Alabama, serves a hub of state and federal government agencies. In addition to the customary offices associated with the county courthouse, namely two District Court Judges, six Circuit Court Judges, the District Attorney and the Public Defender, several Alabama state government agencies have regional offices in Tuscaloosa, such as the Alabama Department of Transportation and the Alabama State Troopers. Also, several federal agencies operate bureaus out of the Federal Courthouse in Tuscaloosa.
Tuscaloosa is located partially in both the 6th and 7th Congressional Districts, which are represented by Spencer Bachus and Artur Davis respectively. On the state level, the city is split among the 5th, 21st, and 24th Senate districts and 62nd, 63rd, and 70th House districts in the Alabama State Legislature.
Despite its image as a college town, Tuscaloosa boasts a diversified economy based on all sectors of manufacturing and service. 25% of the labor force in the Tuscaloosa Metropolitan Statistical area is employed by the federal, state, and local government agencies. 16.7% is employed in manufacturing; 16.4% in retail trade and transportation; 11.6% in finance, information, and private enterprise; 10.3% in mining and construction; and 9.2% in hospitality. Education and healthcare account for only 7.2% of the area workforce with the remainder employed in other services.
The city's industrial base includes Elk Corporation of Alabama, Nucor Steel Tuscaloosa, BF Goodrich Tire Manufacturing, JVC America, Phifer Incorporated, Gulf States Paper Corporation, and the Mercedes-Benz U.S. International, Inc., assembly plant.
Health-care and education serve as the cornerstone of Tuscaloosa's service sector, which includes the University of Alabama, DCH Regional Medical Center, Bryce State Mental Hospital, the William D. Partlow Developmental Center, and the Tuscaloosa VA Medical Center.
The University of Alabama is the dominant institution of higher learning. Enrolling approximately 24,000 students, UA has been a part of Tuscaloosa's identity since it opened its doors in 1831. Stillman College, which opened in 1875, is a historically Black liberal arts college which enrolls approximately 1,200 students. Additionally, Shelton State Community College, one of the largest in Alabama, is located in the city. The school enrolls 8,000 students from all backgrounds and income levels.
The Tuscaloosa City School System serves the city. It is overseen by the Board of Education, which is composed of eight members elected by district and a chairman is elected by a citywide vote. Operating with a $100 million budget, the system enrolls approximately 10,300 students. The system consists of 19 schools: 11 elementary schools, 3 middle schools, 3 high schools (Paul Bryant High School, Central High School, and Northridge High School), and 2 specialty schools (the Tuscaloosa Center for Technology and Oak Hill School for special needs students). In 2002, the system spent $6,313 per pupil, the 19th highest amount of the 120 school systems in the state.
Tuscaloosa is home to a variety of cultural sites and events reflective of its historical and modern role in Alabama and the Southeast in general. Many of these cultural events are sponsored by the University of Alabama. Numerous performing arts groups and facilities, historical sites, and museums dedicated to subjects as varying as American art and collegiate football dot the city. During football season the area known as "The Strip" pulsates with students, alumni, locals and visitors.
The Tuscaloosa Public Library is a city/county agency with nearly 200,000 items on catalog. 46,857 registered patrons use the library on a regular basis — roughly 28 % of the population of the county. There are currently with three branches: the Main Branch on Jack Warner Parkway, the Weaver-Bolden Branch, and the Brown Branch in Taylorville.
Most of the museums in Tuscaloosa are found downtown or on the campus of the University. Downtown is the home of Children’s Hands-On Museum of Tuscaloosa and the Murphy African-American Museum. The Alabama Museum of Natural History and the Paul Bryant Museum are located on the University campus. The Westervelt-Warner Museum of American Art is located in northern Tuscaloosa at Jack Warner's NorthRiver Yacht Club. Moundville Archaeological Park and the Jones Archaeological Museum are located 15 miles south of Tuscaloosa in Moundville.
The University Alabama also currently fields championship–caliber teams in football, men's baseball, men's and women's basketball, women's gymnastics, and women's softball. These teams play in athletics facilities on the University campus, including Bryant-Denny Stadium, Coleman Coliseum, Sewell-Thomas Baseball Stadium, Alabama Softball Complex, and the Ol' Colony Golf Complex.
Stillman College fields teams in football, basketball, and other sports. In the past decade, Stillman has gone through a renaissance of renovations, including a new football stadium.
Shelton State fields men's and women's basketball, baseball, and softball teams, each with on-campus facilities.
Tuscaloosa is part of the Birmingham-Tuscaloosa-Anniston television market, which is the 40th largest in the nation. All major networks have a presence in the market. WBMA-LP is the ABC affiliate, WIAT-TV is the CBS affiliate, WBRC 6 is the Fox affiliate, WVTM-TV is the NBC affiliate, WBIQ 10 is the PBS affiliate, WTTO is the CW affiliate, and WABM is the MyNetworkTV affiliate. Additionally, WVUA-CA, an independent station, is operated by the University of Alabama.
Health and medicine
DCH Regional Medical Center is the main medical facility in Tuscaloosa. Other major medical centers in Tuscaloosa include the 702-bed VA Medical Center and the 422-bed Bryce State Mental Hospital.
The city lies at the intersection of U.S. Highway 11, U.S. Highway 43, and U.S. Highway 82, Alabama State Route 69, Alabama State Route 215, and Alabama State Route 216) and the duplexed (conjoined) I-20 and I-59. Interstate 359 spurs off from I-20/I-59 and heads northward, ending just shy of the Black Warrior River in downtown Tuscaloosa.
Tuscaloosa is served by the Tuscaloosa Transit Authority which operates the Tuscaloosa Trolley System.
The Tuscaloosa Regional Airport, is located on the north side of the Black Warrior River west of downtown Northport.
Barge traffic routinely transports goods along the Black Warrior River from Birmingham and Tuscaloosa to the Alabama State Docks at Mobile, on the coast of the Gulf of Mexico. Via the Tennessee-Tombigbee Waterway, the city is connected to the Ohio River valley.
"Tuscaloosa, Alabama." Wikipedia, The Free Encyclopedia. 26 April 2007, 02:03 UTC . Accessed 30 April 2007. | <urn:uuid:c2ed6c85-f348-40de-9aa5-65969c027430> | CC-MAIN-2013-20 | http://www.bhamwiki.com/w/Tuscaloosa | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962809 | 3,379 | 3.234375 | 3 |
What is bone cancer?
Bone is the framework that supports the body. Most bones are hollow. Bone marrow is the soft tissue inside hollow bones. The main substance of bone is made up of a network of fibrous tissue onto which calcium salts are laid down. This makes the bone very hard and strong. At each end of the bone is a softer bone-like tissue called cartilage that acts as a cushion between bones. The outside of the bone is covered with a layer of fibrous tissue.
The bone itself contains 2 kinds of cells. Osteoblasts are cells that form the bone. Osteoclasts are cells that dissolve bone. Although we think that bone does not change, the truth is that it is very active. New bone is always forming and old bone dissolving.
The marrow of some bones is only fatty tissue. In other bones the marrow is a mixture of fat cells and the cells that make blood cells. These blood-forming cells make red blood cells, white blood cells, and platelets.
Types of bone tumors
Most of the time when someone is told they have cancer in their bones, the doctor is talking about a cancer that started somewhere else and then spread to the bone. This is called metastatic cancer (not bone cancer). This can happen to people with many different types of advanced cancer, such as breast cancer, prostate cancer, lung cancer, and many others. Under a microscope, theses cancer cells in the bone look like the cancer cells that they came from. If someone has lung cancer that has spread to the bone, the cells there will look and act like lung cancer cells and they will be treated the same way.
To learn more about cancer that has spread to bone, please see the American Cancer Society document Bone Metastasis, as well as the document on the place where the cancer started (Breast Cancer, Lung Cancer (Non-Small Cell), Prostate Cancer, etc.).
Other kinds of cancers that are sometimes called “bone cancers” start in the bone marrow – in the blood-forming cells – not the bone itself. These are not true bone cancers. The most common of these is multiple myeloma. Certain lymphomas (which more often start in lymph nodes) and all leukemias start in bone marrow. To learn more about these cancers, refer to the document for each.
A primary bone tumor starts in the bone itself. True (or primary) bone cancers are called sarcomas. A sarcoma is a cancer that starts in bone, muscle, tendons, ligaments, fat tissue, or some other tissues in the body.
There are different types of bone tumors. Their names are based on the bone or tissue that is involved and the kind of cells that make up the tumor. Some are cancer (malignant). Others are not cancer (benign). Most bone cancers are called sarcomas.
Benign bone tumors do not spread to other tissues and organs. They can usually be cured by surgery. The information here does not cover benign bone tumors.
Bone tumors that are cancer (malignant)
Osteosarcoma: Osteosarcoma (also called osteogenic sarcoma) is the most common true bone cancer. It is most common in young people between the ages of 10 and 30. But about 10% of cases are people in their 60s and 70s. This cancer is rare during middle age. More males than females get this cancer. These tumors start most often in bones of the arms, legs, or pelvis. This type of bone cancer is not discussed in this document, but is covered in detail in our document, Osteosarcoma.
Chondrosarcoma: This is cancer of the cartilage cells. Cartilage is a softer form of bone-like tissue. Chondrosarcoma is the second most common true bone cancer. It is rare in people younger than 20. After age 20, the risk of this cancer keeps on rising until about age 75. Women get this cancer as often as men.
Chondrosarcomas can develop in any place where there is cartilage. It most often starts in cartilage of the pelvis, leg, or arm, but it can start in many other places, too.
Chondrosarcomas are given a grade, which measures how fast they grow. The lower the grade, the slower the cancer grows. When cancer grows slowly, the chance that it will spread is lower and the outlook is better. There are also some special types of chondrosarcoma that respond differently to treatment and have a different outlook for the patient. These special types look different when seen under a microscope.
Ewing tumor: This cancer is also called Ewing sarcoma. It is named after Dr. James Ewing, the doctor who first described it in 1921. It is the third most common bone cancer. Most Ewing tumors start in bones, but they can start in other tissues and organs. This cancer is most common in children and teenagers. It is rare in adults older than 30. This type of bone cancer is not discussed in this document, but is covered in detail in our document, Ewing Family of Tumors.
Malignant fibrous histiocytoma (MFH): This cancer more often starts in the soft tissues around bones (such as ligaments, tendons, fat, and muscle) rather than in the bone itself. If it starts in the bones, it most often affects the legs or arms. It usually occurs in older and middle-aged adults. MFH mostly tends to grow into nearby tissues, but it can spread to distant sites, like the lungs. (Another name for this cancer is pleomorphic undifferentiated sarcoma.)
Fibrosarcoma: This is another type of cancer that starts more often in “soft tissues” than it does in the bones. Fibrosarcoma usually occurs in older and middle-aged adults. Leg, arm, and jaw bones are most often affected.
Giant cell tumor of bone: This type of bone tumor has both benign (not cancer) and malignant forms. The benign form is most common. These don’t often spread to distant sites, but after surgery they tend to come back where they started. Each time they come back after surgery they are more likely to spread to other parts of the body. These tumors often affect the arm or leg bones of young and middle-aged adults.
Chordoma: This tumor usually occurs in the base of the skull and bones of the spine. It is found most often in adults older than 30. It is about twice as common in men than in women. Chordomas tend to grow slowly and usually do not spread to other parts of the body. But they often come back in the same place if they are not removed completely. When they do spread, they tend to go to the lymph nodes, lungs, and liver.
Last Medical Review: 12/05/2012
Last Revised: 01/24/2013 | <urn:uuid:cc6f91ff-3151-4cd9-8163-a7274ef9de2f> | CC-MAIN-2013-20 | http://www.cancer.org/cancer/bonecancer/overviewguide/bone-cancer-overview-what-is-bone-cancer | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.959633 | 1,460 | 3.90625 | 4 |
Multistate Outbreak of Human Salmonella Chester Infections (Final Update)
Posted September 9, 2010
This outbreak appears to be over. However, Salmonella is an important cause of human illness in the United States. More information about Salmonella, and steps people can take to reduce their risk of infection, can be found on the CDC Salmonella Web Page.
Persons Infected with the Outbreak Strain of Salmonella Chester, United States, by State
Infections with the Outbreak Strain of Salmonella Chester, by Week of Illness Onset
CDC collaborated with public health officials in many states, the U.S. Department of Agriculture's Food Safety and Inspection Service (USDA/FSIS), and the U.S. Food and Drug Administration (FDA) to investigate a multistate outbreak of Salmonella serotype Chester infections. Investigators used DNA analysis of Salmonella bacteria obtained through diagnostic testing to identify cases of illness that were part of this outbreak.
As of 9:00 AM EDT on August 27, 2010, a total of 44 individuals infected with a matching strain of Salmonella Chester have been reported from 18 states since April 11, 2010. The number of ill people identified in each state with this strain is as follows: AK (1), CA (5), CO (2), GA (8), IL (1), KY (1), MA (2), MN (2), MO (1), NC (1), OK (1), OR (2), SC (2), TN (1), TX (3), UT (3), VA (4), and WA (4). Among those for whom information is available about when symptoms started, illnesses began between April 4, 2010 and June 16, 2010. Case-patients ranged in age from <1 to 88 years old, and the median age was 36 years. Fifty-four percent of patients were female. Among the 43 patients with available hospitalization information, 16 (37%) were hospitalized. No deaths were reported.
The outbreak can be visually described with a chart showing the number of people who became ill each day. This chart is called an epidemic curve or epi curve. For more details, please see the Salmonella Outbreak Investigations: Timeline for Reporting Cases.
Investigation of the Outbreak
A widely distributed contaminated food product might cause illnesses across the United States. The identity of the contaminated product often is not readily apparent. In outbreaks like this one, identification of the contaminated product requires conducting detailed standardized interviews with persons who were ill. It may also require conducting interviews with non-ill members of the public ("controls") to get information about foods recently eaten and other exposures to compare with information from the ill persons. The investigation is often supplemented by laboratory testing of suspected products.
Collaborative investigative efforts of officials in many local, state, and federal public health, agriculture, and regulatory agencies linked this outbreak to Marie Callender’s Cheesy Chicken & Rice single-serve frozen entrées. During June 14-18, 2010, CDC and public health officials in multiple states conducted an epidemiologic study by comparing foods eaten by 19 ill and 22 well persons. Analysis of this study suggested that eating a Marie Callender's frozen meal was a source of illness. Ill persons (89%) were significantly more likely than well persons (14%) to report eating a frozen meal. All ill persons (100%) who ate frozen meals reported eating a Marie Callender's frozen meal. None of the well persons who ate a frozen meal reported eating a Marie Callender's frozen meal. There was insufficient data from this study to implicate a specific frozen meal type. However, many of the ill persons reported eating a Marie Callender's Cheesy Chicken & Rice frozen entrée in the week before becoming ill. Additionally, two unopened packages of Marie Callender’s Cheesy Chicken & Rice single-serve frozen entrées collected from two patients’ homes (one collected in Minnesota on June 18, and one in Tennessee on July 19) yielded Salmonella Chester isolates with a genetic fingerprint indistinguishable from the outbreak pattern.
On June 17, 2010, ConAgra Foods announced a precautionary recall of Marie Callender's Cheesy Chicken & Rice single-serve frozen entrées after being informed by the CDC of a possible association between this product and the outbreak of Salmonella Chester infections.
On June 17, 2010, USDA's FSIS announced ConAgra's recall.
View recalled food package [PDF - 6 pages] posted by FSIS.
Clinical Features/Signs and Symptoms
Most people infected with Salmonella develop diarrhea, fever, and abdominal cramps 12–72 hours after infection. Infection is usually diagnosed by culture of a stool sample. The illness usually lasts 4 to 7 days. Although most people recover without treatment, severe infections can occur. Infants, elderly people, and those with weakened immune systems are more likely than others to develop severe illness. When severe infection occurs, Salmonella may spread from the intestines to the bloodstream and then to other body sites and can cause death unless the person is treated promptly with antibiotics.
More general information about Salmonella can be found here under Salmonella FAQs.
Advice to Consumers
- Salmonella is sometimes present in raw foods (e.g., chicken, produce, and spices) which can be used as ingredients in not-ready-to-eat frozen dinners.
- Consumers should follow the instructions on the package label of the frozen dinner. Conventional ovens are better at cooking foods thoroughly. Microwave ovens vary in strength and tend to cook foods unevenly.
- If you choose to cook the frozen dinner using a microwave, be sure to:
- Cook the food for the time specified for your microwave's wattage.
- Let the food "stand" for the stated time, so cooking can continue.
- Use a food thermometer to make sure that it is fully cooked to an internal temperature of 165 degrees Fahrenheit.
- Individuals who think they might have become ill from eating a Marie Callender's frozen dinner should consult their health care providers.
- Consumers who have Marie Callender's Cheesy Chicken & Rice single-serve frozen entrées in their freezer should discard them or return them to their retailer for a refund.
- Consumers are urged to read and follow the preparation instructions on the label of all frozen entrees. If the package says “Do Not Microwave,” consumers should follow that instruction and use a conventional oven. Consumers should use a food thermometer to make sure the entrees reach at least 165 degrees Fahrenheit.
- General Information: Salmonella
- Description of the Steps In a Foodborne Outbreak Investigation
- CDC's Role During a Multi-State Foodborne Outbreak Investigation
- Two Minnesota cases of Salmonella infection linked to national recall of frozen meals
- Cooking Safely in the Microwave Oven
CDC's Role in Food Safety
As an agency within the U.S. Department of Health and Human Services (HHS), CDC leads federal efforts to gather data on foodborne illnesses, investigate foodborne illnesses and outbreaks, and monitor the effectiveness of prevention and control efforts. CDC is not a food safety regulatory agency but works closely with the food safety regulatory agencies, in particular with HHS's U.S. Food and Drug Administration (FDA) and the Food Safety and Inspection Service within the U.S. Department of Agriculture (USDA). CDC also plays a key role in building state and local health department epidemiology, laboratory, and environmental health capacity to support foodborne disease surveillance and outbreak response. Notably, CDC data can be used to help document the effectiveness of regulatory interventions. | <urn:uuid:e80979e4-dc90-433e-87c7-2ff9f0083129> | CC-MAIN-2013-20 | http://www.cdc.gov/salmonella/chester/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.93729 | 1,598 | 2.65625 | 3 |
NOAA scientists agree the risks are high, but say Hansen overstates what science can really say for sure
Jim Hansen at the University of Colorado’s World Affairs Conference (Photo: Tom Yulsman)
Speaking to a packed auditorium at the University of Colorado’s World Affairs Conference on Thursday, NASA climatologist James Hansen found a friendly audience for his argument that we face a planetary emergency thanks to global warming.
Despite the fact that the temperature rise has so far been relatively modest, “we do have a crisis,” he said.
With his characteristic under-stated manner, Hansen made a compelling case. But after speaking with two NOAA scientists today, I think Hansen put himself in a familiar position: out on a scientific limb. And after sifting through my many pages of notes from two days of immersion in climate issues, I’m as convinced as ever that journalists must be exceedingly careful not to overstate what we know for sure and what is still up for scientific debate.
Crawling out on the limb, Hansen argued that global warming has already caused the levels of water in Lake Powell and Lake Mead — the two giant reservoirs on the Colorado River than insure water supplies for tens of millions of Westerners — to fall to 50 percent of capacity. The reservoirs “probably will not be full again unless we decrease CO2 in the atmosphere,” he asserted.
Hansen is arguing that simply reducing our emissions and stabilizing CO2 at about 450 parts per million, as many scientists argue is necessary, is not nearly good enough. We must reduce the concentration from today’s 387 ppm to below 35o ppm.
“We have already passed into the dangerous zone,” Hansen said. If we don’t reduce CO2 in the atmosphere, “we would be sending the planet toward an ice free state. We would have a chaotic journey to get there, but we would be creating a very different planet, and chaos for our children.” Hansen’s argument (see a paper on the subject here) is based on paleoclimate data which show that the last time atmospheric CO2 concentrations were this high, the Earth was ice free, and sea level was far higher than it is today.
“I agree with the sense of urgency,” said Peter Tans, a carbon cycle expert at the National Oceanic and Atmospheric Administration here in Boulder, in a meeting with our Ted Scripps Fellows in Environmental Journalism. “But I don’t agree with a lot of the specifics. I don’t agree with Jim Hansen’s naming of 350 ppm as a tipping point. Actually we may have already gone too far, except we just don’t know.”
A key factor, Tans said, is timing. “If it takes a million years for the ice caps to disappear, no problem. The issue is how fast? Nobody can give that answer.”
Martin Hoerling, a NOAA meteorologist who is working on ways to better determine the links between climate change and regional impacts, such as drought in the West, pointed out that the paleoclimate data Hansen bases his assertions on are coarse. They do not record year-to-year events, just big changes that took place over very long time periods. So that data give no indication just how long it takes to de-glaciate Antarctica and Greenland.
Hoerling also took issue with Hansen’s assertions about lakes Powell and Mead. While it is true that “the West has had the most radical change in temperature in the U.S.,” there is no evidence yet that this is a cause of increasing drought, he said.
Flows in the Colorado River have been averaging about 12 million acre feet each year, yet we are consuming 14 million acre feet. “Where are we getting the extra from? Well, we’re tapping into our 401K plan,” he said. That would be the two giant reservoirs, and that’s why their water levels have been declining.
“Why is there less flow in the river?” Hoerling said. “Low precipitation — not every year, but in many recent years, the snow pack has been lower.” And here’s his almost counter-intuitive point: science shows that the reduced precipitation “is due to natural climate variability . . . We see little indication that the warming trend is affecting the precipitation.”
In my conversation with Tans and Hoerling today, I saw a tension between what they believe and what they think they can demonstrate scientifically.
“I like to frame the issue differently,” Tans said. “Sure, we canot predict what the climate is going to look like in a couple of dcades. There are feedbacks in the system we don’t understand. In fact, we don’t even know all the feedbacks . . . To pick all this apart is extremely difficult — until things really happen. So I’m pessimistic.”
There is, Tans said, “a finite risk of catastrophic climate change. Maybe it is 1 in 6, or maybe 1 in 20 or 1 in 3. Yet if we had a risk like that of being hit by an asteroid, we’d know what to do. But the problem here is that we are the asteroid.”
Tans argues that whether or not we can pin down the degree of risk we are now facing, one thing is obvious: “We have a society based on ever increasing consumption and economic expectations. Three percent growth forever is considered ideal. But of course it’s a disaster.”
Hoerling says we are living like the Easter Islanders, who were faced with collapse from over consumption of resources but didn’t see it coming. Like them, he says, we are living in denial.
“I think we are in that type of risk,” Tans said. “But is that moving people? It moves me. But I was already convinced in 1972.” | <urn:uuid:f9441dcc-dc2a-4077-aac8-1b49394182e2> | CC-MAIN-2013-20 | http://www.cejournal.net/?p=1590 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.964949 | 1,273 | 2.546875 | 3 |
1854-89 THREE DOLLARS INDIAN HEAD
In 1853 the United States negotiated the "Gadsden Purchase"settlement of a boundary dispute with Mexico that resulted in the U.S. acquiring what would become the southern portions of Arizona and New Mexico for ten million dollars. The following year Commodore Matthew Perry embarked upon his famed expedition to re-open Japan to the Western world and establish trade. Spreading beyond its borders in many ways, a few years earlier the United States had joined the worldwide move to uniform postage rates and printed stamps when the Congressional Act of March 3, 1845 authorized the first U.S. postage stamps, and set the local prepaid letter rate at five cents. This set the stage for a close connection between postal and coinage history.
Exactly six years later, the postage rate was reduced to three cents when New York Senator Daniel S. Dickinson fathered legislation that simultaneously initiated coinage of the tiny silver three-cent piece as a public convenience. The large cents then in circulation were cumbersome and unpopular, and the new denomination was designed to facilitate the purchase of stamps without using the hated "coppers."
This reasoning was carried a step further when the Mint Act of February 21, 1853 authorized a three-dollar gold coin. Congress and Mint Director Robert Maskell Patterson were convinced that the new coin would speed purchases of three-cent stamps by the sheet and of the silver three-cent coins in roll quantities. Unfortunately, at no time during the 35-year span of this denomination did public demand justify these hopes. Chief Engraver James Barton Longacre chose an "Indian Princess" for his obverse not a Native American profile, but actually a profile modeled after the Greco-Roman Venus Accroupie statue then in a Philadelphia museum. Longacre used this distinctive sharp-nosed profile on his gold dollar of 1849 and would employ it again on the Indian Head cent of 1859. On the three-dollar coin Liberty is wearing a feathered headdress of equal-sized plumes with a band bearing LIBERTY in raised letters. She's surrounded by the inscription UNITED STATES OF AMERICA. Such a headdress dates back to the earliest known drawings of American Indians by French artist Jacques le Moyne du Morgue's sketches of the Florida Timucua tribe who lived near the tragic French colony of Fort Caroline in 1562. It was accepted by engravers and medalists of the day as the design shorthand for "America."
Longacre's reverse depicted a wreath of tobacco, wheat, corn and cotton with a plant at top bearing two conical seed masses. The original wax models of this wreath still exist on brass discs in a Midwestern collection and show how meticulous Longacre was in preparing his design. Encircled by the wreath is the denomination 3 DOLLARS and the date. There are two boldly different reverse types, the small DOLLARS appearing only in 1854 and the large DOLLARS on coins of 1855-89. Many dates show bold "outlining" of letters and devices, resembling a double strike but probably the result of excessive forcing of the design punches into the die steel, causing a hint of their sloping "shoulders" to appear as part of the coin's design. The high points of the obverse design that first show wear are the cheek and hair above the eye; on the reverse, check the bow knot and leaves.
A total of just over 535,000 pieces were issued along with 2058 proofs. The first coins struck were the 15 proofs of 1854. Regular coinage began on May 1, and that first year saw 138,618 pieces struck at Philadelphia (no mintmark), 1,120 at Dahlonega (D), and 24,000 at New Orleans (O). These two branch mints would strike coins only in 1854. San Francisco produced the three-dollar denomination in 1855, 1856, and 1857, again in 1860, and apparently one final piece in 1870. Mintmarks are found below the wreath.
Every U.S. denomination boasts a number of major rarities. The three-dollar gold coinage of 1854-1889 is studded with so many low-mintage dates that the entire series may fairly be called rare. In mint state 1878 is the most common date, followed by the 1879, 1888, 1854 and 1889 issues. Every other date is very rare in high grade, particularly 1858, 1865, 1873 Closed 3 and all the San Francisco issues. Minuscule mintages were the rule in the later years. Proof coins prior to 1859 are extremely rare and more difficult to find than the proof-only issues of 1873 Open 3, 1875 and 1876, but many dates are even rarer in the higher Mint State grades. This is because at least some proofs were saved by well- heeled collectors while few lower-budget collectors showed any interest in higher-grade business strikes of later-date gold. Counterfeits are known for many dates; any suspicious piece should be authenticated.
The rarest date of all is the unique 1870-S, of which only one example was struck for inclusion in the new Mint's cornerstone. Either the coin escaped, or a second was struck as a pocket piece for San Francisco Mint Coiner J.B. Harmstead. In any event, one coin showing traces of jewelry use surfaced in the numismatic market in 1907. It was sold to prominent collector William H. Woodin, and when Thomas L. Elder sold the Woodin collection in 1911, the coin went to Baltimore's Waldo C. Newcomer. Later owned by Virgil Brand, it was next sold by Ted and Carl Brandts of Ohio's Celina Coin Co. and Stack's of New York to Louis C. Eliasberg in 1946 for $11,500. In Bowers and Merena's October 1982 sale of the U.S. Gold Collection, this famous coin sold for a record $687,500.
The three-dollar denomination quietly expired in 1889 along with the gold dollar and nickel three-cent piece. America's coinage was certainly more prosaic without this odd denomination gold piece, but its future popularity with collectors would vastly outstrip the lukewarm public reception it enjoyed during its circulating life. | <urn:uuid:ce5e0d75-e5f8-4ce2-8b94-86d9527d0dd4> | CC-MAIN-2013-20 | http://www.coinsite.com/CoinSite-PF/PParticles/$3goldix.asp | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.963839 | 1,295 | 3.671875 | 4 |
[Note: in Japan, it is customary to refer to a person with their last name first. We have retained this practice in the below excerpt from Kurosawa’s text.]
The gate was growing larger and larger in my mind’s eye. I was location-scouting in the ancient capital of Kyoto for Rashomon, my eleventh-century period film. The Daiei management was not very happy with the project. They said the content was difficult and the title had no appeal. They were reluctant to let the shooting begin. Day by day, as I waited, I walked around Kyoto and the still-more-ancient capital of Nara a few miles away, studying the classical architecture. The more I saw, the larger the image of the Rashomon gate became in my mind.
At first I thought my gate should be about the size of the entrance gate to Toji Temple in Kyoto. Then it became as large as the Tengaimon gate in Nara, and finally as big as the main two-story gates of the Ninnaji and Todaiji temples in Nara. This image enlargement occurred not just because I had the opportunity to see real gates dating from that period, but because of what I was learning, from documents and relics, about the long-since-destroyed Rashomon gate itself.
“Rashomon” actually refers to the Rajomon gate; the name was changed in a Noh play written by Kanze Nobumitsu. “Rajo” indicates the outer precincts of the castle, so “Rajomon” means the main gate to the castle’s outer grounds. The gate for my film Rashomon was the main gate to the outer precincts of the ancient capital—Kyoto was at that time called “Heian-Kyo.” If one entered the capital through the Rajomon gate and continued due north along the main thoroughfare of the metropolis, one came to the Shujakumon gate at the end of it, and the Toji and Saiji temples to the east and west, respectively. Considering this city plan, it would have been strange had the outer main gate not been the biggest gate of all. There is tangible evidence that it in fact was: the blue roof tiles that survive from the original Rajomon gate show that it was large. But, no matter how much research we did, we couldn’t discover the actual dimensions of the vanished structure.
As a result, we had to construct the Rashomon gate to the city based on what we could learn from looking at extant temple gates, knowing that the original was probably different. What we built as a set was gigantic. It was so immense that a complete roof would have buckled the support pillars. Using the artistic device of dilapidation as an excuse, we constructed only half a roof and were able to get away with our measurements. To be historically accurate, the imperial palace and the Shujakumon gate should have been visible looking north through our gate. But on the Daiei back lot such distances were out of the question, and even if we had been able to find the space, the budget would have made it impossible. We made do with a cut-out mountain to be seen through the gate. Even so, what we built was extraordinarily large for an open set.
When I took this project to Daiei, I told them the only sets I would need were the gate and the tribunal courtyard wall where all the survivors, participants and witnesses of the rape and murder that form the story of the film are questioned. Everything else, I promised them, would be shot on location. Based on this low-budget set estimate, Daiei happily took on the project.
Later, Kawaguchi Matsutaro, at that time a Daiei executive, complained that they had really been fed a line. To be sure, only the gate set had to be built, but for the price of that one mammoth set they could have had over a hundred ordinary sets. But, to tell the truth, I hadn’t intended so big a set to begin with. It was while I was kept waiting all that time that my research deepened and my image of the gate swelled to its startling proportions.
When I had finished Scandal for the Shochiku studios, Daiei asked if I wouldn’t direct one more film for them. As I cast about for what to film, I suddenly remembered a script based on the short story “Yabu no naka” (“In a Grove”) by Akutagawa Ryunosuke. It had been written by Hashimoto Shinobu, who had been studying under director Itami Mansaku. It was a very well-written piece, but not long enough to make into a feature film. This Hashimoto had visited my home, and I talked with him for hours. He seemed to have substance, and I took a liking to him. He later wrote the screenplays for Ikiru (1952) and Shichinin no samurai (Seven Samurai, 1954) with me. The script I remembered was his Akutagawa adaptation called “Male-Female.”
Probably my subconscious told me it was not right to have put that script aside; probably I was—without being aware of it—wondering all the while if I couldn’t do something with it. At that moment the memory of it jumped out of one of those creases in my brain and told me to give it a chance. At the same time I recalled that “In a Grove” is made up of three stories, and realized that if I added one more, the whole would be just the right length for a feature film. Then I remembered the Akutagawa story “Rashomon.” Like “In a Grove,” it was set in the Heian period (794-1184). The film Rashomon took shape in my mind.
Since the advent of the talkies in the 1930s, I felt, we had misplaced and forgotten what was so wonderful about the old silent movies. I was aware of the aesthetic loss as a constant irritation. I sensed a need to go back to the origins of the motion picture to find this peculiar beauty again; I had to go back into the past.
In particular, I believed that there was something to be learned from the spirit of the French avant-garde films of the 1920s. Yet in Japan at this time we had no film library. I had to forage for old films, and try to remember the structure of those I had seen as a boy, ruminating over the aesthetics that had made them special.
Rashomon would be my testing ground, the place where I could apply the ideas and wishes growing out of my silent-film research. To provide the symbolic background atmosphere, I decided to use the Akutagawa “In a Grove” story, which goes into the depths of the human heart as if with a surgeon’s scalpel, laying bare its dark complexities and bizarre twists. These strange impulses of the human heart would be expressed through the use of an elaborately fashioned play of light and shadow. In the film, people going astray in the thicket of their hearts would wander into a wider wilderness, so I moved the setting to a large forest. I selected the virgin forest of the mountains surrounding Nara, and the forest belonging to the Komyoji temple outside Kyoto.
There were only eight characters, but the story was both complex and deep. The script was done as straightforwardly and briefly as possible, so I felt I should be able to create a rich and expansive visual image in turning it into a film. Fortunately, I had as cinematographer a man I had long wanted to work with, Miyagawa Kazuo; I had Hayasaka to compose the music and Matsuyama as art director. The cast was Mifune Toshiro, Mori Masayuki, Kyo Machiko, Shimura Takashi, Chiaki Minoru, Ueda Kichijiro, Kato Daisuke and Honma Fumiko; all were actors whose temperaments I knew, and I could not have wished for a better line-up. Moreover, the story was supposed to take place in summer, and we had, ready to hand, the scintillating midsummer heat of Kyoto and Nara. With all these conditions so neatly met, I could ask nothing more. All that was left was to begin the film.
However, one day just before the shooting was to start, the three assistant directors Daiei had assigned me came to see me at the inn where I was staying. I wondered what the problem could be. It turned out that they found the script baffling and wanted me to explain it to them. “Please read it again more carefully,” I told them. “If you read it diligently, you should be able to understand it because it was written with the intention of being comprehensible.” But they wouldn’t leave. “We believe we have read it carefully, and we still don’t understand it at all; that’s why we want you to explain it to us.” For their persistence I gave them this simple explanation:
Human beings are unable to be honest with themselves about themselves. They cannot talk about themselves without embellishing. This script portrays such human beings–the kind who cannot survive without lies to make them feel they are better people than they really are. It even shows this sinful need for flattering falsehood going beyond the grave—even the character who dies cannot give up his lies when he speaks to the living through a medium. Egoism is a sin the human being carries with him from birth; it is the most difficult to redeem. This film is like a strange picture scroll that is unrolled and displayed by the ego. You say that you can’t understand this script at all, but that is because the human heart itself is impossible to understand. If you focus on the impossibility of truly understanding human psychology and read the script one more time, I think you will grasp the point of it.
After I finished, two of the three assistant directors nodded and said they would try reading the script again. They got up to leave, but the third, who was the chief, remained unconvinced. He left with an angry look on his face. (As it turned out, this chief assistant director and I never did get along. I still regret that in the end I had to ask for his resignation. But, aside from this, the work went well.)
During the rehearsals before the shooting I was left virtually speechless by Kyo Machiko’s dedication. She came in to where I was still sleeping in the morning and sat down with the script in her hand. “Please teach me what to do,” she requested, and I lay there amazed. The other actors, too, were all in their prime. Their spirit and enthusiasm was obvious in their work, and equally manifest in their eating and drinking habits.
They invented a dish called Sanzoku-yaki, or “Mountain Bandit Broil,” and ate it frequently. It consisted of beef strips sautéed in oil and then dipped in a sauce made of curry powder in melted butter. But while they held their chopsticks in one hand, in the other they’d hold a raw onion. From time to time they’d put a strip of meat on the onion and take a bite out of it. Thoroughly barbaric.
The shooting began at the Nara virgin forest. This forest was infested with mountain leeches. They dropped out of the trees onto us, they crawled up our legs from the ground to suck our blood. Even when they had had their fill, it was no easy task to pull them off, and once you managed to rip a glutted leech out of your flesh, the open sore seemed never to stop bleeding. Our solution was to put a tub of salt in the entry of the inn. Before we left for the location in the morning we would cover our necks, arms and socks with salt. Leeches are like slugs—they avoid salt.
In those days the virgin forest around Nara harbored great numbers of massive cryptomerias and Japanese cypresses, and vines of lush ivy twined from tree to tree like pythons. It had the air of the deepest mountains and hidden glens. Every day I walked in this forest, partly to scout for shooting locations and partly for pleasure. Once a black shadow suddenly darted in front of me: a deer from the Nara park that had returned to the wild. Looking up, I saw a pack of monkeys in the big trees about my head.
The inn we were housed in lay at the foot of Mount Wakakusa. Once a big monkey who seemed to be the leader of the pack came and sat on the roof of the inn to stare at us studiously throughout our boisterous evening meal. Another time the moon rose from behind Mount Wakakusa, and for an instant we saw the silhouette of a deer framed distinctly against its full brightness. Often after supper we climbed up Mount Wakakusa and formed a circle to dance in the moonlight. I was still young and the cast members were even younger and bursting with energy. We carried out our work with enthusiasm.
When the location moved from the Nara Mountains to the Komyoji temple forest in Kyoto, it was Gion Festival time. The sultry summer sun hit with full force, but even though some members of my crew succumbed to heat stroke, our work pace never flagged. Every afternoon we pushed through without even stopping for a single swallow of water. When work was over, on the way back to the inn we stopped at a beer hall in Kyoto’s downtown Shijo-Kawaramachi district. There each of us downed about four of the biggest mugs of draft beer they had. But we ate dinner without any alcohol and, upon finishing, split up to go about our private affairs. Then at ten o’clock we’d gather again and pour whiskey down our throats with a vengeance. Every morning we were up bright and clear-headed to do our sweat-drenched work.
Where the Komyoji temple forest was too thick to give us the light we needed for shooting, we cut down trees without a moment’s hesitation or explanation. The abbot of Komyoji glared fearfully as he watched us. But as the days went on, he began to take the initiative, showing us where he thought trees should be felled.
When our shoot was finished at the Komyoji location, I went to pay my respects to the abbot. He looked at me with grave seriousness and spoke with deep feeling. “To be honest with you, at the outset we were very disturbed when you went about cutting down the temple trees as if they belonged to you. But in the end we were won over by your wholehearted enthusiasm. ‘Show the audience something good.’ This was the focus of all your energies, and you forgot yourselves. Until I had the chance to watch you, I had no idea that the making of a movie was a crystallization of such effort. I was very deeply impressed.”
The abbot finished and set a folding fan before me. In commemoration of our filming, he had written on the fan three characters forming a Chinese poem: “Benefit All Mankind.” I was left speechless.
We set up a parallel schedule for the use of the Komyoji location and open set of the Rashomon gate. On sunny days we filmed at Komyoji; on cloudy days we filmed the rain scenes at the gate set. Because the gate set was so huge, the job of creating rainfall on it was a major operation. We borrowed fire engines and turned on the studio’s fire hoses to full capacity. But when the camera was aimed upward at the cloudy sky over the gate, the sprinkle of the rain couldn’t be seen against it, so we made rainfall with black ink in it. Every day we worked in temperatures of more than 85º Fahrenheit, but when the wind blew through the wide-open gate with the terrific rainfall pouring down over it, it was enough to chill the skin.
I had to be sure that this huge gate looked huge to the camera. And I had to figure out how to use the sun itself. This was a major concern because of the decision to use the light and shadows of the forest as the keynote of the whole film. I determined to solve the problem by actually filming the sun. These days it is not uncommon to point the camera directly at the sun, but at the time Rashomon was being made it was still one of the taboos of cinematography. It was even thought that the sun’s rays shining directly into your lens would burn the film in your camera. But my cameraman, Miyagawa Kazuo, boldly defied this convention and created superb images. The introductory section in particular, which leads the viewer through the light and shadow of the forest into a world where the human heart loses its way, was truly magnificent camera work. I feel that this scene, later praised at the Venice International Film Festival as the first instance of a camera entering the heart of a forest, was not only one of Miyagawa’s masterpieces but a world-class masterpiece of black-and-white cinematography.
And yet, I don’t know what happened to me. Delighted as I was with Miyagawa’s work, it seems I forgot to tell him. When I said to myself, “Wonderful,” I guess I thought I had said “Wonderful” to him at the same time. I didn’t realize I hadn’t until one day Miyagawa’s old friend Shimura Takashi (who was playing the woodcutter in Rashomon) came to me and said, “Miyagawa’s very concerned about whether his camera work is satisfactory to you.” Recognizing my oversight for the first time, I hurriedly shouted “One hundred percent! One hundred for camera work! One hundred plus!”
There is no end to my recollections of Rashomon. If I tried to write about all of them, I’d never finish, so I’d like to end with one incident that left an indelible impression on me. It has to do with the music.
As I was writing the script, I heard the rhythms of a bolero in my head over the episode of the woman’s side of the story. I asked Hayasaka to write a bolero kind of music for the scene. When we came to the dubbing of that scene, Hayasaka sat down next to me and said, “I’ll try it with the music.” In his face I saw uneasiness and anticipation. My own nervousness and expectancy gave me a painful sensation in my chest. The screen lit up with the beginning of the scene, and the strains of the bolero music softly counted out the rhythm. As the scene progressed, the music rose, but the image and the sound failed to coincide and seemed to be at odds with each other. “Damn it,” I thought. The multiplication of sound and image that I had calculated in my head had failed, it seemed. It was enough to make me break out in a cold sweat.
We kept going. The bolero music rose yet again, and suddenly picture and sound fell into perfect unison. The mood created was positively eerie. I felt an icy chill run down my spine, and unwittingly I turned to Hayasaka. He was looking at me. His face was pale, and I saw that he was shuddering with the same eerie emotion I felt. From that point on, sound and image proceeded with incredible speed to surpass even the calculations I had made in my head. The effect was strange and overwhelming.
And that is how Rashomon was made. During the shooting there were two fires at the Daiei studios. But because we had mobilized the fire engines for our filming, they were already primed and drilled, so the studios escaped with very minor damage.
After Rashomon I made a film of Dostoevsky’s The Idiot (Hakuchi, 1951) for the Shochiku studios. This Idiot was ruinous. I clashed directly with the studio heads, and then when the reviews on the completed film came out, it was as if they were a mirror reflection of the studio’s attitude toward me. Without exception, they were scathing. On the heels of this disaster, Daiei rescinded its offer for me to do another film with them.
I listened to this cold announcement at the Chofu studios of Daiei in the Tokyo suburbs. I walked out through the gate in the gloomy daze, and, not having the will even to get on the train, I ruminated over my bleak situation as I walked all the way home to Komae. I concluded that for some time I would have to “eat cold rice” and resigned myself to this fact. Deciding that it would serve no purpose to get excited about it, I set out to go fishing at the Tamagawa River. I cast my line into the river. It immediately caught on something and snapped in two. Having no replacement with me, I hurriedly put my equipment away. Thinking this was what it was like when bad luck catches up with you, I headed back home.
I arrived home depressed, with barely enough strength to slide open the door to the entry. Suddenly my wife came bounding out. “Congratulations!” I was unwittingly indignant: “For what?” “Rashomon has the Grand Prix.” Rashomon had won the Grand Prix at the Venice International Film Festival, and I was spared from having to eat cold rice.
Once again an angel had appeared out of nowhere. I did not even know that Rashomon had been submitted to the Venice Film Festival. The Japan representative to Italiafilm, Giuliana Stramigioli, had seen it and recommended it to Venice. It was like pouring water into the sleeping ears of the Japanese film industry.
Later Rashomon won the American Academy Award for Best Foreign Language Film. Japanese critics insisted that these two prizes were simply reflections of Westerners’ curiosity and taste for Oriental exoticism, which struck me then, and now, as terrible. Why is it that Japanese people have no confidence in the worth of Japan? Why do they elevate everything foreign and denigrate everything Japanese? Even the woodblock prints of Utamoro, Hokusai and Sharaku were not appreciated by Japanese until they were first discovered by the West. I don’t know how to explain this lack of discernment. I can only despair of the character of my own people.
Excerpted from Something Like an Autobiography, trans., Audie E. Bock. Translation Copyright ©1982 by Vintage Books. Reprinted by permission of Vintage Books, a division of Random House. | <urn:uuid:7db29226-63a3-4e2a-9fff-87120625a08c> | CC-MAIN-2013-20 | http://www.criterion.com/current/posts/196-akira-kurosawa-on-rashomon | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.982666 | 4,863 | 2.703125 | 3 |
The immune system
immune; immunity; disease; bacteria; viruses; white; cells; lymph; germs; mucous; mucus; glands;
What is immunity?
Immunity (say im-yoon-it-i) means that you are protected against something. There are different kinds of immunity. This topic is about how different parts of our bodies work together to keep us from getting sick. Immunity to some diseases is passed on from our mothers before we are born. Immunisation (having your 'shots') helps our body's immune defence system protect us from diseases .
body's immune system
Every body has an inbuilt immune system which protects it from diseases and germs. This system has a lot of different parts which work together to keep out any harmful germs, and attack and destroy any which manage to get inside your body
- Every day your body is exposed to millions of germs, and you do not get sick from them because of your immune system.
- Every time you do get sick because of a germ, your immune system works to get rid of it and then it remembers how to fight the infection if the same germ comes again.
- Usually the older you get, the more germs you become immune to.
So, let's have a look at the immune system, starting from the outside of the body.
The skin is the first line of defence in your immune system.
You know how you put plastic wrap over leftovers to keep them fresh enough for later? Well, your skin is like a plastic wrap to keep germs from getting into your body.
- The epidermis (outside layer of skin) has special cells which warn the body about incoming germs.
- Glands in the skin also make substances that can kill some bacteria (anti-bacterial chemicals). This means you don't get infections on your skin unless your skin is damaged, such as by a cut or a graze.
Your nose, mouth and eyes are the next point of attack.
- The mucous membranes which line the mouth, throat, lungs and bowel, act like a barrier to germs, just as the skin does.
- Saliva in the mouth and the tears which wash your eyes have special enzymes (chemicals) in them which break down the cell walls of many bacteria and viruses.
- The mucous that is made in your nose, throat and lungs traps bacteria, viruses and dust.
- Acid in your stomach kills most germs, and starts to digest your food.
- Lymph (limf) is a clear fluid that is very similar to blood plasma, the clear liquid in blood, but it carries only white blood cells, not red blood cells.
- The lymph flows through all the parts of the body picking up fluid around cells and carrying it back to large veins near the heart. It also carries white blood cells to the places that they are needed.
- Some bacteria or viruses that have entered the body are collected by the lymph and passed on to the lymph nodes where they are filtered out and destroyed. Lymph nodes are sometimes called glands.
Your doctor can often tell if you have an infection by checking out the lymph nodes (glands) in your neck and under your arms to see if they're swollen. If they are, it shows that they are working to get rid of bacteria or viruses.
In your blood you have red blood cells and white blood cells, and in lymph there are white blood cells.
There are several different types of white cells which work together to seek out and destroy bacteria and viruses.
All of them start off in the bone marrow, growing from 'stem cells'.
The disease-fighting white blood cells are specialists. Some of the white blood cells are:
- Neutrophils (say new-tro-fills), which move around the body in the blood and seek out foreign material (things that don't belong in your body).
- Macrophages (say mak-row-far-jes) are the biggest blood cells. Some live in different parts of the body and help to keep it clean, eg. in the lungs. Others swim around cleaning up other white blood cells that have been damaged while doing their jobs, eg. cleaning up pus that has been caused by neutrophils when they work to clear out bacteria from a wound.
- Lymphocytes (say lim-fo-sites) work on bacterial and viral infections
There are two different types:
- B cells produce antibodies. Each cell watches out for a particular germ, and when that germ arrives, the cell starts to produce more antibodies which begin the process of killing that germ. Antibodies attach themselves to the germs so that other cells can recognise that these germs need to be destroyed.
- T cells look for cells in your body that are hiding invaders (germs) or body cells that are different to normal healthy cells (such as cells that could develop into a cancer) and kill them.
does your immune system know which cells to attack?
Your body has lots of friendly bacteria around it which help your body work properly - eg. some bacteria inside your bowel help you to digest your food and break it up into the different things that are needed in various parts of the body.
- These friendly bacteria live on the surfaces of the body, such as on our skin or inside the bowel.
- They do not try to invade the body, so the immune system does not try to get rid of them.
- Other germs which cause illness, try to enter the body.
- Antibodies, which are made by the lymphocytes, attach to the invaders so that the other white blood cells can destroy them. They 'tag' them so they can be easily noticed.
As well as attacking germs, your immune system recognises and destroys other cells which do not belong in your body.
- The cells in your own body are marked with a special system called Human Leukocyte Antigen or HLA (say Hew-man lew-ko-site anti-jen).
- Your immune system can recognise these markings as 'you'. Any cells which do not have the right markings are 'not you' and are therefore attacked.This happens if, for example, you have a blood transfusion with the wrong types of blood cells. Your body's immune system recognises that these cells do not belong in your body, so it destroys them.
How you know your immune system is working
You know your immune system is working:
- if you get better after you are sick
- if cuts heal without getting infected
- if you don't catch the same diseases over and over again
- when you get swollen glands
- when you get swelling and soreness around a cut.
Your immune system is in there working to get rid of any infection.
When things go wrong with the immune system
Sometimes the immune system will make a mistake.
- It may attack your own body as if it were the enemy, eg. insulin dependent diabetes (the type that most often starts in children and young people) is caused by the immune system attacking the cells in the pancreas that make insulin.
- Allergies are caused by the immune system over-reacting to something that is not really a threat, like when pollen triggers hay fever or asthma.
- If tissue is transplanted from one person to another - eg. a skin or organ transplant - then the immune system will attack the new part. The immune system has to be suppressed by drugs to allow the transplant to work.
- When the immune system is damaged, such as when people have a serious illness called AIDS, they get lots of infections and are much more likely to get cancers. Their body cannot recognise the infection or abnormal cells very well and the immune system does not destroy them as well as usual.
The immune system is absolutely amazing. It deals with millions of bacteria and viruses every day to keep us healthy.
Keeping up to date with immunisations can help your body to build immunity to some serious diseases too.
We've provided this information to help you to understand important things about staying healthy and happy. However, if you feel sick or unhappy, it is important to tell your mum or dad, a teacher or another grown-up. | <urn:uuid:ce2ffe26-b35c-4206-9d9e-c664fdbda89f> | CC-MAIN-2013-20 | http://www.cyh.sa.gov.au/HealthTopics/HealthTopicDetailsKids.aspx?p=335&np=152&id=2402 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.949602 | 1,713 | 3.59375 | 4 |
Beginning in October 2006, some beekeepers began reporting losses of 30-90 percent of their hives. While colony losses are not unexpected during winter weather, the magnitude of loss suffered by some beekeepers was highly unusual.
This phenomenon, which currently does not have a recognizable underlying cause, has been termed "Colony Collapse Disorder" (CCD). The main symptom of CCD is simply no or a low number of adult honey bees present but with a live queen and no dead honey bees in the hive. Often there is still honey in the hive, and immature bees (brood) are present.
ARS scientists and others are in the process of carrying out research to discover the cause(s) of CCD and develop ways for beekeepers to respond to the problem.
Why should the public care about honey bees?
Bee pollination is responsible for $15 billion in added crop value, particularly for specialty crops such as almonds and other nuts, berries, fruits, and vegetables. About one mouthful in three in the diet directly or indirectly benefits from honey bee pollination. While there are native pollinators (honey bees came from the Old World with European colonists), honey bees are more prolific and the easiest to manage for the large scale pollination that U.S. agriculture requires. In California, the almond crop alone uses 1.3 million colonies of bees, approximately one half of all honey bees in the United States, and this need is projected to grow to 1.5 million colonies by 2010.
The number of managed honey bee colonies has dropped from 5 million in the1940s to only 2.5 million today. At the same time, the call for hives to supply pollination service has continued to climb. This means honey bee colonies are trucked farther and more often than ever before.
Honey bee colony health has also been declining since the 1980s with the advent of new pathogens and pests. The spread into the United States of varroa and tracheal mites, in particular, created major new stresses on honey bees.
Is there currently a crisis in food production because of CCD?
While CCD has created a very serious problem for beekeepers and could threaten the pollination industry if it becomes more widespread, fortunately there were enough bees to supply all the needed pollination this past spring. But we cannot wait to see if CCD becomes an agricultural crisis to do the needed research into the cause and treatment for CCD.
The cost of hives for pollination has risen this year. But much of that is due to growing demand. Some of the price increase may also be due to higher cost of gas and diesel and other increases related to energy and labor costs. Commercial beekeepers truck hives long distances to provide pollination services, so in particular they must deal with rising expenses.
Varroa mites (one is visible on the back of this bee) are a major threat to honey bee health and are becoming resistant to two compounds (coumaphos and fluvalinate) used to control them. Beekeepers now have a simple assay to determine whether mites are resistant and thus ensure use of appropriate control measures. Click the image for more information about it.
Are there any theories about what may be causing CCD?
Case studies and questionnaires related to management practices and environmental factors have identified a few common factors shared by those beekeepers experiencing CCD, but no common environmental agents or chemicals stand out as causative. There are three major possibilities that are being looked into by researchers.
Pesticides may be having unexpected negative effects on honey bees.
A new parasite or pathogen may be attacking honey bees. One possible candidate being looked at is a pathogenic gut microbe called Nosema. Viruses are also suspected.
A perfect storm of existing stresses may have unexpectedly weakened colonies leading to collapse. Stress, in general, compromises the immune system of bees (and other social insects) and may disrupt their social system, making colonies more susceptible to disease.
These stresses could include high levels of infection by the varroa mite (a parasite that feeds on bee blood and transmits bee viruses); poor nutrition due to apiary overcrowding, pollination of crops with low nutritional value, or pollen or nectar scarcity; and exposure to limited or contaminated water supplies. Migratory stress brought about by increased needs for pollination might also be a contributing factor.
Has CCD ever happened before?
The scientific literature has several mentions of honey bee disappearancesóin the 1880s, the 1920s and the 1960s. While the descriptions sound similar to CCD, there is no way to know for sure if the problems were caused by the same agents as today's CCD.
There have also been unusual colony losses before. In 1903, in the Cache Valley in Utah, 2000 colonies were lost to an unknown "disappearing disease" after a "hard winter and a cold spring." More recently, in 1995-96, Pennsylvania beekeepers lost 53 percent of their colonies without a specific identifiable cause.
What about cell phonesódo they have anything to do with CCD?
The short answer is no.
There was a very small study done in Germany that looked at whether a particular type of base station for cordless phones could affect honey bee homing systems. But, despite all the attention that this study has received, it has nothing to do with CCD. Stefan Kimmel, the researcher who conducted the study and wrote the paper, recently e-mailed The Associated Press to say that there is "no link between our tiny little study and the CCD-phenomenon ... anything else said or written is a lie."
Newly emerged honey bee, Apis mellifera, the subject of genome sequencing work aimed at improving bee traits and management. Click the image for more information about it.
What is ARS doing about CCD?
In April 2007, ARS held a Colony Collapse Disorder Research Workshop that brought together over 80 of the major bee scientists, industry representatives, extension agents, and others to discuss a research agenda. They identified areas where more information is needed and the highest-priority needs for additional research projects related to CCD.
A CCD Steering Committee, led by ARS and USDA's Cooperative State Research, Education, and Extension Service, developed a Research Action Plan to coordinate a comprehensive response for discovering what factors may be causing CCD and what actions need to be taken.
One of the tools that will help in this research is the recently sequenced honey bee genome to better understand bees' basic biology and breed better bees, and to better diagnose bee pests and pathogens and their impacts on bee health and colony collapse. The use of this genome information certainly will have great applications in improving honey bee breeding and management.
The search for factors that are involved in CCD is focusing on four areas: pathogens, parasites, environmental stresses, and bee management stresses such as poor nutrition. It is unlikely that a single factor is the cause of CCD; it is more likely that there is a complex of different components.
In September 2007, a research team that included ARS published the results of an intensive genetic screening of CCD-affected honey bee colonies and non-CCD-affected hives.
The only pathogen found in almost all samples from honey bee colonies with CCD, but not in non-CCD colonies, was the Israeli acute paralysis virus (IAPV), a dicistrovirus that can be transmitted by the varroa mite. It was found in 96.1 percent of the CCD-bee samples.
This research does not identify IAPV as the cause of CCD. What this research found was strictly a strong correlation of the appearance of IAPV and CCD together. No cause-and-effect connection can be inferred from the genetic screening data. (More information about this study)
Honey bees devour a new, nutrient-rich food developed by ARS researchers. Click the image for more information about it.
This was the first report of IAPV in the United States. IAPV was initially identified in honey bee colonies in Israel in 2002, where the honey bees exhibited unusual behavior, such as twitching wings outside the hive and a loss of worker bee populations.
The study also found IAPV in honey bees from Australia that had been imported into the United States, as well as in royal jelly imported from China. Australian bees began to be imported from Australia into the United States in 2005. Questions were raised about a connection between those imported bees and the appearance of IAPV in the United States. Beekeepers sought out Australian imports of bees as a way to replenish their hive populations.
To determine whether IAPV has been present in the United States since before the importation of honey bees from Australia, a follow up detailed genetic screening of several hundred honey bees that had been collected between 2002 and 2007 from colonies in Maryland, Pennsylvania, California and Israel was conducted by ARS researchers.
The results of the follow study showed IAPV has been in this country since at least 2002, which challenges the idea that IAPV is a recent introduction from Australia. (More information about this study)
This study in no way rules IAPV out as a factor in CCD. Research by several groups will now focus on understanding differences in virulence across strains of IAPV and on interactions with other stress factors. Even if IAPV proves to be a cause of CCD, there still may also be other contributing factors-which researchers are pursuing.
What should beekeepers do now about CCD?
Since little is known about the cause(s) of CCD right now, mitigation must be based on improving general honey bee health and habitat and countering known mortality factors by using best management practices.
What can I as a member of the public do to help honey bees?
The best action you can take to benefit honey bees is to not use pesticides indiscriminately, especially not to use pesticides at mid-day when honey bees are most likely to be out foraging for nectar.
In addition, you can plant and encourage the planting of good nectar sources such as red clover, foxglove, bee balm, and joe-pye weed. For more information, see www.nappc.org.]
ARS Honey Bee Research | <urn:uuid:6117655e-ff6b-4001-9834-1cee09b05d14> | CC-MAIN-2013-20 | http://www.dirtdoctor.com/Honeybee-Colony-Collapse-Disorder_vq2259.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.957155 | 2,135 | 3.734375 | 4 |
When I was learning carpentry from a master carpenter, I tried to do something with a tool close at hand instead of going to get the correct tool for the job. Of course, I butchered the piece of wood and eventually had to get the right tool, which got the job done in a fraction of the time that I wasted with the convenient-to-reach tool. The same lesson applies to power quality tools, which range from a simple screwdriver to a $24,000, 32-gigahertz (GHz) spectrum analyzer. Not only can you waste time and not get the answer you are looking for, you might even be led to the wrong answer using the wrong tool for the task.
Before going into the variety of tools available, here’s a quick safety reminder. Always assess the hazards and skills required for the task, and ensure you or whoever is doing it is a “qualified person” per the National Electrical Code definition. Ensure all personnel within the possible hazard area wear the proper personal protective equipment per NFPA 70E and other local requirements. And wherever possible, make connections on de-energized circuits only. Even something as simple as taking a panel cover off to tighten a screw can be disastrous. Accidents don’t always happen to someone else.
On the low end of the price range is an item in everyone’s tool kit: a digital multimeter (DMM). A DMM can measure a number of steady-state power quality phenomena, such as voltage imbalance. It can also be used to find voltage drops across contacts and other devices that should have very low drops. Excessive neutral-to-ground voltage is often a steady-state condition.
Clamp-on power meters are slightly more expensive ($300–$3,000) but used similarly. Though only single-phase, they can be useful for current imbalance and power factor, and many have limited harmonic measuring capabilities. Be wary of the 3 assumptions some meters make, which contend that all three phases are identical and, therefore, give you three-phase answers with a simple multiplication. Also, most clamp-on meters use current transformers that cannot measure (or tolerate) direct current (DC).
Power loggers generally have capabilities similar to power meters but can take unattended readings for extended periods. They are useful for finding time-correlated problems, such as the voltage drops at a certain time each day. Several manufacturers offer both single- and three-phase loggers ($500–$3,500) that come with software for downloading the data onto a computer for analysis. Some plug right into an outlet to let you piggyback the equipment being monitored for simple and safe connections.
Most electrical contractors doing power quality work have several monitors, which can monitor a wide range of power quality phenomena. Read the specifications and the user’s guide before taking one out to troubleshoot for a suspected power quality problem. This is especially important when looking for transients and higher order harmonics. If the sampling rate of the instrument is 64 times per cycle, it is not possible to determine harmonics above the 32nd, and even that is suspect in the real world of measuring. If the current probe is a Rogowski coil (flex-probes) and you are measuring in a room with a half dozen 500-horsepower motors running off adjustable speed drives, much of the current data is going to be skewed by the antenna-like pickup characteristic of those probes. If using a current transformer that isn’t rated for DC and there is an inrush current condition on a saturated transformer with a DC offset, it won’t produce reliable data.
However, using the instrument within its limitations provides a wealth of data that virtually no other instrument can simultaneously do for you. Right in the sweet spot of power quality monitors are capturing the waveforms of disturbances, such as the arcing transients that occur before the voltage sag is cleared by the distribution circuit protection device or the slight frequency and phase shift that occurs when switching from utility power to a backup power source that resulted in a particularly susceptible piece of equipment dropping off line. Whether doing a benchmark survey to compare the site data to the commissioning data or troubleshooting a process interruption that only occurs once per month but with large financial consequences, a power quality monitor in the $3,000–8,000 price range can do exactly that.
Though they don’t have the same triggering, capture and characterization functionality as a power quality monitor, a high-speed (200 megahertz–1 GHz) digital oscilloscope ($3,000–5,000) can be a valuable tool to have at your disposal when looking at noise or transients that are above the bandwidth of power quality monitors. Likewise, a spectrum analyzer ($10,000–15,000) can provide a more complete and wider picture of the steady-state signals that are present in a system and fall below the fundamental frequency or above the harmonic range of most power quality analyzers. For random or burst signals, a noise logger ($4,000–8,000) is an invaluable tool, and for a hands-free, no-contact look for hot spots that can result from high impedance contacts or harmonic losses in motors and transformers, the thermal or infrared camera ($4,000–20,000) is the tool of choice.
Of course, the most used tool for power quality tasks is likely the screwdriver (priceless).
BINGHAM, a contributing editor for power quality, can be reached at 732.287.3680. | <urn:uuid:20342dbc-37d4-47e7-8758-faea1b8cbd9d> | CC-MAIN-2013-20 | http://www.ecmag.com/section/your-business/what%E2%80%99s-your-toolkit?qt-issues_block=0 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.928073 | 1,151 | 2.65625 | 3 |
Twenty Ideas for Engaging ProjectsSeptember 12, 2011 | Suzie Boss
The start of the school year offers an ideal time to introduce students to project-based learning. By starting with engaging projects, you'll grab their interest while establishing a solid foundation of important skills, such as knowing how to conduct research, engage experts, and collaborate with peers. In honor of Edutopia's 20th anniversary, here are 20 project ideas to get learning off to a good start.
1. Flat Stanley Refresh: Flat Stanley literacy projects are perennial favorites for inspiring students to communicate and connect, often across great distances. Now Flat Stanley has his own apps for iPhone and iPad, along with new online resources. Project founder Dale Hubert is recently retired from the classroom, but he's still generating fresh ideas to bring learning alive in the "flatlands."
2. PBL is No Accident: In West Virginia, project-based learning has been adopted as a statewide strategy for improving teaching and learning. Teachers don't have to look far to find good project ideas. In this CNN story about the state's educational approach, read about a project that grew out of a fender-bender in a school parking lot. When students were asked to come up with a better design for the lot, they applied their understanding of geometry, civics, law, engineering, and public speaking. Find more good ideas in West Virginia's Teach21 project library.
3. Defy Gravity: Give your students a chance to investigate what happens near zero gravity by challenging them to design an experiment for NASA to conduct at its 2.2 second drop tower in Brookpark, Ohio. Separate NASA programs are offered for middle school and high school. Or, propose a project that may land you a seat on the ultimate roller coaster (aka: the "vomit comet"), NASA aircraft that produces periods of micro and hyper gravity ranging from 0 to 2 g's. Proposal deadline is Sept. 21, and flight week takes place in February 2012.
4. Connect Across Disciplines: When students design and build kinetic sculptures, they expand their understanding of art, history, engineering, language arts, and technology. Get some interdisciplinary project insights from the Edutopia video, Kinetic Conundrum. Click on the accompanying links for more tips about how you can do it, too.
5. Honor Home Languages: English language learners can feel pressured to master English fast, with class time spent correcting errors instead of using language in meaningful ways. Digital IS, a site published by the National Writing Project, shares plans for three projects that take time to honor students' home languages and cultures, engaging them in critical thinking, collaboration, and use of digital tools. Anne Herrington and Charlie Moran curate the project collection, "English Language Learners, Digital Tools, and Authentic Audiences."
6. Rethink Lunch: Make lunch into a learning opportunity with a project that gets students thinking more critically about their mid-day meal. Center for Ecoliteracy offers materials to help you start, including informative including informative essays and downloadable planning guides. Get more ideas from this video about a middle-school nutrition project, "A Healthy School Lunch."
7. Take a Learning Expedition: Expeditionary Learning schools take students on authentic learning expeditions, often in neighborhoods close to home. Check out the gallery for project ideas about everything from the tools people use in their work to memories of the Civil Rights Movement.
8. Find a Pal: If PBL is new to you, consider joining an existing project. You'll benefit from a veteran colleague's insights, and your students will get a chance to collaborate with classmates from other communities or even other countries. Get connected at ePals, a global learning community for educators from more than 200 countries.
9. Get Minds Inquiring: What's under foot? What are things made of? Science projects that emphasize inquiry help students make sense of their world and build a solid foundation for future understanding. The Inquiry Project supports teachers in third to fifth grades as they guide students in hands-on investigations about matter. Students develop the habits of scientists as they make observations, offer predictions, and gather evidence. Companion videos show how scientists use the same methods to explore the world. Connect inquiry activities to longer-term projects, such as creating a classroom museum that showcases students' investigations.
10. Learn through Service: When cases of the West Nile virus were reported in their area, Minnesota students sprang into action with a project that focused on preventing the disease through public education. Their project demonstrates what can happen when service-learning principles are built into PBL. Find more ideas for service-learning projects from the National Youth Leadership Council.
11. Locate Experts: When students are learning through authentic projects, they often need to connect with experts from the world outside the classroom. Find the knowledgeable experts you need for STEM projects through the National Lab Network. It's an online network where K-12 educators can locate experts from the fields of science, technology, engineering and mathematics.
12. Build Empathy: Projects that help students see the world from another person's perspective build empathy along with academic outcomes. The Edutopia video, "Give Me Shelter", shows what compassionate learning looks like in action. Click on the companion links for more suggestions about how you can do it, too.
13. Investigate Climate Science: Take students on an investigation of climate science by joining the newest collaborative project hosted by GLOBE, Global Learning and Observations to Benefit the Environment. The Student Climate Research Campaign includes three components: introductory activities to build a foundation of understanding, intensive observing periods when students around the world gather and report data, and research investigations that students design and conduct. Climate project kicks off Sept. 12.
14. Problem-Solvers Unite: Math fairs take mathematics out of the classroom and into the community, where everyone gets a chance to try their hand at problem solving. Galileo Educational Network explains how to host a math fair. In a nutshell, students set up displays of their math problems but not the solutions. Then they entice their parents and invited guests to work on solutions. Make the event even more engaging by inviting mathematicians to respond to students' problems.
15. Harvest Pennies : Can small things really add up to big results? It seems so, based on results of the Penny Harvest. Since the project started in New York in 1991, young philanthropists nationwide have raised and donated more than $8 million to charitable causes, all through penny drives. The project website explains how to organize students in philanthropy roundtables to study community issues and decide which causes they want to support.
16. Gather Stories: Instead of teaching history from textbooks, put students in the role of historian and help them make sense of the past. Learn more about how to plan oral history projects in the Edutopia story, "Living Legends." Teach students about the value of listening by having them gather stories for StoryCorps.
17. Angry Bird Physics: Here's a driving question to kickstart a science project: "What are the laws of physics in Angry Birds world?" Read how physics teachers like Frank Noschese and John Burk are using the web version of the popular mobile game in their classrooms.
18. Place-Based Projects: Make local heritage, landscapes, and culture the jumping-off point for compelling projects. That's the idea behind place-based education, which encourages students to look closely at their communities. Often, they wind up making significant contributions to their communities, as seen in the City of Stories project.
19. News They Can Use: Students don't have to wait until they're grown-ups to start publishing. Student newspapers, radio stations, and other journalism projects give them real-life experiences now. Award-winning journalism teacher Esther Wojcicki outlines the benefits this post on the New York Times Learning Network. Get more ideas about digital-age citizen journalism projects at MediaShift Idea Lab.
20. The Heroes They Know: To get acquainted with students at the start of the year and also introduce students to PBL processes, High Tech High teacher Diana Sanchez asked students to create a visual and textual representation of a hero in their own life. Their black-and-white exhibits were a source of pride to students, as Sanchez explains in her project reflection . Get more ideas from the project gallery at High Tech High, a network of 11 schools in San Diego County that emphasize PBL. To learn more, watch this Edutopia video interview with High Tech High founding principal Larry Rosenstock.
Please tell us about the projects you are planning for this school year. Questions about PBL? Draw on the wisdom of your colleagues by starting discussions or asking for help in the PBL community. | <urn:uuid:6d6b12c8-51d9-4d93-b798-19f2bbb48a21> | CC-MAIN-2013-20 | http://www.edutopia.org/blog/20-ideas-for-engaging-projects-suzie-boss?quicktabs_edutopia_blogs_sidebar_popular_list=0 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.944685 | 1,793 | 3.515625 | 4 |
1. Linguistic Background
The languages currently spoken in the Pacific region can be divided broadly into three groups: the Australian languages and the Papuan languages of New Guinea, both formed by peoples who took part in the region's earliest migrations, which began several tens of thousands of years ago and continued over a period of 20,000-30,000 years; and the Austronesian languages spoken by Mongoloid peoples who migrated from the Asian continent around 3,000 B.C. The region has numerous languages, including 250 Aboriginal languages in Australia and 750 Papuan languages on the island of New Guinea (including the Indonesian territory of Irian Jaya) and neighboring areas. There are also 350 Austronesian languages in Melanesia, 20 in Polynesia, 12 in Micronesia, and 100 in New Guinea (Comrie, Matthews, and Polinsky 1996). There is wide variation not only among language groups but also among families of languages: few language families have been identified among the languages of Australia and New Guinea using the methods of comparative linguistics. Pacific languages are also characterized by small speaker populations and by the absence of dominant languages. However, there are usually bilingual people who can speak, or at least understand, the languages of neighboring populations, and this situation is believed to have existed for a long time. In cultural terms, the diversification of languages in the Pacific region appears to have been accelerated by the emblematic function of language in drawing a clear distinction between "ingroup" and "outgroup."
The languages of New Guinea and the surrounding region show diverse linkages and wide variation. The Austronesian languages of the Pacific region are mostly classified as Oceanic languages, while the Chamorro and Palauan languages of Micronesia are classified as Western Malayo-Polynesian (WMP) languages of the Indonesian family, and the indigenous languages of Maluku and Irian Jaya in eastern Indonesia as Central Malayo-Polynesian (CMP) or South Halmahera-West New Guinea (SHWNG) languages. In particular, there are strong similarities between the linguistic characteristics of the CMP and SHWNG languages and those of the Melanesian branch of the Oceanic languages. These linguistic conditions and characteristics are attributable to ethnic migrations within the region over a long period of time, accompanied by contact and linguistic merging with the indigenous Papuan peoples. Papuan languages are still found in parts of Indonesia, including northern Halmahera, the islands of Pantar and Alor, and central and eastern Timor in the Province of Nusa Tenggara. In New Guinea, contact with Papuan languages has caused some Austronesian languages to shift in word order from subject-verb-object to subject-object-verb (Austronesian Type 2) (Sakiyama 1994).
2. Linguistic Strata
With the start of colonization by the European powers in the nineteenth century, a new set of linguistic circumstances developed in the region. First, pidgin languages based on European and Melanesian languages gradually emerged as common languages. The establishment of plantations in Samoa and in Queensland, Australia, which brought together concentrations of speakers of different Melanesian languages, was important in providing breeding grounds for pidgin languages. A pidgin is formed from elements of the grammar of both contributing languages, though pidgins tend to be looked down upon from the perspective of the more dominant of the two parent languages. The region's newly formed common languages, including Tok Pisin, Bislama, and Solomon Pijin, flourished after they were carried back to the homelands of their various speakers. This was possible because Vanuatu, the Solomon Islands, and Papua New Guinea were all multilingual societies without dominant languages. The number of speakers of pidgin languages increased rapidly in this environment. At the same time, the continued existence of ethnic minority languages came under threat.
Examples of pidgins that were creolized (adopted as mother tongues in their own right) include Solomon Pijin, which by 1976 had over 1,000 speakers aged five and over in the Solomon Islands. Bislama, a mixture of over 100 indigenous languages grafted onto a base of English and French, is now spoken by almost the entire population of Vanuatu (170,000 in 1996) and is partially creolized. Of particular interest is the fact that a group of more than 1,000 emigrants to New Caledonia have adopted Bislama as their primary language. The situation in Papua New Guinea, which has a population of 4,300,000 (1996), is even more dramatic: by 1982 the number of people using Tok Pisin as their primary language had reached 50,000, while another 2,000,000 used it as a second language (Grimes 1996).
3. Minority Languages and Common Languages in the Pacific Region
The Atlas of the World's Languages in Danger of Disappearing published by UNESCO (Wurm 1996) provides only a brief overview of the current situation in Papua New Guinea, Australia, the Solomon Islands, and Vanuatu. There is no mention of Micronesia, New Caledonia, or Polynesia, presumably because of a lack of information resulting from the large number of languages in these areas. The following report covers areas and languages that I have researched myself, together with endangered languages treated in field studies carried out by Japanese researchers.
3.1 Belau (Palau), Micronesia
According to Belau (Palau) government statistics (1990), the total population of 15,122 includes 61 people living on outlying islands in Sonsorol State and 33 in Hatohobei (Tochobei) State. Apart from the Sonsorol Islands themselves, Sonsorol State also includes the islands of Fanah, Meril, and Pulo An. In addition to the Hatohobei language, the language mix on these outlying islands includes nuclear Micronesian (Chuukic) languages, the core Oceanic languages spoken in the Carolines. These differ from Palauan, which is an Indonesian language. To lump them together as "the Sonsorol languages" with a total of 600 speakers (Wurm and Hattori 1981-83) is as inaccurate as combining the Miyako dialects of Okinawa into a single classification.
The number of Chuukic speakers has declined steadily since these figures were compiled. Starting in the German colonial period of the early twentieth century, people have been relocated from these outlying islands to Echang on Arakabesan Island in Belau. Today there are several hundred of these people. Many of those born in the new location only speak Palauan. A study by S. Oda (1975) estimated that there were 50 speakers of Pulo Annian. The language of Meril continued to decline and has now become extinct.
From the early part of the twentieth century until the end of World War II, Micronesia was under Japanese rule, administered under the South Seas Mandate. Japanese was used as a common language, and its influence is still evident today. The linguistic data on Micronesia presented by Grimes (1996) are distorted by the fact that, while the number of English speakers is shown, no mention is made of Japanese. A study carried out in 1970 (Wurm, Mühlhäusler, and Tryon 1996) found that people aged 35 and over could speak basic Japanese; this group corresponds to people aged 63 and over in 1998. An estimate based on Belau government statistics (1990) suggests that more than 1,000 of these people are still alive. In the State of Yap in the Federated States of Micronesia, where the percentage of females attending school is said to have been low, we can assume that the number of Japanese speakers has fallen below 500.
It has been suggested that if Japan had continued to rule Micronesia, Japanese would certainly have become the sole language of the region and the indigenous languages would have disappeared (Wurm, Mühlhäusler, and Tryon 1996). This seems an overly harsh appraisal of Japan's language policy: in fact, apart from the schools, no significant steps were taken to promote the use of Japanese. Micronesia previously had no common language for communication between different islands, and even today old people from different islands use Japanese as a common language (Sakiyama 1995; Toki 1998). However, the role of this Japanese pidgin appears to have ended within a single generation, and in this sense it too is an endangered language. Pidgin Japanese also continues to be used as a lingua franca by Taiwanese in their fifties and older (Wurm, Mühlhäusler, and Tryon 1996); the number of speakers was estimated at 10,000 in 1993 (Grimes 1996).
3.2 Yap, Micronesia
Ngulu Atoll is situated between the Yap Islands and the Belau Islands. The Nguluwan language is a mixture of Yapese and Ulithian, the latter belonging to the Chuukic family; it has inherited the Ulithian phonetic system together with part of Yapese grammar (Sakiyama 1982). Nguluwan appears to have evolved through bilingualism in Yapese and Ulithian, and to describe it as a dialect of Ulithian (Grimes 1996) is inappropriate. In 1980 there were 28 speakers. Even with the inclusion of people who had migrated to Guror on Yap Island, where the parent village is located, the number of speakers was fewer than 50. Speakers are being rapidly assimilated into Yapese language and culture.
3.3 Maluku, Indonesia
The book Atlas Bahasa Tanah Maluku (Taber et al. 1996) covers 117 ethnic languages (Austronesian and Papuan), giving the number of speakers of each language, areas of habitation and migration, access routes, simple cultural information, and basic numerals and expressions. The work is especially valuable because it corrects inaccuracies and errors in the 1977 Classification and Index of the World's Languages by C. F. Voegelin and F. M. Voegelin. It also distinguishes languages from dialects according to their a priori mutual intelligibility. Fifteen languages are listed as having 1,000 or fewer speakers. They include the Nakaela language of Seram, with only 5 speakers; the Amahai and Paulohi languages, also of Seram, with 50 speakers each; and the South Nuaulu and Yalahatan languages of Seram, with about 1,000 speakers each. The data, however, are not complete. For example, the Bajau language is not included, presumably because of the difficulty of reaching the isolated islands where the Bajau people live. The author researched the Yalahatan language in 1997 and 1998, and the Bajau language (2,000 speakers) on Sangkuwang Island in 1997.
3.4 Irian Jaya and Papua New Guinea
Detailed information about the names, numbers of speakers, and research data for the more than 800 languages spoken in New Guinea and its coastal regions can be found in the works of Barr and Barr (1978), Voorhoeve (1975), and Wurm (1982). However, with few exceptions, not only the minority languages but even the majority languages have yet to be surveyed and researched adequately. For many languages, even vocabulary collection has yet to be undertaken, and dictionaries or grammars appear to have been published for less than one-tenth of the region's languages. The Gospel, however, has been published in several dozen languages using orthographies established by SIL. Papuan languages range from those with substantial speaker populations, including Enga, Chimbu (Kuman), and Dani, each spoken by well over 100,000 people, to endangered languages such as Abaga with 5 speakers (150 according to Wurm), Makolkol with 7 (unknown according to Wurm), and Sene with under 10. For very many languages the number of speakers is unknown, and more up-to-date information is needed. Also, despite having substantially more than 1,000 speakers (Wurm 1982; Grimes 1996), Murik is in danger of extinction due to the creolization of Tok Pisin (Foley 1986). Moreover, it is questionable whether the present lists include all of the region's languages.
Information about Irian Jaya is even sparser. A study of local languages carried out by the author in 1984-85 revealed that Kuot (New Ireland), Taulil (New Britain), and Sko (Irian Jaya) each had several hundred speakers, and that in the case of Taulil in particular, a growing number of young people could understand what their elders were saying but could no longer speak the language themselves. There has been a rapid shift to Kuanua, an indigenous language used in trade with neighboring Rabaul, which is replacing Taulil.
3.5 Solomon Islands, Melanesia
The total population of the Solomon Islands is 390,000 (1996). There are 63 indigenous Papuan, Melanesian, and Polynesian languages, of which only 37 are spoken by over 1,000 people (Grimes 1996). The Papuan Kazukuru languages (Guliguli, Doriri) of New Georgia, known to be endangered as early as 1931, have already become extinct, leaving behind only scant linguistic information. The Melanesian Tanema and Vano languages of the Santa Cruz Islands and the Laghu language of the Santa Isabel Islands were extinct by 1990. This does not mean that the groups speaking them died out, but rather that the languages succumbed to a shift to Roviana, a trade language used in neighboring regions, or were replaced by Solomon Pijin (Sakiyama 1996).
3.6 Vanuatu, Melanesia
The situation in Vanuatu is very similar to that in the Solomon Islands. The official view, written in Bislama, is as follows:
I gat sam ples long 110 lanwis evriwan so i gat bigfala lanwis difrens long Vanuatu. Pipol blong wan velej ol i toktok long olgeta bakegen evridei nomo long lanwis be i no Bislama, Inglis o Franis. (Vanuatu currently has 110 indigenous languages, which are all very different linguistically. On an everyday basis people in villages speak only their local languages, not Bislama, English, or French). (Vanuatu, 1980, Institute of Pacific Studies)
Among the Melanesian and Polynesian indigenous languages spoken by 170,000 people, or 93% of the total population (1996), there are many small minority tongues. These include Aore, which has only a single speaker (extinct according to Wurm and Hattori [1981-83]); Maragus and Ura, with 10 speakers each; Nasarian and Sowa, with 20 each; and Dixon Reef, Lorediakarkar, Mafea, and Tambotalo, with 50 each. If languages with around 100 speakers are included, this category accounts for about one-half of the total number of languages (Grimes 1996). The spread of Bislama has had the effect of putting these languages in jeopardy.
3.7 New Caledonia, Melanesia
New Caledonia has a total population of 145,000 people, of whom 62,000 are indigenous. As of 1981 there were 28 languages, all Melanesian except for the one Polynesian language, Uvean. The only languages with over 2,000 speakers are Cemuhi, Paicî, Ajië, and Xârâcùù, along with Dehu and Nengone, which are spoken on the Loyalty Islands.
Dumbea (Paita), spoken by several hundred people, has been described by T. Shintani and Y. Paita (1983), and M. Osumi (1995) has described Tinrin, which has an estimated 400 speakers; speakers of Tinrin are bilingual in Xârâcùù or Ajië. Nerë has 20 speakers and Arhö 10, while Waamwang, which had 3 speakers in 1946, is now reported to be extinct (Grimes 1996). Descendants of the Javanese who began to migrate to New Caledonia in the early part of the twentieth century now number several thousand. The Javanese spoken by these people, which has developed in isolation from the Javanese homeland, has attracted attention as a new pidgin language.
When Europeans first arrived in Australia in 1788, it is estimated that there were 700 different tribes in a population of 500,000-1,000,000 (Comrie, Matthews, and Polinsky 1996). By the 1830s Tasmanian had become extinct, and today the number of Aboriginal languages has fallen to less than one-half of what it once was. However, T. Tsunoda left detailed records of the Warrungu language, whose last speaker died in 1981, and of the Djaru language, which has only 200 speakers (Tsunoda 1974, 1981). Yawuru, which belongs to the Nyulnyulan family, reportedly has fewer than 20 speakers, all in their sixties or older. The language is described by K. Hosokawa (1992).
The Pacific has been heavily crisscrossed by human migration from ancient to modern times. All Pacific countries except the Kingdom of Tonga were colonized. This historical background is reflected in the existence of multilevel diglossia in all regions of the Pacific.
Depending on the generation, the top level of language in Micronesia is either English (the official language) or pidgin Japanese (used as a lingua franca among islands). The next level is made up of the languages of major islands that exist as political units, such as Palauan, Yapese and Ponapean. On the lowest level are the various ethnic languages spoken mainly on solitary islands.
In the Maluku Islands of Indonesia, local Malay varieties such as Ambonese Malay, North Maluku Malay, and Bacanese Malay form a layer beneath the official language, Indonesian. Under them are the dominant local languages, such as Hitu, which is spoken by 15,000 people on Ambon Island, and Ternate and Tidore, which are spoken in the Halmahera region. These are important as urban languages. On the lowest level are the various vernaculars.
In Papua New Guinea, standard English forms the top level, followed by Papua New Guinean English. Tok Pisin and Hiri Motu are used as common languages among the various ethnic groups. Beneath these layers are the regional or occupational common languages. For example, Hiri Motu is used as the law enforcement lingua franca in coastal areas around the Gulf of Papua, Yabem as a missionary language along the coast of the Huon Gulf, and Malay as a trade language in areas along the border with Indonesia. On the next level are the ethnic and tribal languages used on a day-to-day basis.
An example of a similar pattern in Polynesia can be found in Hawaii, where English and Hawaiian English rank above Da Kine Talk or Pidgin To Da Max, which are mixtures of English and Oceanic languages and are used as common languages among the various Asian migrants who have settled in Hawaii. Beneath these are ethnic languages, including Hawaiian and the various immigrant languages, such as a common Japanese based on the Hiroshima dialect, as well as Cantonese, Korean, and Tagalog.
All of the threatened languages are in danger because of their status as indigenous minority languages positioned at the lowest level of the linguistic hierarchy. Reports to date have included little discussion of the multilevel classification of linguistic strata from a formal linguistic perspective. It will be necessary in the future to examine these phenomena from the perspectives of sociolinguistics or linguistic anthropology.
Barr, Donald F., and Sharon G. Barr. 1978. Index of Irian Jaya Languages. Prepublication draft. Abepura, Indonesia: Cenderawasih University and Summer Institute of Linguistics.
Comrie, Bernard, Stephen Matthews, and Maria Polinsky. 1996. The Atlas of Languages. New York: Checkmark Books.
Foley, William A. 1986. The Papuan Languages of New Guinea. Cambridge, New York: Cambridge University Press.
Grimes, Barbara F., ed. 1996. Ethnologue: Languages of the World. Dallas: International Academic Bookstore.
Hosokawa, Komei. 1992. The Yawuru language of West Kimberley: A meaning-based description. Ph.D. diss., Australian National University.
Oda, Sachiko. 1977. The Syntax of Pulo Annian. Ph.D. diss., University of Hawaii.
Osumi, Midori. 1995. Tinrin grammar. Oceanic Linguistics Special Publication, No. 25. Honolulu: University of Hawaii Press.
Sakiyama, Osamu. 1982. The characteristics of Nguluwan from the viewpoint of language contact. In Islanders and Their Outside World, Aoyagi, Machiko, ed. Tokyo: Rikkyo University.
---. 1994. Hirimotu go no ruikei: jijun to gochishi (Affix order and postpositions in Hiri Motu: A cross-linguistic survey). Bulletin of the National Museum of Ethnology, vol. 19, no. 1: 1-17.
---. 1995. Mikuroneshia Berau no pijin ka nihongo (Pidginized Japanese in Belau, Micronesia). Shiso no kagaku, vol. 95, no. 3: 44-52.
---. 1996. Fukugouteki na gengo jokyo (Multilingual situation of the Solomon Islands). In Soromon shoto no seikatsu shi: bunka, rekishi, shakai (Life History in the Solomons: Culture, history and society), Akimichi, Tomoya, et al., eds. Tokyo: Akashi shoten.
Shintani, Takahiko and Yvonne Païta. 1990. Grammaire de la Langue de Païta. Nouméa, New Caledonia: Société d'études historiques de la Nouvelle-Calédonie.
Taber, Mark, et al. 1996. Atlas bahasa tanah Maluku (Maluku Languages Atlas). Ambon, Indonesia: Summer Institute of Linguistics and Pusat Pengkajian dan Pengembangan Maluku, Pattimura University.
Toki, Satoshi, ed. 1998. The remnants of Japanese in Micronesia. Memoirs of the Faculty of Letters, Osaka University, Vol. 38.
Tsunoda, Tasaku. 1974. A grammar of the Warrungu language, North Queensland. Master's thesis, Monash University.
---. 1981. The Djaru Language of Kimberley, Western Australia. Pacific Linguistics, ser. B, No. 78. Canberra: Australian National University.
Voorhoeve, C. L. 1975. Languages of Irian Jaya: Checklist, Preliminary classification, language maps, wordlists. Canberra: Australian National University.
Wurm, Stephen A. 1982. Papuan Languages of Oceania. Tübingen: Gunter Narr Verlag.
---, and Shiro Hattori, eds. 1981-83. Language Atlas of the Pacific Area. Pacific Linguistics, ser. C, No. 66-67. Canberra: Australian National University.
---, Peter Mühlhäusler, and Darrel T. Tryon. 1996. Atlas of languages of intercultural communication in the Pacific, Asia, and the Americas. 3 vols. Trends in Linguistics. Documentation 13. New York: Mouton de Gruyter.
*Translation of the author’s essay “Taiheiyo chiiki no kiki gengo”, Gekkan Gengo, Taishukan Publishing Co., 28(2), 102-11, 1999, with the permission of the publisher.
Hold the salt: UCLA engineers develop revolutionary new desalination membrane
Process uses atmospheric pressure plasma to create filtering 'brush layer'
Desalination can become more economical and used as a viable alternate water resource.
By Wileen Wong Kromhout
Originally published in UCLA Newsroom
Researchers from the UCLA Henry Samueli School of Engineering and Applied Science have unveiled a new class of reverse-osmosis membranes for desalination that resist the clogging which typically occurs when seawater, brackish water and waste water are purified.
The highly permeable, surface-structured membrane can easily be incorporated into today's commercial production system, the researchers say, and could help to significantly reduce desalination operating costs. Their findings appear in the current issue of the Journal of Materials Chemistry.
Reverse-osmosis (RO) desalination uses high pressure to force polluted water through the pores of a membrane. While water molecules pass through the pores, mineral salt ions, bacteria and other impurities cannot. Over time, these particles build up on the membrane's surface, leading to clogging and membrane damage. This scaling and fouling places higher energy demands on the pumping system and necessitates costly cleanup and membrane replacement.
The new UCLA membrane's novel surface topography and chemistry allow it to avoid such drawbacks.
"Besides possessing high water permeability, the new membrane also shows high rejection characteristics and long-term stability," said Nancy H. Lin, a UCLA Engineering senior researcher and the study's lead author. "Structuring the membrane surface does not require a long reaction time, high reaction temperature or the use of a vacuum chamber. The anti-scaling property, which can increase membrane life and decrease operational costs, is superior to existing commercial membranes."
The new membrane was synthesized through a three-step process. First, researchers synthesized a polyamide thin-film composite membrane using conventional interfacial polymerization. Next, they activated the polyamide surface with atmospheric pressure plasma to create active sites on the surface. Finally, these active sites were used to initiate a graft polymerization reaction with a monomer solution to create a polymer "brush layer" on the polyamide surface. This graft polymerization is carried out for a specific period of time at a specific temperature in order to control the brush layer thickness and topography.
"In the early years, surface plasma treatment could only be accomplished in a vacuum chamber," said Yoram Cohen, UCLA professor of chemical and biomolecular engineering and a corresponding author of the study. "It wasn't practical for large-scale commercialization because thousands of meters of membranes could not be synthesized in a vacuum chamber. It's too costly. But now, with the advent of atmospheric pressure plasma, we don't even need to initiate the reaction chemically. It's as simple as brushing the surface with plasma, and it can be done for almost any surface."
In this new membrane, the polymer chains of the tethered brush layer are in constant motion. The chains are chemically anchored to the surface and are thus more thermally stable, relative to physically coated polymer films. Water flow also adds to the brush layer's movement, making it extremely difficult for bacteria and other colloidal matter to anchor to the surface of the membrane.
"If you've ever snorkeled, you'll know that sea kelp move back and forth with the current or water flow," Cohen said. "So imagine that you have this varied structure with continuous movement. Protein or bacteria need to be able to anchor to multiple spots on the membrane to attach themselves to the surface — a task which is extremely difficult to attain due to the constant motion of the brush layer. The polymer chains protect and screen the membrane surface underneath."
Another factor in preventing adhesion is the surface charge of the membrane. Cohen's team is able to choose the chemistry of the brush layer to impart the desired surface charge, enabling the membrane to repel molecules of an opposite charge.
The team's next step is to expand the membrane synthesis into a much larger, continuous process and to optimize the new membrane's performance for different water sources.
"We want to be able to narrow down and create a membrane selection system for different water sources that have different fouling tendencies," Lin said. "With such knowledge, one can optimize the membrane surface properties with different polymer brush layers to delay or prevent the onset of membrane fouling and scaling.
"The cost of desalination will therefore decrease when we reduce the cost of chemicals [used for membrane cleaning], as well as process operation [for membrane replacement]. Desalination can become more economical and used as a viable alternate water resource."
Cohen's team, in collaboration with the UCLA Water Technology Research (WaTeR) Center, is currently carrying out specific studies to test the performance of the new membrane's fouling properties under field conditions.
"We work directly with industry and water agencies on everything that we're doing here in water technology," Cohen said. "The reason for this is simple: If we are to accelerate the transfer of knowledge technology from the university to the real world, where those solutions are needed, we have to make sure we address the real issues. This also provides our students with a tremendous opportunity to work with industry, government and local agencies."
A paper providing a preliminary introduction to the new membrane also appeared in the Journal of Membrane Science last month.
Published: Thursday, April 08, 2010
Second of two parts.
In the previous column, we toured the present-day East Cemetery on the Old Post Road. But in tracing its creation in 1830 by a town body called the First Located School Society, we found ourselves wandering back to 17th-century England, when the king granted his Connecticut colony a hefty slice of North America.
We then jumped ahead a century to see Connecticut give almost all of it away to cancel Revolutionary War debts and settle a shooting war with Pennsylvania. Connecticut's consolation prize was an isolated fragment of its former colonial real estate holdings, more than five hundred miles west of Hartford.
You've probably heard of Case Western Reserve University in Cleveland, but you probably didn't know that it's named after the above-noted remnant of Connecticut's legacy from Charles II, which became known as the Western Reserve. Three million acres all told, it extended south to the 41st parallel from the shores of Lake Erie and west from the newly drawn western border of Pennsylvania. You know it today as northern Ohio.
Yep, Connecticut lost a bunch of land but still owned a big piece of Ohio. Impressive! But what about the First Located School Society and the East Cemetery? Bear with me.
After the Revolutionary War, a few Connecticut settlers tried to make a go of it in the Western Reserve, but severe weather, lack of supply and agricultural routes, and less-than-welcoming Indians made things very difficult. Connecticut, perhaps for lack of a better idea for the Western Reserve, decided to sell it. In 1795, a group of private investors picked it up for $1.2 million, or about 40 cents an acre. The investors sent a guy named Moses Cleaveland to survey the area, and he got to have the city, with a slight spelling change, named after him. Connecticut relinquished its legal authority over the Western Reserve a few years later, and it was absorbed into the Northwest Territory. Hey, we could have owned Cleveland!
This is where the First Located School Society comes in. But first, let me explain why there's one county and several townships in Ohio named after Fairfield.
As the 19th century approached, the Indian threat passed, and supply routes improved. More and more Connecticut families emigrated to the Western Reserve to get cheap, fertile land and live alongside like-minded New Englanders. The settlers naturally named their new towns after their old ones, so Fairfield and other Connecticut towns are well represented in Ohio. The westernmost 500,000 acres of the Western Reserve, known as the Firelands, had long ago been set aside for residents of eight Connecticut towns, including Fairfield, who suffered the loss of their homes to the British in the Revolutionary War. Affected families were to receive individual land grants, but bureaucracy, Indians, and the War of 1812 killed the well-intentioned project.
Finally, we can finish the story of the East Cemetery. What did Connecticut do with that $1.2 million windfall from the 1795 sale of the Western Reserve? The legislature nobly created a Perpetual School Fund, to be administered by the towns through civil authorities called school societies, now extinct as public education evolved.
The inaugural meeting of the First Located School Society of Fairfield took place on Oct. 27, 1796, "In order to form and Organize themselves in a School Society according to one Statute Law of this State." The Society immediately laid out six school districts. Here is how the first district was described, exactly as recorded in the original record book:
"Voted and Agreed that the first District for a School in this Society -- to begin at Black Rock a little Easterly of John Wheeler's house, and to run Northwardly of David Wheeler's house -- and from thence to run down to the River Eastward to Grovers hill point -- and from thence running up the Harbour as far as to the place first Set out, -- all the Inhabitants contained within said Limits to be one District for a School in said Society, and to be called by the name of the Black Rock District."
Five more districts were similarly defined, with boundaries that worked well for 1796 Fairfielders, but might be a little shaky these days.
In 1830, in concert with other Connecticut school societies, the First Located School Society took on the task of establishing a burial ground, for the "better accommodation of the Inhabitants ..." It seems that this was an obligation legislated by the state, but why cemeteries would fall to the school societies is a mystery.
A committee identified a plot of land in the center of town owned by Mrs. Sarah Taylor, bought it for $600, dubbed it the East Cemetery, and launched itself into the cemetery business. Business must have been good. Within the year, the School Society opened the West Cemetery on the Post Road, complete with sections for "Strangers" and "Colored People."
So, it all comes together: King Charles II made Connecticut a very big North American landholder. But then, Revolutionary War debt and the Yankee-Pennamite Wars took most of it away but begat the Western Reserve, which begat the Perpetual School Fund, which begat the First Located School Society of Fairfield, which begat the East Cemetery on the Old Post Road.
Ron Blumenfeld is a Fairfield writer and retired pediatrician. His "Moving Forward, Looking Back" appears every other Wednesday. He can be reached at email@example.com.
If farmers are to increase food production and food security, they need better access to agricultural support systems, such as credit, technology, extension services and agricultural education, as well as to the rural organizations that often channel other services. Both men and women smallholders and poor farmers have frequently been cut off from these essential agricultural support systems, which seldom take into account the different responsibilities and needs of men and women farmers. In spite of their enormous potential and their crucial roles in agricultural production, women in particular have insufficient access to production inputs and support services.
This trend underlines the need to implement measures aimed at enhancing the access of small farmers, especially women, to production inputs - particularly since the working environment of development organizations has changed as a result of market liberalization and a reduced role for the state worldwide. National agricultural extension systems are no exception to this rule, and must respond by making internal and external adjustments. Great attention is required so that the adjustments are not detrimental to women and men small farmers. For example, FAO's field experiences over the last decade have pointed to the need for extension programmes that are more strategically planned, needs-based, participatory and problem-solving.
Women's access to and use of agricultural support systems is also severely limited by the heavy burden on time and energy that results from their triple responsibilities - productive activities (such as work in the fields), reproductive activities (such as child rearing, cooking and household chores) and community management.
In order to improve production, farmers need access to financial capital. Buying seeds, fertilizer and other agricultural inputs often requires short-term loans, which are repaid when the crops are harvested. Installing major improvements, such as irrigation pumps, or acquiring new technology that increases future yields is impossible without access to long-term credit.
Smallholders, particularly women, often face difficulties in obtaining credit. This is a direct consequence of their lacking access to land, participation in development projects and extension programmes and membership in rural organizations, all of which are important channels for obtaining loans and credit information. In several countries of sub-Saharan Africa, where women and men farmers are roughly equal in number, it is estimated that women farmers receive only 10 percent of the loans granted to smallholders and less than 1 percent of the total credit advanced to the agriculture sector.
Credit delivery can be improved by setting up microfinance institutions in rural areas and reorienting the banking system to cater to the needs of small farmers, especially women. The Grameen Bank in Bangladesh, which first pioneered the microcredit approach in 1976, currently reaches more than 2 million people. Since it was founded, the bank has lent more than US$2.1 billion, most of it in the form of loans of a few hundred dollars for small agriculture, distribution, crafts and trading enterprises. Numerous studies have shown that women are generally more reliable and punctual in repaying their loans than men are.
A programme providing credit and nutrition for women significantly improved both the participating women's incomes and their children's nutritional status. This is the conclusion of a study that examined the impact of a credit and education programme run by the NGO Freedom from Hunger.
Men and women smallholders also suffer financially from limited access to the marketing services that would allow them to turn surplus produce into cash income. Women face particular difficulties because marketing infrastructure and organizations are rarely geared towards either small-scale producers or the crops that women grow. Although women all over the world are active as traders, hawkers and street and market vendors, little has been done to improve transport and market facilities to support this vital economic sector. Even where rural women play an important role in wholesale trade, their full membership in marketing service institutions is still difficult because they may be illiterate or lack independent legal status.
Planning for action
The FAO Gender and Development Plan of Action includes commitments by different Divisions of FAO to increasing the equality of access to a wide range of agricultural support systems, including markets, credit, technology, extension and training.
Rural finance and marketing services
Rural groups and organizations
Agricultural research and technology
Agricultural education and extension
Microcredit and education boost incomes and nutrition
A study examined the impact of a microcredit and educational programme implemented by the NGO Freedom from Hunger. In Ghanaian villages, women who participated in the programme used microcredit loans to launch income-generating activities such as preparing and selling palm oil, fish and cooked foods. They increased their non-farm income by $36 per month, twice as much as women who had not taken part in the programme. Through the programme's educational component, participating women also gained valuable knowledge about their children's nutrition and health needs.
Membership of cooperatives, farmers' organizations, trade unions and other organizations represents one of the best ways for rural men and women to gain access to resources, opportunities and decision-making. Cooperatives and farmers' associations generally make it possible for farmers to share the costs and rewards of services that they could not afford on their own. They can be an invaluable channel for obtaining technology, information, training and credit. They can also give smallholders a much louder voice in local and national decision-making. By instituting common food processing, storage and marketing activities, organizations can increase the exchange of goods and services and the access to national and regional markets.
Participation in such organizations can be especially important to smallholders and poor farmers, both men and women. But women are frequently deterred from joining because membership is often restricted to recognized landowners or heads of household. Even when women are responsible for the day-to-day management of both households and holdings, their husbands or other male relatives are often considered the official heads.
In many regions, women farmers' membership of these organizations is restricted by custom. Where they are able to belong to rural organizations, women often do not share equally in either the decision-making or the benefits, and are excluded from leadership positions. Furthermore, their many household chores may make it impossible for them to attend meetings and devote the time that is necessary for full participation. Investment in labour-saving technologies to relieve the burden of women's unpaid productive and reproductive tasks is needed in order to give them more free time.
In recent years there has been some success in reducing the obstacles to women's participation in rural organizations. At the same time, the use and establishment of traditional and new women's groups to promote women's participation in rural development has grown rapidly. However, experience has shown that women's empowerment often requires a step-by-step process to remove the barriers to their membership in organizations that are traditionally dominated by men. Furthermore, it is necessary to give them support, individually or collectively, to enable them to gain the knowledge and self-confidence needed to make choices and take greater control of their lives.
In all regions of the developing world, women typically work far longer hours than men do. Studies in Asia and Africa show that women work as much as 13 extra hours a week. As a result, they may have little available time to seek out support services, and very different priorities for the kind of support required.
Overall, the agricultural research agenda has neglected the needs of smallholders, especially women farmers, and failed to take advantage of their invaluable knowledge about traditional farming methods, indigenous plant and animal varieties and coping techniques for local conditions. Such knowledge could hold the key to developing sustainable approaches that combine modern science with the fruits of centuries of experimentation and adaptation by men and women farmers.
Most research has focused on increasing the yields of commercial crops and staple grains on high-input farms, where high-yielding varieties can be cultivated under optimal conditions. Smallholders can rarely afford these technology «packages», which are also generally ill suited to the climatic and soil conditions in areas where most of the rural poor live. The crops that farmers in such areas rely on and the conditions that they face have not featured prominently in agricultural research. Sorghum and millet, for example, have received very little research attention and funding, despite their high nutritional value and ability to tolerate difficult conditions. Similarly, relatively little research has been devoted to the secondary crops grown by women, which often provide most of their family's nutritional needs.
In addition, agricultural tools and implements are also rarely designed to fit women's physical capabilities or work, so they do not meet women's needs. The impact of new technologies is seldom evaluated from a gender perspective. The introduction of harvesting, threshing and milling machinery, for example, has very little direct effect on yields but eliminates thousands of hours of paid labour. According to one study, if all the farmers in Punjab, India, who cultivate more than 4 ha were to use combine harvesters, they would lose more than 40 million paid working days, without any increase in farm production or cropping intensity. Most of the lost labour and income would be women's.
«Schools where men and women farmers learn how to increase yields and reduce their reliance on pesticides by relying on natural predators.»
Developing technology to meet women's specific needs can yield major gains in food production and food security. In Ghana, for example, technology was introduced to improve the irrigation of women's off-season crops. Larger and more reliable harvests increased both food and economic security during the periods between major crops. In El Salvador, where women play an extremely important role in agriculture, it is estimated that as many as 60 percent of households are headed by women. One of the major goals of this country's agriculture sector reform was to improve research and extension activities by focusing on the role of women smallholders. To address women farmers' needs, the project promoted women's participation to help guide the research programme at National Agricultural Technology Centre farms.
Farmer field schools in Cambodia
In fields across Cambodia, men and women farmers gather every week to go to school. They are among the 30 000 Cambodian farmers - more than one-third of them women - who have taken part in FAO-supported farmer field schools (FFS). In the schools, farmers observe how crops develop and monitor pests throughout the growing season. They also learn how natural predators, such as wasps and spiders, can help control pests and how the heavy use of pesticides often kills them off, leaving crops even more vulnerable. These schools emphasize the active participation and empowerment of both men and women farmers. In at least six provinces in Cambodia, farmers have formed integrated pest management (IPM) groups after completing their training, and are carrying out further field studies and experiments. More than 300 farmers have completed additional training and are now organizing farmer field schools in their own areas. «I always knew pesticides were bad for my health,» one participant said, «but now I know for sure.» After completing the school, farmers rely more on cultural practices and natural enemies to control pests, and experience fewer cases of poisoning.
Agricultural extension programmes provide farmers with a lifeline of information about new technologies, plant varieties and market opportunities. In almost all countries, however, the agricultural extension system fails to reach women farmers effectively. Among other reasons, this is because they are excluded from rural organizations. An FAO survey showed that, worldwide, female farmers receive only 5 percent of all agricultural extension services and only 15 percent of agricultural extension agents are women. In Egypt, where women make up more than half of the agricultural labour force, only 1 percent of extension officers are female.
«An FAO extension project in Honduras that focused on woman-to-woman training boosted both subsistence production and household food security.»
This reflects the lack of information and understanding about the important role played by women. Extension services usually focus on commercial rather than subsistence crops, which are grown mainly by women and which are often the key to household food security. Available data rarely reflect women's responsibility for much of the day-to-day work and decision-making on the family farm. Nor do they recognize the many other important food production and food processing activities that women commonly perform, such as home gardening, tending livestock, gathering fuel or carrying water.
Extension programmes can be tailored to address women's priority needs only when men and women farmers are listened to at the village level and when such methods as participatory rural appraisal are employed. In recent years, a number of countries have launched determined efforts to make their extension services more responsive to women's needs. In the Gambia, for example, the proportion of female agricultural extension workers has increased from 5 percent in 1989 to more than 60 percent today. Growth in the number of female extension workers has been matched by increased attention to women's involvement and priorities. A special effort has been made to encourage women's participation in small ruminant and poultry extension services.
In Nicaragua, efforts to ensure that extension services match client needs - including giving more attention to the diverse needs of men and women farmers - led to increased use of those services, by 600 percent for women and 400 percent for men.
Extension programmes that fail to take women into account also fail to address the improved technologies and methods that might yield major gains in productivity and food security. Furthermore, they often schedule training at times and in locations that make it impossible for women to participate, compounding existing socio-cultural barriers.
Recommended new approaches include the Strategic Extension Campaign (SEC), which was developed by FAO and introduced in Africa, the Near East, Asia and Latin America. This methodology emphasizes how important it is for field extension workers and small farmers to participate in the strategic planning, systematic management and field implementation of agricultural extension and training programmes. Its extension strategies and messages are specifically developed and tailored to the results of a participatory problem identification and needs assessment.
Training Programme for Women's Incorporation in Rural Development
Several hundred peasant women in Honduras were trained to serve as «food production liaisons». After receiving their training, the liaisons worked with grassroots women's groups. They focused on impoverished rural areas where chronic malnutrition is widespread and 70 percent of all breastfeeding mothers suffer from vitamin A deficiency. Women involved with the project increased the subsistence production of nutritious foods. Credits to develop poultry production proved an effective way of increasing motivation, nutritional levels and incomes. Some of the grassroots women's groups involved with the project sought credit through extension agencies or from the Rotating Fund for Peasant Women. The credit was used to initiate other social and productive projects, including purchasing a motorized maize mill and planting soybeans for milk.
US 4884575 A
A cardiac pacemaker pulse generator is adapted to generate electrical stimuli at a first pacing rate, and to selectively increase the rate to a second, higher pacing rate. A timer triggers the rate increase to establish the higher rate as an exercise rate following the passage of a preset period of time after the timer is enabled. An external magnet controlled by the patient activates a reed switch to enable the timer to commence timing. The pulse generator is further adapted to respond to a second pass of the magnet over the reed switch after enabling of the timer to thereupon disable the timer before the preset period of time has expired. If the second pass of the magnet occurs after the exercise rate has begun, the element for increasing the rate is disabled to return the pulse generator to the lower pacing rate. The changes in pacing rate are made in steps.
1. In combination with an implantable cardiac pacemaker for delivering electrical stimuli to the heart of a patient to pace the heart rate,
said pacemaker comprising:
pulse generator means for selectively producing said electrical stimuli at a fixed resting rate and at a higher exercise rate,
lead means associated with said pulse generator for delivering said stimuli to a selected chamber of the heart, and
timer means for stepping-up said pulse generator means from said resting rate to said exercise rate after an adjustable preset delay following activation of said timer means, said preset delay being of a duration perceptible by the patient; and
external control means for patient initiation of a first command to said pacemaker to activate said timer means.
2. In combination with an implantable cardiac pacemaker for delivering electrical stimuli to the heart of a patient to pace the heart rate, said pacemaker comprising:
pulse generator means for selectively producing said electrical stimuli at a fixed resting rate and a higher exercise rate,
lead means associated with said pulse generator for delivering said stimuli to a selected chamber of the heart, and
delay means for stepping-up said pulse generator means from said resting rate to said exercise rate after an adjustable preset delay following activation of said delay means,
means associated with said pulse generator means and said delay means for maintaining said exercise rate for a predetermined time interval following said preset delay and then returning said pulse generator means to said resting rate; and
an external control means for patient-initiation of a command to said pacemaker to activate said delay means.
3. The combination according to claim 2, wherein said delay means is responsive to a second command initiated by the patient from said external control means at any time after receipt of the first said command and before the expiration of said predetermined time interval, to cancel the activation of said delay means.
4. The combination according to claim 3, wherein the stepping up and returning of said rates at which said stimuli are produced by said pulse generator means is effected gradually.
5. An implantable pulse generator unit for a cardiac pacemaker for use with an external magnet to permit patient-initiated adjustment of pacing rate from a resting rate to an exercise rate and vice versa, said unit comprising:
generator means for generating electrical stimuli at said resting rate,
control means associated with said generator means responsive, when enabled, for controllably increasing the rate at which electrical stimuli are generated from said generator means from said resting rate to said exercise rate, and
timer means responsive to positioning of said external magnet in proximity to said pulse generator unit for enabling said control means an adjustable preset delay period after said positioning, said preset delay period being of a duration perceptible to the patient.
6. An implantable pulse generator unit for a cardiac pacemaker for use with an external magnet to permit patient-initiated adjustment of pacing rate from a resting rate to an exercise rate and vice versa, said unit comprising:
generator means for generating electrical stimuli at said resting rate,
control means associated with said generator means responsive, when enabled, for controllably increasing the rate at which electrical stimuli are generated by said generator means from said resting rate to said exercise rate,
said control means including timing means for maintaining the rate at which electrical stimuli are generated by said generator means at said exercise rate for a predetermined time interval; and
delay means responsive to positioning of said external magnet in proximity to said pulse generator unit for enabling said control means an adjustable preset delay period thereafter.
7. The pulse generator unit of claim 6, wherein said control means automatically returns said generator means to said resting rate following the expiration of said predetermined time interval.
8. The pulse generator unit of claim 7, wherein said control means gradually increases the rate at which electrical stimuli are generated by said generator means from said resting rate to said exercise rate, and gradually returns said generator means to said resting rate following the expiration of said predetermined time interval.
9. The pulse generator unit of claim 6, wherein said delay means is responsive to a repositioning of said external magnet in proximity to said pulse generator unit after said control means has been enabled, for disabling said control means.
10. A cardiac pacemaker pulse generator for generating electrical stimuli to be delivered to the heart of a patient to pace the heart rate, said generator comprising:
means for generating said electrical stimuli at a first pacing rate,
means electrically connected to said stimuli generating means for selectively increasing the rate at which said stimuli are generated to a second higher pacing rate,
timing means for triggering said rate increasing means to increase said first pacing rate to a second higher pacing rate upon passage of an adjustable preselected period of time after said timing means is enabled, said preselected period of time being of a duration perceptible by the patient,
means responsive to a command signal from a patient-activated external device for enabling said timing means to commence timing.
11. The pulse generator according to claim 10, wherein
said enabling means is further responsive to a second command signal after said timing means is enabled, to disable said timing means prior to passage of said preselected period of time.
12. The pulse generator according to claim 10, further including
means responsive to a second command signal while said stimuli are being generated at said second higher pacing rate, for disabling said rate increasing means and thereby returning the rate at which said stimuli are generated by said stimuli generating means to said first pacing rate.
13. The pulse generator according to claim 12, wherein
said rate increasing means is responsive, when disabled, to decrementally reduce the rate at which said stimuli are generated by said stimuli generating means.
14. The pulse generator according to claim 10, wherein
said rate increasing means is responsive to said timing means reaching preset time intervals toward passage of said preselected period of time, for incrementally increasing the rate at which said stimuli are generated by said stimuli generating means in steps as each preset time interval is reached.
15. The method of pacing a pacemaker patient's heart rate using a magnet-controlled implantable pulse generator to adjust the stimulation rate from a resting rate to an exercise rate and vice versa, comprising the steps of
maintaining the stimulation rate of said pulse generator at said resting rate,
initiating a command signal to reset the stimulation rate of said pulse generator to said exercise rate after an adjustable programmed delay period following said command signal, and
returning the stimulation rate of said pulse generator to said resting rate in increments following a predetermined interval of time at said exercise rate.
The present invention relates generally to medical devices, and more particularly to implantable artificial cardiac pacemakers adapted to provide patient-variable stimulation rates appropriate to a condition of exercise by the patient.
The resting heart rate of sinus rhythm, that is, the rate determined by the spontaneously rhythmic electrophysiologic property of the heart's natural pacemaker, the sinus node, is typically in the range from about 65 to about 85 beats per minute (bpm) for adults. Disruption of the natural cardiac pacing and propagation system may occur with advanced age and/or cardiac disease, and is often treated by implanting an artificial cardiac pacemaker in the patient to restore and maintain the resting heart rate to the proper range.
In its simplest form, an implantable pacemaker for treatment of bradycardia (abnormally low resting rate, typically below 60 beats per minute (bpm)) includes an electrical pulse generator powered by a self-contained battery pack, and a catheter lead including at the distal end a stimulating cathodic electrode electrically coupled to the pulse generator. The lead is implanted intravenously to position the cathodic electrode in stimulating relation to excitable myocardial tissue in the selected chamber on the right side of the patient's heart. The pulse generator unit is surgically implanted in a subcutaneous pouch in the patient's chest, and has an integral electrical connector to receive a mating connector at the proximal end of the lead. In operation of the pacemaker, the electrical pulses are delivered (typically, on demand) via the lead/electrode system, including an anodic electrode such as a ring behind the tip for bipolar stimulation or a portion of the pulse generator case for unipolar stimulation, and the body tissue and fluid, to stimulate the excitable myocardial tissue.
Pacemakers may operate in different response modes, such as asynchronous (fixed rate), inhibited (stimulus generated in absence of specified cardiac activity), or triggered (stimulus delivered in presence of specified cardiac activity). Further, present-day pacers range from the simple fixed rate device that offers pacing with no sensing (of cardiac activity) function, to fully automatic dual chamber pacing and sensing functions (so-called DDD pacemakers) which may provide a degree of physiologic pacing by at least a slight adjustment of heart rate according to varying metabolic conditions in a manner akin to the natural pacing of the heart. Thus, some DDD pacemaker patients experience an increased pacing rate with physical exertion, with concomitantly higher cardiac output, and thereby, an ability to handle low levels of exercise. Unfortunately, a significant percentage of the pacemaker patient population, who suffer from atrial flutter, atrial fibrillation or sick-sinus syndrome, for example, cannot obtain the benefit of exercise-responsive pacing with conventional atrial-triggered pacemakers. Moreover, the DDD-type pacemakers are complex and costly to manufacture, which is reflected in a higher price to the patient.
It is a principal object of the present invention to provide a relatively simple and inexpensive pacemaker which provides pacing at a desired resting rate, and which is subject to limited control by the patient to provide a desired exercise rate for a preset period of time following which the pacemaker returns to the resting rate.
Various types of rate responsive pacemakers have been proposed which would sense a physiological parameter that varies as a consequence of physical stress, such as respiration, blood oxygen saturation or blood temperature, or merely detect physical movement, and correspondingly adjust the pacing rate. Many of these rate responsive pacemakers may also be relatively complex, and therefore expensive to the patient.
The present invention is directed toward a low cost pacemaker which can be adjusted at will by the patient, subject to the limited amount of control programmed into the device by the physician for that patient. According to the invention, patient control is manifested by bringing an external magnet into proximity with an implanted reed switch associated with the pacemaker. Of course, limited magnet control has been afforded to the patient in the past for some purposes, such as to enable transtelephonic monitoring of the pacemaker functions. Also, techniques are presently available which permit external adjustment of the stimulation rate of the pacemaker after implantation, as by means of a programming unit available to the physician. For obvious reasons, it is undesirable to give the patient the same latitude to control his pacemaker.
In U.S. Pat. No. 3,623,486, Berkovits disclosed a pacemaker adapted to operate at either of two stimulation rates, and switchable from one to the other by the physician using an external magnet. In this manner, the physician would be able to control the pacer mode and rate according to the needs of the particular patient. The purpose, in part, was to provide a pacemaker which had some adaptability to the patient's requirements. However, once set by the physician, the selected resting rate was maintained for that patient by the implanted pacer.
Another technique for external adjustment of pacing rate by the physician is found in the disclosures of U.S. Pat. No. 3,198,195 to Chardack, and U.S. Pat. No. 3,738,369 to Adams et al. In each, rate control is exercised by inserting a needle through a pacemaker aperture beneath the patient's skin to adjust a mechanism. In the Adams et al. disclosure, the needle is used to change the position of a magnet within the pacer to actuate a rate-controlling reed switch.
In U.S. Pat. No. 3,766,928, Goldberg et al. describe an arrangement for continuous adjustment of rate by a physician using an external magnet that cooperates with a magnet attached to the shaft of a rate potentiometer in the implanted pacemaker, to provide the initial setting of pacing rate desirable for the particular patient.
More recent proposals offer the patient limited control over the pacing rate. In U.S. Pat. No. 4,365,633, Loughman et al. disclose a pacemaker programmer which is conditioned by the physician to give the patient the capability to select any of three distinct rates: for sleep, for an awake resting state, and for exercise. The programmer generates a pulsating electromagnetic field, and allows the patient to select any of those three modes with an abrupt change in rate when the coil pod of the programmer is positioned over the implanted pacemaker. It is, of course, necessary to have the programmer at hand in order to change the stimulation rate, and the use of the device in public can be a source of extreme embarrassment to the patient.
In U.S. Pat. No. 4,545,380, Schroeppel describes a technique for manual adjustment of rate control contrasted with the activity sensing, automatic rate control disclosed by Dahl in U.S. Pat. No. 4,140,132. According to the Schroeppel patent, a piezoelectric sensor and associated circuitry are combined with the implanted pulse generator of the pacemaker to allow the patient to change from a resting rate to a higher rate by sharp taps on his chest near the site of the piezoelectric sensor. Such an arrangement requires that the sensor be sufficiently sensitive to respond to the patient's sharp taps, and yet be insensitive to the everyday occurrences the patient encounters while undergoing normal activities and which could otherwise result in false triggerings. These include presence in the vicinity of loud noise such as is generated by street traffic, being jostled in a crowd, experiencing bumps and vibrations while riding in a vehicle, and the like. Further, even when controlled in the manner described, this type of switching results in an abrupt, non-physiological change of rate.
Accordingly, it is another object of the present invention to provide a pacemaker which is capable of being controlled externally by the patient to assume exercise and non-exercise rate modes, in a manner that allows discreet and yet reliable control.
Yet another object of the invention is to provide a cardiac pacemaker whose stimulation rate is controllable by and according to a schedule selected by the patient.
Briefly, according to the present invention a cardiac pacemaker is manually controllable by the patient to preset time intervals of operation at a relatively high (exercise) rate and lower (resting) rate according to the patient's own predetermined schedule of exercise and rest. An important aspect of the invention is that the pulse generator may be implemented to undergo an adjustment of stimulation rate from a fixed resting rate of, say, 75 bpm, to a preselected exercise rate of, say, 120 bpm, following a predetermined period of time after activation by the patient using an external magnet, that is, after a predetermined delay following a patient-initiated command signal, and to remain at the higher rate for a preselected time interval. Thus, the patient may effectively "set a clock" in his pacemaker to elevate his heart rate at the time and for the duration of a scheduled exercise session, such as a game of tennis. Moreover, he may activate the pacemaker in this manner in the privacy of his own home well in advance of the exercise session.
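Purely as an illustration of this scheduling behavior, the timing can be reduced to a few lines of Python. Nothing below comes from the patent itself: the function exercise_window, its parameter names and the example clock time are all invented for the sketch, which simply adds the programmed delay and exercise interval to the moment of the magnet pass.

from datetime import datetime, timedelta

def exercise_window(magnet_pass, delay_min=15, interval_min=60):
    # delay_min models the physician-programmed delay before the exercise
    # rate begins; interval_min models the programmed length of the
    # exercise-rate interval. Both are physician-adjustable per the disclosure.
    start = magnet_pass + timedelta(minutes=delay_min)
    end = start + timedelta(minutes=interval_min)
    return start, end

# A magnet pass at 7:45 a.m. yields an exercise-rate window of 8:00-9:00 a.m.
start, end = exercise_window(datetime(2020, 6, 1, 7, 45))
print(start.strftime("%H:%M"), end.strftime("%H:%M"))   # 08:00 09:00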
According to another aspect of the invention, the pulse generator is implemented to return automatically to the resting rate at the expiration of the preselected exercise rate time interval. Hence, the patient need not carry his magnet with him to readjust the pacer to the resting rate at the completion of the scheduled exercise session. According to this aspect, after operating at the elevated stimulation rate for a time interval preselected to be suitable for the exercise session, say, one hour, the generator resets itself to return to the initial resting rate.
According to another feature of the invention, the rate is incremented and decremented in steps from one rate setting to the other to avoid abrupt changes, and therefore to provide a more physiological rate control than has heretofore been available in manually controlled pacemakers.
A further feature of the invention is that the pulse generator may be activated to disable the exercise rate command at any time after it has been given, including that to produce an early conclusion to an already-commenced exercise session. For example, if a scheduled tennis game or bicycling run is called off by the patient's partner after the patient has programmed in the higher rate, he need merely apply the magnet in proximity to the implanted pulse generator again to cancel the previous command and maintain the fixed resting rate. Similarly, if the exercise session is shortened, the rate may be returned to the resting rate by simply applying the magnet over the pulse generator.
The above and still further objects, aspects, features and attendant advantages of the present invention will become apparent to those of ordinary skill in the field to which the invention applies from a consideration of the following detailed description of a preferred embodiment thereof, taken in conjunction with the accompanying drawing, in which:
FIG. 1 is a block circuit diagram of a pulse generator unit of a cardiac pacemaker according to a preferred embodiment of the invention.
Referring now to FIG. 1, an implantable pulse generator unit 10 includes a pulse generator 12 and batteries 15 housed in a biocompatible metal case 17. Pulse generator 12 is implemented to be rate limited to generate output pulses at rates up to either of two low/high limit rates--for example, 75 pulses per minute (ppm) and 120 ppm, respectively--and to be incremented from the lower rate to the higher rate and decremented from the higher rate to the lower rate under the control of an up/down counter 18 associated with the pulse generator 12 in unit 10. Counter 18 may be set by application of a voltage level to its "up" input to commence counting toward the higher rate, and thereby to incrementally step the pulse repetition frequency up to that rate, and may be reset by application of a voltage level to its "down" input to commence counting toward the lower rate, and thereby decrementally step the pulse repetition frequency down to that rate. This is accomplished under the control of set and reset output voltage levels generated by a flip-flop circuit 21 also housed in case 17. The pulse generator unit 10 also includes a reed switch 25 which is actuable by placement of a magnet 27, external to the skin of the patient in whom the unit 10 is implanted, in proximity to case 17.
Reed switch 25, when actuated, serves to enable a delay timer 29 in unit 10. The delay timer responds to the enabling input to commence timing of its preset time delay interval. At the end of the delay interval, delay timer 29 produces a pulse for application to the flip-flop 21. Subsequent actuation of the reed switch before the timer 29 has timed out serves to disable the timer and reset it in preparation for a subsequent enabling signal from the reed switch. If timer 29 has already timed out before the reed switch is again actuated, the timer will respond to the disabling input, when the reed switch is actuated, to produce another pulse for application to the flip-flop 21. The flip-flop is thereupon reset and produces its reset output voltage level.
The set and reset output voltage levels of flip-flop 21 are also applied respectively to "set" and "reset" inputs of an interval timer 30. Upon being set, the interval timer commences timing out a predetermined time interval, and, at the expiration of that interval, generates a pulse for application to flip-flop 21. Upon being reset, the interval timer 30 is returned to the start of the predetermined time interval in preparation for initiating the timing of that interval on receipt at its "set" input of the next set output voltage level from the flip-flop.
The preset time period of delay timer 29 and the predetermined time interval of interval timer 30 are programmable by the physician according to the desires and needs of the particular patient. If, for example, the patient has a regularly scheduled early morning brisk walking session of one hour with friends, and resides near the starting point of the walk, the time period of the delay timer 29 may be programmed to be fifteen minutes. The time interval of the interval timer 30 is programmed to be one hour in length.
In operation, the pulse generator produces output pulses at the resting rate prescribed (and programmed) by the physician for the particular patient--in this exemplary embodiment, a resting rate of 75 bpm. The pulses are delivered to the stimulating cathodic electrode 35 in the right ventricle of the heart 40 via a lead 42, the reference electrode (anode) and the body tissue and fluids, according to the mode in which the pacemaker is designed to operate.
In the preferred embodiment, the pacemaker continues to operate at that rate unless and until the patient elects to initiate the exercise rate cycle. To do so, the patient places the magnet 27 in proximity to the implanted pulse generator unit 10 at about fifteen minutes prior to the appointed time for the exercise session, as a command to actuate reed switch 25. The patient may then choose to leave the magnet at home or take it along in the glove compartment of his car, since actuation of the reed switch has enabled the delay timer 29 and nothing more need be done by the patient to enable the pacemaker to commence the exercise rate at the expiration of the preset delay period.
Before the end of that period the patient has arrived at the starting point for the exercise session, and at the end of the delay period, the delay timer applies a pulse to flip-flop 21 which responds by generating a set output voltage level. The set voltage is applied to both the "up" input of counter 18 and the "set" input of interval timer 30. Accordingly, the counter commences its count, preferably at a relatively slow rate of, say, ten counts per minute, and correspondingly incrementally steps the pulse generator 12 output rate up to the upper rate limit of 120 ppm, and thereby gradually increases the patient's heart rate from 75 bpm to 120 bpm as the patient commences to exercise. Hence, the patient's heart rate and cardiac output are now at levels adequate for the patient to carry out the exercise session.
The pulse generator continues to supply pulses at the upper rate limit until interval timer 30, which commenced its predetermined time interval with the application of the set input voltage, times out, whereupon the interval timer produces an output pulse which is applied to flip-flop 21 to reset the latter. The flip-flop responds by providing a reset output voltage level for application to the "down" input of counter 18 and the "reset" input of the interval timer. Accordingly, the counter decrementally steps the pulse repetition frequency of the pulse generator down, preferably at the ten pulses per minute rate, to the lower rate limit of 75 ppm corresponding to a heart rate of 75 bpm. In this manner, the patient's heart rate is reduced gradually from the exercise rate to the resting rate at a time commensurate with the end of the exercise session. Also, the resetting of the interval timer by the set output voltage level of the flip-flop assures that the timer is ready to commence timing its predetermined interval on receipt of the next "set" input.
In the event that the exercise session is called off at any time after the delay timer 29 has been enabled and before the interval timer has timed out, the patient need merely place the magnet 27 once again in proximity to the implanted pulse generator unit. If the delay timer has not yet timed out, it is disabled by the actuation of the reed switch, and hence, flip-flop 21 remains reset, interval timer 30 remains reset, counter 18 is at its low count, and pulse generator 12 is at its lower rate limit. If the delay timer has timed out, it produces an output pulse in reponse to the disabling input from the reed switch, thereby resetting the flip-flop, resetting the interval timer, returning counter 18 toward its low count and pulse generator 12 toward its lower rate limit. To that end, delay timer 29 is provided with an internal clock such that, once enabled to time out the delay interval, it cannot be again enabled to do so until the passage of a preselected time interval, which is one hour and fifteen minutes in the present example, unless it has first been disabled during that overall interval. Of course, to cancel the exercise rate, the patient must have the magnet available to issue the second command but, as previously noted, once the delay timer is enabled through actuation of the reed switch the magnet may be kept in a convenient location, such as the glove compartment of the patient's car, to allow cancellation of the exercise rate in private.
Although a presently preferred embodiment has been described herein, it will be evident to those skilled in the art that variations and modifications of the preferred embodiment may be carried out without departing from the spirit and scope of the invention. Accordingly, it is intended that the present invention shall be limited only to the extent required by the appended claims and the applicable rules of law. | <urn:uuid:c4bc0780-21b4-4fac-b167-2758ef4d0cbb> | CC-MAIN-2013-20 | http://www.google.de/patents/US4884575 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.923385 | 5,452 | 2.546875 | 3 |
US 5459828 A
A method of producing a raster font from a contour font entailing the steps of deriving font metrics and character metrics of font characters in terms of arbitrary font units; scaling the font characters to a selected size and output resolution (pixels per unit length); altering the thickness of vertical and horizontal strokes of each character to a desired thickness, from the measured font metrics and character metrics, and including a difference applied to the thickness of the strokes by the printer process, to cause the strokes to be close to an integer number of pixels and thickness and to compensate for thinning and thickening which the printing engine might produce; bringing the leading and trailing edges of the characters to integer pixel locations, where such locations are based on and scaling the character between the leading and trailing edges proportionally therebetween, and producing a rasterized font from the altered contour font character.
1. A printer processor implemented method for producing a raster font from a contour font defined by a list of points connected by curves, said raster font suitable for printing on a selected printer having known reproduction characteristics, including the steps of:
a) deriving for a contour font a set of font metrics and character metrics of a character in the font defined in terms of arbitrary font units;
b) scaling a character contour defined in arbitrary font units to a selected size in units of pixels;
c) altering thickness of character strokes by adjusting vertical and horizontal coordinates of each point defining the character contour in directions defined by a vector normal to the character contour at each point, by an amount required to obtain a desired thickness from the measured font metrics and character metrics, and an amount required to add to difference thickness thereto in accordance with the selected printer reproduction characteristics, said alteration amounts together causing the vertical and horizontal strokes to be sufficiently close to an integer number of pixels or half pixels so as to cause subsequent numerical rounding to produce uniform results across the font;
d) grid aligning the contour of each character so that leading and trailing edges, and top and bottom edges of the contour of each character fall on whole or half pixel positions; and
e) applying a rasterization function to the contour to convert each contour font character to a bitmap.
2. The method as defined in claim 1 wherein in said grid alignment step, after aligning said leading and top edges of said contours of each character on a whole pixel position, the length of any lines joining leading and trailing edges, and lines joining top and bottom edges, are rounded to an integer number of whole or half pixels, and the trailing edge and bottom edges are aligned at whole pixel positions.
3. In a printing system for printing on a selected printer having reproduction characteristics known and available as contour font correction data, wherein a font to be printed has a set of predefined font metrics and character metrics for each character in the font defined in terms of arbitrary font units, the method of preparing a contour font defined by a list of points connected by curves, for printing on the selected printer including the ordered steps of:
a) scaling each character in the contour font to a selected print resolution in pixels per unit length;
b) altering thickness of character strokes by adjusting vertical and horizontal coordinates of each point defining the contour of each character to a desired thickness in directions defined by a vector, normal to the character contour at each point, by an amount required to obtain a desired thickness from the measured font metrics and character metrics, and an amount required to add a difference thickness thereto in accordance with the contour font correction data for a particular printer, to cause the vertical and horizontal stroke thickness to approximate an integer number of pixels so as to cause subsequent numerical rounding to produce uniform results across the font;
c) grid aligning the contour of each character so that leading and trailing edges, and top and bottom edges of the contour of each character fall on whole pixel positions; and
d) applying a rasterization function to the contour convert each contour font character to a bitmap.
4. The method as defined in claim 3 wherein in said grid alignment step, after aligning said leading and top edges of said contours of each character on a whole pixel position, the length of any lines joining leading and trailing edges, and lines joining top and bottom edges, are rounded to an integer number of pixels or half pixels, and the trailing edge and bottom edges are aligned at whole pixel positions.
A microfiche Appendix, having 5 fiche and 398 frames, is included herewith.
The present invention relates generally to the production of raster fonts from contour fonts, and more particularly, to a method of producing raster fonts from contour fonts taking into account characteristics of the contour font and the printer system which will ultimately print the font.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office files or records, but otherwise reserves all rights whatsoever.
Cross reference is made to U.S. patent application Ser. No. 07/416,211 by S. Marshall, entitled "Rapid Halfbitting Stepper", and assigned to the same assignee as the present invention.
U.S. Pat. No. 4,675,830 to Hawkins is incorporated herein by reference for the purposes of background information on contour fonts. U.S. patent application Ser. No. 07/416,211 by S. Marshall, entitled "Rapid Halfbitting Stepper", and assigned to the same assignee as the present invention, is incorporated by reference herein for the purposes of teaching rasterization.
"Contour fonts" is a term that refers to the use of outlines or contours to describe the shapes of characters used in electronic printing. In a contour font, each character shape is represented by one or more closed curves or paths that traces the boundary of the character. The contour is specified by a series of mathematical equations, which may be in any of several forms, the most common being circular arcs, straight lines, and polynomial expressions. The shape of the contour font is that of the ideal design of the character and, generally, does not depend on parameters associated with any printer. Contour fonts are ideal for use as master representations of typefaces.
Bitmap fonts or raster fonts are composed of the actual characters images that will be printed on a page, and are made by scaling contours to the appropriate size, quantizing or sampling them at the resolution of the printer, and filling the interiors of the characters with black bits or pixels. Achieving high quality in this process is difficult, except at very high resolutions, and requires knowledge of both the marking technology and typographic design considerations. Often, a bitmap font is delivered to a printer. There is a separate bitmap font for each size of a font, and sometimes separate fonts for landscape and portrait orientations.
The advantage of a contour font is that it can be scaled to any size and rotated to any angle by simple mathematics. Therefore, a single font suffices to represent all possible printing sizes and orientation, reducing font storage requirements, reducing the cost of font handling.
The difficulty in this approach is in achieving high quality character images during the sampling process which generates the raster characters from the contour masters. If the contour character is simply sampled, there will be random .+-.1 pixel variations in stroke thickness. If the printing process tends to erode black areas (common in write-white laser xerography) characters will be consistently too thin. If the printing process tends to fatten black areas (common in write black laser xerography), characters will be too thick.
At the high resolution employed in phototypesetters, usually greater than 1,000 spi, no special techniques are required for scaling and sampling the contour font to generate a raster font of any size. This is because although simple sampling necessarily has random one-bit errors, such errors are small compared to the size of the character, making errors insignificant. At 300, 400, and 600 spi though, character strokes are only three or four bits thick and each bit is important. The simplistic methods used by typesetter manufacturers are not sufficient.
U.S. Pat. No. 4,675,830 to Hawkins, uses defined points in a contour font that must be grid aligned to pixel positions, but the stem widths or edges are not aligned.
Of particular importance in generating fonts of optimal appearance are maintenance of uniform and correct stroke thickness among characters of a font and on different printing engines, uniform alignment of characters on a baseline, and uniform spacing of characters.
In accordance with the invention, there is provided a method for conversion of contour fonts to bitmap fonts with automatic thickening and thinning of strokes, and snapping of character edges to pixel or half pixel boundaries.
In accordance with the invention, there is provided a method of producing a raster font from a contour font entailing the steps of: first, deriving font metrics and character metrics of font characters in terms of arbitrary font units; scaling the font characters to a selected size and output resolution (pixels per unit length); altering the thickness of vertical and horizontal strokes of each character to a desired thickness, from the measured font metrics and character metrics, and including a difference applied to the thickness of the strokes by the printer process, to cause the strokes to be close to an integer number of pixels and thickness and to compensate for thing and thickening which the printing engine might produce; bringing the leading and trailing edges of the characters to integer pixel locations, where such locations are based on and scaling the character between the leading and trailing edges proportionally therebetween, and producing a rasterized font from the altered contour font character.
These and other aspects of the invention will become apparent from the following description used to illustrate a preferred embodiment of the invention in conjunction with the accompanying drawings in which:
FIG. 1 shows a block diagram of the inventive optimized scaler rasterizer system.
FIGS. 2A-2E illustrate the development of a raster font from a contour font, using the system described in FIG. 1.
With reference to the drawing, where the showing is for the purpose of illustrating an embodiment of the invention and not for the purpose of limiting same, the Figure shows a block diagram of the present invention which will be referred to and described hereinafter.
FIG. 1 shows a block diagram of the contour rasterization process of the present invention. Beginning with a contour font 10, and with a character "H" shown in contour for illustration purposes at FIG. 2A the contour font is analyzed initially at hint generation step 20. At the hint generation, the parameters defining the font are determined, including measurement of the following metrics and character hints:
TABLE 1______________________________________Font Metric Comments______________________________________Cap-height Height of the H, I or similar letterX-Height Height of the lower case xAscender Height of the lower case k, b, or similar letterDescender Position of the bottom of the lower case p or qThickness of Upper Vertical stroke thicknessCase Stems on upper case H or KThickness of Upper Horizontal Stroke onCase Cross-Strokes upper case E or FThickness of Lower Vertical stroke thicknessCase Stems on lower case k or lThickness of Lower Case Horizontal strokeCross-Strokes thickness on the fThickness of AuxiliaryCharacter StemsThickness of AuxiliaryCharacter Cross-StrokesHairline thickness Thickness of the cross bar on the e or the thin part of the o______________________________________
(See, Appendix, page 13, ICFFontIODefs. Mesa)
Character hints are generated for each character and include the following:
TABLE 2______________________________________Character Metric Comments______________________________________Position of all horizontal Left sides of strokes areedges and indications of leading edges and rightwhether each edge is a sides or strokes areleading or trailing edge. trailing edges.Position of all verticaledges and indication ofwhether each edge is aleading or trailing edge.Direction of the normalvector (perpendicular)to the contour at eachcontrol point in thecontour, pointingtoward the whiteregion.______________________________________
At hint generation 20, the font metrics and character hints are computed. Since no special information on the actual character contours, beyond the contours themselves, is required to perform these computations, any font may be accepted as input. Height thickness metrics are obtained either by examining images of specific individual characters or by averaging amongst several characters. Optionally, if these values are supplied externally, that is, the provider of the font provides these values, the external values may be used instead of the computed values. Edge positions are determined by looking for long vertical or horizontal portions of contours. Normal vectors are perpendicular to the contour, and are computed from contour equations and by determining which side of the contour is black and which side is white. For those points required for curve reconstruction, but which are not on the curve, the normals are calculated as if a normal vector extended from the curve through those points.
In the attached Appendix, the source code, in the MESA language of the Xerox Corporation, is provided demonstrating one possible embodiment of the source code to accomplish the described goals. The Mesa programming language operates on a microprocessor referred to as the Mesa microprocessor, which has been well documented, for example, in Xerox Development Environment, Mesa Language Manual, Copyright 1985 Xerox Corporation, Part No. 610E00170. This particular software is derived from the Typefounders product of the Xerox Corporation, Stamford, Conn. The Typefounders product accomplished all these character and font metrics, but did not provide them externally. (See Appendix, pages 67-319,for relevant Typefounder software modules called by software implementing the current invention including: CharacterOpsDefs.mesa, CharacterOpslmplA.mesa, CharacterOpslmpIB.mesa, pages 67-105; ContourOpsDefs.mesa, ContourOpslmplA.mesa, ContourOpslmplB.mesa, ContourOpslmpIC.mesa, ContourOpslmplD.mesa, pages 106-195; FontOpsDefs.mesa, FontOpslmpl.mesa, pages 196-221; ImageOpsDefs.mesa, ImageOpslmplA.mesa, ImageOpslmplB.mesa, pages 222-265 TypefounderUtilsdefs.mesa, TypefounderlmplA.mesa, TypefounderlmpIB.mesa, pages 266-319) Additional software was added, which makes these values available for subsequent processing (See Appendix, page 1, TypeDefs.mesa for translation of the Typfounder data structure; page 36, MetricsDef.mesa, Metricslmpl.mesa, for measurement of font metrics; page 47, EdgeOpsDef. mesa, EdgeOpslmpl.mesa, for measurement of leading and trailing edge position) and performs the perpendiculars calculations (see, Appendix, page 56, NormalOpsdefs.mesa, NormalOpslmpl.mesa). This information is used for creation of a data structure for "hints" (see, Appendix, page 13, ICFFontlODefs. Mesa for creation of hint format for next steps). Of course, while in the Appendix, the various coded algorithms operating on the contour font data for the hint creation step 20 are given in the Mesa language, implementation is easily made in the Unix-based "C" language. The remainder of the system, and the algorithms incorporated will be described in the Appendix in the Unix-based "C" language.
Selecting a contour font for use enables a program that looks for font data, and designates its final position in an output, while calling the various programs forming the steps that will be described further hereinbelow (see, Appendix, page 320, raster.c). The contour font rasterization program herein described is useful on a variety of hardware platforms, attributes of which can be selected for enhanced operation of the system, such as for example, a greater degree of precision in the calculations (the difference between 8 bit calculation and 32 bit calculation). (see, Appendix, page 340, std.h)
At transform step 30, (see, Appendix, page 343, xform.c) the contour font is converted from arbitrary contour font units, which are supplied by the provider of the font, to a particular size, expressed in units of pixels. Typically, contour font units are provided in terms of the contour itself, i.e., the height or size of the contour font is one (1). That is, lengths of characters are placed in terms of the size of the font character itself. These values must be transformed into pixel unit values, or whatever other value is required, e.g. the scaled font may be 30 pixels tall. Additionally, it is at this point that the contour font is rotated for either landscape or portrait mode printing, as required. Rotation and scaling is accomplished in accordance with a previously determined transformation matrix equation 35, which mathematically determines the conversion of the contour font from font measurements to pixel values at a selected orientation which can be used by the printer. The transformed character H is shown at FIG. 2B.
Subsequent to transformation step 30, at thickening or thinning step 40, font characters are thickened or thinned based on requirements of the transformation, and requirements of the printing process. The character contour is adjusted to make the strokes thicker or thinner to compensate for the xerographic or other marking process to follow. There are three components of the thickening or thinning value. The first compensates for xerographic or other imaging effects. That is, if for example, the marking technology will thin strokes by half a pixel, then strokes are thickened by half a pixel in this step. The amount of thickening or thinning specified in the printer profile 50 separately for X and Y directions, and is created at the manufacturer of the printer, and inserted at the printer profile 50. (see, Appendix, page 348, thicken.c)
The second component of thickening, called residual thickening, is applied to insure uniformity of output strokes after the sampling or rasterization step. This amount for horizontal thickening on upper case letters, for example, is equal to the difference between the calculated ideal output vertical stem thickness, which is obtained by scaling the font metric to the proper size, and the result of rounding that thickness off to the actual pixel width which will be obtained after rasterization. This rounding is performed to the nearest whole pixel if half bitting is not enabled and to the nearest half pixel, if half bitting is enabled. There are separate values for horizontal or vertical directions and for upper case, lower case and auxiliary characters.
The third component of thickening and thinning applies only to very small characters, and prevents drop-outs of fine lines. This amount is equal to the difference between the calculated scaled thickness of the hairlines, after thickening by the font thickening steps, and the minimum stroke thickness specified in the printer profile. When applied, this thickening brings fine lines up to the value of the minimum stroke thickness. The value is zero if the hairline is already greater than the minimum stroke thickness. (This process, referred to as "adaptive thickening," is not disclosed in the source code in the Appendix.)
The actual thickening or thinning applied is equal to the sum of these three components. Each component has an independent value in the X and Y directions. The direction to move each contour control point is specified by its normal vector. The thickened character H is shown at FIG. C.
At step 60, the snap function or grid alignment function is applied. The coordinate system of the character is varied in the horizontal direction to move vertical and horizontal edges to positions where pixel boundaries will be after rasterization, i.e., to a whole pixel position. This is to assure uniform stroke thickness in the rasterized character images. The process is to piecewise stretch or shrink the character to force edges to align the pixel boundaries. On the left hand sides of the characters, the left edge of each stroke is moved to the closest pixel boundary, while the right edge of the stroke is moved to the pixel boundary specified by rounding the stroke thickness. This process gives priority to maintaining uniform stroke thickness over absolute stroke position. That is to say, that after the left edge of the character has been moved to a whole pixel position, the thickness of the stroke, or portion of the character, is examined to determine its thickness. The thickness has already been adjusted in the thickness of thinning step, so that it is close to a whole pixel width. Accordingly, the right edge of the character is then moved to the nearest whole pixel, based on rounding the thickness of the pixel, as opposed to moving the right hand side to the nearest pixel. On the right hand sides of characters, the rolls of left and right edges of strokes are reversed. Right edges of strokes are anchored, while left edges are rounded relatively to corresponding right edges. (see, Appendix, page 355, snap.c).
In one variant of this scheme, the positions of left and right index points or width points, which are those points which determine character spacing and are made to coincide in constructing words, are snapped before the vertical edges.
In the vertical direction, snapping is performed to piecewise stretch characters so that positions of baseline, cap-height, x-height, and descender fall on pixel boundaries. Baseline and descender position are treated as bottoms of strokes, that is, anchored, while cap-height and x-height are treated as tops of strokes, computed relative to the baseline. All characters are snapped to all of these positions, ensuring uniform character alignment. After these font metric positions are snapped, horizontal edges are snapped in the same manner as vertical edges, with lower edges of strokes anchored and upper edges snapped relative to the lower edges in the lower half of the character and upper edges of strokes anchored and lower edges snapped relative to the upper edges in the upper half of the character.
In both horizontal and vertical directions, snapping is performed one edge at a time. That is, the first edge is snapped, stretching the coordinate system of the character slightly on one side of the snapped edge and shrinking it slightly on the other side. The second edge is then snapped, with its pre-snapping position perhaps already modified slightly by the first snap. This sequential snapping helps preserve local character features better than simultaneous snapping of all edges does. When the second edge is snapped, its area of influence on the coordinate grid extends only up to the first snapped edge, which stays in place. This process is then repeated for the remainder of the edges. The snapped character H is shown at FIG. 2D.
Once each character in the adjusted contour font has been placed in the grid and appropriately thickened and thinned, the final step is to sample the adjusted contour on discrete grid. This step 70 can optionally produce half bitted output images, as controlled by the printer profile. Light half bitting produces half bitting on curves and diagonals, while heavy half bitting will also produce half bitted vertical and horizontal edges.
Rasterization in a preferred embodiment of this invention is in accordance with the process described in U.S. patent application Ser. No. 07/416,211 by S. Marshall, entitled "Rapid Halfbitting Stepper", and assigned to the present assignee of the present invention. This application is incorporated by reference herein for the purposes of teaching rasterization. (see, Appendix, page 364, step.c and page 368, step.h for rasterization with halfbitting; page 372, bezline.c for stepping around curve; page 396, fill.c for filling). The rasterized character is shown at FIG. 2D.
It will not doubt be appreciated that numerous changes and modifications are likely to occur to those skilled in the art, and it is intended in the appended claims to cover all those changes and modifications which fall within the spirit and scope of the present invention. | <urn:uuid:5cf8f2cc-bc5c-42e1-a62b-bddf45ac1cba> | CC-MAIN-2013-20 | http://www.google.de/patents/US5459828 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.89947 | 5,093 | 2.59375 | 3 |
Staff Photo: Jason Braverman Girl Scouts Kennedy Watson, from left, Leah Royes and Kaitlyn Hamlette, of troop 4525 in Snellville, draw signs to promote cookie sales.
Lemon-wedge cookies dusted with powdered sugar and filled with lemon zest flavor
The shortbread cookie
Oatmeal cookies with peanut butter filling
Vanilla cookies covered in caramel and toasted coconut, then striped with chocolate
Cookie topped with peanut butter then completely covered in chocolate
Thin wafer covered in a peppermint chocolate
1912 — On March 12, 1912, founder Juliette Gordon Low gathered 18 girls to register the first troop of American Girl Guides. It was renamed Girl Scouts the following year.
1920s — The first Girl Scout Troops on Foreign Soil were established in China, Mexico, Saudi Arabia and Syria for American girls living in other countries.
1930s — The first sale of commercially baked Girl Scout Cookies took place.
1940s — Girls collected 1.5 million articles of clothing that were then shipped overseas to children and adult victims of war.
1950s — The March 1952 issue of “Ebony” magazine reported, “Girl Scouts in the South are making steady progress toward breaking down racial taboos.”
1960s — The social unrest of the 1960s was reflected in organization actions and Girl Scout program change, including introduction in 1963 of four program age–levels for girls: Brownie, Junior, Cadette and Senior Girl Scouts.
1970s — Girl Scouts contributed to a White House Conference on food, nutrition and health.
1980s — “The Contemporary Issues” series was developed in the 1980s to help girls and their families deal with serious social issues. The first, “Tune In to Well Being, Say No to Drugs,” was introduced in collaboration with a project initiated by First Lady Nancy Reagan.
1990s — Girl Scouting experienced a renewed emphasis on physical fitness with the inauguration of a health and fitness national service project in 1994 and the GirlSports initiative in 1996.
2000s — Grants from Fortune 500 companies such as Lucent Technologies, Intel and Lockheed Martin supported science and technology exploration programs for girls.
2012 — Girl Scouts of the USA has declared 2012 the Year of the Girl: a celebration of girls, recognition of their leadership potential and a commitment to creating a coalition of like-minded organizations and individuals in support of balanced leadership in the workplace and in communities across the country.
SNELLVILLE -- It's that time of year. Across Gwinnett -- and the nation -- young girls dressed in green, brown, tan and blue vests are selling the famous Girl Scouts cookies by the boxes and they have a new cookie this year, the Savannah Smiles.
Haven't heard of it? It's the latest creation to celebrate the organization's 100th anniversary.
The girls from Troop No. 4525 in Snellville, just like millions of other girls, are bound and determined to sell their cookies to anyone who will buy a box for $3.50.
"This is exciting to me because I started out as a Girl Scout with my sister in Brooklyn, N.Y.," Troop Co-leader Qualena Odom-Royes said. "Now being able to share it with Leah (my daughter) and these other girls is exciting and wonderful."
The troop worked on posters to advertise their confectionery sweets and set individual sales goals for 2012.
With much childhood exuberance, Jocelyn Spencer, 8, decided on 81,000 boxes.
"I'm going to get everyone in my family to sell cookies," she said.
And she's not the only one aiming big. The other girls in the troop set goals in the hundreds.
"I'm going to try to raise 400 because I really want all of the prizes," Kennedy Watson, 8, said.
Girl Scout Cookies In Recipes
Try using Girl Scout Cookies as part of fun recipes
In addition to the usual cookies, Girl Scouts of the USA has introduced its latest creation, the Savannah Smiles, to commemorate its 100th year. These celebratory baked goods were created in honor of Girl Scouts founder Juliette Gordon Low's hometown of Savannah and are similar in taste to past customer favorites with bursts of lemon flavor.
The cookie is shaped like a wedge, covered in powered sugar and filled with lemon crisps.
"The Savannah Smiles is actually closer to the original cookies made for the Girl Scout sales. It was one of the first varieties out there," Troop Co-leader Adrienne Cole said.
The cookie is such a new addition to the Girl Scouts, the troops and their leaders haven't gotten to taste-test the lemon flavored treat.
"I really want to try the new cookie," Kaitlyn Hamlette, 6, said. "I like lemony stuff, so I really want to try it."
Ada Hamlette of Loganville, Kaitlyn's mother added, "Everyone is excited about the new cookie and they want to try them. They look like they'll be delicious."
To boost the Savannah Smiles' sales, Troop No. 4525 thought of a strategic marketing approach: Give out free samples while selling boxes around the county.
"I want to give a box of milk to everyone who eats a sample," Spencer said.
Cole chimed in, "Maybe we can get Kroger to donate some milk."
Your hips may be mad that you bought the cookies, but your heart won't feel the same. All of the proceeds from Girl Scouts of Greater Atlanta's fundraising activities, including the cookie drive, stay in the council to serve the girls and volunteers in many ways. The money delivers programs to 41,500 girl members in a 34-county territory, trains more than 18,000 adult member volunteers, provides approximately $52,000 in scholarships for higher education and so much more.
The Girls Scouts of the USA haven't started selling their cookies online yet, but could in the next few years. The organization recommends never buying Girl Scout Cookies on any sites, including Amazon, eBay and other auction or community sites. There is no guarantee of freshness or authenticity.
To keep up with a technological age, the organization is using its website to help buyers easily find troops to purchase from in the area. Starting Feb. 17, the public can use the Cookie Locator, a program set up to help locate girls selling in your neighborhood by entering your ZIP code. To use the locator, visit cookielocator.littlebrownie.com.
To learn more about the Girl Scout's 100th anniversary and the Savannah Smiles, visit www.girlscouts.org. | <urn:uuid:75027875-cebe-4b88-8101-bd4f0ab2f4b9> | CC-MAIN-2013-20 | http://www.gwinnettdailypost.com/news/2012/jan/21/cookie-time-girl-scouts-celebrating-100th/?sports | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.952191 | 1,389 | 2.765625 | 3 |
Healthy Eating Plate
Harvard’s New Guide to Healthy Eating
Start here to learn more about the new Healthy Eating Plate created by nutrition experts at Harvard School of Public Health, in conjunction with Harvard Health Publications. The Healthy Eating Plate can be your blueprint for planning a healthy balanced meal, and it fixes key flaws in the U.S. Department of Agriculture’s MyPlate.
How can you follow the Healthy Eating Plate? Here’s a rundown, section by section:
- Fill half of your plate with vegetables and fruits. The more color, and the more variety on this part of the plate, the better. Potatoes and French fries don’t count as vegetables on the Healthy Eating Plate, because they are high in fast-digested starch (carbohydrate), which has the same roller-coaster effect on blood sugar and insulin as white bread and sweets. These surges, in the short term, can lead to hunger and overeating, and in the long term, can lead to weight gain, type 2 diabetes, and other health problems. Read more about vegetables and fruits, or read more about carbohydrates and health.
- Save a quarter of your plate for whole grains—not just any grains: Whole grains—whole wheat, brown rice, and foods made with them, such as whole wheat pasta—have a gentler effect on blood sugar and insulin than white bread, white rice, and other so-called “refined grains.” That’s why the Healthy Eating Plate says to choose whole grains—the less processed, the better—and limit refined grains. Read more about whole grains.
- Put a healthy source of protein on one quarter of your plate: Chose fish, chicken, beans or nuts, since these contain beneficial nutrients, such as the heart-healthy omega-3 fatty acids in fish, and the fiber in beans. An egg a day is okay for most people, too (people with diabetes should limit their egg intake to three yolks a week, but egg whites are fine). Limit red meat—beef, pork, and lamb—and avoid processed meats—bacon, cold cuts, hot dogs, and the like—since over time, regularly eating even small amounts of these foods raises the risk of heart disease, type 2 diabetes, and colon cancer. Read more about healthy proteins.
- Use healthy plant oils. The glass bottle near the Healthy Eating Plate is a reminder to use healthy vegetable oils, like olive, canola, soy, corn, sunflower, peanut, and others, in cooking, on salad, and at the table. Limit butter, and avoid unhealthy trans fats from partially hydrogenated oils. Read more about healthy fats.
- Drink water, coffee or tea. On the Healthy Eating Plate, complete your meal with a glass of water, or if you like, a cup of tea or coffee (with little or no sugar). (Questions about caffeine and kids? Read more.) Limit milk and dairy products to one to two servings per day, since high intakes are associated with increased risk of prostate cancer and possibly ovarian cancer. Limit juice to a small glass per day, since it is as high in sugar as a sugary soda. Skip the sugary drinks, since they provide lots of calories and virtually no other nutrients. And over time, routinely drinking sugary drinks can lead to weight gain, increase the risk of type 2 diabetes, and possibly increase the risk of heart disease. Read more about healthy drinks, or read more about calcium, milk, and health.
- Stay active. The small red figure running across the Healthy Eating Plate’s placemat is a reminder that staying active is half of the secret to weight control. The other half is eating a healthy diet with modest portions that meet your calorie needs. Read 20 tips for staying active.
Comparing the Harvard Healthy Eating Plate to the USDA’s MyPlate shows the shortcomings of MyPlate. Read a head-to-head comparison of the Healthy Eating Plate vs. the USDA’s MyPlate.
You can use the Healthy Eating Plate side by side with the Healthy Eating Pyramid, a simple and trustworthy guide to healthy eating created by faculty in the Department of Nutrition at Harvard School of Public Health. Read an in-depth article about the Healthy Eating Plate and the Healthy Eating Pyramid. Or read answers to common questions about the Healthy Eating Plate.
Download the Healthy Eating Plate
The Healthy Eating Plate image on this Web site is owned by the Harvard University. It may be downloaded and used without permission for educational and other non-commercial uses with proper attribution, including the following copyright notification and credit line:
Copyright © 2011, Harvard University. For more information about The Healthy Eating Plate, please see The Nutrition Source, Department of Nutrition, Harvard School of Public Health, www.thenutritionsource.org, and Harvard Health Publications, health.harvard.edu.
Any other use, including commercial reuse or mounting on other systems, requires permission from the Department of Nutrition at Harvard School of Public Health. To request permission, please contact us using the Healthy Eating Plate reprint request form on this Web site.
The aim of the Harvard School of Public Health Nutrition Source is to provide timely information on diet and nutrition for clinicians, allied health professionals, and the public. The contents of this Web site are not intended to offer personal medical advice. You should seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this Web site. The information does not mention brand names, nor does it endorse any particular products. | <urn:uuid:d5d999fa-d1cc-4fd9-9ee8-ebccf2bf3001> | CC-MAIN-2013-20 | http://www.hsph.harvard.edu/nutritionsource/healthy-eating-plate/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.927782 | 1,170 | 3.046875 | 3 |
For the first time since 1415, we are going to have a conclave or papal election with no Pope to bury beforehand. But the procedures for the election are not affected by this ‘novelty.’ Dan Brown might seem an unlikely source for information on a Papal election but the detailed description in Angels and Demons was apparently taken from a book by a Jesuit scholar and is quite accurate.
When the See of Rome is declared vacant – normally when a Pope dies – most of the senior prelates who were the Pope’s ‘ministers’ resign. Power is entrusted to a cardinal called the Camerlengo, in this case Cardinal Tarcisio Bertone, who under normal circumstances would bury the Pope.
The last papal election took 48 hours, but such speed wasn’t always the case. In the 13th century the papacy was vacant for a year-and-a-half and an election was forced by the people of Rome who locked up the cardinals until a pope was elected. In another case, the people not only locked up the cardinals, they tore off the roof of the building and put the cardinals on a diet of bread and water.
Now, with Benedict’s resignation the cardinals are essentially being given a ‘month’s notice’ and can plan their trip to Rome and think about what they want in the new pope. In a sense, because of the pre-planning that can be brought to bear on this election, there is potentially a lot of time to consider who should be the next leader of the world’s 1 billion catholics.
The cardinals will stay in a specially constructed 5 storey residence inside the Vatican walls, Santa Marta. and will be conveyed by coach to the Sistine Chapel for morning and afternoon sessions there. The process of election must begin no more than 20 days after the see is vacant. Only cardinals under the age of 80 can enter the conclave – the word means ‘with a key’ in Latin, referring to the fact that the Cardinals are locked into the election hall.
There are 119 eligible to vote, 67 of whom were appointed by Benedict and the rest by JPII. Cardinals who are excommunicated can actually attend but not those who have resigned. A cardinal who resigned and joined Napolean Bonaparte attempted to enter the conclave in 1800 but was refused.
On the morning of the conclave the cardinals con-celebrate Mass in St Peter’s Basilica. In the afternoon they gather in the Pauline Chapel in the Apostolic Palace and solemnly process in full red and white regalia with red hats and enter the Sistine chapel, with the doors locked behind them. All telephones, cell phones, radios, televisions and internet connections are removed from use whether in the chapel or in the residence. They cannot leave except in the case of grave illness. Also permitted in the conclave are two medical doctors, a nurse for very ill cardinals, and religious priests who can hear confessions in various languages. These have to swear absolute and perpetual secrecy.
The cardinals swear an oath of secrecy not to discuss the elections outside the Chapel and everyone else is ordered out in the Latin words “Extra omnes,” “Everybody out.” The doors of the Sistine Chapel and the Residence of the Cardinals are closed.
Inside, a meditation is given concerning the grave duty of the cardinals and they are exhorted to “only have God before your eyes.” The rest of the time is spent for prayer and voting in silence, there are no campaign speeches. Negotiations and arguments have to take place outside.
In the Chapel, which dates from the 15th Century, and under the ceiling adorned with Michelangelo's Last Judgement, the cardinals can cast their vote. There will be four ballots daily until a clear majority is found for one candidate.
It is severely frowned upon to seek the office of Pope and canvassing for it is severely prohibited, especially prior to a Pope dying. It is an office bestowed upon a person rather than their contesting for it. However an outsider could be elected pope and in theory it could be a lay person willing to be ordained a priest and bishop but the weight of tradition suggests it will be one of the cardinals gathered in the Sistine chapel. The last non cardinal was Pope Urban in 1378.
However, discussions prior to a ballot among cardinals do occur privately but public campaigning would be counter productive. Dinners are good vehicles for discussions. However, the best known cardinals tend to be the ones that work in the Vatican and meet other bishops, and cardinals when they come on business to Rome.
The ballot is secret and Pope John Paul II abolished two methods of election: by compromise or by common consent. Since 1179, a new Pope requires a two thirds majority. Now, after 33 ballots, a simple majority is enough. If there is no progress in choosing a candidate, a day of prayer is set aside. However, not since 1831 has an election lasted more than four days.
The ballot papers themselves are rectangular with “I elect as supreme pontiff” printed at the top and each cardinal prints or writes a name in a way that disguises his handwriting. One at a time they approach the altar with the folded ballot held up, he kneels and prays and then places the ballot in a silver and gilded bronze urn, much like a wok with a lid. Cardinals called ‘scrutineers’ count the ballots. After the ballots are read aloud they are placed on a thread and placed in another urn. They are then burnt.
Since 1903, white smoke from the chimney of the Sistine Chapel has signalled the election of a pope; black smoke signals another vote.
When a pope is finally elected, the cardinal dean asks him, “Do you accept your canonical election as supreme pontiff?” Rarely does anyone say no. St Philip Benize was offered it in 1271 and fled and hid until another candidate was chosen.
Sacristy of Tears
After the ‘yes’, he is led into the ‘Sacristy of Tears’ or commonly called ‘Room of Tears,’ a small room off the Sistine Chapel. It is here that the enormity of what has just happened hits the new Pope, though the tears may be of sorrow or joy. Traditionally they were said to be tears of humility as a Pope was following in the footsteps of St Peter; others would contend the tears were because essentially the Pope becomes a ‘prisoner’ of the Vatican and would be bowed down by the weight of the office.
The new Pope picks one of three sizes of Papal garments – presumably small, medium or large – and return to the High Altar to receive the homage of the cardinals. Meanwhile, the ballots are burnt in a stove with a chemical which turns the smoke white, telling the world of the newly elected Pontiff.
A Cardinal is then sent to the Loggia of the Benedictions on the façade of St Peter’s where he says: “Annuntio vobis gaudium magnum. I announce to you a great joy! Habemus Papam. We have a Pope!” He then announces the name of the cardinal and the name he has chosen to use as Pope.
The new Pope steps onto the balcony and therefore onto the stage of world and church history. He may grant his first blessing to the world.
Some days later, the new Pope will go down into the foundations of St Peter’s to the tomb of Peter, the first Pope and first bishop of Rome, and pray. Then with all the cardinals, he will process into St Peter’s Square to begin his first public Mass and receive the main symbols of his office: the fisherman’s ring and the pallium. | <urn:uuid:fe661bb9-3884-4391-9c13-161ffd3496bc> | CC-MAIN-2013-20 | http://www.irishcentral.com/news/What-to-expect-at-Conclave-in-unusual-case-of-Pope-Benedict-stepping-down-from-Vatican-role-191795261.html?mob-ua=mobile | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.971419 | 1,662 | 2.78125 | 3 |
Judaic Treasures of the
Jerusalem was and remains the holiest of cities in the Holy Land, but Jews also gave a measure of holiness to three other cities there: Hebron, Safed, and Tiberias. The holiness of Jerusalem arises in part from what remains there, but more from what took place there. So it is with its sister cities. Hebron is where the patriarchs and matriarchs lived and are buried, and it was the first capital of King David. Tiberias, on the shore of the Sea of Galilee, was chosen by the patriarch of the Jews in the second century as his seat. The Palestinian Talmud was largely composed in its great rabbinical academy. in the environs of Safed, high in the Galilean hills, are the graves of the leading rabbis of late antiquity. Its stature as a holy city was enhanced in the sixteenth century, when it was the greatest center of Jewish mysticism and seat of Jewish legal scholarship. To gain entree into the company of the three more ancient holy cities, it called itself Beth-El, suggesting identity with the biblical site which Jacob called "The Gate of Heaven."
A striking pastel-colored manuscript "holy site map" links these four holy cities together. Drawn and painted in Palestine in the second half of the nineteenth century, it depicts those venues which indicate their holiness. There is a suggestion of their geographic positioning, but the "map" is far more a statement of the place these cities hold in Jewish veneration, than of the geographical site they occupy. To pious Jewish families, such wall plaques were more meaningful depictions of the Holy Land than the most aesthetically beautiful and topographically exact representations.
A small illustrated guide book to the burial places of biblical figures and saintly rabbis in the Holy Land, Zikaron Birushalayim, appeared in Constantinople in 1743. It tells the pious pilgrim where graves may be found and what prayers are to be said. Prefaced by a panegyric to the land, it cites a midrashic statement that, in time to come, when Jerusalem shall be rebuilt, three walls-one of silver, one of gold, and the innermost of multicolored precious stones-will encompass the dazzling city.
In Hebron the pilgrim is not only directed to the holy grave sites, but also regaled with wondrous tales. One tells of a sexton of the community sent down to search for a ring which had fallen to the depths of the patriarchs' burial cave, finding at the floor of the cave three ancient men seated on chairs, engaged in study. He greets them; they return his greetings, give him the ring, and instruct him not to disclose what he had learned. When he ascended and was asked what he had seen, he replied: "Three elders sitting on chairs. As for the rest, I am not permitted to tell."
For actual pilgrims, the little volume provided factual information. For pilgrims in their own imagination, it offered edification through tales and quaint illustrations. The last page has a woodcut of Jericho, a seven-walled city, below which a man surrounded by a multitude is sounding a shofar. The most striking woodcut is of an imposing building, representing the Temple in Jerusalem.
In the concentric circles of holiness cited in the Tanhuma, Jerusalem is at the center of the Holy Land; the Temple at the center of Jerusalem; and the Holy of Holies at the center of the Holy Temple. The Temple is prominently featured in illustrated books about the Holy Land. In A Pisgah-Sight of Palestine, three chapters describe and three engravings portray the Temple. Early Hebrew books are quite poor in illustrations, because relatively few deal with subjects that demand visual presentation.
Among these are books dealing with laws concerning the Temple, and since its architecture and vessels are pertinent to the laws, they invite illustration. A case in point is Sefer Hanukat Ha-Bayit by Moses (Hefez) Gentili (1663-1711), published in Venice in 1696. A treatise on the building of the Second Temple, it abounds in engraved architectural illustrations, including a menorah, the seven-branched candlestick; most notable is a large pull-out map of the Temple, identifying fifty-eight components of the Temple's structure. The engravings were added after the printing, as was the map, and a copy containing both is rare.
Moses Gentili, born in Trieste, lived in Venice where he taught Talmud and Midrash, and perhaps philosophy and science as well. His best-known work, Melekhet Mahashevet, Venice, 1710, a commentary on the Pentateuch, contains a picture of the author, a clean-shaven, ministerial-looking gentleman.
There are many published descriptions and illustrations of the Temple's appearance in many languages. Some are based on careful study of the available sources, others are creations of the imagination, generally inspired by the grandest building the artist knew. A plan of the Temple to be is found at the end of the 1789 Grodno edition of Zurat Beit Ha-Mikdash (The Form of the Holy Temple) by Yom Tov Lipmann Heller (1579-1654), one of the major figures in Jewish scholarship in the first half of the seventeenth century. The book is one of Heller's earliest works and is a projection of the plan of the Temple as envisaged in the prophecy of Ezekiel.
We have thus traversed the Holy Land and glimpsed its holy places, past, present, and future. | <urn:uuid:2fda8e5f-0260-44ff-b050-ac0449bb2e80> | CC-MAIN-2013-20 | http://www.jewishvirtuallibrary.org/jsource/loc/Holy1.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.966429 | 1,177 | 3.4375 | 3 |
Acrylic A synthetic fabric often used as a wool substitute. It is warm, soft, holds colors well and often is stain and wrinkle resistant.
Angora Rabbit Hair A soft fiber knit from fur of the Angora rabbit. Angora wool is often combined with cashmere or another fiber to strengthen the delicate structure. Dry cleaning is recommended for Angora products.
Bedford A strong material that is a raised corded fabric (similar to corduroy). Bedford fabric wears well and is usually washable.
Boot Footwear which covers the entire foot and extends to the height of the anklebone or up to the thigh.
Bootie A shoe that resembles a boot in style but is not as high.
Brocade An all-over floral, raised pattern produced in a similar fashion to embroidery.
Cable Knit Patterns, typically used in sweaters, where flat knit columns otherwise known as cables are overlapped vertically.
Cashmere A soft, strong and silky, lightweight wool spun from the Kashmir goat. Cashmere is commonly used in sweaters, shawls, outerwear, gloves and scarves for its warmth and soft feel.
Chiffon A common evening wear fabric made from silk, cotton, rayon or nylon. It's delicate in nature and sheer.
Chintz A printed and glazed fabric made of cotton. Chintz is known for its bright colors and bold patterns.
Circumference The measurement around the shaft of a boot taken at the widest part.
Corduroy Cotton blend fibers twisted as they are woven to create long, parallel grooves, called wales, in the fabric. This is a very durable material and depending on the width of the wales, can be extremely soft.
Cotton A natural fiber that grows in the seed pod of the cotton plant. It is an inelastic fiber.
Crepe Used as a description of surfaces of fabrics. Usually designates a fabric that is crimped or crinkled.
Crinoline A lightweight, plain weave, stiffened fabric with a low yarn count. Used to create volume beneath evening or wedding dresses.
Crochet Looping threads with a hooked needle that creates a wide, open lace. Typically used on sweaters for warm seasons.
Cushioning Padding on the sole of a shoe for added comfort and stabilization.
DenimCotton blend fabric created with a twill weave to create a sturdy fabric. Used as the primary material of blue jeans.
DobbyWoven fabric where the weave of the fabric actually produces the garment's design.
Embroidery Detailed needlework, usually raised and created by yarn, thread or embroidery floss.
Embossed Leather Leather imprinted with a design or exotic skin texture, such as snake, ostrich or croco.
Eyelet A form of lace in a thicker material that consists of cut-outs that are integrated and repeated into a pattern. Usually applied to garments for warmer seasons.
Faille A slightly ribbed, woven fabric of silk, cotton, or rayon.
French Terry A knit cloth that contains loops and piles of yarn. The material is very soft, absorbent and has stretch.
Georgette A crinkly crepe type material usually made out of silk that consists of tightly twisted threads. Georgette is sheer and flowing nature.
Gingham It is a fabric made from dyed cotton year. It is most often know to be woven in a blue and white check or plaid pattern. It is made from corded, medium to fine yarns, with the color running in the warp yarns. There is no right or wrong side of this fabric.
Glen PlaidA woolen fabric, with a woven twill design of large and small checks. A form of traditional plaid originating in Scotland.
Heel Height It is the measurement of a vertical line from the point where the sole meets the heel down to the floor. Heel height is measured in increments of 1/8 of an inch.
Herringbone A pattern originating from masonry, consists of short rows of slanted parallel lines. The rows are formatted opposing each other to create the pattern. Herringbone patterns are used in tweeds and twills.
Hopsack A material created from cotton or wool that is loosely woven together to form a coarse fabric.
Houndstooth A classic design containing two colors in jagged/slanted checks. Similar to Glen Plaid.
Insole The inside lining of the shoe that is underneath the bottom of the foot. Another term for footbed.
Instep The arched section of the foot between the toes and the ankle, or the part of the shoe which covers that area.
Jacquard A fabric of intricate variegated weave or pattern. Typically shown on elegant and more expensive pieces.
Jersey A type of knit material known to be flexible, stretchy, soft and very warm. It is created using tight stitches.
Knit A knit fabric is made by interlocking loops of one or more yarns either by hand with knitting needles or by machine.
LinenAn exquisite material created from the fibers of the flax plant. Some linen contain slubs or small knots on the fabric. The material is a light fabric perfect for warm weather.
LiningThe leather, fabric or synthetic material used on the inside of a shoe.
Lamé A metallic or plastic fiber woven into material to give the garment shine.
Lycra ®TMSpandex fibers add stretch to fabric when the fibers are woven with other fiber blends. These materials are lightweight, comfortableTM and breathable, and the stretch will not wear away.
Madras Originating from Madras, India, this fabric is a lightweight, cotton material used for summer clothing. Madras usually has a checked pattern but also comes in plaid or with stripes. Typically made from 100% cotton.
Marled Typically found in sweaters, marled yarn occurs when two colored yards are twisted together.
Matte A matte finish has a lusterless surface.
Merino Wool Wool sheered from the merino sheep and spun into yarn that is fine but strong.
Modal A type of rayon that is made from natural fibers but goes through a chemical treatment to ensure it has a high threshold of breakage. Modal is soft and breathable which is why it's used as a cotton replacement.
Non-iron A treated cotton that allows our Easy Care Shirts to stay crisp throughout the day and does not need ironing after washing/drying.
Nylon A synthetic fiber that is versatile, fast drying and strong. It has a high resistance to damage.
Ombre A color technique that shades a color from light to dark.
Paisley A pattern that consists of crooked teardrop designs in a repetitive manner.
Patent Leather Leather made from cattle hide that has been varnished to give a hard and glossy finish.
Placket The piece of fabric or cloth that is used as a concealing flap to cover buttons, fasteners or attachments. Most commonly seen in the front of button-down shirts. Also used to reinforce openings or slits in garments.
Piping Binding a seam with decoration. Piping is similar to tipping or edging where a decorative material is sewn into the seams.
Pointelle An open-work knitting pattern used on garments to add texture. Typically a cooler and general knit sweater.
Polyester A fabric made from synthetic fibers. Polyester is quick drying, easy to wash and holds its shape well.
Ponte A knit fabric where the fibers are looped in an interlock. The material is very strong and firm.
Poplin A strong woven fabric, heavier in weight, with ribbing.
Pump Classically a high, medium, or low heeled, totally enclosed shoe. Variations include an open toe or ornament.
Rayon A manufactured fiber developed originally as an alternative for silk. Rayon drapes well and looks luxurious.
Sateen A fabric woven with sheen that resembles satin.
Seersucker Slack-tension weave where yarn is bunched together in certain areas and then pulled taught in others to create this summery mainstay.
Shaft Height Measurement of the shaft of the boot, which is from the top of the boot to the inside seam where the instep and the sole meet.
Shirring Similar to ruching, shirring gathers material to create folds.
Silk One of the most luxurious fibers, silk is soft, warm and has shine. It is obtained from the cocoons of the silkworm's larvae.
Sole The outsole, or bottom part of a shoe.
Space dyed Technique of yarn dyeing to produce a multi-color effect on the yarn itself. Also known as dip dyed yarn.
SpandexElastomeric fiber, this material is able to expand 600% and still snap back to its original shape and form. Spandex fibers are woven with cotton and other fibers to make fabrics stretch.
Stacked Heel A heel made of leather or leawood covering that gives the appearance of wood.
Synthetic Materials Man-made materials designed to look or function like leather.
Tipping Similar to edging, tipping includes embellishing a garment at the edges of the piece, hems, collars etc.
Tissue Linen A type of linen, which is specifically made for blouses or shirts due to its thinness and sheerness.
Tweed A loose weave of heavy wool makes up tweed, which provides warmth and comfort.
Twill A fabric woven in a diagonal weave. Commonly used for chinos and denim.
Variegated Multi-colored fabrics where colors are splotched or in patches.
Velour A stretchy knit fabric, typically made from cotton or polyester. It has a similar soft hand to velvet.
VelvetA pile fabric in which the cut threads are very evenly distributed, with a short dense pile, giving it a distinct feel.
Velveteen A more modern adaptation of velvet, velveteen is made from cotton and has a little give. Also known as imitation velvet.
ViscoseA cellulosic man-made fibers, viscose is soft and supple but can wrinkle easily.
Wale Only found in woven fabrics like corduroy, wale is the long grooves that give the garment its texture.
Wedge Heel A heel that lies flat to the ground and extends from the shank to the back of the shoe.
Windowpane Dark stripes run horizontal and vertical across a light background to mimic a window pane.
Woven A woven fabric is formed by interlacing threads, yarns, strands, or strips of some material. | <urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6> | CC-MAIN-2013-20 | http://www.jny.com/Classic-Slim-Fit-Pant/26551684,default,pd.html?variantSizeClass=&variantColor=JJ3WCXX&cgid=24983446&pmin=0&pmax=25&prefn1=catalog-id&prefv1=jonesny-catalog&srule=New%20Arrivals | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.929634 | 2,246 | 2.625 | 3 |
Unnatural Selection: Choosing Boys Over Girls, and the Consequences of a World Full of Men
by Mara Hvistendahl
One of history's more curious encounters occurred in early March 1766 at a country estate in southern England, near Dorking. The estate belonged to Daniel Malthus, a gentleman of independent means and wide intellectual interests. The philosophers David Hume and Jean-Jacques Rousseau were traveling in the neighborhood, seeking a house for Rousseau, who had just recently arrived in England under Hume's patronage after having been driven out of Switzerland.
Daniel Malthus was known to both philosophers, at least by correspondence, so they paid him a brief visit, in the course of which they saw his son Thomas, then just three weeks old. So there, presumably in the same room, were Hume, Rousseau, and the infant Thomas Malthus. It was an odd grouping: the serene empiricist, the neurotic social optimist, and the future oracle of demographic doom.
Hume had actually dabbled in demography himself some years earlier. He had been one of the first to argue against the belief, common until his time, that the ancient world was more populous than the modern world. Demography, along with its cousin discipline of economics, was "in the air" during the later eighteenth century, waiting for the grown-up Malthus to cast his cold eye upon it in his momentous Essay on the Principle of Population (1798).
Of these two cousin disciplines, it is a nice point for argument which better deserves to be called "the dismal science." I would vote for demography. It must be hard to maintain a cheerful composure while scrutinizing the ceaseless, often inexplicable ebbs and flows of nativity and mortality.
It is a strange thing, too — and a depressing one for anyone of an empirical temperament — that what ought to be the most exact of all the human sciences has such a sorry record of prediction. What, after all, could be more certain than that a nation with number N of five-year-olds today will have N fifteen-year-olds in a decade's time, give or take some small margin for attrition and migration? The human sciences don't come any more precise than that. Yet large-scale predictions by demographers have been confounded again and again, from those of Malthus himself to that of Paul Ehrlich, who told us in his 1968 bestseller The Population Bomb that "The battle to feed humanity is over … Billions will die in the 1980s."
Ehrlich's book was very much of its time. The third quarter of the twentieth century was dogged by fears of a Malthusian catastrophe. Popular fiction echoed those fears in productions like John Brunner's novel Stand on Zanzibar (1968) and Richard Fleischer's movie Soylent Green (1973). It was assumed, reasonably enough, that populous poor countries were most at risk, being closest to the limits of food supply. Governments and international organizations therefore got to work promoting birth control in what we had just recently learned to call the Third World, with programs that were often brutally coercive.
Birth rates soon fell; though how much of the drop was directly due to the programs, and how much was an inevitable consequence of modernization, is disputed. The evidence is strong that women liberated from pre-modern subordination to their husbands, and given easy access to contraception, will limit their pregnancies with or without official encouragement.
There was, though, a distressing side effect of the dropping birthrates. Many countries have a strong traditional preference for male children. So long as women in those countries were resigned to a lifetime of child-bearing, the sheer number of offspring ensured that the sex ratio at birth (SRB) would be close to its natural level of 105 males to 100 females. The post-natal ratio might be skewed somewhat by local traditions of female infanticide and by the loss of young men in war, but a rough balance was kept. China in the 1930s had around 108 males per 100 females.
Once the idea of limiting births settled in, however, people sought assurance that one of their babies be male. If a mother gives birth twice, there is a 24 percent chance neither baby will be male; the chance of no males in three births is twelve percent; the chance of no males in four births, six percent. Female infanticide continued to be an option, but not an attractive one — nor, in most modern jurisdictions, a legal one.
Technology met the need by providing methods to determine the sex of a fetus. From the mid-1970s to the early 1980s, amniocentesis was used for this purpose. Then high-quality second trimester ultrasound became widely available and took over the business of fetal sex determination. It caught on very fast all over East and South Asia, allowing women to abort female fetuses. The consequences showed up in last year's Chinese census, whose results are just now being published. They show an SRB of 118 males per 100 females.
These unbalanced sex ratios and their social and demographic consequences form the subject matter of Mara Hvistendahl's book Unnatural Selection. An experienced journalist who has lived for many years in China, Ms. Hvistendahl covers the history, sociology, and science of sex-selective population control very comprehensively. She has organized each of her book's fifteen chapters around the experience of some significant individual: "The Bachelor," "The Parent," "The Economist," and so on.
Her book's scope is by no means restricted to China: "The Student" of Chapter 6 is an Indian who commenced his medical training at a big hospital in Delhi in 1978, when sex-selective abortion was just taking off in India. We get a side trip to Albania, whose SRB is treated as a state secret, but seems to be at least 110. We also learn that sex-selective abortion is common among couples of Chinese, Korean, and Indian descent in the U.S.A. The subjects here are not just newly arrived immigrants, either. A research team from Columbia University found that:
If anything, mothers who were U.S. citizens were slightly more likely to have sons. Sex selection, in other words, is not a tradition from the old country that easily dies out.
South Korea makes a particularly interesting study. That country's governments were more foresighted than most in spotting the problems that might arise from sex-selective abortion. They outlawed the procedure in 1987, and followed up with rigorous enforcement. South Korea's SRB is now at the natural level.
There is more here than meets the eye, though, as Ms. Hvistendahl uncovers. As elsewhere, sex selection was mainly resorted to for second or subsequent births. The SRB for first births is essentially normal worldwide. And first births is wellnigh all the births there are now in South Korea. Our author tells us that: "In 2005 Korea bottomed out with the lowest total fertility rate in the world, at an average of 1.08 children per woman." Things have since recovered somewhat. The 2011 estimate for total fertility rate is 1.23. That still makes for a fast-declining and aging population, though — surely not the ideal solution to the problem of sex ratio imbalances.
What of the issue of angry young surplus males unable to find wives? Ms. Hvistendahl takes a less alarmist view than the one put forth by Hudson and den Boer in their 2004 book Bare Branches. That there is a causal relationship from excess males to political despotism, as those authors argued, is not well supported by historical evidence. As Hvistendahl notes: "Adolf Hitler came to power at a time when Germany had over two million more women than men as a result of the toll taken by World War I." (She might have added that the most authoritarian episode in recent Indian history was the 1975-77 "Emergency," instigated by a female Prime Minister at a time of normal adult sex ratios.) One feels intuitively that a surplus of sex-starved young men will generate trouble, but on the evidence so far, things may not go beyond domestic disorders of the containable kind.
Ms. Hvistendahl seems to be of conventionally feminist-leftist opinions, but she has visible trouble keeping those opinions in order when writing about sex-ratio imbalances. She of course favors "reproductive rights," yet cannot but deplore the fact that those rights, extended to Third World peasant cultures, have led to a holocaust of female babies and the trafficking of young women from poorer places with low male-female ratios, to wealthier places with high ones.
She works hard to develop a thesis about it all having been the fault of Western imperialists terrified of the breeding potential of poorer, darker peoples, in cahoots with opportunistic Third World dictators hungry for World Bank cash, but she cannot quite square the ideological circle. As she points out, abortion was frowned on throughout Asia until modern times. (Chinese people used to consider themselves one year old at birth: older Chinese still reckon their birthdays in this way.) Where would "reproductive rights" be in Asia if not for those meddling imperialists?
These blemishes are minor, though, and probably inevitable in any book written by a college-educated young woman of our time. If you skip over them, you will find a wealth of research and much good narrative journalism in Unnatural Selection. The occasional feminist, leftist, and anti-American editorializing aside, this is a rich and valuable book on an important topic. David Hume would have admired Hvistendahl's respect for the data, even when it leads to conclusions that make hay of her prejudices. Rousseau would have applauded her egalitarian passions. Thomas Malthus, had he read the book, would have been tearing his hair out. | <urn:uuid:f6399029-a683-4014-83d3-8e457d2526dd> | CC-MAIN-2013-20 | http://www.johnderbyshire.com/Reviews/HumanSciences/unnaturalselection.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.972116 | 2,060 | 2.953125 | 3 |
Mary Anne Dunkin
Louise Chang, MD
Two decades ago, if you had moderate to severe Crohn's, there were few treatment options. In the late 1990s, however, the first in a new class of treatment options emerged for Crohn's disease. Referred to as biologic response modifiers, biologic agents, or simply biologics, these drugs target specific parts of an overactive immune system to reduce inflammation.
Biologics not only relieve Crohn's symptoms but also can bring about remission and keep you in remission. They are indicated for use when someone has moderately to severely active Crohn’s disease and has not responded well to other Crohn’s disease treatments. Many people with Crohn's now live with significantly fewer symptoms, but may worry about side effects. Here's a look at the risks and benefits of biologics.
In Crohn's disease, an overactive immune system causes inflammation and damage to the digestive tract. Made from living organisms, biologics work just like substances made by the body’s immune system and can help control the immune system response.
Four biologics are FDA approved for Crohn's. Three of the four block a protein called tumor necrosis factor (TNF) that's involved in inflammation. These drugs are often called anti-TNF drugs or TNF inhibitors. They include Cimzia (certolizumab), Humira (adalimumab), and Remicade (infliximab).
The fourth medication, Tysabri (natalizumab), is called an integrin receptor antagonist. It blocks certain types of white blood cells that are involved in inflammation.
Because they suppress the immune system, all biologics carry an increased risk of infections, which in rare cases can be serious. Cimzia, Humira, and Remicade carry a boxed warning for increased risk of serious infections leading to hospitalization or death. If someone taking a biologic develops a serious infection, the drug should be discontinued. People with tuberculosis, heart failure, or multiple sclerosis should not take biologics because they can bring on these conditions or make them worse.
In rare cases, some people taking TNF inhibitors have developed certain cancers such as lymphoma. Lymphoma is a type of cancer that affects the lymph system, which is part of the body’s immune system.
Tysabri increases the risk of a very rare but potentially fatal brain infection called progressive multifocal leukoencephalopathy (PML). Tysabri also can cause allergic reactions and liver damage. Tysabri should not be used at the same time as other treatments that suppress the immune system or TNF inhibitors.
However, most infections that occur with biologic use are far less serious, says Richard Bloomfeld, MD, associate professor of medicine and director of the Inflammatory Bowel Disease Program at Wake Forest University School of Medicine in Winston Salem, N.C. "Infections such as colds, upper respiratory tract infections, and urinary tract infections are common and don't necessarily alter our treatment of Crohn's."
Other common side effects from biologic use include headache, flu-like symptoms, nausea, rash, injection site pain, and infusion reactions.
So who should take a biologic for Crohn's? Many gastroenterologists reserve these drugs for people who have not responded to conventional medications that suppress the immune system. But some gastroenterologists may treat Crohn's more aggressively.
"If you let inflammation go, inflammation leads to scarring and scarring leads to narrowing of the intestines, which becomes a surgical problem," says Prabhakar Swaroop, MD, assistant professor and director of the Inflammatory Bowel Disease Program at the University of Texas Southwestern Medical Center in Dallas. "You want to treat the person aggressively to prevent these problems."
"In addition to improving symptoms, the anti-TNF modifiers are associated with mucosal healing," says Bloomfeld. "We hope that in healing the mucosa we can stop the progression of the disease and prevent complications of Crohn's that result in hospitalization and surgery."
While there are other treatments that suppress the immune system to treat Crohn's, they too have side effects, says Bloomfeld. Like the biologics, drugs that suppress the immune system increase the risk of lymphomas and infections, which can be severe.
Cortiosteroids like prednisone, for example, can cause a wide range of adverse effects including weight gain, mood swings, bone loss, skin bruising, high blood pressure, and high blood sugar. Those side effects are why corticosteroids may be used to control a flare, but aren't the choice to treat Crohn's over a long period of time. "The stop-gap method, which is steroids, is something we cannot use long term," says Swaroop.
When prescribing any drug, doctors look at the potential risks against the benefits they hope or expect to achieve. Although doctors don't all share the same philosophy on when to start biologics for Crohn's disease, they do agree that biologics should be used when people have severe disease that can lead to permanent damage and make surgery unavoidable.
Swaroop says he looks for signs that the disease is progressing, such as how long between a person's diagnosis of Crohn's and when they have fistulas. "These are the patients who generally do better on biologics, who have the quality of life improvement, who are able to avoid surgery and get back in the workforce," he says.
Before prescribing biologics, doctors check for potential problems. "In the beginning, of course, we go ahead and make sure the person does not have an active liver infection or TB," says Marie Borum, MD, professor of medicine and director of the Division of Gastroenterology and Liver Diseases at George Washington University in Washington, D.C.
Once someone starts a biologic, the doctor looks for side effects in order to find them before they become serious. Monitoring includes include lab tests and possibly regular skin checks for signs of skin cancer.
All effective therapies for Crohn's disease come with some risk, says Bloomfeld. "It is not an option not to treat Crohn's, so we certainly need to weigh these risks against the benefits of having the disease well treated."
"It may be challenging for the individual to consider all of these risk and benefits. They need to work with their gastroenterologist to decide what might be most beneficial for them and what risk they are willing to accept to effectively treat Crohn's disease," Bloomfeld says. "You have to be willing to accept some risk to adequately treat Crohn's disease."
SOURCES:Crohn's Colitis Foundation of America: "Biologic Therapies."Marie Borum, MD, professor of medicine; director, Division of Gastroenterology and Liver Diseases, George Washington University, Washington, D.C.Richard Bloomfeld, MD, associate professor of medicine; director, Inflammatory Bowel Disease Program, Wake Forest University School of Medicine, Winston Salem, N.C.Prabhakar Swaroop, MD, assistant professor; director, Inflammatory Bowel Disease Program, University of Texas Southwestern Medical Center, Dallas.
Here are the most recent story comments.View All
The views expressed here do not necessarily represent those of NewsSource 16
The Health News section does not provide medical advice, diagnosis or treatment. See additional information. | <urn:uuid:d144a4d1-4df0-4ff9-bc6a-945b724413d1> | CC-MAIN-2013-20 | http://www.kmtr.com/webmd/crohnsdisease/story/Taking-a-Biologic-for-Crohns-Disease-Risks-and/jLMMkVIuEEygAcFikTW0ig.cspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946601 | 1,559 | 2.703125 | 3 |
Positive Chronicles - East of eden
by Dr Kailash Vajpeyi
"Man is no longer
to be the measure of all things, the center of the universe. He has been measured
and found to be an undistinguished bit of matter, different in no essential way
from bacteria, stones and trees. His goals and purposes, his egocentric notions
of past, present and future; his faith in his power to predict and through prediction
to control his destiny—all these are called into question, considered irrelevant,
or deemed trivial."
When Leonard B. Meyer yanked man down from the exalted status assigned him by the Judeo-Christian tradition, in his 1963 book, The End of Renaissance?, he triggered off a radical shift in the relationship between man and nature. Today, that understanding goes variously by the name of Gaia or Deep Ecology.
The Gaia hypothesis postulates that Planet Earth is a living organism that adjusts and regulates itself like any other organism and that for 3.5 billion years, microbes, plants and animals have co-evolved with the environment as one globally integrated superorganism. In much the same vein, Deep Ecology believes in the essential ecological equality of all species, man and mouse, elephant and earthworm. In an interconnected, indivisible ecosystem, each part is as crucial as the next.
Here, T.S. Eliot may have been tempted to comment on the return of things to their point of beginning. For interconnection was the fundamental premise of the relationship between all traditional civilizations and nature. Unlike the western equation of conqueror and conquered, traditional people related to nature much as an offspring to a benevolent mother, or a devotee to a deity.
Most eastern religions such as Vedic Hinduism, Jainism and Buddhism, include within nature not only all forms of life but also that which is inanimate and invisible. Vedic texts uphold the doctrine called Madhu Vidya, or interdependence between man and nature. The Vedic worldview is beautifully expressed in that famous injunction, Vasudhaiva Kutumbakam (the world is one family).
In the Vedas, natural elements play a pivotal role. But the interrelationship of creation was always within the context of its relationship with the creator. The Vedic sages believed that everything in this world stems from divine knowledge (the word) which was first revealed to a group of seers, who then passed on this knowledge to successive generations of Vedic seers.
And thus, Saraswati, the Goddess of Divine Speech, holds a special place among Hindu deities.
May the divine speech, Saraswati,
The fountainhead of all faculties (mental and spiritual),
The purifier and bestower of true vision,
The recompenser of worship: Be the source of inspiration and accomplishments
For all our benevolent acts
(Rig Veda 1-3-10)
Thus, speech, or vak, has a preeminent role in the Indian tradition. Water, it is believed was literally produced by vak. In turn, if we accept the theory that the theory that the hydrogen molecule is the basis of all life, water could be said to have created the rest of life.
Of the five basic elements that make up life—earth, space, wind fire and water—the last, in the Vedic view, is the primal element. No wonder there are dozens of Vedic verses in praise of water:
O water source of happiness, we pray,
Please give us vigor so that we may
Contemplate the great delight Hail to you divine, unfathomable
All purifying waters
You are the foundation of all this universe
The consciousness of being composed of the same elements was one more proof of the unity of all creation. The elements, both separately and jointly as life forms, were, at one and the same time, objects of reverence and intimately related to us.
We hardly realize that there are cosmic forces which are working in cyclical patterns, and that the most fundamental pattern which governs our life is the movement of he earth on its axis. One shudders to think what would happen to life as we know it if the earth stopped spinning on its axis or the sun failed to rise in the morning.
We are creatures of the planet but the earth is not a geographical entity, it is us. The earth is not simply dust but a reservoir of all energy. It has given birth to four types of creatures: swedaj, udbhij, andaj and pindaj (aquarian, flora and fauna, avian and mammalian).
To the Vedic seers, the idea of subjugating or exploiting the earth was incomprehensible. To them it was an object of worship and not of exploitation. Its conquest was tantamount to dissecting a mother's body to study her heartbeat or chopping her breasts to isolate the gland producing milk. But times have changed. Today, man has no qualms about expropriating the earth's wealth for his own benefit. This has resulted in the creation of a new fifth species, the yantraj—the technetronic being.
According to Daniel J. Boorstin, the author of Cleopatra's Nose: "When the machine kingdom arrived on the scene, it entirely changed the fixedness of the idea of change. A natural species reacts to its environment and learns to adapt to it. But the technetronic species creates its own environment."
For instance, media technology tends to create what can be termed asdiplopia or double image, where it is hard to distinguish reality from illusion. Television, for example, has the capacity to convert an event into virtual reality, what is there is also here at the same time or what is here can also be there if it has been filmed. For the vedic man, the earth was the bestower of blessings, she was the protector of life. All descriptions of Ramrajya, (the reign of Lord Rama, the hero of the Indian epic Ramayana) portrayed the earth as abundant and giving.
The Mahabharata eulogized Yudhisthira's reign thus: "Earth yielded abundant crops and all precious things. She had become the provider of all goodness. Like kamdhenu, the celestial cow, the earth offered thousands of luxuries in a continuous stream."
In Bhumi Sukta we come across verses such as:
O purifying Earth, I you invoke
O, patient Earth by sacred word
Enhanced bearer of nourishment and strength of food and butter,
O, Earth we would approach you with due praise
Influenced by this holistic vision, the Indian way of life was integral, its purpose the well-being of creation. Even in the matter of eating, our ancestors emphasized the importance of feeding others before themselves. A householder could eat only after propitiating the ancestors, the devas representing different aspects of nature, the bhutas representing all created beings, guests, members of the household and servants. The practice of agriculture was deeply influenced by this sacred vision of interconnection.
According to the activist Vandana Shiva's book, The Seedkeeper, new seeds were first worshipped before being consumed. New crop was worshipped before being consumed. For the farmer, field is the mother: worshipping the field is a sign of gratitude towards the earth, who as mother, feeds the millions of life forms who are her children.
"In the place of chemical manures and pesticides, the traditional farmer used nature's own checks and balances to nurture fertility and keep pests at bay. A typical rice field supported and in some places continues to do so 800 species of "friendly insects"—spiders, wasps, ants and pathogens that controlled 95 per cent of insect pests.
These practices are still a living presence among India's tribal societies, for instance, the Warlis, a community near Mumbai, worship nature as Hirva (green) and consider all produce to be gifts of Hirva, rather the fruits of their own labor. Conservation of plants and animals was an innate aspect of their culture, illustrated in the concept of the sacred grooves: mangroves, marshlands and other tracts of land supposedly inhabited by spirits, where killing of plants and animals is taboo.
The Bishnois of Rajasthan, too, will rather die than let a single tree be felled. The concept of coexistence took many forms. Before felling a tree to construct a temple, the carpenter traditionally sought the permission of the tree. And in Emperor Asoka's time, veterinary hospitals were state institutions.
Among the five vital elements which sustain life on earth, the wind in the Rig Veda is called vata. Though the wind is connected with the primordial waters, its origin is not known.
Vedas also address it as the spirit:
May the wind breathe upon us
Prolong our lifespan
And fill our hearts with comfort
Responding to the current environmental crisis, Susan Griffin in her book Women & Nature writes: "We live as if nature is only need to provide extras: paper, recreation, specialty foods, a job to provide money."
Unlimited desire and man's greed has devastated this planet to such an extent that by the time you finish reading this article, at least 10 species of birds would be extinct forever. In contrast, personal fulfillment in Buddhism is sought through independence.
Here the self is temporary and nonessential rather than the center of the universe. Writes Kerry Brown, co-author of Buddhism and Ecology, about the Buddhist philosophy: "Where infinite spiritual development is possible within a physical existence that is understood and accepted as infinite."
Buddha attained enlightenment under a banyan tree, J. Krishnamurti had the same kind of realization under a pepper vine. No wonder the author of Bhamini Vilas called the tree Guru.
"O tree! You bear fruits, leaves and flowers and protect people from the scorching sun. Whoever come to you in scorching heat, you take away their suffering and give them coolness. This way you surrender yourself for others. That is why you are a Guru of all kind people."
Anekantavada, the Jain concept that professes multiple views of reality, goes even deeper. Its verdict on the unmindful endeavors of mankind would be damning. The bacterial organism, as understood in modern science, can be compared with what is called nigodiya life in Jainism. And ahimsa or nonviolence, which is fundamental to Jain philosophy, teaches not harming even the basic forms of life. Jainism and other Indian religions advocate that compassion must be the foundation for any truly civilized community.
Lawrence Joseph, the author of Gaia, has obviously been deeply influenced by all systems of Indian philosophy which adhere to the universal law of interdependence. Lynn Margulis, co-author of the Gaia theory along with James Lovelock, believes strongly that the biological microcosm provides a key controlling influence in the global environment and argues that the role of these tiny organisms has been underestimated because they are invisible. With the convergence of the most recent scientific understanding and the most recent ancient wisdom, there is hope yet for the survival of the earth and, in turn, life on it.
There can be no better sign of it than NASA circulating, all over the USA, a photograph of the earth with the caption: Love your mother.
|HOME | SUBSCRIBE | WALLPAPERS | ADVERTISING | POLICY | PRACTITIONERS | WRITERS | PEOPLE | ABOUT | CONTACT| | <urn:uuid:4e0e9bff-c5aa-4942-a3eb-fa578926eb71> | CC-MAIN-2013-20 | http://www.lifepositive.com/mind/philosophy/eden.asp | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.951647 | 2,407 | 2.59375 | 3 |
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock
Ecological and functional relationships
The community predominantly consists of algae which cover the rock surface and creates a patchy canopy. In doing so, the algae provides an amenable habitat in an otherwise hostile environment, exploitable on a temporary basis by other species. For instance, Ulva intestinalis provides shelter for the orange harpacticoid copepod, Tigriopus brevicornis, and the chironomid larva of Halocladius fucicola (McAllen, 1999). The copepod and chironomid species utilize the hollow thalli of Ulva intestinalis as a moist refuge from desiccation when rockpools completely dry. Several hundred individuals of Tigriopus brevicornis have been observed in a single thallus of Ulva intestinalis (McAllen, 1999). The occasional grazing gastropods that survive in this biotope no doubt graze Ulva.
Seasonal and longer term change
- During the winter, elevated levels of freshwater runoff would be expected owing to seasonal rainfall. Also, winter storm action may disturb the relatively soft substratum of chalk and firm mud, or boulders may be overturned.
- Seasonal fluctuation in the abundance of Ulva spp. Would therefore be expected with the biotope thriving in winter months. Porphyra also tends to be regarded as a winter seaweed, abundant from late autumn to the succeeding spring, owing to the fact that the blade shaped fronds of the gametophyte develop in early autumn, whilst the microscopic filamentous stages of the spring and summer are less apparent (see recruitment process, below).
Habitat structure and complexity
Habitat complexity in this biotope is relatively limited in comparison to other biotopes. The upper shore substrata, consisting of chalk, firm mud, bedrock or boulders, will probably offer a variety of surfaces for colonization, whilst the patchy covering of ephemeral algae provides a refuge for faunal species and an additional substratum for colonization. However, species diversity in this biotope is poor owing to disturbance and changes in the prevailing environmental factors, e.g. desiccation, salinity and temperature. Only species able to tolerate changes/disturbance or those able to seek refuge will thrive.
The biotope is characterized by primary producers. Rocky shore communities are highly productive and are an important source of food and nutrients for neighbouring terrestrial and marine ecosystems (Hill et al., 1998). Macroalgae exude considerable amounts of dissolved organic carbon which is taken up readily by bacteria and may even be taken up directly by some larger invertebrates. Dissolved organic carbon, algal fragments and microbial film organisms are continually removed by the sea. This may enter the food chain of local, subtidal ecosystems, or be exported further offshore. Rocky shores make a contribution to the food of many marine species through the production of planktonic larvae and propagules which contribute to pelagic food chains.
The life histories of common algae on the shore are generally complex and varied, but follow a basic pattern, whereby there is an alternation of a haploid, gamete-producing phase (gametophyte-producing eggs and sperm) and a diploid spore-producing (sporophyte) phase. All have dispersive phases which are circulated around in the water column before settling on the rock and growing into a germling (Hawkins & Jones, 1992).
Ulva intestinalis is generally considered to be an opportunistic species, with an 'r-type' strategy for survival. The r-strategists have a high growth rate and high reproductive rate. For instance, the thalli of Ulva intestinalis, which arise from spores and zygotes, grow within a few weeks into thalli that reproduce again, and the majority of the cell contents are converted into reproductive cells. The species is also capable of dispersal over a considerable distance. For instance, Amsler & Searles (1980) showed that 'swarmers' of a coastal population of Ulva reached exposed artificial substrata on a submarine plateau 35 km away.
The life cycle of Porphyra involves a heteromorphic (of different form) alternation of generations, that are either blade shaped or filamentous. Two kinds of reproductive bodies (male and female (carpogonium)) are found on the blade shaped frond of Porphyra that is abundant during winter. On release these fuse and thereafter, division of the fertilized carpogonium is mitotic, and packets of diploid carpospores are formed. The released carpospores develop into the 'conchocelis' phase (the diploid sporophyte consisting of microscopic filaments), which bore into shells (and probably the chalk rock) and grow vegetatively. The conchocelis filaments reproduce asexually. In the presence of decreasing day length and falling temperatures, terminal cells of the conchocelis phase produce conchospores inside conchosporangia. Meiosis occurs during the germination of the conchospore and produces the macroscopic gametophyte (blade shaped phase) and the cycle is repeated (Cole & Conway, 1980).
Time for community to reach maturity
Disturbance is an important factor structuring the biotope, consequently the biotope is characterized by ephemeral algae able to rapidly exploit newly available substrata and that are tolerant of changes in the prevailing conditions, e.g. temperature, salinity and desiccation. For instance, following the Torrey Canyon tanker oil spill in mid March 1967, which bleached filamentous algae such as Ulva and adhered to the thin fronds of Porphyra, which after a few weeks became brittle and were washed away, regeneration of Porphyra and Ulva was noted by the end of April at Marazion, Cornwall. Similarly, at Sennen Cove where rocks had completely lost their cover of Porphyra and Ulva during April, by mid-May had occasional blade-shaped fronds of Porphyra sp. up to 15 cm long. These had either regenerated from basal parts of the 'Porphyra' phase or from the 'conchocelis' phase on the rocks (see recruitment processes). By mid-August these regenerated specimens were common and well grown but darkly pigmented and reproductively immature. Besides the Porphyra, a very thick coating of Ulva (as Enteromorpha) was recorded in mid-August (Smith 1968). Such evidence suggests that the community would reach maturity relatively rapidly and probably be considered mature in terms of the species present and ability to reproduce well within six months.
No text entered.
This review can be cited as follows:
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock.
Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line].
Plymouth: Marine Biological Association of the United Kingdom.
Available from: <http://www.marlin.ac.uk/habitatecology.php?habitatid=104&code=2004> | <urn:uuid:13da434f-f140-49e3-8fdb-67019653693a> | CC-MAIN-2013-20 | http://www.marlin.ac.uk/habitatecology.php?habitatid=104&code=2004&code=2004 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.912592 | 1,520 | 3.625 | 4 |
Increasing Local Contrast
The previous section showed adjustments to the brightness and contrast of the entire image. The adjustment changes every pixel of original brightness value A to the same final brightness value B, regardless of the pixel’s neighbors. Another class of operations increases the visibility of local differences between pixels, by suppressing the longer-range variations. These neighborhood functions use a moving neighborhood, usually a small circle, that compares or combines the central pixel and the neighbors to produce a new value that is assigned to the central pixel to construct a new image. Then the neighborhood shifts to the next pixel and the process is repeated. These calculations are applied to the pixel brightness values in a color coordinate system such as HSI or LAB that leaves the color values unchanged.
For instance, local equalization functions just like the histogram equalization procedure, except that it takes place within a moving circular neighborhood and assigns a new value only to the central pixel. The result makes a pixel that is slightly brighter than its surroundings brighter still, and vice-versa, enhancing local contrast. The result is usually added back in some proportion to the original image to produce a more visually pleasing result, as shown in the Local Equalization interactive Java tutorial.
Sharpening of images to increase local contrast is almost universally applied by publishers to counter the visual blurring effect of halftoning images in the printing process. This is usually done by a convolution using a kernel of weights, just as the Gaussian smoothing function shown above. But in this application, some of those weights will have negative values. For instance, the Laplacian sharpening filter in Table 1 combines each pixel with its eight adjacent neighbors as shown in the Laplacian Sharpening interactive Java tutorial.
A more flexible extension of this basic idea is the widely used (and as often misused) unsharp mask. The name derives from a century-old darkroom procedure that required printing the original image at 1:1 magnification but out of focus onto another piece of film (this was the unsharp mask), and then placing the two films together to print the final result. Where the original negative was dense, the mask was not (and vice versa) so that little light was transmitted, except near detail and edges where the mask was out of focus.
The same effect can be produced in the computer by applying a Gaussian blur to a duplicate of the original and then subtracting it from the original. The difference between the two images is just the detail and edges removed by the blurring. The original image is then added back to the difference to increase the visibility of the details while suppressing the overall image contrast. In the Unsharp Masking interactive Java tutorial, the result image is automatically scaled to the range of the display so that negative values that can result from the calculation are not lost.
One of the characteristics of the unsharp mask is the formation of bright and dark “haloes” adjacent to the dark and bright borders (respectively) of structure in the image. This increases their visibility, but can hide other nearby information. A related approach using neighborhood ranking rather than Gaussian blurring alleviates this problem. The method applies a median filter to remove fine detail, subtracts this from the original to isolate the detail, and then adds the original image back to enhance the visibility as shown in the Rank Masking interactive Java tutorial. This method is called a rank mask, but is sometimes (incorrectly) referred to as a top hat filter (the real top hat is shown below).
Note that all of these local enhancement methods are very noise sensitive, because both random speckle and shot noise produce pixels that are different from their local neighborhood. Image noise must be removed before enhancement is attempted, or the visibility of the noise will be increased as shown in the Comparison of Local Contrast Enhancement Methods interactive Java tutorial.
The top hat filter is also a based on neighborhood ranking, but unlike the procedure above it uses the ranked value from two different size regions. The brightest value in a circular interior region is compared to the brightest value in a surrounding annular region. If the brightness difference exceeds a threshold level, it is kept (otherwise it is erased). The Top Hat Filter interactive Java tutorial shows the filter’s operation. If the interior and annular regions are drawn as shown in the diagram in Figure 1, the reason for the filter name becomes apparent. The interior region is the crown and the threshold is its height, while the surrounding annulus is the brim of the hat. This operation is particularly well suited for finding the spikes in Fourier transform power spectra, as illustrated previously.
The top hat is also good for locating any features of a known size by adjusting the radius of the crown. Objects too large to fit into the crown of the hat are selectively removed. Reversing the logic to use the darkest values in both regions enables the same procedure to isolate dust or other dark features. By replacing the interior value by the mean of the surroundings, the dust can be selectively removed. In this application, shown in the Rolling Ball Filter interactive Java tutorial, the method is called a rolling ball filter.
John C. Russ - Materials Science and Engineering Dept., North Carolina State University, Raleigh, North Carolina, 27695.
Matthew Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
Questions or comments? Send us an email.
© 1998-2009 by Michael W. Davidson, John Russ, Olympus America Inc., and The Florida State University. All Rights Reserved. No images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the copyright holders. Use of this website means you agree to all of the Legal Terms and Conditions set forth by the owners.
This website is maintained by our | <urn:uuid:5a975454-3da4-4fb7-b80e-35755220af39> | CC-MAIN-2013-20 | http://www.micro.magnet.fsu.edu/primer/digitalimaging/russ/localcontrast.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.910231 | 1,204 | 3.5625 | 4 |
About the presenter: Patricia M. Roberts, Ph.D., CCC-SLP, SLP(C), is currently Associate Professor in speech-language pathology at the University of Ottawa, in Canada's capital. She holds degrees from Queen's University (Kingston, Canada) and Florida State University and obtained her Ph.D. from the Université de Montréal. In her first career as a clinical SLP, she worked with many bilingual clients and was privileged to have the late Marie Poulos as colleague and mentor. She is spending much of her second career as a professor and researcher trying to understand the many unsolved puzzles of bilingual stuttering.
In this presentation, I will focus on four mysteries (things we do not yet know) about stuttering in bilingual children and adults and some of the myths associated with these gaps in our current knowledge.
To make this essay easier to read, I won't say "bilingual or multilingual" each time the word "bilingual" comes up, but in most places what applies to bilinguals also applies to multilingual speakers - as far as we know, so far.
Mystery 1: How many bilingual people are there?
It is sometimes confusing to even try to discuss bilingualism because the word bilingual means different things to different people. For some people, bilinguals are people who speak two (or more - for multilinguals) languages equally and perfectly. People who speak two languages in their daily lives, and can do most things such as talking to people at work, reading the newspaper, and understanding conversations with friends, sometimes say "Oh yes, I can do all that. But I am not bilingual". Other people describe themselves as bilingual if they can communicate basic ideas, even if they make many errors in grammar and pronunciation and have a very small vocabulary in one language.
In research, both these kinds of people are seen as having different levels of bilingualism. Ratings from 1 to 7 or 1 to 9 are often used to estimate where each person falls along the continuous line that goes from "I really ONLY know one language" to "I am one of those rare people who feels equally at home in two languages, no matter what the task or topic". For speaking, hearing, reading, and writing, most of us are at slightly different levels of ability, in each of the languages we know.
For this essay - and the discussions I hope it will spark - let's think of bilingualism as being a continuum. We don't divide the world into tall people and short people. There is no rational cut-off to separate "tall" from "short". Same thing for "bilingual" and "unilingual". Everyone is at some point along the line that goes from strongly unilingual to very, very bilingual.
With a broad definition of bilingualism, some authors estimate that there are at least as many people in the world who need to use two or more languages in their daily lives as there are people who can only function in one language (see, for example, Bhatia & Ritchie, 2006). We cannot make precise estimates unless we first define what levels of bilingualism are included or excluded from the count (and where to divide dialects from languages).
Mystery 2: Is the incidence of stuttering the same in different languages?
There are studies of the incidence of stuttering in different countries. Some authors use these studies to say things like "the incidence of stuttering is higher in Country X than in Country Y". But, if each study used different ways of sampling and different ways of determining who stutters, it is not valid to compare across studies. For example, from one study to the next, different methods were used in deciding who is stuttering: parent reports? Teachers in schools or day care centres? Parents remembering what the child was like 5 or 10 years ago? There are also differences in what counts as stuttering: only for a few months at age 3? Only people who stuttered for more than a year? Only those who reach a given level of severity?
When people see these reports, they often speculate about why the incidence figures SEEM to be different (ignoring the differences in how the estimates were reached), often using their favourite aspect of stuttering as the explanation. Thus, we see explanations like:
1) "There is more stuttering in Country X than in Country Y because the grammar of the language spoken in Country X makes greater demands on memory...." The complexity of a language might be relevant, in some subtle ways including the location of moments of stuttering within a sentence. Concluding that the language itself influences the number of people who stutter requires a huge, dangerous leap of logic. There are other possible explanations that have to be ruled out before we can select one of them and reject the others.
2) "There is more stuttering in ___ because that culture views speaking well as very important and the pressure to speak well makes people stutter". This explanation now seems very unlikely, given what we know about the causes of stuttering.
Now that we understand the importance of genetics and the inherited nature of stuttering for many people, it seems logical to ask whether, in some ethnic groups, more people carry the genes that make them vulnerable to stuttering than is the case in other ethnic groups. Perhaps the genetic pool, not cultural or linguistic features, has the strongest influence on the incidence of stuttering. Or, more likely, perhaps several causal factors contribute, interacting with each other in ways we do not yet understand.
The only way we will ever know whether the incidence of stuttering varies across languages or countries is to do international, collaborative studies where the same rigorous methods are used everywhere. THEN we can propose explanations for the similar or differing rates of stuttering in different languages and/or countries.
Mystery 3: Does speaking more than one language increase the risk of a child stuttering?
Many people think it does. For the general public, it seems logical. Sometimes people reason this way: 1) Speaking two languages is hard. 2) For children who stutter, speaking is hard. 3) Therefore, children who stutter (or those who are at risk of developing stuttering, because of a known family history) should not be expected to learn two languages.
In the research on stuttering, the Demands and Capacities model seems to apply here. But is it "hard" to learn two languages, or is this a myth? If learning two languages as a young child is neurologically or cognitively strenuous, why is it that tens of millions of children do so successfully? Are their brains slightly stressed if they have to sort out two languages during the best language-learning years in childhood? Most of the research on bilingualism says "no". However, bilingualism is seen as something very positive by most people who do research on it, and most studies are designed to detect advantages, not problems, associated with bilingualism. Also, this research is based on children with no speech or language problems. In children with a genetic vulnerability to stuttering, is learning two sets of words and grammar rules, and two sets of speech sounds, harder than it is for children without this vulnerability? If learning two languages as a child is much harder than learning one, is it harder for all potentially bilingual children, or only for a sub-group of those who might be at risk for stuttering?
How should we interpret the recent and somewhat controversial study by Howell, Davis and Williams (2009) that found a higher incidence of stuttering in children if they began learning English (the language of their new country) before age 5? Were there other reasons for the finding that children who learned English before starting school were more likely to stutter than those who reportedly began learning English when they began school in London, England? (See Packman et al.'s 2009 letter to the editor and Howell et al.'s reply.)
There are four other, older studies that have led some people to conclude that bilingualism is too great a strain for children who stutter. In each case, these studies have serious flaws that make it impossible to draw any conclusions from them. Travis, Johnson and Shover (1937) asked people with no training in communication disorders (such as priests and steel company personnel directors) to talk to young children and classify them as stuttering or not stuttering based on one interview. Stern (1948) interviewed children whose parents reported that they stuttered. In both these studies, we have little information about the type of speech sample obtained, how long it was, or how the disfluencies were counted. By current standards, these studies would not be accepted for publication.
Dale (1977) reported that four Cuban-American teenagers felt that being made to speak their weaker language made them more disfluent. Most bilinguals have a stronger and a weaker language. For these teens, their first language - Spanish - was their weaker language, since so much of their lives at school and with friends took place in English, their second language. This study "blames" bilingualism. But we have no information about real disfluency rates across different situations, and Dale does not distinguish between normal disfluencies and tense, stuttered disfluencies. There are studies showing that, in adults, the memory load of speaking in their weaker language may lead to a higher number of normal disfluencies (ums, uhs, revisions) in the weaker language than in the preferred language (e.g., Fehringer & Fry, 2007). Perhaps that is all that was happening in this study. Dale offers no data to support the notion that any of the four adolescents, in fact, stuttered.
Karniol (1992) described how stuttering appeared to increase and decrease in a young boy whose environment included exposure to various levels of English, Hebrew, and Hungarian during an extremely tense time that included a war going on around him. With the information provided, we cannot tell what his real level of exposure to each language was (siblings, friends, parents etc.) and whether his parents' attempts to expose him to only Hebrew had any impact on what is described as a recovery from stuttering. The parents' diaries cover a period of approximately one year (age 2 to age 3) when the boy was in the age group where the chances of spontaneous recovery from stuttering are very, very high.
There is (still) no clinical research to support the strategy of removing one language from a child's environment. Recent reviews of the literature do not find support for doing this routinely for all children (e.g., Bernstein Ratner, 2004; Roberts & Shenker, 2007; Van Borsel, Maes & Foulon, 2001). Some clinicians do this, however, if they work in a Demands and Capacities framework OR if the child also has delayed language and/or problems learning the speech sounds of his/her language. Until there is solid evidence on the impact of bilingualism in young children (i.e. a series of studies, done by different authors, ideally on different types of speakers and different pairs of languages), each clinician is left to try a particular strategy and assess its impact on a case by case basis.
Mystery 4: Do some bilingual people stutter in only one language?
As of 2010, I am still not aware of any documented case of this occurring. Like the Loch Ness monster, there are reported sightings from time to time, but no real proof that this is possible. In my years working with bilingual adults who stutter, I never assessed or treated a case of "unilingual stuttering". (Note: if you know of someone who stutters in only one of their two (or more) languages, I would be interested in exploring this with you. Just because there are no documented cases does not mean that it never occurs!)
Moreover, given the roles of genetics and motor processes in stuttering, it is highly unlikely that someone would stutter in only one of their languages. Van Riper (1971) cites second- and third-hand reports of two people said to stutter in only one of their two languages, but offers no data. Howell, Davis and Williams (2009) report that 2 of the 38 children in their bilingual group stuttered in only one language, but there are no supporting data about rates of disfluencies or levels of proficiency in each language, and the children were not assessed using a range of speaking tasks.
Roberts and Shenker (2007, Table 1) outlined the steps that would be required to show that someone with a working knowledge of two languages stutters in only one of them. When someone appears to stutter in only one language, there may be other explanations for this impression.
There are more and more studies about stuttering in different languages and some studies (and soon, a new book edited by Howell and Van Borsel) that focus specifically on stuttering in bilingual speakers. This is a very welcome change. Ten or fifteen years ago, there was little awareness that bilingual stuttering was a topic that needed exploring. Perhaps in a future ISAD forum, there will be articles about the answers to the questions raised in this one.
Bernstein Ratner, N. (2004). Fluency and stuttering in bilingual children. In B. Goldstein (Ed.), Bilingual language development and disorders in Spanish-English speakers (pp. 287-308). Baltimore: Paul H. Brookes.
Bhatia, T.K., & Ritchie, W.C. (2006). Introduction. In T.K. Bhatia & W.C. Ritchie (Eds.), The Handbook of Bilingualism (pp. 1-2). Oxford: Blackwell Publishing.
Dale, P. (1977). Factors relating to disfluent speech in bilingual Cuban-American adolescents. Journal of Fluency Disorders, 2, 311-314.
Fehringer, C., & Fry, C. (2007). Hesitation phenomena in the language production of bilingual speakers. Folia Linguistica, 41, 37-72.
Howell, P., Davis, S., & Williams, R. (2009). The effects of bilingualism on stuttering during late childhood. Archives of Disease in Childhood, 94, 42-46.
Karniol, R. (1992). Stuttering out of bilingualism. First Language, 12, 255-283.
Packman, A., Onslow, M., Reilly, S., et al. (2009). Stuttering and bilingualism. Archives of Disease in Childhood, 94, 248. (A letter to the editor re the Howell, Davis and Williams study.)
Roberts, P.M., & Shenker, R.C. (2007). Assessment and treatment of stuttering in bilingual speakers. In R.F. Curlee & E.G. Conture (Eds.), Stuttering and related disorders of fluency (3rd ed., pp. 183-209). New York: Thieme Medical Publishers.
Stern, E. (1948). A preliminary study of bilingualism and stuttering in four Johannesburg schools. Journal of Logopedics, 1, 15-25.
Travis, L.E., Johnson, W., & Shover, J. (1937). The relation of bilingualism to stuttering: A survey in the East Chicago, Indiana, schools. Journal of Speech Disorders, 2, 185-189.
Van Borsel, J., Maes, E., & Foulon, S. (2001). Stuttering and bilingualism: A review. Journal of Fluency Disorders, 26, 179-205.
REFORM IN SPANISH EDUCATION: THE INSTITUCION LIBRE DE ENSENANZA
by Noel M. Valis
Paper presented to Thomas Woody Society, University of Pennsylvania, January 26, 1977
This document examines the development and influence of the Free Institute of Education (Institucion Libre de Ensenanza) and of its founder, Don Francisco Giner de los Rios, in late nineteenth century Spain. Founded in 1876 against a background of repression and reimposition of state-controlled education during the Bourbon Restoration, the Institute was a private institution free of Church and State. Its intent was to create an alternative to the higher education system of official Spain, but due to financial problems, it evolved into an institution of primary and secondary education. Subject matter included traditional, State-required subjects, but also anthropology, technology, social sciences, economics, art, drawing, singing, and handwork--all generally neglected in State- and Church-run schools. Most radical were the innovations in art and physical education (stressing free inquiry, observation, and spontaneous criticism in the former, and development of the whole person in the latter) and in the institution of field trips, hiking, and nature observation. The use of textbooks was discouraged as much as possible, and examinations were regarded as producing mostly negative results. Emphasis was placed instead upon the creation of student notebooks that reflected the pupil's judgment and synthesis of materials. Don Francisco borrowed much from the French and English forms of education, and was influenced by Rousseau, Froebel, Pestalozzi, Krause, and Sanz del Rio, the last of whom provided his ideal of reconciling all human faculties to produce an artistic taste, technical preparedness, spiritual elevation and an austere, moral sense of life. The Institute fell victim to the Civil War of 1936, but proved a pervasive influence in Spanish society to this day. (MB)
It has sometimes been said of Spanish philosophy, "What Spanish philosophy?" The same reproach might be directed at the non-existence of Spanish education. "There ain't no such animal," some might claim, forgetting for the moment the intellectual freedom and depth of thirteenth-century Toledo under the reign of Alfonso X, the Wise (El Sabio), and the splendor and revival of learning in the sixteenth-century Universities of Salamanca, Alcala and other institutions. What is mostly remembered, however, is the disheartening decline of Spanish education, ushered in by the rigidities, fears and intolerance of the Catholic Counter-Reformation of sixteenth- and seventeenth-century Spain.
But this view of education in Spain, of necessity simplified, would not be complete without mention of the establishment and significance of the Institucion Libre de Ensenanza, in English, the Free Institution/Institute of Education. In order to understand more clearly the Institute's impact on Spanish society, I will review very quickly some of the historical background to its founding in 1876.
A broad overview of nineteenth-century Spain reveals to us that the "lack of civility" among Spaniards, which reached such extreme proportions during the 1936-39 Civil War, had its roots in the last century. Civil war, frequent military uprisings in the form of "pronunciamientos," and dissension everywhere created an ambience of unease and fragmentation within Spanish society. Historians talk of "Las dos Espanas," the "two Spains," that is, the liberal, progressive side as opposed to the traditional, sometimes reactionary side of the country. It is probably more accurate, however, to talk of the many Spains.
To disagree was the Spaniard's right -- no, his duty to himself, to his own proud sense of individuality and dignity. A solution imposed from above, from the State, seemed, in many cases, the only solution when there were problems, and there were many -- economic, political, religious. The problem of Spanish education was only one of several, and it too came to be subsumed into the more general and overriding conflict of State versus Individual, of Authority vs. Freedom. Reconciling such absolutes frequently failed; worse still, the distinction between philosophy and ideology, that is, between the search for truth and the molding, and frequent distorting, of existential reality to one's own conception of it, this distinction would too often become blurred in the disputes and violence of nineteenth-century Spain. Tempers rose, passions were unleashed, ideologies reigned supreme, and somewhere in the shuffle, clarity of vision and truth were lost.
This split in Spanish society in part gave birth to the Free Institute of Education. Specifically, we must look to the years 1868 and 1874 to explain how the Institute came into being. The date 1868 conjures up one outstanding event in modern Spanish history: the overthrow of the reigning Bourbon monarch, Isabel II, an action which is termed the Glorious Revolution of 1868. It was a somewhat qualified victory for the liberal cause in Spain since the end result was to bring confusion, instability, bitterness, and finally, in 1874, the reestablishment of another Bourbon king, Isabel's son, Alfonso XII. This period, called the Restoration (Restauracion) in Spain, also reinstated the State-controlled, religiously oriented educational system which the Revolution of 1868 had attempted to change. This move and the specific action which the Minister in charge of education, Manuel de Orovio, brought against the future founder of the Institute, Francisco Giner de los Rios, would be the direct and immediate causes of the Institute's creation.
SPANISH EDUCATION: OVERVIEW
Before going into a more detailed explanation of the origins of the Institute, I think a brief look at the state of Spanish education prior to 1876 would be useful. In my opening statement, I mentioned two high points in the history of Spanish education: the medieval center of learning in Toledo and the sixteenth-century Universities of Salamanca and Alcala de Henares. Both periods were characterized by a high enthusiasm for learning and considerable freedom in which to do it. In Toledo, Jews, Moors and Christians collaborated together in a spirit of respect and tolerance. In the first half of the sixteenth century, students and professors at Salamanca, Alcala and other universities constituted, within an amazing diversity of university modes of existence, an entity independent of the strictures and authority of the State. Yet by the seventeenth and eighteenth centuries, learning in Spain seemed to have ceased. For example, at the University of Salamanca, the Chair for Mathematics and Astrology -- the title speaks for itself, I think -- remained vacant for thirty years until it was finally filled in 1726.
What happened? In brief: The Counter-Reformation. This is obviously a great simplification of the causes of Spain's decadence in education and elsewhere. But certainly Spain's withdrawal and increasing isolation from the rest of Europe from the middle of the sixteenth century on explains in part the origins of stagnation in her schools and universities. In 1559 Philip II forbade study in foreign universities; shortly before that, he banned the importing of books from abroad. The additional power of the Inquisition to safeguard orthodoxy among Spaniards and weed out the impure and heretical elements must also be taken into account. And finally, as the historian Americo Castro has pointed out, Spaniards became reluctant to demonstrate intellectual powers and curiosity for fear of being taken for a Jew. Spanish Jews were known for their interest in intellectual matters. And nobody wanted to deal with the Inquisition.
To control the inner life and content of the universities and other schools, the government stepped in, so that by the nineteenth century Spanish schools, particularly the Universities, to quote Salvador Madariaga, "were mere Government establishments for the granting of official diplomas" (p. 75, Spain, N.Y., 1943). He goes on to say that "in a sense all universities tend fatally to become degree factories. But in Spain ... they were nothing else."
Schooling on all levels was plagued by unimaginative, stiff, unbending teaching, bad textbooks, long hours of routine and frequent utter boredom, and sometimes even brutalization. Memorization and recitation were the chief pedagogic tools. The first third of the last century also brought in more imitation of French manners and customs, certainly not the first instance of French influence on Spanish education. Eighteenth-century Spain had already adopted Gallic centralization of schooling. The critic Mariano Jose de Larra describes the mania of copying, badly, I might add, French mores among the Spanish middle classes and well-to-do. The narrator of "El casarse pronto y mal" writes that his sister became enamored of French customs and from then on, "bread was no longer bread (pan), nor wine, wine (vino)." "Suffice it to say," continues Larra in this ironic vein,
that my sister adopted the ideas of the period; but as this second education was as shallow and superficial as the first (her Spanish upbringing), and as that weak segment of humanity never knows how not to go to extremes, she suddenly jumped from the Christian Year of Our Lord 18__ to the era of Pigault Lebrun (a frivolous, sometimes scandalous French novelist) and left off going to Mass and devotions, without knowing in the least why she did so, why she used to go in the first place. She said that her son could be educated in whatever manner it suited him; that he could read without order or method whatever books fell into his hands; and God knows what other things she said about ignorance and fanaticism, reason and enlightenment, adding that religion was a social contract into which only idiots entered in good faith and that the boy didn't need religion to be good; that the terms, father and mother (padre y madre) were lower-class, and that one should treat one's papa and mama familiarly with the tu form of address because there is no friendship like that which unites parents to their children. (Articulos de Costumbres, Madrid, 1965).
A writer of a later period, Jose Maria de Pereda retains this image of his school days in the 1840's:
The chill of death, the obscurity of a dungeon, the stench of grottoes, the unhappiness, affliction and pain of torture permeated the classroom ... Virgil and Dante, so clever in depicting hell and torment, would have been at wit's end to describe those images of school which are engraved in my memory for the rest of my life ... I believed myself cut off from the refuge of my family and the protection of the State; I heard the swish of the cane and the complaints of the victims, and the lessons were very long, and there were no excuses for not knowing them; and not knowing them meant caning and mockery, which also hurt; and confinement, fisticuffs, whipping, and the ignominy of all these things. Who is the brave soul who could truly paint such scenes if the worst of it was what the spirit felt and not what the eyes saw or the flesh suffered? (Esbozos y rasgunos, "Mas reminiscencias," Obras completas, v. 1, Madrid, 1959, p. 1226).
He then goes on to say that "one had to know the lesson literally ("al pie de la letra"), word for word. One misplaced word, one substitution, was enough to merit punishment." (p. 1226). Why, one asks, was this so? Pereda explains that the professor "taught the way he had been taught: by blows. Little by little the habit became part of his nature. The want of intellect, the extreme devotion to the profession, the traditions of the classroom and the educational system did the rest." (p. 1232).
And finally, here is the testimony of yet another writer, the novelist Benito Perez Galdos, who describes Spanish education in the 1860's:
The class lasted hours and hours ... Never was there a more repugnant nightmare, fashioned out of horrible aberrations which were called Arithmetic, Grammar or Ecclesiastical History ... Around the axis of boredom revolved such grave problems as syntax, the rule of three, the sons of Jacob, all confounded in the common hue of pain, all tinted with loathing ... (El Doctor Centeno, Obras completas, v. 4, Madrid, p. 1319).
Pereda was a conservative, Galdos a liberal -- yet both concur that the Spanish educational system was wretched. Thus, when liberals and progressives took control in 1868, one of their first priorities was educational reform. They declared, first of all, the principle of freedom of education and teaching. The instability and factionalism of the liberal regime, however, precluded any lasting reforms in education. The conservative return to power in 1874 reestablished the right of the State to dictate to schools the textbooks to be used and the curriculum to be followed. In addition, Orovio, the Minister in charge of education, sent out to the Directors (or Rectores) of the Universities a circular in which he recommended that no religious doctrines inimical to those held by the State be taught and that no political ideas be expressed to the detriment of the king's person or the constitutional monarchy then in power. The circular also stated that action would be taken against any professor who so indulged in such political or religious meditations -- i.e., expulsion from the University. The Minister's recommendations were no mere recommendations.
The result of all this was the removal of several professors from their faculty chairs. Among them was the future founder of the Institute, Francisco Giner de los Rios. For not adhering to the State's demands, he was arrested in March 1875, at 4:00 in the morning, and spent four months in confinement until he was finally expelled from the University.
The principle in question was clearly one of academic freedom. The State, however, saw the "university question" ("cuestion universitaria") from another angle: that is, the University and its Faculties were no more than instruments of the government's policies and, therefore, were obligated to conform to the State's instructions and directives. The professors were, in effect, civil servants (and still are today).
This, then, is the historical and educational background to the founding of the Institucion Libre de Ensenanza, the Institute, in 1876.
THE INSTITUCION LIBRE DE ENSENANZA
Perhaps the most significant point to be made about the Institute's existence is the profound and far-reaching, if diffuse, influence of its founder, Francisco Giner de los Rios, a professor of philosophy of law at the University of Madrid. It was largely Don Francisco's attractive and vibrant personality which held the Institute together and proved to be the Prime Mover of the school. All the accounts of Don Francisco by friends, former students, disciples, and fellow professors stress the very personal and individual effect of the man. This is not to deny the cogency of his ideas and methods of teaching but simply to make clear that, with Don Francisco, abstractions were made concrete in his very person; that is, through him and his relations with men and women, he incarnated his own beliefs. One friend had this to say about him as a teacher at the time of his death in 1915:
What was the secret of his teaching? Did he reveal anything new? Or was it that everything was transformed at the touch of his powerful creative imagination?
The secret lay as much in the form as in the substance.
As a teacher, he brought us something that was the complete opposite of the old methods; and he discouraged the craze for oratory which has been so damaging to education in Spain. In his lectures at the University or at the Institution, he only aimed at one thing: to shake the pupil out of his torpor, stir him up to independent investigation, to working the thing out by himself; and above all he recommended games, art, the country.
As an educationist he created a complete system of social education, which had for its axis the child, the citizen, the man as he would like to see him, healthy in mind and body, and working for a Spain that was strong and dignified and which must one day rise again. (J.B. Trend, Origins of Modern Spain, N.Y. 1934, p. 99).
Or as another writer put it: "He gave us our conception of the universe and of the way to peel an orange." (p. 103).
What was Don Francisco's creation, the Institute, like and what were its educational and philosophical origins? How did Don Francisco and his disciples "make men," "hacer hombres," the overriding goal of the Institute? As J.B. Trend, Rafael Altamira, and others have noted, Don Francisco Giner believed that the most pressing problem of Spain was the problem of education. He, like the literary Generation of 1898, was obsessed -- and rightly so -- with the question of Spain's decadence, and the vital necessity for her regeneration. And he took the now classic position of the nineteenth-century liberal that through education lay the country's revitalization. It must be clearly understood that Don Francisco's efforts were those of a minority and directed toward a minority. It was not mass education, although the effects of the Institute certainly were to reach public education throughout Spain in the twentieth century.
First and most important, the Institute was created as a private institution, independent of both Church and State. I don't think it is necessary to do more than mention here the historic interdependence of Church and State in Spain. The credo of the Institute, which appeared regularly on the masthead of its publication, the Bulletin of the Free Institute of Education (El Boletin de la Institucion Libre de Ensenanza) was as follows:
The Free Institute of Education is completely opposed to religious, philosophical and political sectarianism, proclaiming only the principle of liberty and the inviolability of science and the concomitant independence of scientific research and explanation, with regard to any other authority than that of the conscience itself of the Professor, who is alone responsible for his ideas. (cited by A. Jimenez-Landi, "Don Francisco Giner de los Rios y la Institucion Libre de Ensenanza," Revista Hispanica Moderna, v. 25, nos. 1, 2, 1959, p. 16).
Because Spain's educational history had been one of bickering and divisiveness between the demands of the State and the private sector, and conflict between the precepts of the Church and needs of Science, with Religion usually dominating over Science, Don Francisco abhorred dogmatic, closed positions. He fervently believed in tolerance. It was not, however, mere intellectual benevolence which motivated Don Francisco. Rather, it was an ethical, moral stance, a way of life, which he wanted to instill in his pupils, the "Institutionists."
A follower of Giner de los Rios, Jose Castillejo, has written that, for Don Francisco, "the two greatest forces in education are: the personality of the teacher and the social atmosphere and surroundings of the school" (Wars of Ideas in Spain, London, 1937, p. 97). We have already seen in the magnetic power of Don Francisco's teaching itself the importance of the teacher's personality. What about the ambience of the school?
Here, one sees right away to what extent Don Francisco and his disciples felt compelled to move away from the current, i.e., antiquated and rigid, teaching methods and atmosphere of both public and private schools in Spain. First, the classroom should be informal, akin to familial surroundings. The teacher should not merely dictate or lecture, but rather converse, using whatever approach or combination of approaches worked best, starting with the Socratic dialogue. No one method was to be used to the exclusion of all others. The teacher was a guide, the pupils a family. A small family. Classes were to be kept small. And coeducational. (Primary and secondary education in Spain today is not coeducational.) Cordiality and the spirit of discovery were the key words at the Institute. Don Francisco aimed at dispelling not only the fear and horror of school, such as we have seen in Pereda's reminiscences, but the passivity with which most students received their education.
The original intent of the Institute was to create an alternative to the higher education of official Spain, but the desire was not to be met. It was quickly found to be beyond the resources of the Institute which suffered from chronic insufficiency of funds from its inception. Instead, the school evolved into an institution of primary and secondary education. Since most students entering a Spanish university were ill-prepared to meet its demands, the "Institutionists" felt that a solid intellectual, moral, physical and spiritual background given in the primary and secondary levels of education was an a priori necessity.
What was taught at the Institute besides the traditional subjects required by the State? The curriculum included Anthropology, Technology, Social Sciences, Economics, Art, Drawing, Singing, and Handwork. Most of these subjects were generally neglected in State- and Church-run schools of the period.
Most remembered and most significant are the innovations carried out in the arts and in physical education, and the frequent excursions. First, art. "Institutionists," for the most part, tried to avoid systematic and highly structured courses in art and art history. Instead, they emphasized such activities as excursions to historical monuments and places and visits to museums. Such an unorthodox procedure was unheard-of in nineteenth-century Spain. Rather than mere lessons, the Institute stressed the actual, vivid experiencing of art as much as possible. Like the literary generation of 1898, they also, in a sense, rediscovered Spain's cultural heritage, by extolling the value of Spanish folklore, architecture and painting. It was, for example, a disciple of Don Francisco, Manuel de Cossio, who rediscovered the forgotten and neglected El Greco for Spaniards and the rest of the world.
One of the most delightful illustrations of the Institute's approach to art is to be found in Don Francisco's essay on "Spontaneous Criticism by Children of the Fine Arts." In it he describes how a group of children of twelve and fourteen years of age, conducted by him one day to a museum, learned to form their own artistic sensibilities and judgments by comparing two pieces of sculpture, one by Donatello and the other by Lucas de la Robbia. On this occasion, Don Francisco did not even attempt to point out the obvious differences in style, expression and composition in the two sculptures. He simply let the children use their own powers of observation, uninfluenced by any previous explanations or prejudices. Thus, through observation, they were able to define the work.
The second and more difficult problem, says Don Francisco, was one of judgment. Which was the better sculpture? "I discovered," he writes, "a very curious phenomenon: there was a unanimous explosion in favor of Lucas de la Robbia. They stumbled over their words in their rush to tell me that from the very first de la Robbia had seemed to them so superior that they could scarcely understand why there should be any doubt; that sweetness, that mystical expression, that softness, that elegance, that repose. How could anyone compare this divine object with the coarse rawness and the hard, unbecoming and massive forms of Donatello? Why, it was almost a caricature of a sculpture! 'And Donatello is a sculptor with a great reputation!' they told me, almost aggressively. You can imagine how I resisted giving them the least sign of disagreement with this vehemently-held point of view, nor did I even invite them to study more carefully both works before pronouncing an opinion. Showing nothing but the most rigorous neutrality and even indifference, I began to look around rather distractedly, now at one piece, now at another. They did the same. After a little while, and spontaneously, there occurred a certain attenuation in the crudeness of their first judgment: 'No, I wouldn't say that it was exactly grotesque (in Spanish, 'un mamarracho').' 'There's a certain strength; the composition has a certain vigor.' 'If you put the piece in the right place, it wouldn't seem so bulky, so massive ...' And then: 'You know, if you really look hard at these things of Donatello, they're very manly; de la Robbia seems a little effeminate,' etc., etc. Finally, why prolong it? The gradual reversal of opinion in favor of Donatello reached the point of one child saying: 'There must be other works of Lucas de la Robbia which deserve his fame.' And it was precisely the very boy who had first placed in doubt the merits of Donatello's own reputation." ("Antologia," Revista Hispanica Moderna, v. 25, nos. 1, 2, 1959, pp. 132-133).
A second innovation which I mentioned before was the approach to physical education. Rather than the routine and boredom of calisthenics, directed toward a military goal of physical competence, the Institute stressed games, games which were to form character. The use of games as an ethical force is, of course, an educational practice borrowed from the public schools of England. Anyone who has read Kipling's Stalky and Company or the early school novels of P. G. Wodehouse will have a good idea of what I am referring to.
But, for Don Francisco, playing cricket and football also signified that the whole person was being educated. Intellectual formation alone was lopsided. To provide an integral education required an awareness and use of one's own body. Mere discreet walks, in carefully monitored lines, which was the usual practice and extent of physical exercise in other schools, were simply inadequate.
The third point of the Institute's educational program was the excursions out to the countryside. These also were unheard-of in nineteenth-century Spain. Long walks and mountain climbing simply were not done. In Don Francisco's time, people shut all their windows tight, never letting in fresh air; they frequented taverns and cafes, and sometimes strolled casually at night for a short walk along a busy thoroughfare, but almost never thought exploring the countryside an exhilarating occupation. Again, like the Generation of 1898, Don Francisco and his Institute discovered the Spanish countryside. Before that, almost no one seems to have appreciated it. Realist novelists, for example, rarely describe Nature; even the Spanish Romanticists evidence little sensitivity toward Nature.
The idea, which was brought back from Paris in 1878 by one of the Institute's professors, was, like the introduction of games, imported from abroad and adapted to Spanish circumstances. Excursions developed the intellectual and physical capabilities of the pupil; more important, for Don Francisco, they allowed one to enter into communion with Nature, to feel oneself as part of a Whole.
I would like to touch briefly on two other aspects of the Institute's educational program: the use of textbooks and examinations. Don Francisco discouraged the use of textbooks; to a great extent, he did so as a reaction to the wretched official textbooks forced on students at State- and Church-run schools. Instead, he preferred the creation of student notebooks which reflected the child's own judgments and synthesis of the material, and which were carefully checked and read by the teachers. Likewise, Don Francisco felt that examinations brought mostly negative results. Examinations in other schools were simply the means to acquire a degree, and stressed only the student's ability to memorize and to repeat exactly what the Professor dictated in class.
Respect for the freedom of the child is at the heart of the Institute's teaching. The intuitive method in education, which goes back to Jean-Jacques Rousseau, by way of Froebel and Pestalozzi, was practiced by the Institute. This meant the substitution of restraint, obligation, and mechanical behavior by personal effort, spontaneity, and school work which had become ___ and attractive.
These, then, were the main points of the Institute's program. It should be noted that the educational reform undertaken by the Institute was not the first attempt to improve education in Spain, and that the Institute's pedagogy depended, to a large extent, on influences from abroad which were modified to suit the Spanish temperament. The pedagogical efforts of [Gaspar Melchor de] Jovellanos in the eighteenth century and the short-lived Pestalozzian schools of the early nineteenth century are but two examples of such attempts at educational improvement in Spain. With regard to the Institute's educational philosophy, we have already seen that the "Institutionists" borrowed from both England and France. And a survey of the Institute's publications reveals that Don Francisco and his colleagues were quite aware of the pedagogical approaches of Pestalozzi and Froebel.
But perhaps the most significant, if somewhat vague, influence on the Institute is derived from the importation of the ideas of an obscure, second-rate German philosopher by the name of Christian Friedrich Krause by a then equally obscure Spanish professor, Julian Sanz del Rio. This is not the place to examine the abstruse metaphysics of Krause or of Sanz del Rio's adaptation of it, but simply to state that Sanz del Rio's profoundly ethical work, Humanity's Ideal for Life, strongly influenced Francisco Giner de los Rios and many other Spanish intellectuals who, during the 1860's and 70's, called themselves Krausistas. The Krausist tendency and ideal to create a world increasingly more unified, harmonious and complete is translated in pedagogical terms into the attempt to reconcile all the faculties of the human being, to develop the whole personality of the individual so that he or she might cultivate not only an artistic taste and sensibility, but a technical preparation, spiritual elevation, and an austere, moral sense of life.
Despite the criticisms leveled against Don Francisco and his Institute of being anti-religious, the Institute did inculcate a spiritual leaning in its students without favoring any particular orthodox religious belief. Don Francisco himself was a believer. Jose Castillejo writes that "Giner ... following Sanz del Rio, believed that schools need a religious spirit, to lift up the minds of children towards a universal order of the world, a supreme ideal of life and a harmony among men and between humanity and Nature. Without that spirit education is dead and dry" (Wars of Ideas in Spain, p. 100). One could say, in brief, that the whole of the Institute's education, the entire atmosphere of the Institute, was permeated with this spiritualization of man and the universe.
It does not take too much imagination to see that the Institute would not be without enemies. The ideological dichotomy between left and right, progressive and traditionalist, in Spain immediately polarized the significance of the Institute. It was the product of the Devil for some; the only hope and salvation in Spain for others. The Institute itself fell, one more victim, to the ravages of Spain's Civil War in 1936. Yet, looked at dispassionately, the Institute's openness to ideas and influences from the rest of Europe, its undogmatic approach to education and to life itself, could not help but bring a breath of fresh air to the closed and narrow society of nineteenth-century Spain. If it perhaps erred too much in the direction of intellectual anarchy and placed too much confidence in the innate goodness of man, the Institute's efforts at raising the moral and intellectual level of Spaniards became an all-pervasive influence in many institutions, both public and private, in government circles, in business. The Institute was "much more than a school." It was an atmosphere of intellectual and moral enlightenment; and a belief in the regeneration of Spain. | <urn:uuid:df3439e0-db77-4b79-9f38-5a9f9dd9fe4b> | CC-MAIN-2013-20 | http://www.naderlibrary.com/lit.reformspaneducvalis.toc.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.969591 | 6,658 | 3.0625 | 3 |
By Tom Boyle
All Things Birds Associate Naturalist
Natco Lake was created by accident, rather than by Mother Nature. The National Fireproofing Company (Natco) mined clay here for bricks in the 1930's. Eventually the mining equipment hit underground springs and the excavation filled in with water, forming the lake. A ditch was dug in an attempt to drain off the water into a nearby tidal creek. The ditching brought in salt water and made the lake brackish, as it remains today.
Birding the northern section of the lake:
Walk east along the Henry Hudson Trail and over a small bridge. Eastern Phoebe has nested under this bridge. After the bridge, turn right off the paved trail and then left. Follow the unpaved trail a short distance to a small tidal cove in the lake. On a changing tide, Yellow-crowned Night Heron is regularly seen. Both night herons nest locally and can be seen frequently. Occasionally, Diamond-backed Terrapins are seen basking on flotsam in the cove. Continue on the unpaved trail until it ends at a lawn on the lake's north side.
Scan the lake here. Shorebirds can be found in migration, along with herons, Osprey, gulls, cormorants, terns, and waterfowl. Great Black-backed Gull has begun nesting on one of the islands in the lake. Don't be surprised to see something unusual. An American White Pelican was seen on the lake in January about five years ago. I've seen American Oystercatcher, Black Skimmer, and copulating Least Terns sitting on the island in front of you. This is a good spot to check for lingering waterfowl at World Series of Birding time [mid-May]. Northern Shoveler has been seen in late May, Canvasback in late June and a drake Bufflehead has lingered here into July! Rough-winged Swallows and Belted Kingfishers have nested in the dirt banks around the lake and are often seen.
The woods along the Henry Hudson Trail are good for migrants in spring. In recent years I've seen (and heard) Acadian, Alder, and Olive-sided Flycatchers; Gray-cheeked Thrush; Mourning and Brewster's Warblers; Yellow-breasted Chat; and Lincoln's and White-crowned Sparrows. Fall is equally good, and Connecticut Warbler is regular at that time of year. If you walk the trail a little more than a quarter mile to the next bridge, look north along Thorne's Creek for Purple Martins, which now nest in houses provided by a homeowner here.
Birding the southern section of the lake:
Natco Park, a 260-acre Green Acres site managed by Hazlet Township, consists of mature swampy woods excellent in spring for migrants. From the Lakeside Manor restaurant parking lot, walk down the Orange Trail near the lake and into the woods. Philadelphia Vireo has been seen here in late May. The mature oaks along this trail can have Bay-breasted, Tennessee, and Cape May Warblers. A knowledge of bird song will be helpful here as the vegetation is thick. The trail turns left and follows the shoreline, eventually coming to a small cove (1 on map) where Spotted Sandpipers are seen.
At the south end of the cove, the trail (now the Red Trail) turns southeasterly into the woods. A small footbridge crosses over a little ripple called Thorne's Creek. Here the understory is again very thick. In this area in spring I've seen such sought-after migrants as Yellow-throated Vireo; Louisiana Waterthrush; and Worm-Eating, Prothonotary, Hooded, and Kentucky Warblers.
Continue south along the Red Trail. As you approach another footbridge, the Blue Trail comes in from the right. Follow it a short way to an area with standing water in spring (2). Check this spot for Rusty Blackbird and Northern Waterthrush.
Back on the main Red Trail, continue south. The trail gains elevation, leading into an area of pitch pine habitat (3). Pine Warbler nests and Whip-poor-will has been found here.
Retrace your steps back along the trail to the cove at the lake. Facing the cove, take the part of the Red Trail that leads left [west] away from the lake. The mature deciduous woods along the trail have nesting Wood Thrush, Ovenbird, and Red-eyed Vireo. This trail eventually comes to a T intersection with the Yellow Trail. Turn left onto the Yellow Trail, which will gain elevation until it arrives at another T intersection. Turn right on the unmarked trail and walk slowly to a small opening in the forest. In spring the vernal pond here (4) holds the occasional Solitary Sandpiper. Roosting above the pond in spring I've seen Broad-winged and Red-shouldered Hawks. Continuing along this trail will lead through several wet areas with second-growth woodland. Prairie, Mourning, and Wilson's Warblers have been seen here, and Brown Thrashers nest in this area.
Return to the last T, and turn left to retrace your route along the Yellow Trail. Pass the intersection with the Red Trail and continue straight ahead on the Yellow Trail to reach the parking lot.
Raptors are very much in evidence in the Natco Lake area in spring as northbound hawks bump up against the bayshore. On west winds, hawk flights can be seen over the park right from the parking lot. These flights consist mostly of buteos, with vultures, accipiters, and the occasional Bald Eagle mixed in. Mississippi Kite and Common Raven were seen over the park in Spring 2012.
Additional breeding birds in the park include Scarlet Tanager, Great Crested Flycatcher, Ruby-throated Hummingbird, Cooper's Hawk, and Great Horned and Screech Owls. Northern Saw-whet Owl has occurred in winter. Mammals in the park include Whitetail Deer, Opossum, Raccoon, Striped Skunk, Flying Squirrel, and both Red and Gray Fox. With its mix of deciduous swamp and upland pine-oak forest, Natco is also very botanically diverse.
Natco's mix of habitat, along with its location on the bayshore, makes it a great place to discover birds.
For more information on the park, including a more complete trail map, write to the Hazlet Environmental Commission at 317 Middle Road, Hazlet, NJ 07730. | <urn:uuid:328eadc9-b603-4a3d-8065-19a298ab74c0> | CC-MAIN-2013-20 | http://www.njaudubon.org/SectionCenters/SectionSHBO/CloseFocusonNatcoLake.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.953742 | 1,396 | 2.609375 | 3 |
Smoke Detectors, Carbon Monoxide Detectors, Gas Detectors
SMOKE DETECTOR INFORMATION:
Most people are aware of the danger of fire but are unaware of how deadly smoke can be. More people die from breathing smoke than from burns. In fact, deaths from smoke inhalation outnumber deaths by burning by 2:1. In a hostile fire, smoke and deadly gases tend to spread farther and faster than heat from flames. Moreover, when people are asleep, deadly fumes can send them deeper into unconsciousness.
Smoke detectors and carbon monoxide detectors are a powerful and effective fire safety technology. They are the first line of defense against smoke and fire. They may awaken those who would otherwise have been overcome by smoke and toxic gases in their sleep. And most importantly, they provide an early warning alerting individuals to a fire, allowing them precious time to escape.
According to the National Fire Protection Association (NFPA), 75 to 80% of all deaths by fire happen in the home. More than half of these deaths occurred in buildings without smoke detectors. By installing a smoke detector, individuals can reduce the risk of dying by almost 50%.
Ionization smoke detectors monitor 'ions,' or electrically charged particles, in the air. Air molecules in the sensing chamber of an ionization smoke detector are 'ionized' by a radioactive source, which allows a small electrical current to flow. Smoke particles entering the sensing chamber change the electrical balance of the air. The greater the amount of smoke, the greater the electrical imbalance. When combustion particles enter the smoke detector, they obstruct the flow of the current. The alarm is pre-programmed to sound when the current gets too low.
Ionization smoke detectors respond first to fast flaming fires. A flaming fire devours combustibles extremely fast, spreads rapidly and generates considerable heat with little smoke.
Ionization alarms are best suited for rooms which contain highly combustible materials, including:
1. Cooking fat/grease
2. Flammable liquids
3. Newspaper
4. Paint
5. Cleaning solutions
Smoke alarms with ionization technology are the most popular types sold in the United States.
The NFPA recommends smoke alarms be installed in EVERY room and area of your home or building for complete protection. For maximum protection, install at least one ionization and one photoelectronic smoke alarm on each level of your home.
All smoke alarms should be replaced after 10 years of operation. Ten years is a smoke alarm's useful lifetime; for continued, reliable safety and protection, the unit needs to be replaced.
Consumers should consult their owner's manual for specific instructions when locating a smoke alarm. The following are some general guidelines:
Because smoke rises, smoke alarms should be installed on the ceiling or on walls at least 4 to 6 inches below the ceiling.
Smoke alarms should not be located less than 4 to 6 inches from where the wall and ceiling meet on either surface; this space is dead air that receives little circulation.
Smoke alarms should not be mounted in front of an air supply or return duct, near ceiling fans, at the peaks of A-frame ceilings, in dusty areas, in locations outside the 40 degree Fahrenheit to 100 degree Fahrenheit temperature range, in humid areas, or near fluorescent lighting.
If you hear the smoke alarm, roll to the floor and crawl to the door. Stay low where the air is cleaner and cooler. Touch the door. If the door feels cool, open it just a crack and check for smoke. If there is no smoke, leave by your planned escape route. Crawl and keep your head down. If the door feels hot, do not open it. Do not panic. Escape out the window or use an alternate exit.
If you can't leave your room, seal the cracks around the doors and vents as best you can. Use a wet towel or clothing if possible. Open a window at both the top and bottom. Stay low and breathe fresh air. Shout for help and signal your location by waving a bright cloth, towel or sheet out of a window.
If you live in a high rise building, never use the elevator to escape fire. If the fire blocks your exit, close your apartment door and cover all cracks where smoke could enter. Telephone the fire department, even if fire fighters are already at the scene, and tell them where you are. Shout for help and signal your location by waving a bright cloth, towel or sheet out of a window.
If your clothes catch on fire, "Stop, Drop and Roll" to put out the flames. Do not run; running will only increase the flames.
Photoelectronic alarms contain a light emitting diode (LED) which is adjusted to direct a narrow infrared light across the unit's detection chamber. When smoke particles enter this chamber they interfere with the beam and scatter the light. A strategically placed photodiode monitors the amount of light scattered within the chamber. When a pre-set level of light strikes the photodiode, the alarm is activated.
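To make the contrast between the two sensing principles concrete, the sketch below shows the alarm logic each type implements. It is purely illustrative and not taken from any real detector's firmware; the signal names and threshold values are invented for the example. The key point is that the two technologies trigger in opposite directions: an ionization alarm sounds when its signal drops, a photoelectronic alarm when its signal rises.

    # Illustrative sketch only: toy alarm logic for the two sensing principles.
    # Signal names and thresholds are invented, not real specifications.

    ION_CURRENT_THRESHOLD = 0.7   # ionization: alarm when chamber current drops BELOW this
    SCATTER_THRESHOLD = 0.3       # photoelectronic: alarm when scattered light rises ABOVE this

    def ionization_alarm(chamber_current: float) -> bool:
        # Smoke particles obstruct the ion current, so a LOW reading means smoke.
        return chamber_current < ION_CURRENT_THRESHOLD

    def photoelectronic_alarm(scattered_light: float) -> bool:
        # Smoke particles scatter the LED beam onto the photodiode,
        # so a HIGH reading means smoke.
        return scattered_light > SCATTER_THRESHOLD

    # Clean air: strong ion current, almost no scattered light -> no alarm.
    assert not ionization_alarm(1.0) and not photoelectronic_alarm(0.05)

    # Smoky air: weakened ion current, heavily scattered light -> both alarm.
    assert ionization_alarm(0.4) and photoelectronic_alarm(0.6)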
Photoelectronic smoke alarms respond first to slow smoldering fires. A smoldering fire generates large amounts of thick, black smoke with little heat and may smolder for hours before bursting into flames.
Photoelectronic models are best suited for living rooms, bedrooms and kitchens. This is because these rooms often contain large pieces of furniture, such as sofas, chairs, mattresses, counter tops, etc. which will burn slowly and create more smoldering smoke than flames. Photoelectronic smoke alarms are also less prone to nuisance alarms in the kitchen area than ionization smoke alarms.
The use of both ionization and photoelectronic smoke alarms will provide a home with maximum protection and an ample warning in the event of a fire.
Families should get together and draw a floor plan of their home. They should show two ways out of every room. The first way should be out a door and the second way could be through a window. If it is a second or third story window, they might consider purchasing a safety ladder. They should choose a meeting place for all family members outside the home and mark it on the plan. A good meeting place would be a driveway, tree or a neighbor's home.
Families should practice the escape plan to make sure everyone understands the planned routes. Involve every member of the family. Start with everyone in their beds with the doors closed. Have one person sound the smoke alarm. Have each person touch his or her door. (Tip: sleep with bedroom doors closed. A closed door will help slow the spread of fire, smoke and heat.) Practice both escape routes: one for a cool door and one for a hot door. Meet outdoors at the assigned meeting place. Designate one person to call the fire department. Make sure everyone knows the fire department or local emergency telephone number.
Consumers should be advised of the following features when choosing a smoke alarm to best suit their needs:
Smoke detectors with an alarm silencer feature will silence an alarming unit for several minutes, giving the air time to clear. These models are ideal near kitchen and cooking areas where most nuisance alarms occur. Note: consumers should always determine the reason for the unit sounding before dismissing it as a nuisance alarm and pressing the alarm silencer button.
Long Life Smoke Detectors
The NFPA reports that one-third of all smoke detectors installed in homes are not operating because of dead or missing batteries. This is an all-too-common problem that leaves families and homes vulnerable.
Long life smoke detectors utilize lithium batteries that provide up to 10 years of continuous protection. Lithium batteries eliminate the need and expense of semi-annual battery replacement. When long life smoke detectors near the end of their tenth year in operation, they will sound a low battery signal to remind consumers to replace the entire unit.
Note: it is recommended that smoke detectors be replaced every 10 years and be tested regularly.
Some smoke detectors have a built-in emergency light that will turn on when the unit goes into alarm. The emergency light will illuminate an escape route in case of a power failure. These units are best utilized when installed by stairs and in hallways.
Hardwire smoke detectors are connected to a home's AC power supply and should be installed by a licensed electrician according to the local electrical code. AC power means you never have to replace a battery to protect your home and family.
CARBON MONOXIDE DETECTOR INFORMATION:
Carbon monoxide poisoning is often confused with the flu. It is important that you discuss with all family members the symptoms of carbon monoxide poisoning. Different carbon monoxide concentrations and exposure times cause different symptoms. Remember, carbon monoxide detectors are your first defense against carbon monoxide poisoning.
EXTREME EXPOSURE: Unconsciousness, convulsions, cardiorespiratory failure, and death
MEDIUM EXPOSURE: Severe throbbing headache, drowsiness, confusion, vomiting, and fast heart rate
MILD EXPOSURE: Slight headache, nausea, fatigue (often described as 'flu-like' symptoms)
For most people, mild symptoms generally will be felt after several hours of exposure to 100 ppm of carbon monoxide.
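As a rough illustration of how concentration and time combine, the sketch below classifies an exposure into the bands listed above using a simple concentration-time product. The cutoffs are simplified assumptions chosen so that several hours at 100 ppm falls in the mild band; they are not medical guidance.

```python
# Crude concentration-time model of the exposure bands above.
# The dose cutoffs are illustrative assumptions, not medical guidance.

def exposure_band(ppm: float, hours: float) -> str:
    dose = ppm * hours  # simple concentration-time product
    if dose >= 2400:
        return "EXTREME: unconsciousness, convulsions, possible death"
    elif dose >= 900:
        return "MEDIUM: severe headache, drowsiness, confusion"
    elif dose >= 300:
        return "MILD: slight headache, nausea, fatigue"
    return "below the mild-symptom range in this simplified model"

print(exposure_band(100, 3))   # several hours at 100 ppm -> mild band
print(exposure_band(400, 3))   # -> medium band
print(exposure_band(1200, 2))  # -> extreme band
```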
Many reported cases of carbon monoxide poisoning indicate that while victims are aware they are not well, they become so disoriented that they are unable to save themselves by either exiting the building or calling for assistance. Also, due to their small size, young children and household pets may be the first affected.
If left unchecked, a child's exposure to carbon monoxide can lead to neurological disorders, memory loss, personality changes and mild to severe forms of brain damage.
If a child complains or shows signs of headaches, dizziness, fatigue or nausea or diarrhea, he or she could have carbon monoxide poisoning. Be especially aware of symptoms that disappear when the child is out of the house and reappear upon return, or symptoms that affect the entire household at once.
Since the symptoms closely mimic viral conditions such as the flu, without the fever, carbon monoxide poisoning is often treated improperly, if at all.
A physician can perform a simple blood test (called a carboxyhemoglobin test) to determine the level of carbon monoxide in the bloodstream. If elevated levels of carbon monoxide are present, hyperbaric (high-pressure) oxygen treatment may be used to rid the body of carbon monoxide. A physician will make this determination and administer treatment if necessary.
Children with carbon monoxide poisoning have mistakenly been treated for indigestion.
The following are points consumers should consider when choosing a carbon monoxide detector that will meet their needs.
1. Consumers should consider ease of installation, the location of installation and the power source of an alarm when choosing a plug-in, battery powered or hardwire model.
Plug-in units are designed to directly plug into a standard 120-volt electrical outlet for simple installation. This location provides easy access for both testing and resetting the detector. In addition, the location provides both a visual and audible difference from a ceiling mounted smoke alarm, which may help to eliminate confusion during an emergency alarm condition. A plug-in unit also requires no additional costs for annual battery replacement.
Battery powered units can be easily mounted to a wall or ceiling if the consumer wishes to keep electrical outlets free, keep the unit relatively out of sight, or keep the alarm away from the reach of children. Some battery-powered units are portable alarms that work anywhere--no installation required. These units may be mounted to a wall, left on a tabletop or carried while traveling. Battery powered units require battery replacement every year, similar to smoke alarms. These units will have a low-battery warning signal to indicate when the batteries need replacing.
Hardwire units are powered by wiring the unit directly into a household's AC power supply at a junction box. They should be installed by a licensed electrician according to the local electrical code. The unit can be permanently installed to prevent tampering.
2. Consumers should choose a carbon monoxide detector with the features (e.g. low level warning, battery back up, digital display, etc.) that meet their needs.
Low Level Warning: some carbon monoxide alarms sound a warning (e.g., 3 short beeps) when a low level of carbon monoxide has been detected. Low levels of carbon monoxide can be hazardous over a long period of time. Low-level warnings flag potential carbon monoxide problems and give consumers time to respond before an emergency situation arises.
Battery Backup: some plug-in carbon monoxide alarm models have a backup power source that allows the unit to function in the event of a main-line power failure. During a power outage, people are likely to use alternate sources of power, light and heat (e.g., kerosene heaters, gas-powered portable generators and fireplaces) which may be poorly maintained and may produce deadly carbon monoxide gas.
Digital Display: some carbon monoxide alarms have a digital display that shows the level of carbon monoxide in the air in parts per million (ppm). For some people, this added feature provides at-a-glance peace of mind.
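A minimal sketch of how the low-level warning and digital display features might behave together follows. The trigger levels are assumptions for illustration; the text above only specifies that a low-level warning may be, for example, three short beeps.

```python
# Sketch of a two-stage CO alarm with a digital ppm readout.
# LOW_LEVEL_PPM and HAZARD_PPM are hypothetical trigger levels.

LOW_LEVEL_PPM = 30   # assumed low-level warning trigger
HAZARD_PPM = 100     # assumed full-alarm trigger

def detector_status(ppm: float) -> str:
    display = f"[display: {ppm:.0f} ppm]"
    if ppm >= HAZARD_PPM:
        return f"{display} HAZARD ALARM (continuous siren)"
    if ppm >= LOW_LEVEL_PPM:
        return f"{display} low-level warning (3 short beeps)"
    return f"{display} normal"

for level in (5, 45, 150):
    print(detector_status(level))
```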
3. Consumers should choose an alarm that has been accuracy tested.
American Sensors(TM) guarantees each of its alarms to be Triple Accuracy Tested(TM).
American Sensors'(TM) Triple Accuracy Testing process exposes every alarm to three separate tests during manufacturing. This process includes exposing the alarm to carbon monoxide twice to precisely calibrate each unit: one test at high levels and a second at lower levels of carbon monoxide. In the third step, every alarm is tested to protect against nuisance alarms.
This stringent method of testing and quality control helps ensure that every American Sensors(TM) carbon monoxide alarm will provide years of reliable, accurate protection for your family and home.
4. Consumers should compare alarm warranties and note hidden operating costs.
Consumers should select an alarm that offers a comprehensive warranty. The alarm's warranty should include its sensor. Consumers should be advised that some CO alarms require the purchase of an expensive replacement sensor and/or battery pack as an ongoing expense. American Sensors(TM) alarms do not require replacement sensors and carry a 5 year warranty.
5. Check that the product is listed by Underwriters Laboratories Inc. (UL 2034) and/or Underwriters' Laboratories of Canada.
Consumers should avoid any brand that does not bear the mark of Underwriters Laboratories Inc. and/or Underwriters' Laboratories of Canada.
All American Sensors(TM) carbon monoxide alarms meet or exceed the latest stringent standards of Underwriters Laboratories Inc. and/or Underwriters' Laboratories of Canada.
Carbon monoxide is generated through incomplete combustion of fuels such as natural gas, propane, heating oil, kerosene, coal, charcoal, gasoline or wood.
This incomplete combustion can occur in a variety of home appliances. The major cause of high levels of carbon monoxide in the home is faulty ventilation of furnaces, hot water heaters, fireplaces, cooking stoves, grills and kerosene heaters.
Other common sources are car exhaust and gas- or diesel-powered portable machines.
Faulty or improper ventilation of natural gas and fuel oil furnaces during the cold winter months accounts for most carbon monoxide poisoning cases.
Correct operation of any fuel burning equipment requires two key conditions. There must be:
* An adequate supply of air for complete combustion.
* Proper ventilation of fuel burning appliances through the chimney, vents or duct to the outside.
Install carbon monoxide alarms as a first line of defense against poisoning. The US Consumer Product Safety Commission recommends installing at least one carbon monoxide alarm with an audible alarm near the sleeping areas in every home. Install additional alarms on every level and in every bedroom to provide extra protection.
Carbon monoxide poisoning can happen anywhere and at any time in your home. However, most carbon monoxide poisoning cases occur while people are sleeping. Therefore, for the best protection, a carbon monoxide alarm should be installed in the sleeping area.
Approximately 250 people in the US died last year from the 'silent killer,' carbon monoxide. The safety experts at Underwriters Laboratories Inc. (UL) recommend that consumers follow these steps to help prevent carbon monoxide poisoning.
1. Have a qualified technician inspect fuel-burning appliances at least once a year. Fuel-burning appliances such as furnaces, hot water heaters and stoves require yearly maintenance. Over time, components can become damaged or deteriorate. A qualified technician can identify and repair problems with your fuel-burning appliances. Carbon monoxide detectors can detect a carbon monoxide condition in your home.
2. Be alert to the danger signs that signal carbon monoxide problems, e.g., streaks of carbon or soot around the service door of your fuel burning appliances; the absence of a draft in your chimney; excessive rusting on flue pipes or appliance jackets; moisture collecting on the windows and walls of furnace rooms; fallen soot from the fireplace; small amounts of water leaking from the base of the chimney, vent or flue pipe; damaged or discolored bricks at the top of your chimney and rust on the portion of the vent pipe visible from outside your home.
3. Be aware that carbon monoxide poisoning may be the cause of flu-like symptoms such as headaches, tightness of chest, dizziness, fatigue, confusion and breathing difficulties. Because carbon monoxide poisoning often causes a victim's blood pressure to rise, the victim's skin may take on a pink or red cast.
4. Install a UL/ULC Listed carbon monoxide detector outside sleeping areas. A UL/ULC Listed carbon monoxide alarm will sound an alarm before dangerous levels of carbon monoxide accumulate.
Carbon monoxide poisoning can happen to anyone, anytime, almost anywhere. While anyone is susceptible, experts agree that unborn babies, small children, senior citizens and people with heart or respiratory problems are especially vulnerable to carbon monoxide and are at the greatest risk for death or serious injury. It's time to install your carbon monoxide detector.
Infants and children are especially vulnerable to carbon monoxide due to their high metabolic rates. Because children use oxygen faster than adults do, deadly carbon monoxide gas accumulates in their bodies faster and can interfere with the oxygen supply to vital organs such as the brain and the heart. Unborn babies are at even higher risk of carbon monoxide poisoning, and carbon monoxide poisoning in pregnant women has been linked to birth defects. This is another reason to install a carbon monoxide detector.
Hundreds of people die each year, and thousands more require medical treatment, because of carbon monoxide poisoning in their home. Now, with recent technological breakthroughs, you can avoid becoming one of these statistics simply by installing a carbon monoxide detector in your home.
Consumers should consult their owner's manual for their carbon monoxide detector's alarm-response procedure. However, the following is a general procedure:
If a carbon monoxide detector sounds a low level warning or hazard level alarm, consumers should push the test/reset button to silence it.
If no one in the household has any carbon monoxide symptoms (headache, dizziness, nausea, and fatigue), consumers should be advised to open the doors and windows to air out their house. They should turn off any gas, oil or other fuel-powered appliances, including the furnace, and call a qualified technician or their local utility company to inspect their home and repair the problem before restarting the furnace and all fuel-burning appliances.
If anyone in the household does have signs of carbon monoxide poisoning, consumers should leave their home immediately and call their local emergency service or 911 for help. They should do a head count to check that all persons are accounted for once outside in the fresh air. They should not re-enter their home until it has been aired out and the problem corrected by a qualified technician or utility company.
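The general procedure above can be summarized as a simple decision flow. The sketch below only restates the steps already described; consult the owner's manual for the procedure specific to a given unit.

```python
# The general CO-alarm response procedure above, as a decision flow.
# This restates the text; it is not a substitute for the owner's manual.

def co_alarm_response(anyone_has_symptoms: bool) -> list[str]:
    steps = ["Press the test/reset button to silence the alarm"]
    if anyone_has_symptoms:
        steps += [
            "Leave the home immediately and call 911 / local emergency services",
            "Do a head count outside in the fresh air",
            "Do not re-enter until the home is aired out and the problem is corrected",
        ]
    else:
        steps += [
            "Open doors and windows to air out the house",
            "Turn off gas, oil and other fuel-powered appliances, including the furnace",
            "Have a qualified technician or the utility company inspect before restarting appliances",
        ]
    return steps

for step in co_alarm_response(anyone_has_symptoms=False):
    print("-", step)
```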
Most carbon monoxide detectors sold at retail are for use in single residential living units only. They should only be used inside a single-family home or apartment. They cannot be used in RVs or boats.
Carbon monoxide detectors should not be installed in the following locations:
1. Kitchens or within 5 feet of any cooking appliance where grease, smoke, and other decomposed compounds from cooking could build up on the surface of the carbon monoxide sensor and cause the alarm to malfunction.
2. Bathrooms or other rooms where long-term exposure to steam or high levels of water vapor could permanently damage the carbon monoxide sensor.
3. Very cold (below 40 degrees Fahrenheit) or very hot (above 100 degrees Fahrenheit) rooms. The alarm will not work properly under these conditions.
4. Close proximity to an automobile exhaust pipe, as exhaust will damage the sensor.
***PLACE ONE CARBON MONOXIDE DETECTOR ON EVERY LEVEL OF YOUR HOME FOR MAXIMUM PROTECTION***
Read the manufacturer's instructions carefully before installing a carbon monoxide alarm. Do not place the alarm within five feet of household chemicals. If your alarm is wired directly into your home's electrical system, you should test it monthly. If your unit operates off a battery, test the alarm weekly and replace the battery at least once a year.
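The testing and battery-replacement cadence above can be kept on track with a simple reminder schedule, sketched below. The intervals follow the text; the last-maintenance date is a hypothetical example.

```python
# Reminder schedule for the maintenance cadence described above:
# monthly tests for hardwired units, weekly tests for battery units,
# and yearly battery replacement. The start date is hypothetical.

from datetime import date, timedelta

SCHEDULE = {
    "test hardwired alarm": 30,        # roughly monthly
    "test battery-powered alarm": 7,   # weekly
    "replace battery": 365,            # at least once a year
}

def next_due(last_done: date, interval_days: int) -> date:
    return last_done + timedelta(days=interval_days)

last_done = date(2024, 1, 1)  # hypothetical last-maintenance date
for task, interval in SCHEDULE.items():
    print(f"{task}: next due {next_due(last_done, interval)}")
```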
Avoid placing your alarm directly on top of or directly across from fuel-burning appliances. These appliances will emit some carbon monoxide when initially turned on. Never use charcoal grills inside a home, tent, camper or unventilated garage. Don't leave vehicles running in an enclosed garage, even to 'warm up' your car on a cold morning.
Know how to respond to a carbon monoxide detector. If your alarm sounds, immediately open windows and doors for ventilation. If anyone in the home is experiencing symptoms of carbon monoxide poisoning (headache, dizziness or other flu-like symptoms), immediately evacuate the house and call the fire department. Don't go back into the house until a fire fighter tells you it is okay to do so. If no one is experiencing these symptoms, continue to ventilate, turn off fuel-burning appliances and call a qualified technician to inspect your heating system and appliances as soon as possible. Because you have provided ventilation, the carbon monoxide buildup may have dissipated by the time help responds, and your problem may appear to be temporarily solved. Do not operate any fuel-burning appliances until you have clearly identified the source of the problem. A carbon monoxide alarm indicates elevated levels of carbon monoxide in the home. NEVER IGNORE THE ALARM.
The safety experts urge consumers to recognize the danger signs of carbon monoxide before any harm can come to them or their families.
LABDIEN [Hello]! My name is Hannah Rosenthal, and I am the Special Envoy to Monitor and Combat Anti-Semitism at the U.S. Department of State. In Latvian, envoy means “Īpašā sūtne”. Thank you for inviting me here today to speak to you about the importance of diversity and respect for others. I am always eager to speak to young students because so much of my work depends on your help.
As the Special Envoy, it is my job to monitor anti-Semitic incidents and combat such intolerance. “Anti-Semitism” simply means hatred for Jewish people. I monitor anti-Semitic incidents such as vandalism of religious places, anti-Semitic speech, and even violence against Jews.
But the truth is, I am in the relationship-building business. I am here today to tell you that young people and students can have an impact and do what I do. We must all share and strive for the same mission: to combat hate and intolerance to create a more peaceful and just world.
In order to fight hatred, we must begin with respecting the dignity of every individual, regardless of his or her beliefs. In fact, our differences make us human. You may have heard about the concept of the “Other,” or in Latvian, “svešinieks”. There are individuals in this world who would like us to view some people as outside the larger human family.
The desire to stamp out or suppress or ostracize certain individuals because of who they are, how they worship, or who they love is an obstacle for all members of society. Intolerance prevents us from creating a just and peaceful society. And we, as a society, must not stand idly by: when we stand by passively, we also pay a price.
Terrible things can happen when intolerance and racism take hold in a society, across a continent. Hitler’s Nazi ideology called for racial purity and targeted the Jews as an Other that needed to be exterminated. Some of you may know that yesterday communities around the world observed Yom HaShoah, or Holocaust Remembrance Day. Yom HaShoah is a day to remember the victims of the Holocaust and to commemorate the individuals – including some Latvians -- who risked their lives to save the Jews. I understand Latvia has its own official Holocaust Remembrance Day on July 4. While we officially commemorate the Holocaust on these days, we must carry their lessons with us every day. We must stand against attitudes that value some individuals below others. We must expand the circle of rights and opportunities to all people – advancing their freedoms and possibilities.
Intolerance is a moral, a political, and a social problem. But it is also a solvable one. It is not unchangeable. We are not born hating. Somewhere we learn to hate. We can, in fact, make hatred and intolerance something of the past. But this demands our attention. It’s not easy work, but it is urgent work.
At the U.S. Department of State (which is like the Foreign Ministry in Latvia) I work within the Bureau of Democracy, Human Rights and Labor. The primary and overarching goal of the Bureau is to promote freedom and democracy and protect human rights around the world. We are constantly strengthening our policies and pushing ourselves and others to break down former walls of intolerance. Over the past three years, Secretary of State Hillary Rodham Clinton has made the human rights of lesbian, gay, bisexual, and transgender people – “LGBT” in shorthand -- a priority of our human rights policy. As Secretary Clinton emphatically stated, “Gay rights are human rights and human rights are gay rights.”
In the United States, we are inspired by the idea that all human beings are born free and equal in dignity and rights. The United States has a strong multi-ethnic heritage. Over the course of centuries, many people have immigrated to the United States in hopes of a better life with more opportunities. We embrace this diversity and continue to uphold these values in our everyday lives, actions and laws.
I am learning that Latvia too has a diverse and multicultural history. Various tribes -- the Livs, the Letts, and the Cours -- lived here for many centuries. People from Belarus, Germany, Russia, Sweden, Ukraine, and many other places have played an important part in Latvia’s history. Jews have also contributed to Latvia’s heritage since the sixteenth century. In the eighteenth century, a Jewish man named Abraham Kuntze invented the famous Rigas Balzam (Latvia’s signature liquor). Latvia’s Jews backed the independence movement in the early twentieth century, with hundreds volunteering for service in the Latvian Army and fighting heroically during the war for independence. Latvia’s Jews thrived during the independence period of the 1920s and 30s, serving in parliament and helping write Latvia’s constitution. Zigfrids Meierovics, the first Foreign Minister of Latvia, and twice Prime Minister, had a Jewish father.
Sadly, when the Soviets arrived in Latvia in 1940, they shut down Jewish institutions and seized Jews’ property. When the Soviets deported tens of thousands of Latvians to Siberia, hundreds of Latvian Jews were deported as well. And then, just over one year later, the Holocaust followed and approximately 70,000 of Latvia’s Jews – almost 90 percent – were murdered by the Nazis and their accomplices.
And yet, the Jewish people survived in Latvia. In the 1980s and 90s, Latvia’s Jews once again supported Latvian independence from the Soviet Union, lending their efforts to those of the Popular Front of Latvia. Jews stood on the barricades in 1991. Today, Jews – along with all other Latvians -- are free to practice their faith and to celebrate their culture in a free Latvia. Latvian society is richer, and more diverse, because of the contributions of all these people.
Of course, neither Latvia nor the United States is perfect. There are people in both of our countries who do not believe in diversity and respect. However, if we condemn their words of hate, we can spread the message of dignity and respect.
Anti-Semitism and other forms of hatred attack the very idea that every individual is born free and equal in dignity and rights. But Jews, Christians, Muslims and all religious communities are all part of the same family we call humanity. As the child of a Holocaust survivor, I take anti-Semitism very personally. My father was arrested – on Kristallnacht, the unofficial pogrom that many think started the Holocaust – and sent with many fellow Jews to prison and then to the Buchenwald concentration camp in Germany. And he was the lucky one – every other person in his family was murdered at Auschwitz. I have dedicated my life to eradicating anti-Semitism and intolerance with a sense of urgency and passion that only my father could give me.
At the State Department, we are trying to make human rights a human reality. As the Special Envoy to Monitor and Combat Anti-Semitism, I have recognized that this will not be possible without the help of you, our youth and future leaders.
Last year my colleague Farah Pandith, the Special Representative to Muslim Communities, and I launched a virtual campaign called "2011 Hours Against Hate," using Facebook. Perhaps you have heard of it? We are asking you, young people around the world, to pledge a number of hours to volunteer to help or serve a population different from your own. We ask that you work with people who may look different, pray differently or live differently. For example, a young Jew might volunteer time to read books at a Muslim pre-school, or a Russian Orthodox at a Jewish clinic, or a Muslim at a Baha'i food pantry, or a straight woman at an LGBT center. We want to encourage YOU to walk a mile in another person's shoes. And while our goal was to get 2011 hours pledged, by the end of last year youth all over the world had pledged tens of thousands of hours.
The campaign was, in fact, so successful that we continued it into 2012. Thanks to a group of British non-governmental organizations, we are now also partnering with the London Olympic and Paralympic Games! In January, the London Olympic and Paralympic organizers approved our application to have 2012 Hours Against Hate branded with the Olympics logo. We can now leverage the energy surrounding the 2012 Olympics to encourage athletes and fans alike to participate in combating hate and pledge their time to help or serve someone who is different from them.
Farah and I have met hundreds of young people – students and young professionals – in Europe, the Middle East and Central Asia. They want to DO something. And I have a feeling that YOU want to DO something too. Last summer, Farah and I met with youth and interfaith leaders in Jordan, Lebanon, and Saudi Arabia, and discussed reaching out to others, increasing tolerance and understanding among different religious groups, and addressed intolerance in their textbooks and lessons. Last month we traveled to Albania to encourage students from Tirana University and the local Madrasah to participate in 2012 Hours Against Hate. We held a panel discussion on the importance of religious diversity, and encouraged Albanian youth to live up to their country’s important legacy of acceptance and courage: Albania was the only country that saved all of its Jews during the Holocaust. Really, we have just begun.
So while I fight anti-Semitism, I am also aware that hate is hate. Nothing justifies it – not economic instability, not international events, not isolated incidents of hate.
Since the beginning of humankind, hate has been around, but since then too, good people of all faiths and backgrounds have worked to combat it. The Jewish tradition tells us that “you are not required to complete the task, but neither are you free to desist from it.”
Together, we must confront and combat the many forms of hatred in our world today. Where there is hatred born of ignorance, we must teach and inspire. Where there is hatred born of blindness, we must expose people to a larger world of ideas and reach out, especially to youth, so they can see beyond their immediate circumstances. Where there is hatred whipped up by irresponsible leaders, we must call them out and answer as strongly as we can – and make their message totally unacceptable to all people of conscience.
Thank you again for inviting me here to speak to you today. I am now happy and excited to answer your questions.
Crime and Personality: Personality Theory and Criminality Examined
Keywords: criminality, personality theory, criminal personality, crime and personality, criminology, psychopathy
The search for the criminal personality or super trait has captured both the minds and imaginations of academics and the wider community (Caspi et al., 1994). Partly, this is due to a stubborn aversion to the notion that normal, regular people rape, murder, or molest children (Barlow, 1990). Secondly, there is a desire for simple, straightforward answers (Bartol, 1991).
Generally, personality theorists endeavor to put together the puzzle of the human personality. Temperament is the term used for the childhood counterpart to personality (Farrington & Jolliffe, 2004). Facets of personality or temperament, known as traits, are combined into super traits or broad dimensions of personality. Personality traits are persisting underlying tendencies to act in certain ways in particular situations (Farrington & Jolliffe, 2004). Traits shape the emotional and experiential spheres of life, defining how people perceive their world and predict physical and psychological outcomes (Roberts, 2009). Various structured models of personality exist, each with a set of traits and super traits (Miller & Lynam, 2001).
Personality and crime have been linked in two general ways. First, in “personality-trait psychology” (Akers & Sellers, 2009, p. 74) certain traits or super traits within a structured model of personality may be linked to antisocial behavior (ASB).1 As reviewed by Miller and Lynam (2001), four structured models of personality theory were found to be widely used in criminological research and are considered reliable: the five-factor model (FFM; McCrae & Costa, 1990), the PEN model (Eysenck, 1977), Tellegen’s three-factor model (1985), and Cloninger’s temperament and character model (Cloninger, Dragan, Svraki, & Przybeck, 1993). In Table 1, the traits of these models are listed and defined. Eysenck hypothesized specific associations between the PEN model and ASB, proposing that the typical criminal would possess high levels of all three of his proposed personality dimensions. Cloninger hypothesized a link between ASB and personality dimensions from his model, stating that ASB would be linked to high novelty seeking, low harm avoidance, and low reward dependence (see Table 1).
The second way that personality theorists have linked personality to crime is through "personality-type psychology" (Akers & Sellers, 2009, p. 74) or by asserting that certain deviant, abnormal individuals possess a criminal personality, labeled psychopathic, sociopathic, or antisocial. The complex and twisting history of the term and concept of psychopathy can be traced back to the early 1800s (Feeney, 2003), contributing to its common misuse by both academics and nonacademics.2 Hare (1993, 1996) set forth a psychological schematic of persistent offenders who possess certain dysfunctional interpersonal, affective, and behavioral qualities and make up about one percent of the population. The distinguishing interpersonal and affective characteristic of psychopaths is the dual possession of absolute self-centeredness, grandiosity, callousness, and lack of remorse or empathy for others coupled with a charismatic, charming, and manipulative superficiality (Hare, 1993). The defining behavioral characteristics of psychopaths are impulsivity, irresponsibility, risk taking, and antisocial behavior (Hare, 1993). Table 2 displays the emotional and interpersonal traits and the acts of social deviance hypothesized to indicate psychopathy. The term antisocial, not psychopath or sociopath, is now used by the American Psychiatric Association in the latest Diagnostic and Statistical Manual (DSM-IV-TR, 2000). This disorder manifests itself as a persistent disregard for and violation of the rights of others, beginning at an early age and persisting into adulthood. The DSM-IV-TR (2000) outlines the antisocial personality disorder as a broader clinical disorder than psychopathy, a diagnosis that could easily be applied to many who engage in criminal behavior (see Table 2).
Concerns Related to Theoretical Propositions and Policy Implications
Certain personality theorists such as Eysenck (1977) postulated that personality traits stem from biological causes. For example, Eysenck noted that arousal levels are directly associated with the personality trait of extraversion (Eysenck, 1977) and testosterone levels are linked to levels of psychoticism (Eysenck, 1997). The biologically deterministic premise postulated within segments of personality theory sparked an intense debate in criminology (Andrews & Wormith, 1989; Gibbons, 1989), which provides just a glimpse into a chasm in the field of criminology that has been rupturing for decades.
Criticisms against deterministic thought can best be understood within the historical context (Hirschi & Hindelang, 1977; Laub & Sampson, 1991; Rafter, 2006). Criminology is a field full of deep schisms and sharp debates, a sort of “hybrid” discipline (Gibbons, 1989), with even the historical accounts of criminology being disputed (Brown, 2006; Forsythe, 1995; Garland, 1997; Jones, 2008; Rafter, 2004). Yet, it is generally agreed that the foundations for understanding criminal behavior, even the justification for the existence of the discipline of criminology, is rooted in psychobiological perspectives (Brown, 2006; Garland, 1997; Glicksohn, 2002; Jones, 2008). Many of those considered to be the founders of criminology collaborated with psychiatrists focusing on the rehabilitation and medical or psychological treatment of criminal deviance, viewing such behavior as a disease of the mind or intellect rather than holding to the more primitive explanations that attributed crime to manifestations of evil spirits or sinfulness (Hervé, 2007; Jones, 2008; Rafter, 2004).
With the dawning of the ideals of the Enlightenment, interest grew in the notion that just as there are natural laws that act upon the physical world, there may be underlying forces that propel individuals or groups to react in certain ways (Jones, 2008). Two distinct schools of positivism arose during this period: one that assumed these underlying forces were societal, and one that assumed the forces propelling criminal behavior were individualistic or psychological. One faction of nineteenth century positivists, with researchers such as Guerry and Quetelet, focused primarily on societal forces and emphasized geographical differences in crime rates, especially the effects of urbanization (Jones, 2008; Quetelet, 2003). At the core of this work was the idea that individuals do not have free will to act upon their societal environment, but rather are being acted upon by social forces; "Society prepares crime and the criminal is only the instrument that executes them" (Quetelet, Physique Sociale, quoted in Jones, 2008, p. 8).
However, the name most associated with nineteenth century positivism is Cesare Lombroso. Lombroso considered criminal behavior as indicative of degeneration to a lower level of functioning caused by brain damage or by certain genetic impacts (such as birth defects passed to children born of diseased or alcoholic parents), which impeded natural development (Glicksohn, 2002; Jones, 2008). Jones (2008) notes that Lombroso's antagonists recount his professed allegiance to the use of the scientific method, yet they also detail how he would elaborate wildly, speculating far beyond the bounds of his empirical observations. Occasionally, Lombroso's work is completely omitted from texts advocating individualistic or psychological approaches to criminal behavior, as Lombroso's work is seen as an embarrassment and deemed a precursor to the Nazi ideology of the Aryan race (Jones, 2008; Rafter, 2006). Against this blemished backdrop of Nazi ideologies of racial hygiene, labeled biological determinism, sociologically inclined theories flourished within criminology and individualistic explanations for criminality were deserted as taboo and unmentionable (Andrews & Wormith, 1989; Glicksohn, 2002; Hirschi & Hindelang, 1977; Laub & Sampson, 1991).
Concerns about Policy Implications
Within such a historical context, ethical and moral concerns were raised regarding personality theory leading to inequitable or brutish policies (Rafter, 2006). Fears of policy recommendations forcing medical procedures, drug treatment, or excessively restrictive practices were common concerns levied against highly deterministic psychological theories (Bartol & Bartol, 2004; Gibbons, 1986; Jones, 2008). Labeling or stigmatizing persons as psychopaths, sociopaths, or antisocial raised concerns that such labels might lead to unmerited, harsh sentences, as such individuals would be deemed incorrigible (Andrews & Wormith, 1989). Conversely, there were concerns that labeling offenders with personality disorders could result in doubts about their culpability for crimes, leading to undue leniency (Bartol & Bartol, 2004).
HVAC Career Information
The world outdoors is often an uncomfortable place. Weather changes can bring precipitation, blustery winds, and extreme temperatures. That's why we turn to the shelter of indoor spaces. We rely on climate-controlled environments to carry out our lives comfortably and effectively. But it takes much more than just a few walls, a roof, and insulation to make it all happen. So, what is HVAC?
What is HVAC and HVAC/R?
HVAC stands for heating, ventilation, and air conditioning. The HVAC systems in our homes, offices, shopping malls, and other buildings allow us to live inside without too much concern for what's happening outside. But HVAC goes beyond the regulation of indoor temperatures. When such systems are properly installed and maintained, they contribute to better airflow and healthier indoor air quality, which is especially important for people with allergies, asthma, or other medical issues.
In addition to heating, ventilation, and air conditioning, there is another type of climate-control technology that is crucial to modern life. The "R" in HVAC/R stands for refrigeration. The storage and transport of perishable foods, medicines, and other items we may take for granted is made possible by today's commercial refrigeration systems. (Side note: Don't be confused by the different ways in which the "R" is added to HVAC. The subtle variations you might encounter—HVAC&R, HVAC/R, HVACR, HVAC-R, or HVAC R—all mean the same thing.)
Advances in HVAC technology are making the heating and cooling of new and retrofitted buildings more and more energy efficient. Refrigerants are being developed and used that are more environmentally friendly. And technologies such as hydronics (water-based heating), geothermal, and solar-powered heating and cooling are turning the HVAC profession into one with a growing number of "green" jobs.
HVAC systems are installed and serviced by HVAC technicians (who are sometimes known as HVAC mechanics or HVAC installers).
What Does an HVAC Technician Do?
The work of an HVAC technician can be rather varied. From installation to routine maintenance to repair, the many duties of a professional in the heating, ventilation, and air conditioning industry often add up to working days full of diverse activities. However, a lot depends on whether or not an HVAC technician chooses to specialize in working with a particular type of equipment (i.e., residential, light commercial, or commercial/industrial) in either the installation or service side of the business.
So, depending on their specialty, level of knowledge, and arsenal of skills, HVAC technicians carry out tasks that can include:
- Installing furnaces, heat pumps, and air conditioning units
- Installing the ductwork that carries treated air throughout a building
- Following blueprints and specifications used in the installation of HVAC systems, including air ducts, vents, pumps, water and fuel supply lines, and other components
- Connecting electrical wiring and controls
- Performing routine maintenance on a variety of HVAC equipment, such as checking for leaks, adjusting blowers and burners, and checking nozzles, thermostats, electrical circuits, controls, and other components
- Diagnosing and repairing problems that are found within any part of an HVAC system
- Adjusting the controls of an HVAC system and recommending appropriate settings
- Testing the performance of a furnace, heat pump, air conditioning unit or other piece of HVAC equipment to ensure that it operates at peak efficiency
- Using carbon dioxide and carbon monoxide testers to make sure that a customer's equipment operates safely
- Selling service contracts or replacement equipment to customers
HVAC/R technicians, sometimes known as refrigeration mechanics, install and service commercial or industrial refrigeration systems. In addition to some of the tasks above, HVAC/R technicians have duties that can include:
- Charging refrigeration systems with the proper refrigerant
- Conserving, recovering, and recycling refrigerants for reuse or ensuring that they are disposed of properly since their release can be very harmful to the environment
- Venting refrigerant into the appropriate cylinders
To perform their duties, HVAC and HVAC/R technicians use a large variety of special tools (sometimes numbering in the dozens) such as:
- Pressure gauges
- Acetylene torches
- Voltmeters, ohmmeters, and multimeters
- Combustion analyzers
- Soldering and brazing equipment
- Pipe cutters
- Gas detectors
- Micron gauges
- Tap and die sets
Where Can HVAC Technicians Work?
Whether they specialize in installing or servicing residential, commercial, or industrial equipment (or all three), HVAC technicians perform their work on-site in a wide variety of settings. Any building that utilizes climate-control equipment will see multiple visits by HVAC technicians over the course of its lifetime. Such buildings can include homes, apartment complexes, office buildings, schools, hospitals, restaurants, retail stores, and factories.
Most HVAC technicians work for independent service contractors. However, employment can also be found with:
- Direct-selling retail establishments (e.g., HVAC equipment dealers)
- Repair shops for commercial or industrial equipment and machinery
- Merchant wholesalers of heating equipment and supplies
What is the Typical Salary of an HVAC Technician?
The typical salary of an HVAC technician depends on many factors such as the type of HVAC job, employer location, level of experience, and whether or not a union is involved. When it comes to HVAC, salary is usually implemented in the form of hourly wages. Most HVAC technicians, regardless of their training, begin their careers at a relatively low rate of pay, but their wages rise gradually as they increase their skills, knowledge, and experience.
So, what are some average HVAC salaries? Based on national estimates, yearly wages for HVAC and HVAC/R technicians break down this way:
- The bottom 10 percent earn $26,490 or less.
- Median wages (50th percentile) are $42,530.
- The top 10 percent earn $66,930 or more.
The pay scales of similar employers, even within the same city, can sometimes vary dramatically. HVAC/R technicians that install and service commercial or industrial systems generally get paid the most. Unionized employers also tend to have much higher wages than non-unionized ones. However, you can expect a large chunk of your wages from any union job to go toward paying for union fees, insurance, and other benefits.
Many HVAC technicians maximize their income by working longer hours during peak seasons (summer and/or winter). Additional wages can also come, in some cases, from earning commissions on the sale of new equipment or service contracts.
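As a back-of-the-envelope illustration of how peak-season overtime can affect annual pay, the sketch below starts from the median wage cited above. The implied hourly rate, the number of peak weeks, the overtime hours, and the 1.5x overtime premium are all illustrative assumptions, not industry figures.

```python
# Rough illustration of peak-season overtime on top of the median wage
# cited above. Hours, weeks and the 1.5x premium are assumptions.

MEDIAN_ANNUAL = 42_530                       # median wage from the figures above
base_hourly = MEDIAN_ANNUAL / (52 * 40)      # assumes a 40-hour week year-round

peak_weeks = 16          # hypothetical combined summer/winter busy season
overtime_hours = 10      # hypothetical extra hours per peak week
overtime_pay = peak_weeks * overtime_hours * base_hourly * 1.5

print(f"implied base hourly rate: ${base_hourly:.2f}")
print(f"extra pay from peak-season overtime: ${overtime_pay:,.0f}")
print(f"estimated annual total: ${MEDIAN_ANNUAL + overtime_pay:,.0f}")
```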
Are There Any Downsides to Working in the HVAC Trade?
For the people who turn it into a long-term career, HVAC is a lifestyle. Many HVAC technicians reap a great deal of personal satisfaction from their work. But, like any occupation, the field of heating, ventilation, and air conditioning has its upsides and downsides. It's not a career for everybody. You've got to be 100 percent committed in order to succeed.
Here are some of the possible drawbacks of being an HVAC or HVAC/R technician:
- Physical hazards—It can be grueling and hard on your body. Installing or servicing HVAC systems often requires heavy lifting, crouching, and kneeling—including in tight places like attics and crawl spaces. Other physical hazards also exist such as the potential for cuts, scrapes, electrical shock, burns, or muscle strain. And, although rare, working with refrigerants without appropriate safety equipment can result in injuries like frostbite, skin damage, or even blindness.
- Uncomfortable working conditions—It frequently involves working outdoors in bad weather or extreme temperatures (hot and cold).
- Mental fatigue—In addition to being physically demanding, HVAC work can also be mentally tiring. That's because you must remain alert and focused in order to solve problems and avoid injury or costly mistakes. Plus, no matter how experienced you are, there is always a lot to learn. HVAC technology changes quickly, so being an HVAC technician requires staying on top of the latest developments and adding that knowledge to what you've already learned about older systems that are still in use. That makes the job sometimes feel overwhelming. As HVAC technology improves, much of it is also becoming more and more technically challenging to work on.
- Fluctuating work hours—Employment in the HVAC trade can sometimes be subject to seasonal fluctuations, particularly for technicians without much experience. It is common for many HVAC service technicians to work very long hours during peak seasons (summer and winter) followed by a reduction in hours (often less than full time) during the slower seasons. The peak seasons can be extra difficult if you have a family since working overtime and being on call at all hours (including weekends) can mean you're not able to spend as much quality time with those you care about. On the other hand, slow weeks are also inevitable, so you have to know how to account for the ups and downs in your personal finances.
- Irritable customers—Since many service calls happen when customers are in distress over failing heating or cooling equipment during extreme weather, HVAC technicians sometimes must deal directly with people who are cranky and impatient. Tempers are heightened when a problem can't be fixed right away because a part needs to be ordered.
- Delayed gratification—It takes time—usually at least five years—to develop the skills that enable you to begin making what are considered good wages in the HVAC industry. As a new technician, you should expect the starting pay to be lower than what you might be hoping for. You have to be willing to stick it out and learn everything you can in the meantime.
What are the Good Things About Working in HVAC?
The downsides of being an HVAC technician are balanced—and some might even say overcome—by the many positive attributes of the HVAC trade. Here are a few of them:
- A sense of accomplishment—It can be intensely rewarding to fix problematic equipment or install new systems since it means that your hard work directly impacts the ability of people to feel comfortable in their environments. You have the chance to make someone's day if they were freezing (or sweating) prior to your arrival. Plus, looking back on a job well done often leads to a great feeling of personal satisfaction, regardless of how difficult it might have been.
- Built-in exercise for mind and body—Despite the occupational hazards, being an HVAC technician can help you stay in shape—physically and mentally.
- Variety—Every day is bound to be somewhat different. You won't be stuck in an office. Instead, you'll get to solve a variety of problems and meet new people. And the fast pace of busy times helps the work days pass quickly.
- Pride—Because HVAC technicians can impact the well-being of people and the environment, they often feel a great sense of personal responsibility and pride of purpose.
- Stimulation—Opportunities for learning something new happen on a frequent basis, which means boredom is rare. As the HVAC industry moves closer and closer toward full computer automation for heating, ventilation, and air conditioning systems, the chance to develop advanced skills and knowledge also increases.
- Long-term stability—Once you've established yourself in the trade, there is great potential for making good money. And the job security can also be good. This is particularly true when you consider that HVAC skills are portable, and the work must be performed on location, which means that HVAC jobs are not subject to foreign outsourcing.
What Personal Characteristics Do I Need for an HVAC Career?
People who succeed as HVAC technicians possess key traits that enable them to handle the challenges of the occupation while taking advantage of the benefits. It's important to keep in mind that those who find long-term success and satisfaction in the HVAC trade generally possess the following characteristics:
- A strong desire to help other people
- A sense of craftsmanship and pride in their work (no cutting corners)
- Physical and mental toughness
- A courteous and respectful attitude
- Pride in their appearance
- An aptitude for mechanical, hands-on work
- Strong interpersonal skills
- Common sense
- The ability and willingness to learn
- Determination and a strong work ethic
- An interest in the science behind HVAC technology
- Good problem-solving abilities
How Do You Become an HVAC Technician?
There is more than one path to establishing a career in heating, ventilation, and air conditioning. When asking, "How do you become an HVAC technician?" it is important to consider that there are essentially four different ways to begin going about it:
- Obtaining formal HVAC training from a high school program or post-secondary school
- Entering a formal apprenticeship program for your training
- Joining the Armed Forces and receiving military HVAC training
- Pursuing an entry-level HVAC position without any formal training and hoping that you find an employer willing to teach you everything informally on the job (an increasingly rare circumstance)
Each option has its advantages and disadvantages. However, most employers generally consider formal training a must before they will even consider you for an open position.
Here are some things to consider about post-secondary training at an HVAC school:
- Most HVAC training programs at technical and trade schools take between six months and two years to complete.
- Programs that last a year or less generally award a diploma or certificate of completion. Those that last two years usually award an associate's degree.
- Shorter certificate or diploma programs are often designed only to teach students the basics of one of the three main areas of HVAC/R: (1) residential heating and air conditioning, (2) light commercial heating and air conditioning, or (3) commercial refrigeration.
- Most well-respected HVAC training schools offer programs that are accredited by at least one of the following agencies: HVAC Excellence, the National Center for Construction Education and Research (NCCER), or the Partnership for Air-Conditioning, Heating, and Refrigeration Accreditation (PAHRA).
- Taking the right courses in high school can help you better prepare for HVAC school. These include subjects such as mechanical drawing, basic electronics, math, computer science, and applied physics and chemistry. It can also be beneficial to gain some basic knowledge of electrical and plumbing work.
- HVAC schools are designed to give you a head start in the acquisition of your skills, but it will likely take a few years of working experience as an assistant HVAC technician after you graduate before anyone will begin to think of you as proficient.
Another popular and advantageous way to receive formal training is through an apprenticeship. Here is what you should know about HVAC apprenticeships:
- In general, apprenticeship opportunities pop up only periodically depending on the needs of employers, both unionized and non-unionized.
- Apprenticeships are often a pathway to national certification in the HVAC industry, and they can even allow you to earn college credits.
- In order to reap all of the benefits of a formal HVAC apprenticeship, you'll want to find an apprenticeship program that is registered with the Office of Apprenticeship, which is part of the U.S. Department of Labor's Employment and Training Administration.
- Most apprenticeships allow you to earn a wage while you learn. And, if you are part of a registered apprenticeship program, your paycheck is guaranteed to increase over time. Unionized apprenticeships offer the additional advantages of working under the protection of a union contract and, usually, receiving insurance and pension benefits.
- Apprenticeships usually last four to five years, and they include both classroom instruction and hands-on training on the job. After completing a five-year registered apprenticeship, you can become a journeyman in the HVAC field.
- The organizations with the most HVAC apprenticeship opportunities include, in no particular order: (1) Air-Conditioning Contractors of America (ACCA), (2) Mechanical Contractors of America (MCAA), (3) Plumbing-Heating-Cooling Contractors (PHCC), (4) Sheet Metal Workers' International Association (SMWIA), (5) Associated Builders and Contractors (ABC), and (6) United Association of Journeymen and Apprentices of the Plumbing and Pipe Fitting Industry of the United States and Canada (UA).
- Apprenticeship openings are often highly competitive. Plus, you must meet the minimum requirements of whatever apprenticeship program you are applying for. Organizations that offer or coordinate apprenticeships in HVAC often look for candidates that have at least a high school diploma (or equivalent), good math and reading skills, above-average manual dexterity and hand-eye coordination, strong mechanical aptitude, patience, dependability, the ability to get along well with other people, and a desire to do whatever it takes to learn the trade. As part of the application process, you may also be required to take aptitude tests and attend multiple interviews.
- Completing an HVAC program at a technical college or trade school can sometimes give you a leg up on the competition when applying for a registered apprenticeship.
Regardless of how you get your HVAC training, there are a number of other things to keep in mind about the HVAC trade and finding work in it. Consider the following points:
- Many employers look for HVAC professionals with at least two to five years of on-the-job experience. Schooling alone, while beneficial, is often not enough—particularly for openings at larger companies.
- In order to break into the trade and get the experience you need, you might have to spend a few years working for a smaller HVAC company at a lower wage than you might be expecting. The more you are willing to swallow your pride and do whatever is necessary to gain experience, the more opportunities you will have at the beginning of your career.
- In many regions, you are more likely to land your first HVAC job during a peak season (summer or winter) since that is when demand for HVAC workers increases.
- Employers want workers who will stick around for the long haul. That's why many of them prefer to hire people who've completed a formal HVAC program. Completing an HVAC education is a sign that you aren't just looking for a temporary job but, rather, have put your heart into making HVAC your career.
- As you seek to gain experience early in your career, it's best to go for variety, if possible, in the type of HVAC work you do. Some people in the trade get "stuck" in just one particular area (such as installation) and find it difficult later on if they wish to move into a different HVAC specialty that they might enjoy better.
- It pays to be assertive and proactive, especially when it comes to increasing your HVAC knowledge. You'll have better job security and advancement opportunities if you can become the "go-to" person for technical information and troubleshooting know-how about the equipment your employer sells and services. As you begin your career, it is essential to ask a lot of questions, pay close attention, and study, study, study. And, as you continue your career, the need to learn never stops. There will always be more to know.
- Like in any other trade, the better you are at your job, the more quickly you can climb the HVAC career ladder.
- It is impossible to learn everything you need to know in two years or less. So, although trade school can give you a great head start on the fundamentals, you should expect to begin your HVAC career in a "helper" or apprentice role as you continue to learn. It generally takes at least five years of on-the-job experience before you're ready to work on your own.
- Since demand for HVAC technicians can sometimes be prone to seasonal fluctuations, it is important to learn how to manage your money in a way that allows you to ride out any downtimes comfortably.
- Long-term success as an HVAC technician hinges a great deal upon your reputation. So it's important to develop a courteous and respectful attitude early on, to never cut corners, and to let the quality of work you perform speak for itself.
- Persistence and enthusiasm are the biggest keys to landing your first job in HVAC. Employers look for people who are willing to commit to hard work. You can improve your chances of finding employment by always acting polite and professional, following up repeatedly with the people in charge of hiring, and demonstrating to them that you're not an arrogant "know-it-all" but are, instead, humble and ready to learn and take on all of the challenges inherent to HVAC work.
How Do You Get HVAC Certification?
When asking, "How do you get HVAC certification?" it is essential to understand that some certifications are required while most others are voluntary. Even voluntary certifications, however, can help you advance in your HVAC career since most employers like to see official acknowledgment of your competencies.
But knowing how to obtain HVAC certification is just one aspect of this issue. You also need to understand what it all means. Here are the most important things to remember:
- Regardless of which area of HVAC/R you choose to work in, you will be required to obtain at least one type of certification from the U.S. Environmental Protection Agency (EPA). Section 608 of the Clean Air Act (as amended in 1990) requires anyone who services equipment that uses certain refrigerants to take a test to prove that they know how to properly handle, recycle, and dispose of materials that can damage the ozone layer.
- EPA Section 608 certification is broken down into four types depending on the kind of equipment you will be working with: (1) Type I for small appliances, (2) Type II for high-pressure and very high-pressure appliances, (3) Type III for low-pressure appliances, and (4) Universal for all types of HVAC/R equipment.
- HVAC students enrolled in formal training are often required to take the EPA Section 608 Universal certification test as part of their program.
- Although not required by the EPA, R-410A certification covers an especially dangerous type of refrigerant in greater detail than what is found in the EPA Section 608 test. R-410A refrigerant is used at a much higher vapor pressure than other refrigerants and, therefore, requires different tools, equipment, and safety standards. R-410A is increasingly replacing some of the older ozone-damaging refrigerants that are being phased out.
- Other types of professional HVAC certifications are designed to verify the real-world skills and working knowledge of HVAC and HVAC/R technicians who've had at least a year or two of on-the-job experience. Certification is offered by independent organizations in many different specialty areas such as residential and commercial air conditioning, heat pump service and installation, gas heat, electric heat, oil furnaces, hydronics, air distribution, and commercial refrigeration.
- The two most recognized providers of professional-level certifications in the American HVAC/R industry are (1) HVAC Excellence and (2) North American Technician Excellence (NATE). Obtaining certification from these organizations involves meeting any necessary prerequisites and then passing written exams. You can also obtain your EPA Section 608 certification through such providers.
- A certificate of completion (or diploma) from a formal HVAC training school is NOT the same thing as professional-level certification from organizations like HVAC Excellence or NATE.
How Long Do HVAC Classes Take?
Formal HVAC programs at technical colleges and trade schools vary in length. A lot depends on the type of credential you're after and how in-depth you want your schooling to be. So, how long do HVAC classes take?
HVAC programs that award certificates or diplomas typically last one year or less. Some take as little as about 18 weeks to complete. With these shorter programs, you often must choose to study just one of three specific areas: (1) light commercial air conditioning and heating, (2) residential air conditioning and heating, or (3) commercial refrigeration.
Associate degree programs in HVAC/R technology, on the other hand, are designed to last two years and are often more comprehensive.
How Much Does HVAC School Cost?
The cost of HVAC schooling varies significantly depending on where you go to school and whether you choose to pursue a certificate or associate degree. So, how much does HVAC school cost?
Basic program costs, including tuition, can range from as little as $2,000 or less to as much as $35,000 or more. The more expensive programs sometimes have a wider range of HVAC equipment and tools in their labs for better hands-on learning, although it is best to tour any school you are considering and check out their facilities to make sure you'll be getting good value for your money. Books and supplies are sometimes an extra expense and can cost as much as $4,500 depending on the program.
Financial aid in the form of loans and grants is frequently available from the federal government for those who qualify. And some states offer financial assistance through their own retraining programs for unemployed workers.
What Can I Expect to Learn in My HVAC Training?
HVAC schools are set up to teach the fundamentals of what you need to know to begin working as an HVAC technician at the entry level. Ultimately, HVAC involves becoming competent in the basics of roughly five different trades: electrical work, plumbing, welding, pipefitting, and sheet metal work.
HVAC education programs vary in their curriculum, but the ones that are accredited by an industry-recognized organization generally share a number of common elements. Three of the biggest accrediting bodies for HVAC training are (1) HVAC Excellence, (2) the Partnership for Air-Conditioning, Heating, and Refrigeration Accreditation, and (3) the National Center for Construction Education and Research.
Most HVAC programs combine classroom study with hands-on training. Depending on the school and program you choose, you can expect the curriculum to include subjects such as:
- Electric, gas, and oil heat
- Residential and light commercial air conditioning
- Heat pumps
- Basic electronics
- Soldering and brazing
- Venting and duct systems
- Interpreting mechanical drawings and diagrams
- Components of HVAC systems
- General HVAC theory
- Airflow and indoor air quality
- Heating fuels
- Refrigerant types and refrigerant oils
- Installation and service
- Troubleshooting and problem solving
- Building codes and requirements
- Tools and test instruments
- Safety precautions and practices
Many accredited HVAC/R programs use the Industry Competency Exam (ICE) as an exit exam for students. So, depending on the program you choose, you might have to take one or more of the three different tests that are available as part of the ICE. The different testing areas are: (1) residential air conditioning and heating, (2) light commercial air conditioning and heating, and (3) commercial refrigeration.
As an HVAC Technician, Will I Need to Be Licensed?
The answer depends on where you intend to work. Licensing requirements for HVAC technicians vary greatly by state and locality, and by whether you intend to be your own boss. Some states don't have any legal requirements. In the ones that do, however, a state exam often must be passed. Plus, some states require you to have completed the equivalent of an apprenticeship program or two to five years of on-the-job HVAC experience before you can apply for a license to legally work on your own.
The content of state licensing exams also varies significantly. In some states, for example, emphasis might be placed on having an extensive knowledge of electrical codes, but, in other states, the focus might be more on HVAC-specific knowledge.
Just remember: Although your state might not require you to obtain an official license in order to perform HVAC work, the federal government will still require you to be certified in the proper handling of refrigerants. The EPA Section 608 certification exam is a written test and is administered by a variety of organizations that have been approved by the U.S. Environmental Protection Agency, including unions, building groups, trade schools, and contractor associations.
How Promising is the HVAC Job Outlook?
The HVAC job outlook is expected to be excellent for the foreseeable future. In America, employment of HVAC technicians is projected to increase by 28 percent between 2008 and 2018, which is much faster than average.
The growing demand for HVAC and HVAC/R technicians can be attributed to a number of factors. As the nation's population grows, so does the number of buildings (residential, commercial, and industrial) that need to be fitted with climate-control systems. And the increasing complexity of new HVAC systems means an increasing possibility of their malfunction and need for servicing, which then requires skilled technicians. In addition, the growing focus on reducing energy consumption and improving indoor air quality means that more HVAC technicians are needed for analyzing the efficiency of existing systems and replacing old polluting ones with new, more efficient models.
Although experienced HVAC technicians can expect excellent job prospects, the odds of new techs landing employment are best for those who have had training through a formal apprenticeship program, through an accredited program from an HVAC school, or both. You can also increase your chances of landing a good job by becoming an expert at increasing energy efficiency and gaining a solid understanding of complex computer-controlled HVAC systems such as those found in modern high-rises.
What Kind of Advancement Opportunities Exist in the Heating, Ventilation, and Air Conditioning (HVAC) Industry?
The HVAC industry is incredibly diverse. Most HVAC technicians begin their careers in the residential and light commercial sectors of the field. Advancement usually comes in the form of higher wages or supervisory positions. But, with advanced knowledge, a lot of experience, and the right mindset, new opportunities can arise for entering other areas of the industry, which offer new challenges.
Commercial refrigeration, for instance, is an area of high demand that requires workers with a lot of patience and specialized skills. With the right training and education, HVAC/R technicians can also specialize in areas such as solar-powered or geothermal heating and cooling, retrofitting, system testing and balancing, efficiency evaluations, or building operations with advanced computer controls. In addition, some technicians move into teaching, HVAC sales and marketing, or managing their own contracting businesses.
It is even possible to earn a bachelor's degree in HVAC engineering technology. Such a degree could allow you to become an HVAC engineer or HVAC technologist and design new systems and controls for the manufacturing, commercial, institutional, or industrial sectors.
How Do I Get Started?
One of the best ways to discover whether HVAC might be a good field for you is to talk with a few experienced HVAC technicians. See if you can schedule a time to ride along with them on some service or installation calls. Or, if you're ready to get moving now, then check out our list of HVAC schools. You could soon have the repeated, satisfying experience of standing back and admiring a job well done. | <urn:uuid:2544a8b7-2405-4749-8cfa-059c46bbfae9> | CC-MAIN-2013-20 | http://www.trade-schools.net/career-counselor/hvac-technician-information.asp | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95472 | 6,747 | 3 | 3 |
As the Fukushima disaster continues to unfold, we have learned that strontium levels near the plant are 240 times over the legal limit, and the surrounding land has become uninhabitable. The nuclear waste advisor to the Japanese government recently explained that roughly 966 square kilometers (about 373 square miles) around Fukushima are now uninhabitable due to the unfolding disaster. This massive dead zone is equivalent in size to 17 Manhattan Islands placed next to one another. Unfortunately, the latest readings taken approximately 20 miles out to sea from the site showed radioisotope levels ten times higher than those measured in the Baltic and Black Seas after the massive Chernobyl disaster.
"Given that the Fukushima plant is on the ocean, and with leaks and runoff directly to the ocean, the impacts on the ocean will exceed those of Chernobyl, which was hundreds of miles from any sea," said Ken Buessler, Senior Scientist in Marine Chemistry at the Woods Hole Oceanographic Institution in Massachusetts, several months back. It has also been revealed that reactors 1, 2, and 3 have all experienced "melt-throughs." This means the radiation materials have burnt through and gone directly into the ground and water. This is considered to be the worst possible scenario in a disaster of this nature.
"Dangerous levels of radioactive iodine and cesium have already contaminated the sea, the soil, groundwater, and the air," said reporter Mark Willacy of the Australian Broadcast Corporation in a recent Lateline interview. "This week plutonium was detected for the first time outside the stricken plant, and Strontium-90, known as a ‘ seeker’ , because it can cause bone cancer and leukemia, has now been found as far away as 60 kilometers (37+ miles) from the facility."
Various atomic experts now agree that the unfolding situation is truly "as serious as it gets in a nuclear disaster." Fukushima presently has 20 nuclear cores exposed, representing roughly 20 times the potential radioactive release of Chernobyl. This is without a doubt the worst nuclear disaster the world has ever seen.
"We are discovering hot particles everywhere in Japan," said Arnold Gundersen, a former industry senior vice president with 39 years of nuclear engineering experience.
Infant mortality data from eight western U.S. cities (Boise, Seattle, Portland, Santa Cruz, Sacramento, San Francisco, San Jose, and Berkeley) point to an effect of the radiation exposure from the Fukushima multiple meltdowns. In the 4 weeks ending March 19th, prior to the disaster, there were 37 infant deaths (an average of 9.25 per week); in the 10 weeks ending May 28th, after the disaster, there were 125 (an average of 12.5 per week). This is a 35% increase in perinatal mortality among infants under a year old. According to Joseph Mangano, Epidemiologist and Executive Director of the Radiation and Public Health Project, the perinatal mortality rate in Philadelphia rose 48% in the 10 weeks after the meltdown.
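The percentage figures above follow from simple arithmetic on the reported counts. As a quick check, here is that arithmetic as a minimal Python sketch, using only the numbers quoted in this paragraph:

    weeks_before, deaths_before = 4, 37     # 4 weeks ending March 19th
    weeks_after, deaths_after = 10, 125     # 10 weeks ending May 28th

    rate_before = deaths_before / weeks_before   # 9.25 deaths per week
    rate_after = deaths_after / weeks_after      # 12.5 deaths per week

    increase = (rate_after / rate_before - 1) * 100
    print(round(increase))                       # 35 (percent)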
The bad news, of course, is that this is an ongoing disaster, and the governments, for whatever unexplained reasons, don’t seem interested in sealing it off. Even though this ongoing radiation exposure and disaster is not being noted in the newspapers, and one cannot see it, smell it, or detect it easily, it is still there and getting worse.
Evidence of the ongoing danger of U.S. nuclear plants is the Associated Press report that 48 of 65 facilities have leaked tritium, a radioactive form of hydrogen. In other words, roughly 75% of U.S. nuclear plants are leaking. It is confirmatory to hear that the U.S. commission blames many of the leaks on corroded buried piping. The significance of the leaking is that at 37 of the 48 sites, groundwater contamination was found to exceed the federal drinking water standard. The good news is that no public water supplies are known to be contaminated, but tritium was found in private wells in Illinois and Minnesota. In New Jersey, tritium was found in a discharge canal feeding Barnegat Bay. This is not a recent phenomenon. In 2007, cesium-137 was found, along with tritium, at the Fort Calhoun plant near Omaha, Nebraska, and strontium-90 was found near New York City at the Indian Point nuclear site. All this just underscores how important it is not only to protect yourself against all forms of radiation with our supplement program, but also to attempt to create some reforms to protect the American public.
The threat that nuclear power poses to our nation is alarming, as our government recklessly moves to re-license old reactors and use tax dollars to help finance new plants. Even more frightening is the lack of evacuation plans for more than 111 million Americans who live within 50 miles of a reactor. Unfortunately, the U.S. government isn’t learning the critical lessons from these nuclear energy disasters. Incredibly, our nation’s evacuation plans only include areas within 10 miles of reactors—despite clear evidence from Chernobyl and Fukushima that serious radiation impacts extend much further. And our emergency medical capacities fall short of what’s needed to meet a major nuclear catastrophe. These dangerously inadequate emergency response plans put major U.S. urban areas at risk—including New York City, Chicago, Boston, Los Angeles, and Washington, D.C.
The significance of this censored news further underscores the unsustainability and dangers of nuclear plants in the U.S., compounded by lax federal regulation. This is a disaster that, unfortunately, is just waiting to happen. I suggest, for the protection of our families, children, and the U.S. population, that we exert every possible effort to follow the German example of committing to dismantle all nuclear reactors by 2022 and certainly not to build new ones. That is the least we can do to put common sense and the value of human safety above the interests of economic investment and the profit of the nuclear industry. This dangerous nonsense is only going to be stopped if enough people complain. I urge you to support all the anti-nuclear groups nationally and locally, such as the Environmental Defense Fund, Greenpeace, Friends of the Earth, the Physicians’ Committee for Responsible Medicine, etc. Your active support on local and national levels can help to:
- End loan guarantees for new reactors and implement a nationwide moratorium on new reactor licensing and design certification;
- Suspend operations at reactors similar to those at Fukushima—as well as those on geological fault lines—and reject renewed licensing for existing reactors until all of the lessons of the current crisis are fully understood; and
- Deal with the dangerous radioactive waste by upgrading spent fuel pools and hardening onsite fuel storage for all operating reactors.
Blessings to your health and radiant wellbeing,
Gabriel Cousens, M.D. | <urn:uuid:c193344d-5da7-4901-b36e-51363b9b9fcc> | CC-MAIN-2013-20 | http://www.treeoflife.nu/DRCOUSENS/DRCOUSENSBLOG/tabid/364/PostID/158/language/en-US/~/~/Default.aspx?tabid=364&PostID=105&language=en-US | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.953095 | 1,396 | 3.328125 | 3 |
Tell me about the International Dark-Sky Association—how did it start, what's the mission?
IDA’s mission is to preserve and protect the nighttime environment and our heritage of dark skies through environmentally responsible outdoor lighting. IDA was founded in 1988 by a professional astronomer working at the Kitt Peak Observatory in Tucson, Arizona, and an amateur astronomer who noticed that the increasing sky glow over Tucson was interfering with nighttime observations. Their message is simple, clear, and effective, and their efforts were fundamental in getting light pollution recognized around the world as an unwelcome and detrimental environmental condition.
What are some of the effects of light pollution on animals? On humans? Tell me about some of the science that supports this.
There are four main types of light pollution: sky glow (that strange orange dome over urban areas), glare (overly bright, unshielded points of light), light trespass (unwanted light intruding onto private property), and clutter (groupings of light sources). Animals and even plants are affected by sky glow and light trespass due to their extreme photosensitivity. Because seasonal temperatures can vary from year to year, many species rely on light cues to tell them when to shed leaves, mate, and reproduce. When outdoor lighting artificially prolongs the day, the instinctive rhythms of many species are affected.
Many species behave unnaturally in the presence of artificial light—for example, light at night decreases a firefly’s ability to be seen, thereby hindering its ability to attract a mate. Sea turtles and migratory birds use starlight to orient themselves, so light pollution has devastated their internal navigation systems. Some birds will crash into tall buildings, or fixate on a light source, circling around it until they are exhausted and unable to fly.
Glare from unshielded light sources presents the largest problem to humans. The effects of glare from poor outdoor lighting are a primary reason that the American Medical Association unanimously adopted Resolution 516 to support light pollution and glare reduction efforts last June. Depending on the severity, glare can cause discomfort or temporary night blindness. On the roadway, glare can interfere with visibility, presenting a hazard to both drivers and pedestrians. The problem gets worse as people age and gradually lose their ability to adjust to changing light levels.
Exposure to excessive light at night has been found to alter the circadian rhythm, interfere with sleep patterns, and suppress the sleep hormone melatonin. The amount of light needed to affect sleep patterns is not known, but sleeping in total darkness is recommended by both the CDC and the NIH as a way to promote a regular circadian rhythm.
What is International Dark-Sky Association doing to combat light pollution? How are you measuring the effectiveness of these campaigns?
As an environmental educational 501(c)(3) non-profit, IDA has enacted dynamic programs in the areas of technology, conservation, and public awareness. IDA’s Fixture Seal of Approval program directly attacks sky glow by establishing “dark sky-friendly” criteria for outdoor light fixtures.
Currently in the spotlight is our International Dark Sky Places (IDSPlaces) program, a conservation initiative established to protect urban and rural starscapes. The IDS Communities and Dark Sky Developments of Distinction designations recognize outstanding dark sky preservation efforts in municipalities and planned communities. All designated IDSPlaces have met stringent lighting requirements through retrofits and legislation and have undertaken outreach efforts to educate the public about the importance of natural night. Many designees are successfully incorporating astronomy and stargazing into their local attractions, hosting festivals or sky watching events known as “star parties.”
In house, IDA collects, creates, and distributes information relating to light pollution, much of which is available for free on the IDA website. We also participate in industry meetings and technology expos, and actively collaborate with non-profit interest groups.
Do you see differences regionally? How bad is the East Coast in terms of light pollution?
Several New England states have taken great strides to protect their skies. The east coast has more light on the whole simply because it is more densely populated, not because the lighting is necessarily worse. We don’t see a huge difference regionally so much as from city to city. Rural and urban areas across the world have enacted dark sky ordinances or are undertaking retrofits in public lighting (usually as part of an energy saving endeavor) and their lighting is much more thoughtful, more aesthetically pleasing, and more efficient than cities or townships that have not.
What can you tell me about any legislation concerning light pollution, particularly on the East Coast? In New York?
IDA’s newly opened public policy office in Washington, DC is creating a lot of opportunities for collaboration with other non-governmental organizations (NGOs) and some significant inroads with energy agencies and congressional leaders, but any national action is a long way off. Many of these accomplishments have been spurred by Leo Smith, IDA’s Regional Director for New England Sections. Connecticut is the furthest along in terms of addressing light pollution: on the books are three state laws, one state building code requirement, and a requirement from the utility regulators to offer a new rate for streetlights that are programmed to turn off at midnight.
The New Hampshire law, signed in July, also requires utility regulators to adopt a rate for streetlights that are turned off at midnight, as well as requiring shielded streetlights. Maine and Rhode Island both require shielded streetlights. New York isn’t quite there yet, though night lighting has been addressed in several regions, namely the municipalities of Tully, East Hampton, Southampton, Tuxedo Park, Riverhead, and Brookhaven.
IDA conducts third-party certification of light fixtures—how successful has this program been, how many certified fixtures are currently on the market, and how receptive has the industry been to change?
IDA has reached out to the lighting industry since its inception. Good quality light at night is necessary for safety, security, and recreation, but outdoor light is the main cause of light pollution. Some members of the lighting community have been very apt to address this, and have worked to create products that minimize light pollution by directing light to the ground, where it is needed, instead of to the sky, where it becomes a wasteful nuisance. IDA is fortunate to have support from these companies, because they provide the technology to make our mission effective.
The Fixture Seal of Approval program was started in 2005 to recognize lighting manufacturers who integrated the concept of full shielding into their fixture design and to encourage market expansion of dark sky-friendly products. Any approved fixture must be fully shielded to emit no light above a 90 degree angle. This program has been wildly successful for both IDA and the lighting manufacturers who join. The IDA seal is gaining worldwide recognition and becoming a selling point for manufacturers and vendors alike, and the market for dark sky-friendly products is expanding as companies strive to design sleek, stylish, and efficient fixtures. Over 100 manufacturers have joined the FSA program to date, featuring approximately 300 fixture models.
What can consumers do to combat light pollution? And then, what can architects, builders, planners do to combat light pollution?
Shield your light sources, especially floodlights. A “par shield” that clips on to the fixture makes a huge difference in directing light where you want it to go. If you install dark sky-friendly fixtures outside your home or business, you’ve already made a difference. Look for the IDA Fixture Seal of Approval or purchase a fully-shielded or full-cutoff product. Those who want to learn more or work toward creating an ordinance can join a local IDA Section (information at darksky.org) or contact a local astronomy club.
Architects and builders interested in sustainability can achieve LEED Credit 8, which specifically addresses outdoor lighting. Again, purchasing and installing fully shielded fixtures in any new development is all it takes. The market now contains so many qualified fixtures that there is virtually no difference in price.
Light sent into the sky costs the U.S. approximately $2.2 billion every year. As energy efficiency becomes imperative, city planners must consider improvements in public lighting as a long term way to reduce energy and conserve public funds. Most streets can dramatically lessen their lighting without compromising driver response time or pedestrian safety.
In your view, what's the most compelling reason for consumers to swap their traditional outdoor lighting to a Dark Sky fixture, and how are you getting that message out?
The dark skies movement will resonate with anyone who recognizes the profound effect light has on a space, indoor or outdoor. Seriously, what other small change can you make that affects wildlife, energy, and the ambiance of your entire neighborhood? In addition to the personal benefits you receive in terms of reduced energy use and a more pleasant personal space, a shift to dark sky-friendly lighting shows an awareness of the environment at large and a respect for the place you live.
The dark sky message usually sells itself, once people become aware of it. IDA’s wonderful volunteers do a phenomenal job in spreading enthusiasm for the cause. Their interest in creating a sustainable, beautiful nighttime environment and their dedication to action is what drives the success of this campaign. Thanks to the hard work of IDA volunteers worldwide, cities in the Americas, Europe, Asia, and even Australia are seeing the darkness.
For more information about the IDA, local policies, and where to find shielded lighting, visit darksky.org | <urn:uuid:23364f66-83aa-43c7-b0fd-cb8825f1b6f8> | CC-MAIN-2013-20 | http://www.upstatehouse.com/view/full_story/4976834/article-Exclusive-Q-A-with-Rowena-Davis- | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.949437 | 1,950 | 3.625 | 4 |
Such was the condition in Kansas at the opening of the presidential year of 1856, and it became one of the leading issues of the campaign. The whole country was aroused over reports from Kansas, and it was impossible that such a question remain long out of the halls of Congress, notwithstanding the claim of Douglas that his famous bill would remove the slavery question from national politics. In May, 1856, Senator Sumner made a powerful speech on "The Crime against Kansas." The speech was a fearful arraignment of the slave power. But the speaker went out of his way to abuse certain senators whom he did not like, especially Senator Butler of South Carolina, who was then absent from the city, and who had made no special personal attack on Sumner.
Charles Sumner, with all his learning, was a narrow-minded man. He was opinionated, egotistical, and incapable of giving credit to another for an honest difference of opinion. But he was sincerely honest and courageous.¹ His espousal of the cause of the slave when that cause was very unpopular rose from the innermost depths of his soul. His furious attack on Butler was occasioned by the indignation expressed by the latter at the audacity of the Topeka convention in applying for statehood. But Sumner suffered severely for his extravagance. Two days after making this speech, as he sat at his desk writing, after the Senate had adjourned, he was assaulted with a cane by Preston Brooks, a member of the House and a relative of Senator Butler. Brooks rained blows on Sumner's head with great ferocity. Sumner sat so near his desk that he had no chance to defend himself; but at length he rose, wrenching the desk from its fastenings. Brooks then grappled with him and continued his blows until Sumner fell bleeding and unconscious to the floor.
So great were the injuries of the Massachusetts senator that he did not fully recover for four years; and indeed, never after this assault was he the powerful, robust athlete that he had been before. No incident in many years revealed more vividly the vast gulf between the North and the South than did the different manner of their receiving the news of this assault on Sumner.² Throughout the North the deed was denounced as a cowardly outrage, unworthy of any but a bully and a thug. At the South, where Sumner was hated above all men, the verdict was that he received only the punishment he deserved. Brooks was hailed as a champion and a hero, and was presented with many canes. He resigned his seat in the House because of a majority vote--not the necessary two thirds--for his expulsion; but he was immediately reëlected by his district.³
Meantime matters were growing worse on the plains of Kansas. On the day that intervened between the closing of Sumner's speech and the assault by Brooks the town of Lawrence was sacked by a mob. The House of Representatives sent a committee of three to Kansas to investigate matters and report. This committee, composed of William A. Howard of Michigan, John Sherman of Ohio, and Mordecai Oliver of Missouri, after examining several hundred witnesses, reported in July. Howard and Sherman reported favorably to the free-state party, but agreed that the election of Reeder to Congress, as that of Whitfield, was illegal. Oliver made a minority report favoring the southern view.
With the attack on Lawrence the Civil War in Kansas may be said to have begun. Soon after this occurred the massacre of Pottawatomie, the leader of which was John Brown. Brown had come from the East to join his sons, who had been among the early settlers of Kansas. He was an ascetic and a fanatic. He had come to Kansas to make it a free state at any hazard. He regarded slavery with a mortal hatred, and while his courage was unlimited and his intentions upright, his soul was too utterly narrow to see a thing in its true light. He believed that the only way to free the slaves was to kill the slaveholders. "Without the shedding of blood, there is no remission of sins," said John Brown.
A few free-state men, one of whom was a neighbor of Brown, had been killed by the opposite party, and Brown determined that an equal number of them should suffer death to expiate the crime. He organized a night raid--his sons and a few others--and started on his bloody errand. They called at one farmhouse after another and slew the men in cold blood. He did not inquire if they were guilty or not guilty; enough if they belonged to the opposite party. One man was dragged from the presence of a sick wife. Her pleadings that he be spared were not heeded. He was murdered in cold blood in the road before his house. Before the end of that bloody night raid Brown's party had put five men to death--for no crime except that they belonged to the opposite party and had made threats--an offense of which Brown's party were equally guilty. When the news of this ghastly work was flashed over the country, the people in general refused to believe it; and to the credit of the free-state people in Kansas, they repudiated it as wholly unwarranted.
¹While he was uttering this speech, in which he attacked Senator Douglas also without mercy, the latter said to a friend: "Do you hear that man? He may be a fool, but I tell you that he has pluck." Poore's "Reminiscences," Vol. I, p. 461.
²Rhodes, Vol. II, p. 143.
³Brooks died the following January, and Butler in May of the same year. | <urn:uuid:7cccc48a-8bff-43dd-aedc-ca7bda58238f> | CC-MAIN-2013-20 | http://www.usgennet.org/usa/ks/state/history/history3.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.991606 | 1,174 | 3.265625 | 3 |
analog loopback
A modem self-test in which data from the keyboard or an internal test pattern is sent to the modem's transmitter, turned into analog form, looped back to the receiver, and converted back into digital form.
analog signals
A variety of signals and wavelengths that can be transmitted over communications lines, such as the sound of a voice over the phone line.
answer mode
The mode used by your modem when answering an incoming call from an originating modem. The transmit/receive frequencies are the reverse of the originating modem, which is in originate mode.
application
A computer program designed to perform a specific task or set of tasks. Examples include word processing and spreadsheet applications.
ARQ
Automatic Repeat reQuest. A function that allows your modem to detect flawed data and request that it be retransmitted. See MNP and V.42.
ASCII
American Standard Code for Information Interchange. A code used to represent letters, numbers, and special characters such as $, !, and /.
asynchronous transmission
Data transmission in which the length of time between transmitted characters may vary. Because characters may not be transmitted at set intervals, start/stop bits are used to mark the beginning and end of each character.
auto-answer
Sets the modem to pick up the phone line when it detects a certain number of rings. See S-register S0 in the Technical Reference section of this guide.
auto-dial
A process where your modem dials a call for you. The dialing process is initiated by sending an ATDT (dial tone) or ATDP (dial pulse) command followed by the telephone number. Auto-dial is used to dial voice numbers. See basic data command Dn in the Technical Reference section of this guide.
baud rate
A term used to measure the speed of an analog transmission from one point to another. Although not technically accurate, baud rate is commonly used to mean bit rate.
bit
A 0 or 1, reflecting the use of the binary numbering system. Used because the computer recognizes either of two states, OFF or ON. The shortened form of binary digit is bit.
bit rate
Also referred to as transmission rate. The number of binary digits, or bits, transmitted per second (bps). Communications channels using analog modems are established at set bit rates, commonly 2400, 4800, 9600, 14,400, 28,800, 33,600, and higher.
bits per second (bps)
The bits (binary digits) per second rate. Thousands of bits per second are expressed as kilobits per second (Kbps).
buffer
A temporary memory area used as storage during input and output operations. An example is the modem's command buffer.
byte
A group of binary digits stored and operated upon as a unit. Most often the term refers to 8-bit units or characters. One kilobyte (KB) is equal to 1,024 bytes or characters; 640 KB is equal to 655,360 bytes or characters.
carrier
The basic signal altered or modulated by the modem in order to carry information.
character
A representation, coded in binary digits, of a letter, number, or other symbol.
characters per second (cps)
A data transfer rate generally estimated from the bit rate and the character length. For example, at 2400 bps, 8-bit characters with start/stop bits (for a total of ten bits per character) will be transmitted at a rate of approximately 240 characters per second (cps). Some protocols, such as error-control protocols, employ advanced techniques such as longer transmission frames and data compression to increase cps.
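As a rough sketch of the arithmetic in that example (assuming the usual asynchronous framing of one start bit and one stop bit around each 8-bit character):

    def chars_per_second(bit_rate, bits_per_char=10):
        # 8 data bits plus start/stop bits = 10 bits on the wire per character
        return bit_rate / bits_per_char

    print(chars_per_second(2400))   # 240.0 characters per second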
class 1 and 2.0
International standards used by fax application programs and faxmodems for sending and receiving faxes.
cyclic redundancy checking (CRC)
An error-detection technique consisting of a test performed on each block or frame of data by both sending and receiving modems. The sending modem inserts the results of its tests in each data block in the form of a CRC code. The receiving modem compares its results with the received CRC code and responds with either a positive or negative acknowledgment.
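The following Python sketch illustrates the CRC-and-ARQ idea. It borrows the 32-bit CRC from the standard library for convenience; actual modem protocols such as V.42 and MNP use 16-bit CRC polynomials, but the detect-and-compare logic is the same:

    import binascii

    def attach_crc(frame: bytes) -> bytes:
        # Sender side: append the computed CRC code to the data block.
        return frame + binascii.crc32(frame).to_bytes(4, "big")

    def check_crc(block: bytes) -> bool:
        # Receiver side: recompute the CRC and compare; a mismatch would
        # trigger a negative acknowledgment and retransmission (ARQ).
        frame, code = block[:-4], block[-4:]
        return binascii.crc32(frame) == int.from_bytes(code, "big")

    block = attach_crc(b"example data frame")
    print(check_crc(block))                  # True: clean transmission
    print(check_crc(b"\x00" + block[1:]))    # False: corrupted frame detected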
data communications
The transmission or sharing of data between computers via an electronic medium.
data compression table
A table containing values assigned for each character during a call under MNP5 data compression. Default values in the table are continually altered and built during each call: The longer the table, the more efficient throughput gained.
data mode
The mode used by a modem when sending and receiving data files.
DCE
Data Communications (or Circuit-Terminating) Equipment, such as dial-up modems that establish and control the data link via the telephone network.
default
Any setting assumed, at startup or reset, by the computer's software and attached devices. The computer or software will use these settings until changed by the user or other software.
digital loopback
A test that checks the modem's RS-232 interface and the cable that connects the terminal or computer and the modem. The modem receives data (in the form of digital signals) from the computer or terminal and immediately returns the data to the screen for verification.
digital signals
Discrete, uniform signals. In this guide, the term refers to the binary digits 0 and 1.
DTE
Data Terminal (or Terminating) Equipment. A computer that generates or is the final destination of data.
duplex
Indicates a communications channel capable of carrying signals in both directions. See half-duplex, full-duplex.
Electronic Industries Association (EIA)
Group which defines electronic standards in the U.S.
error control
Various techniques that check the reliability of characters (parity) or blocks of data. V.42 and MNP error-control protocols use error detection (CRC) and retransmission of flawed frames (ARQ).
facsimile
A method for transmitting the image on a page from one point to another. Commonly referred to as fax.
fax mode
The mode used by a modem to send and receive data in facsimile format. See definitions for V.17, V.27 ter, V.29.
flow control
A mechanism that compensates for differences in the flow of data into and out of a modem or other device. See extended data commands &Hn, &In, &Rn in the Technical Reference section of this guide.
frame
A data communications term for a block of data with header and trailer information attached. The added information usually includes a frame number, block size data, error-check codes, and Start/End indicators.
full duplex
Signals can flow in both directions at the same time over one line. In microcomputer communications, this may refer to the suppression of the online local echo.
half duplex
Signals can flow in both directions, but only one way at a time. In microcomputer communications, this may refer to activation of the online local echo, which causes the modem to send a copy of the transmitted data to the screen of the sending computer.
Hz
Hertz, a frequency measurement unit used internationally to indicate cycles per second.
Internet
An electronic communications network that connects computer networks and organizational computer facilities around the world.
Internet Service Provider (ISP)
A company that provides dial-up (modem) access to the Internet for a fee.
ITU-T
The International Telecommunication Union - Telecommunication Standardization Sector, an international organization that defines standards for telegraphic and telephone equipment. For example, the Bell 212A standard for 1200-bps communication in North America is observed internationally as ITU-T V.22. For 2400-bps communication, most U.S. manufacturers observe V.22 bis.
LAPM
Link Access Procedure for Modems. An error-control protocol defined in ITU-T recommendation V.42. Like the MNP protocols, LAPM uses cyclic redundancy checking (CRC) and retransmission of corrupted data (ARQ) to ensure data reliability.
local echo
A modem feature that enables the modem to display keyboard commands and transmitted data on the screen. See basic data command En in the Technical Reference section of this guide.
MNP
Microcom Networking Protocol, an error-control protocol developed by Microcom, Inc., and now in the public domain. There are several different MNP protocols, but the most commonly used one ensures error-free transmission through error detection (CRC) and retransmission of flawed frames.
modem
A device that transmits/receives computer data through a communications channel such as radio or telephone lines. It also changes signals received from the phone line back to digital signals before passing them to the receiving computer.
nonvolatile memory (NVRAM)
User-programmable random access memory whose data is retained when power is turned off. On the USRobotics modem, it includes four stored phone numbers and the modem settings.
off hook/on hook
Modem operations that are the equivalent of manually lifting a phone receiver (taking it off-hook) and replacing it (going on-hook).
online fall back/fall forward
A feature that allows high-speed, error-control modems to monitor line quality and fall back to the next lower speed in a defined range if line quality diminishes. As line conditions improve, the modems switch up to the next higher speed.
originate mode
The mode used by your modem when initiating an outgoing call to a destination modem. The transmit/receive frequencies are the reverse of the called modem, which is in answer mode.
parity
A simple error-detection method that checks the validity of a transmitted character. Character checking has been surpassed by more reliable and efficient forms of error checking, including V.42 and MNP 2-4 protocols. Either the same type of parity must be used by two communicating computers, or both may omit parity.
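A small Python illustration of even parity (the character choices are for illustration only):

    def even_parity_bit(byte: int) -> int:
        # Return the bit that makes the total count of 1-bits even.
        return bin(byte).count("1") % 2

    # 'A' is binary 1000001 (two 1-bits), so even parity appends a 0;
    # 'C' is binary 1000011 (three 1-bits), so even parity appends a 1.
    print(even_parity_bit(ord("A")))   # 0
    print(even_parity_bit(ord("C")))   # 1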
protocol
A system of rules and procedures governing communications between two or more devices. Protocols vary, but communicating devices must follow the same protocol in order to exchange data. The format of the data, readiness to receive or send, error detection and error correction are some of the operations that may be defined in protocols.
RAM
Random Access Memory. Memory that is available for use when the modem is turned on, but that clears of all information when the power is turned off. The modem's RAM holds the current operational settings, a flow control buffer, and a command buffer.
remote digital loopback
A test that checks the phone link and a remote modem's transmitter and receiver.
remote echo
A copy of the data received by the remote system, returned to the sending system, and displayed on the screen. Remote echoing is a function of the remote system.
ROM
Read Only Memory. Permanent memory, not user-programmable.
serial transmission
The consecutive flow of data in a single channel. Compare to parallel transmissions, where data flows simultaneously in multiple channels.
start/stop bits
The signaling bits attached to a character before and after the character is transmitted during asynchronous transmission.
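As a sketch of how one character is framed, assuming the common convention of one start bit (0), eight data bits sent least-significant bit first, and one stop bit (1):

    def frame_character(byte: int) -> str:
        data_bits = format(byte, "08b")[::-1]   # LSB is transmitted first
        return "0" + data_bits + "1"            # start bit + data + stop bit

    print(frame_character(ord("A")))   # 0100000101 (ten bits on the wire)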
terminal
A device whose keyboard and display are used for sending and receiving data over a communications link. Differs from a microcomputer or a mainframe in that it has little or no internal processing capabilities.
terminal mode
Software mode that allows direct communication with the modem. Also known as command mode.
throughput
The amount of actual user data transmitted per second without the overhead of protocol information such as start/stop bits or frame headers and trailers. Compare with characters per second.
V.8
The ITU-T standard specification that covers the initial handshaking process.
V.17
An ITU-T standard for making facsimile connections at 14,400 bps, 12,000 bps, 9,600 bps, and 7,200 bps.
V.21
An ITU-T standard for modems operating in asynchronous mode at speeds up to 300 bps, full-duplex, on public switched telephone networks.
V.22
An ITU-T standard for modem communications at 1,200 bps, compatible with the Bell 212A standard observed in the U.S. and Canada.
V.22 bis
An ITU-T standard for modem communications at 2,400 bps. The standard includes an automatic link negotiation fallback to 1,200 bps and compatibility with Bell 212A/V.22 modems.
V.27 ter
An ITU-T standard for facsimile operations that specifies modulation at 4,800 bps, with fallback to 2,400 bps.
V.29
An ITU-T standard for facsimile operations that specifies modulation at 9,600 bps, with fallback to 7,200 bps.
V.32
An ITU-T standard for modem communications at 9,600 bps and 4,800 bps. V.32 modems fall back to 4,800 bps when line quality is impaired.
V.32 bis
An ITU-T standard that extends the V.32 connection range: 4,800, 7,200, 9,600, 12,000, and 14,400 bps. V.32 bis modems fall back to the next lower speed when line quality is impaired, fall back further as necessary, and also fall forward (switch back up) when line conditions improve (see online fall back/fall forward).
V.34
An ITU-T standard that currently allows data rates as high as 28,800 bps.
V.34+
An enhancement to V.34 that enables data transfer rates as high as 33,600 bps.
V.42
An ITU-T standard for modem communications that defines a two-stage process of detection and negotiation for LAPM error control.
V.42 bis
An extension of ITU-T V.42 that defines a specific data compression scheme for use during V.42 connections.
V.44
An ITU-T standard for modem data compression. It provides for a 6:1 compression ratio.
V.90
The ITU-T standard for 56 Kbps modem communications. This technology uses the digital telephone network to increase the bit rate of the receive channel by eliminating the analog to digital conversion commonly found in modem connections. V.90 connections require a modem with V.90 or x2 technology calling a digitally connected Internet Service Provider or corporate host site compatible with V.90 or x2 technology.
V.92
The ITU-T standard for advanced 56 Kbps modem communications. This technology offers three new features to enhance the V.90 standard. The first feature is V.PCM-Upstream, which allows a modem's upstream communication to reach speeds of 48,000 bps. The second feature provides quicker connection times by allowing the modem to remember the line conditions of a V.92 supported service provider. The third feature is the Modem on Hold technology, which allows your Internet connection to be suspended when there is an inbound telephone call, then return to the connection when the call is completed without losing the connection. The V.92 technology can only be utilized if a V.92 modem is dialing into an Internet Service Provider that supports and provides a digital V.92 signal.
World Wide Web (WWW)
A part of the Internet designed to allow easier navigation of the network through the use of graphical user interfaces and hypertext links between different addresses.
x2
USRobotics' trademark for its proprietary technology that uses the digital telephone network to increase the bit rate of the receive channel by eliminating the analog-to-digital conversion commonly found in modem connections. x2 connections require a modem with x2 technology calling a digitally connected Internet Service Provider or corporate host site compatible with x2 technology.
XON/XOFF
Standard ASCII control characters used to tell an intelligent device to stop/resume transmitting data.
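As a sketch, XON and XOFF are the ASCII control characters DC1 and DC3, and a receiving device might use them roughly like this (the buffer thresholds here are made-up values for illustration):

    XON = b"\x11"    # ASCII DC1 (Ctrl-Q): resume transmitting
    XOFF = b"\x13"   # ASCII DC3 (Ctrl-S): stop transmitting

    def flow_control_signal(buffer_fill: float):
        # Receiver-side decision: pause the sender before the buffer
        # overflows, and resume once it has drained.
        if buffer_fill > 0.90:
            return XOFF
        if buffer_fill < 0.50:
            return XON
        return None   # no change needed

    print(flow_control_signal(0.95))   # b'\x13' (XOFF)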
African Americans in the Bluegrass
Whether you are spending a day, a week or longer in the Bluegrass Region, you and your family will learn fascinating information about African Americans. Gleanings from your travels will become answers to questions that you might not ever have thought to ask.
History in the Heart of Downtown
The public square at the center of downtown was platted in 1780 as the site of the courthouse for the newly established town of Lexington. The square has always been, and still is, a place where significant events and community activities have occurred. Archive records tell of a fight between a school teacher and a wildcat, controversial slave auctions, military drills, Civil War skirmishes, riots, hangings, speeches and fires that destroyed previous courthouses. This history has been inclusive of African Americans both enslaved and free.
By 1789, an area of the square had been designated as a marketplace and named after the market in London, England - Cheapside (from the Old English ceapan, "to buy"). William Tucker (1787-1837), a free African American, was one of the merchants who advertised the sale of household items and spices from his stall. Farmers and others, during their monthly visits to transact legal business, bought, sold, and swapped livestock and agricultural products. The sale activity, known as Court Day, ended in 1921. Historian J. Winston Coleman, Jr. documented two dozen dealers in Lexington who bought and sold the enslaved between 1833 and 1865. This commercial enterprise established Lexington as one of the largest slave markets in the South. The Cheapside Auction Block stood near Main Street in the general vicinity of the monument for J.C. Breckinridge. The historical marker giving an account of the sale of African Americans stands in the northeast courtyard on Short Street. It was placed on the former site of the whipping post, erected by order of town trustees in 1806.
The impressive Romanesque design courthouse, the fourth built on site, was erected between 1898 and 1900. The Tandy and Byrd Construction Company, owned by African Americans Henry Tandy and Albert Byrd, laid the brick under the stone façade. In 2003 the building became the Lexington History Center; it closed in 2012 for renovation.
Blue Note: A number of nationally known individuals started their lives in Kentucky. Vertner Tandy (1885-1949), son of contractor Henry Tandy, became the first African American licensed architect in New York State and a founding member of the Alpha Phi Alpha fraternal organization. He designed the New York mansion of Madam C.J. Walker, the hair care product millionaire, and Berea Hall dormitory on the campus of Lincoln Institute in Simpsonville.
In 2009, Cheapside once again became an open-air market when area farmers and merchants began selling fresh produce and food products every Saturday from April through November. The pavilion also serves as performance space for musicians during “Thursday Night Live” and shelters those who attend local festivals, events and celebrations.
Walk around the square to read the wayside markers and stroll our downtown streets to view other points of interest. Historical Highway Markers are located throughout Lexington. Those highlighting African American history include: Doctors' offices at 118 N. Broadway; Historic Pleasant Green Baptist Church at 540 Maxwell Street; Lyman T. Johnson who integrated the University of Kentucky on Administration Drive; Polk/Dalton Infirmary at 148 Deweese Street; African Cemetery No. 2 at 419 East Seventh Street; The Colored Orphan Home at 644 Georgetown Street; The Agricultural and Mechanical Fair of Colored People at Georgetown Street past Nandino Drive; and Maddoxtown Community on Huffman Mill Road. Main Street Baptist Church will be placing a marker at their church in 2013 celebrating 150 years at their West Main Street location.
Blue Note: The Aviation Museum at Blue Grass Airport off Man-O-War Boulevard and U.S. Hwy 60 has an exhibit about the Tuskegee Airmen of Kentucky as well as other aviation history. 4316 Hangar Drive, behind the airport. (859)231-1219.
Equine Industry Superstars
Plan a visit to the Kentucky Horse Park by traveling down Hwy 922, Newtown Pike, to Iron Works Pike. On the way, you’ll pass the Coldstream Research Farm on the left. It was once the thoroughbred breeding farm McGrathiana, owned by H.P. McGrath. On this farm worked Oliver Lewis, the African American jockey who won the inaugural Kentucky Derby in 1875. The winning thoroughbred was Aristides, trained by renowned African American Ansel Williamson. Williamson was inducted into the National Museum of Racing and Hall of Fame in 1998. Outlining a portion of the original boundary of the farm is a rock wall fence. A sign designates that it was crafted by African American masons who had replaced the Scottish and Irish immigrant stone masons of the 1840s and 1850s.
Admission to the Kentucky Horse Park includes both the International Museum of the Horse and the American Saddle Horse Museum. African Americans were the national sports superstars during the early development of the thoroughbred racing and Saddlebred horse industries. There are memorials to Isaac Murphy, the first African American jockey to win three Kentucky Derbies, and the famous thoroughbred, Man-O-War and his groom, Will Harbut. "The Buffalo Soldiers of the Western Frontier" is a permanent exhibit housed in the International Museum of the Horse. Pick up a DVD produced by the American Saddlebred Association entitled "Out of the Shadows", the story of African American trainers and owners. (859)233-4303.
Blue Note: The rock fences seen as you travel the roadways are of limestone that was uncovered in fields being cultivated for agriculture, as well as quarried. Most were dry laid - built without the use of mortar. The Lexington-Fayette Urban County Government has ordinances in place that encourage the preservation and restoration of area stone fences. The nonprofit Dry Stone Conservancy has taken on the task of preserving and restoring the stone fences by conducting workshops to train new masons in old techniques. Look for signs that designate the dates, styles, and builders of these fences.
African Americans played an important role in the development of the racing industry. Stop by the Lexington Public Library downtown and you’ll see a mural highlighting a number of influential early African American jockeys, and the world’s largest ceiling clock as well! (859)231-5501.
The Stories of Slaves and Soldiers
Another day's tour can take you just outside Lexington to Waveland, site of a restored historic mansion and slave quarters. Head south on Nicholasville Road, then turn right onto Waveland Museum Lane. The stone building where the enslaved were housed and worked has been preserved and furnished with period artifacts. The guides tell you the history of enslaved on the property in conjunction with the story of the Bryan family, relatives of Daniel Boone, who lived in the Mansion house. (859)272-3611.
Leaving Waveland, turn right onto Hwy 27 again and travel south past Nicholasville, taking the 27 Bypass. Signs let you know you are approaching Camp Nelson, established in 1863 as a supply camp for the Union Army during the Civil War. It became the third largest recruitment and training center for African Americans who formed the regiments known as the United States Colored Troops. Kentucky recruiters enlisted 23,700 African Americans, primarily among those who were enslaved. Some 10,000 began their training at Camp Nelson.
The camp originally encompassed 4,000 acres and held 300 buildings which were dismantled following the war. The house that was used as headquarters was saved and has been restored. Guided tours are available. A self guided tour of the grounds will lead you to the camp's earthen fortifications which are being restored. A number of artifacts which have been unearthed can be viewed in the interpretive center, a replica of a barracks. Camp Nelson Heritage Park was added to the National Parks Underground Railroad Network to Freedom in 2007.
The third weekend in September, the park celebrates Camp Nelson Days. The site comes alive with re-enactors of the 12th Heavy Regiment of the USCT and other military units. Lectures and demonstrations (firing of the cannon, cavalry charges, open fire cooking) help you experience some of what camp life was like for the soldiers as well as the families who escaped slavery and became free.
Adjacent to the Heritage Park is the National Military Cemetery. In an original section, the grave sites of African American soldiers can be found. Check the list of those who are interred to see if you might have relatives who were veterans.
Just beyond the park are several Kentucky Highway Markers that tell the history as it relates to the formation of the Hall community and the Ariel school established following the closing of the camp. (859)881-5716.
|Blue Note: The town of Nicholasville is the birthplace of Morgan and Marvin Smith, the twin brothers whose photography captured images of Harlem, New York between 1935 and 1952.|
Cousins of Influence
Lexington and Richmond are the locations of homes of two influential men who were cousins. Ashland, the Henry Clay Estate is located at 120 Sycamore Drive, just off Richmond Road. At its zenith, the estate encompassed over 600 acres which were developed, cultivated and harvested by 50 enslaved at one time by Mr. Clay's telling. The farming operations also included active livestock breeding of horses, sheep and cattle. An interpretive history of the work performed by the enslaved in the management of the farm and household is presented. There are archival panels along with a sketch of Charles Dupuy, a member of the family responsible for the personal care of the Clay household. The Dupuy family traveled to Washington, D.C. when Henry Clay was appointed Secretary of State in 1825 and lived in the Decatur house, the Clay’s official residence. The story of Charlotte Dupuy's lawsuit filed in 1829, petitioning for her freedom as well as that of her two children, is truly fascinating. Charlotte did not win the suit, but Henry Clay did finally emancipate her and her two children, Charles and Mary Ann, in the 1840s. There are archive photos of the T.H. Hummons' family and other African Americans who were employed in the household from the 1900s to 1964. (859)266-8581.
From the Henry Clay estate, turn right onto Richmond Road and take I-75 South to Richmond, exit 95, to discover White Hall State Historic Site, the home of Henry Clay's cousin. The road leads to the home of Cassius Marcellus Clay - not the boxer - but the man who served as Ambassador to Russia during Abraham Lincoln's presidency. Cassius became an ardent emancipationist, having freed 50 of those enslaved to him in 1844. He printed the True American, a newspaper in 1845 promoting the emancipation of the enslaved. White Hall, a 44 room Italianate mansion, makes an impressive appearance as you approach the entrance.
|Blue Note: Cassius M. Clay supported the founding of Berea College in 1855, donating both land and money. Founder John G. Fee promoted the idea of a school where students from the Appalachian region could be educated regardless of race and income. Julia Britton, grandmother of Benjamin Hooks, Director of the NAACP, John H. Jackson, first president of Kentucky State University and Carter G. Woodson, founder of Black History Week, were graduates. The college is located in Berea, KY, just south of Richmond. You can spend a full day in the town enjoying the food, crafts and history.|
At the right rear of the house is a stone building that was used as housing and workspace for the enslaved. Several of the original outbuildings have also been restored. One serves as the Gift Shop and location for admission to the home. There are picnic tables and restroom facilities, so plan for lunch or a late afternoon snack on the grounds. (859)623-9178
An Afternoon in Paris
A scenic drive to Paris will take you past historic horse farms and more rock wall fences. Take Broadway/Paris Pike, Hwy 68 North from Lexington. One of the first stops should be the Thoroughbred Training Center located at 3380 Paris Pike. This facility actually trains future champion horses. You do need to be there before 9 a.m. if you want to see the horses put through their paces. Observing the work here will help you understand what is involved in the care and preparation of thoroughbreds for their careers in racing.
In earlier times, the tasks you observe would have been performed by African Americans, many of whom were children and young males. At age seven and eight, they started working in the barns and stables. By ten years of age some were being mounted on the horses as exercisers. Jockeys Isaac Murphy and William Walker began riding at the age of 11 and Raleigh Colston, Jr. rode in his first Kentucky Derby at the age of 13 in 1875. (859)293-1853. Reservations recommended.
If you have stopped at the training center, return to Paris Pike and continue into town. Visit the Hopewell Museum, (859)987-7274, located in the old Paris post office at 800 Pleasant Street. There is a permanent display featuring Garrett Morgan, inventor of the traffic signal and gas mask. Look for the Kentucky Historical Highway Marker at 10th and Vine Streets that marks the birthplace of Garrett Morgan.
Several quaint, independently owned restaurants make great lunch or dinner stops to round out your afternoon in Paris.
A Hamlet and a Railroad Town
Leaving Lexington from another direction, head west on Leestown Road (Hwy 421) and you’ll pass an African American community established in 1865 by Frederick Braxton, founder and minister of the Main Street Baptist Church in Lexington. He had purchased land and sold small acreage to other blacks after emancipation. They named the community in his honor, Bracktown.
Stay on Hwy 421 until you reach Midway. Turn left at Hwy 62 which will lead you to town. Don't be surprised to find that the railroad tracks run through the middle of the street. When goods were delivered by rail, it made it convenient to off load supplies directly to stores. On Railroad Street, a marker pays tribute to Edward Dudley "Dick" Brown. He was born into slavery in Lexington about 1848. R.A. Alexander purchased him at auction around 1856 and brought him to the Woodburn farm in Woodford County where he began his career as a stable boy. He eventually advanced to exerciser, jockey, trainer and finally owner of his own thoroughbred, Ben Brush, 1896 Kentucky Derby winner. Also in town are Historical Markers detailing the history of the Second Christian Church, Smith Street; Pilgrim Baptist Church, 133 East Stephen Street and St. Matthews AME Church, 112 S. Winter Street. They are within walking distance from Railroad Street.
A Capital Idea
Leaving Midway, get back on Hwy 421 and follow it into Frankfort, Kentucky’s capital. Take the by-pass until you see the sign directing you to Kentucky State University. Founded in 1887 by act of legislature, it became the first state supported school to train African Americans to become teachers. John H. Jackson, a native of Lexington, became its first president. Recitation Hall was the first building completed in 1887 by stone mason, James C. Brown. The building was renamed Jackson Hall and placed on the National Register of Historic Places in 1973. The building is now the office/museum of the Center of Excellence for the Study of Kentucky African Americans. Visit the Welcome Center to view a display on African American history. Visitor permit parking is available.
Other sites to visit in Frankfort are the Memorial to United States Colored Troops at the Greenhill Cemetery, the Kentucky Military History Museum, the Thomas D. Clark Center for Kentucky History, Kentucky State Capitol and Old State Capitol. Historic Markers are located at St. John AME Church, 210 West Clinton; 1st Baptist Church at 100 W. Clinton and Emily Thomas Tubman House on Washington Street.
For more information call the Lexington Convention and Visitors Bureau at 800-845-3959.
Written by Yvonne Giles, December 2008 | <urn:uuid:88da4d25-3083-404d-80d8-d5a552c80292> | CC-MAIN-2013-20 | http://www.visitlex.com/idea/african-americans.php | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.964578 | 3,402 | 3.40625 | 3 |
(1) The section of a pedestal between base and surbase. (2) The lower portion of the wall of a room, decorated diffrently from the upper section.
The dance of death, a favorite late medieval picture subject. It generally shows skeletons forcing the living to dance with them, usually in matching pairs, e.g. a live priest dancing with a skeleton priest. Holbein's woodcut series the Dance of Death is one of the most famous.
Refers to a style of painting that developed in Regensburg, Germany, and elsewhere along the Danube river during the Renaissance and Reformation. It is characterized by a renewed interest in medieval piety, an expressive use of nature, the relationship of the human figure and events to nature, and the introduction of landscape as a primary theme in art. The term was coined by Theodor von Frimmel (1853-1928), who believed that painting in the Danube River region around Regensburg, Passau, and Linz possessed common characteristics; the style seems to exist even though leading artists did not form a school in the usual sense of the term, since they did not work in a single workshop or in a particular centre. Major artists whose work represents the style include Lucas Cranach the Elder, Albrecht Altdorfer and Wolf Huber.
a minister who was below the rank of priest in the Catholic, Anglican and Orthodox churches. Deacons originally cared for both the sick and the poor in early Christian communities.
the representation of Christ enthroned in glory as judge or ruler of the world, flanked by the Virgin Mary and John the Baptist acting as intercessors.
in medieval art a picture, often an altarpiece, consisting of two folding wings without a fixed central area.
In Renaissance art theory, the design of a painting seen in terms of drawing, which was help to be the basis of all art. The term stresses not the literal drawing, but the concept behind an art work. With the Mannerists the term came to mean an ideal image that a work attempts to embody but can in fact never fully realize. As disegno appeals to the intellect, it was considered far more important that coloure (colour), which was seen as appealing to the senses and emotions.
A technique of painting in which pigments are diluted with water and bound with a glue. It was usually used for painting wall decorations and frescoes, though a few artists, notably Andrea Mantegna (1430/31-1506), also used it on canvas.
in architecture, hemispherical structure evolved from the arch, usually forming a ceiling or roof.
A Roman Catholic order of mendicant friars founded by St. Dominic in 1216 to spread the faith through preaching and teaching. The Dominicans were one of the most influential religious orders in the later Middle Ages, their intellectual authority being established by such figures as Albertus Magnus and St.Thomas Aquinas. The Dominicans played the leading role in the Inquisition.
a patron who commissioned a work of art for a church. Donors sometimes had their portraits included in the work they were donating as a sign of piety.
A male garment, formerly worn under armour, that from the 15th century referred to a close-fitting jacket.
A ceramic product invented in England around 1720, which belongs to the category of fine stoneware. The porous white bodies are made of fired raw materials containing clay and kaolin as well as quartz, feldspar, and talc. A transparent glaze is applied upon the first or second firing. Earthenware, which is suitable for everyday use, is distinguished by its light, creamy surface. The most famous example of this category is made by the English firm of Wedgwood (since 1780).
Stand on which a painting is supported while the artist works on it. The oldest representation of an easel is on an Egyptian relief of the Old Kingdom (c. 2600-2150 BC). Renaissance illustrations of the artist at work show all kinds of contrivances, the commonest being the three-legged easel with pegs, such as we still use today. Light folding easels were not made until the 18th and 19th centuries, when painters took to working out of doors. The studio easel, a 19th-century invention, is a heavy piece of furniture, which runs on castors or wheels, and served to impress the c1ients of portrait painters. Oil painters need an easel which will support the canvas almost vertically or tip it slightly forward to prevent reflection from the wet paint, whereas the watercolourist must be able to lay his paper nearly flat so that the wet paint will not run down. The term 'easel-painting' is applied to any picture small enough to have been painted on a standard easel.
The words of Pontius Pilate in the Gospel of St. John (19, 5) when he presents Jesus to the crowds. Hence, in art, a depiction of Jesus, bound and flogged, wearing a crown of thorns and a scarlet robe.
In portraiture, a pose in which the sitter faces the viewer directly; full face.
Coloured glass in powder form and sometimes bound with oil, which is bonded to a metal surface or plaque by firing.
A print made from a metal plate that has had a design cut into it with a sharp point. Ink is smeared over the plate and then wiped off, the ink remaining in the etched lines being transferred when the plate is pressed very firmly onto a sheet of paper.
A combining of several media grouped together to form a composite art work. Chapels were among the most notable Renaissance ensembles, sometimes combining panel painting, fresco, sculpture, and architecture.
In classical architecture, the part of a building between the capitals of the columns and the roof. It consists of the architrave, the frieze, and the cornice.
Pictures or tables with reliefs and inscriptions erected in honour of the deceased in churches or sepulchral chapels.
the science of the end of the world and beginning of a new world, and of the last things,death and resurrection.
the sacrament of Holy Communion, celebrated with bread and wine, the most sacred moment of the Christian liturgy.
The term is used in an Italian context to designate spiritual currents manifest around 1540 which might be said to have occupied the confessional middle ground between Catholicism and Protestantism; hence it does not relate at all to the term 'Evangelical' as used in German or English contexts. It has been applied particularly to the so-called spirituali of the Viterbo circle, notably Cardinal Pole, Vittoria Colonna, Marcantonio Flaminio, Carnesecchi and Ochino, and also to Giulia Gonzaga, Contarini, Giovanni Morone; Gregorio Cortese and Vermigli. Such persons combined a zeal for personal religious renewal with spiritual anxieties akin to those of Luther, to which they sought an answer in the study of St Paul and St Augustine; convinced of the inefficacy of human works, they stressed the role of faith and the all-efficacy of divine grace in justification. Few of them broke with the Catholic Church.
Tin-glazed European earthenware, particularly ware made in France, Germany, Spain, and Scandinavia. It developed in France in the early 16th century, was influenced by the technique and the designs of Italian maiolica, and is named for Faenza, Italy, which was famous for maiolica. It is distinguished from tin-glazed earthenware made in Italy, which is called "maiolica," and that made in the Netherlands and England, which is called "delftware." It has no connection to the ancient objects or material also named faience, which was developed in the Near East ca. 4500 BCE.
A title given to those leaders of the early Christian Church whose writings had made an important contribution to the development of doctrine. Saints Ambrose, Augustine, Jerome, and Gregory the Great were often considered the four principal Fathers of the Church.
Ancient Roman god of nature, protector of shepherds, farmers, fields and livestock. Equated with the Greek god Pan, he is frequently depicted with a goats legs and horns.
Architectural ornaments consisting of fruit, leaves, and flowers suspended in a loop; a swag.
In painting, representation of a rural feast or open-air entertainment. Although the term fête galante ("gallant feast") is sometimes used synonymously with fête champêtre, it is also used to refer to a specific kind of fête champêtre: a more graceful, usually aristocratic scene in which groups of idly amorous, relaxed, well-dressed figures are depicted in a pastoral setting.
of a column or pillar, carved with closely spaced parallel grooves cut vertically.
the Four Horsemen in the Revelation of St John (Rev 6, 2 - 8), which contains the description of the end of the world and the Second Coming of Christ. The Horsemen personify the disasters about to happen to mankind, such as plague, war, famine and death. Their attributes are the bow, sword and set of balances. In some sculptures the first rider is identified as Christ by a halo. The colour of his horse is white, that of the others red, black and dun.
A Roman Catholic order of mendicant friars founded by St. Francis of Assisi (given papal approval in 1223). Committed to charitable and missionary work, they stressed the veneration of the Holy Virgin, a fact that was highly significant in the development of images of the Madonna in Italian art. In time the absolute poverty of the early Franciscans gave way to a far more relaxed view of property and wealth, and the Franciscans became some of the most important patrons of art in the early Renaissance.
Wall painting technique in which pigments are applied to wet (fresh) plaster (intonaco). The pigments bind with the drying plaster to form a very durable image. Only a small area can be painted in a day, and these areas, drying to a slightly different tint, can in time be seen. Small amounts of retouching and detail work could be carried out on the dry plaster, a technique known as a secco fresco.
Save in Venice, where the atmosphere was too damp, fresco painting was the habitual way of decorating wall surfaces in Italy, both in churches and in private and public palaces. During the 16th century a liking for the more brilliant effect of large canvases painted in oils, and to a lesser extent for tapestries, diminished the use of frescoes save for covering upper walls, covings and ceilings. The technique of buon fresco, or true fresco, involved covering the area with a medium-fine plaster, the intonaco, just rough enough to provide a bond (sometimes enhanced by scoring) for the final layer of fine plaster. Either a freehand sketch of the whole composition (sinopia) was drawn on the wall, or a full-scale cartoon was prepared and its outlines transferred to the intonaco by pressing them through with a knife or by pouncing - blowing charcoal dust through prickholes in the paper. Then over the intonaco enough of the final thin layer was applied to contain a day's work. That portion of the design was repeated on it either by the same methods or freehand, and the artist set to work with water-based pigments while the plaster was still damp; this allowed them to sink in before becoming dry and fixed. (Thus 'pulls' or slices of frescoes could be taken by later art thieves without actually destroying the colour or drawing of the work.) It is usually possible to estimate the time taken to produce a fresco by examining the joins between the plastered areas representing a day's work. Final details, or effects impossible to obtain in true fresco pigments, could be added at the end in 'dry' paints, or fresco secco, a technique in which pigment was laid on an unabsorbent plaster; the best known example of an entire composition in fresco secco is Leonardo's Last Supper.
The highest order the English monarch can bestow. It was founded by Edward III in 1348. The blue Garter ribbon is worn under the left knee by men and on the upper left arm by women. The motto is Honi soit qui mal y pense (Evil to those who think evil).
in classical Rome, a person's invisible tutelary god. In art from the classical period onwards, the low-ranking god was depicted as a winged, usually childish figure.
In a broad sense, the term is used to mean a particular branch or category of art; landscape and portraiture, for example, are genres of painting, and the essay and the short story are genres of literature.
The depiction of scenes from everyday life. Elements of everyday life had long had a role in religious works; pictures in which such elements were the subject of a painting developed in the 16th century with such artists as Pieter Bruegel. Then Carracci and Caravaggio developed genre painting in Italy, but it was in Holland in the 17th century that it became an independent form with its own major achievements, Vermeer being one of its finest exponents.
A term applied to the 14th-century followers of Giotto. The best-known of the 'Giotteschi' are the Florentines Taddeo Gaddi, Maso di Banco, Bernardo Daddi, and to a lesser extent the Master of St Cecilia. Giotto's most loyal follower was Maso, who concentrated on the essential and maintained the master's high seriousness.
French term used from the 15th century onwards for a lying or recumbent effigy on a funerary monument. The gisant typically represented a person in death (sometimes decomposition) and the gisant position was contrasted with the orant, which represented the person as if alive in a kneeling or praying position. In Renaissance monuments gisants often formed part of the lower register, where the deceased person was represented as a corpse, while on the upper part he was represented orant as if alive.
paint applied so thinly that the base beneath it is visible through the layer.
(1) The supernatural radiance surrounding a holy person.
(2) To have the distinction of one's deeds recognized in life and to be revered for them posthumously: this was glory. The nature of true gloria was much discussed, whether it must be connected with the public good, whether the actions that led to it must conform with Christian ethics, how it differed from notoriety. The concept did not exclude religious figures (the title of the church of the Frari in Venice was S. Maria Gloriosa), but it was overwhelmingly seen in terms of secular success and subsequent recognition, as determining the lifestyles of the potent and the form of their commemoration in literature, in portraits and on tombs. As such, it has been taken as a denial of medieval religiosity ('sic transit gloria mundi'), and thus a hallmark of Renaissance individual ism; as a formidable influence on cultural patronage; and as spurring on men of action, as well as writers and artists, to surpass their rivals - including their counterparts in antiquity.
French tapestry manufactory, named after a family of dyers and clothmakers who set up business on the outskirts of Paris in the 15th century. Their premises became a tapestry factory in the early 17th century, and in 1662 it was taken over by Louis XIV, who appointed Lebrun Director. Initially it made not only tapestries but also every kind of product (except carpets, which were woven at the Savonnerie factory) required for the furnishing of the royal palaces — its official title was Manufacture royale des meubles de la Couronne. The celebrated tapestry designed by Lebrun showing Louis XIV Visiting the Gobelins (Gobelins Museum, Paris, 1663-75) gives a good idea of the range of its activities. In 1694 the factory was closed because of the king's financial difficulties, and although it reopened in 1699, thereafter it made only tapestries. For much of the 18th century it retained its position as the foremost tapestry manufactory in Europe. 0udry and Boucher successively held the post of Director (1733-70). The Gobelins continues in production today and houses a tapestry museum.
a noble chivalric order, still in existence today, founded by Duke Philip the Good of Burgundy in 1430 in honor of the Apostle Andrew, for the defence of the Christian faith and the Church. In allusion to the legend of Jason and the Argonauts, the symbol of the order is a golden ram's fleece drawn through a gold ring.
In painting and architecture, a formula meant to provide the aesthetically most satisfying proportions for a picture or a feature of a building. The golden section is arrived at by dividing a line unevenly so that the shorter length is to the larger as the larger is to the whole. This ratio is approximately 8:13. The golden section (sometimes known as the golden mean), which was thought to express a perfect harmony of proportions, played an important role in Renaissance theories of art.
Italian gonfaloniere ("standard bearer"), a title of high civic magistrates in the medieval Italian city-states.
In Florence the gonfaloniers of the companies (gonfalonieri di compagnia) originated during the 1250s as commanders of the people's militia. In the 1280s a new office called the gonfalonier of justice (gonfaloniere di giustizia) was instituted to protect the interests of the people against the dominant magnate class. The holder of this office subsequently became the most prominent member of the Signoria (supreme executive council of Florence) and formal head of the civil administration. In other Italian cities, the role of the gonfaloniers was similar to that in Florence. Gonfaloniers headed the militia from the various city quarters, while the gonfalonier of justice often was the chief of the council of guild representatives.
The kings of France traditionally bore the title gonfalonier of St. Denis. The honorary title of gonfalonier of the church (vexillifer ecclesiae) was conferred by the popes, from the 13th until the 17th century, on sovereigns and other distinguished persons.
Gothic, which may well have originated with Alberti as a derogatory term and which certainly corresponds to Vasari's 'maniera tedesca' ('German style'), is properly the descriptive term for an artistic style which achieved its first full flowering in the Ile de France and the surrounding areas in the period between c. 1200 and c. 1270, and which then spread throughout northern Europe. It is characterized by the hitherto unprecedented integration of the arts of sculpture, painting, stained glass and architecture which is epitomized in the great cathedrals of Chartres, Amiens, and Reims or in the Sainte Chapelle in Paris. In all the arts the predominantly planar forms of the Romanesque are replaced by an emphasis on line. There is a transcendental quality, whether in the soaring forms of the pointed arches or in the new stress on the humanity of Christ, which similarly distinguishes it from the preceding Romanesque style.
In thinking of Nicola (d. c. 1284) or Giovanni Pisano (d. after 1314) there is same danger of forgetting what had happened in French sculpture half a century or more earlier, and likewise it is hard to remember that the spectacular achievements of early Renaissance art are a singularly localized eddy in the continuing stream of late gothic European art. By northern European standards few Italian works of art can be called gothic without qualification, and the story of 13th and 14th century Italian architecture is as much one of resistance to the new style as of its reception, whether directly from France or through German or central European intermediaries. In sculpture and in painting, the Italian reluctance to distort the human figure, conditioned by a never wholly submerged awareness of the omnipresent antique heritage, gives a special quality to the work of even those artists such as Giovanni Pisano or Simone Martini who most closely approached a pure gothic style.
Nevertheless, the vitalizing role of Northern gothic art throughout the early Renaissance and the period leading up to it should never be underestimated. The artistic, like the cultural and commercial, interaction was continuous and much of the Italian achievement is incomprehensible if seen in isolation. It is not merely at the level of direct exchanges between one artist and another, or the influence of one building; painting, manuscript or piece of sculpture upon another, that the effects are to be felt. The streaming quality of line which is so characteristic of Brunelleschi's early Renaissance architecture surely reflects a sensitivity to the gothic contribution which is entirely independent of, and lies much deeper than, the superficial particularities of form.
The counterflow of influence and inspiration from South to North must likewise not be underrated. In particular, the contribution of Italian painters from Duccio and Simone Martini onwards is central to the evolution of the so-called International Gothic style developing in Burgundy, Bohemia and north Italy in the late 14th and early 15th centuries.
Gouache is opaque watercolour, known also as poster paint and designer's colour. It is thinned with water for applying, with sable- and hog-hair brushes, to white or tinted paper and card and, occasionally, to silk. Honey, starch, or acrylic is sometimes added to retard its quick-drying property. Liquid glue is preferred as a thinner by painters wishing to retain the tonality of colours (which otherwise dry slightly lighter in key) and to prevent thick paint from flaking. Gouache paints have the advantages that they dry out almost immediately to a mat finish and, if required, without visible brush marks. These qualities, with the capacities to be washed thinly or applied in thick impasto and a wide colour range that now includes fluorescent and metallic pigments, make the medium particularly suited to preparatory studies for oil and acrylic paintings. It is the medium that produces the suede finish and crisp lines characteristic of many Indian and Islamic miniatures, and it has been used in Western screen and fan decoration and by modern artists such as Rouault, Klee, Dubuffet, and Morris Graves.
Term applied to the lofty and rhetorical manner of history painting that in academic theory was considered appropriate to the most serious and elevated subjects. The classic exposition of its doctrines is found in Reynolds's Third and Fourth Discourses (1770 and 1771), where he asserts that 'the gusto grande of the Italians, the beau idéal of the French, and the great style, genius, and taste among the English, are but different appellations of the same thing'. The idea of the Grand Manner took shape in 17th-century Italy, notably in the writings of Bellori. His friend Poussin and the great Bolognese painters of the 17th century were regarded as outstanding exponents of the Grand Manner, but the greatest of all was held to be Raphael.
An extensive journey to the Continent, chiefly to France, the Netherlands, and above all Italy, sometimes in the company of a tutor, that became a conventional feature in the education of the English gentleman in the 18th century. Such tours often took a year or more. It had a noticeable effect in bringing a more cosmopolitan spirit to the taste of connoisseurs and laid the basis for many collections among the landed gentry. It also helped the spread of the fashion for Neoclassicism and an enthusiasm for Italian painting. Among the native artists who catered for this demand were Batoni, Canaletto, Pannini, and Piranesi, and British artists (such as Nollekens) were sometimes able to support themselves while in Italy by working for the dealers and restorers who supplied the tourist clientele. There was also a flourishing market in guide books.
A cross with four arms of equal length.
Term current with several different meanings in the literature of the visual arts. In the context of the fine arts, it most usually refers to those arts that rely essentially on line or tone rather than colour — i.e. drawing and the various forms of engraving. Some writers, however, exclude drawing from this definition, so that the term 'graphic art' is used to cover the various processes by which prints are created. In another sense, the term — sometimes shortened to 'graphics' — is used to cover the entire field of commercial printing, including text as well as illustrations.
A painting done entirely in one colour, usually gray. Grisaille paintings were often intended to imitate sculpture.
Italian political terms derived from the German Welf, a personal and thence family name of the dukes of Bavaria, and Waiblingen, the name of a castle of the Hohenstaufen dukes of Swabia apparently used as a battle cry. Presumably introduced into Italy 1198-1218, when partisans of the Emperor Otto IV (Welf) contested central Italy with supporters of Philip of Swabia and his' nephew Frederick II, the terms do not appear in the chronicles until the Emperor Frederick's conflict with the Papacy 1235-50, when Guelf meant a supporter of the Pope and Ghibelline a supporter of the Empire. From 1266 to 1268, when Naples was conquered by Charles of Anjou, brother of Louis IX, the French connection became the touchstone of Guelfism, and the chain of Guelf alliances stretching from Naples, through central Italy, to Provence and Paris, underwritten by the financial interests of the Tuscan bankers, became an abiding feature of European politics. The Italian expeditions of Henry of Luxemburg (1310-13) and Lewis of Bavaria (1327-29) spread the terms to northern Italy, with the Visconti of Milan and the della Scala of Verona emerging as the leading Ghibelline powers. Attempts by Guelf propagandists to claim their party as the upholder of liberty and their opponents as the protagonists of tyranny rarely coincide with the truth: power politics, then as now, generally overrode ideology in inter-state affairs.
Factional struggles had existed within the Italian states from time immemorial, the parties taking a multitude of local names. In Florence, however, Guelf and Ghibelline were applied to the local factions which supposedly originated in a feud between the Buondelmonte and Amidei clans, c. 1216. In 1266-67 the Guelf party, which had recruited most of the merchant class, finally prevailed over the predominantly noble Ghibellines; after this, internal factions in Florence went under other names, like the Blacks and the Whites who contested for control of the commune between 1295 and 1302. Meanwhile the Parte Guelfa had become a corporate body whose wealth and moral authority as the guardian of political orthodoxy enabled it to play the part of a powerful pressure group through most of the 14th century. After the War of the Eight Saints, the influence of the Parte declined rapidly. Although its palace was rebuilt c. 1418-58 to the designs of Brunelleschi, it had no part in the conflicts surrounding the rise of the Medici régime.
An association of the masters of a particular craft, trade or profession (painters, goldsmiths, surgeons, and so on) set up to protect its members' rights and interests. Such guilds existed in virtually every European city in the 16th century. The guild also monitored standards of work, acted as a court for those who brought their trade into disrepute, and provided assistance to members in need.
Guilds were essentially associations of masters in particular crafts, trades, or professions. In Italy they go back a long way; there is documentary evidence of guilds in 6th century Naples. In origin they were clubs which observed religious festivals together and attended the funerals of their members, but in time they acquired other functions. Their economic function was to control standards and to enforce the guild's monopoly of particular activities in a particular territory. Their political function was to participate in the government of the city-state. In some cities, notably Florence in the 14th century, only guildsmen were eligible for civic office, thus excluding both noblemen (unless they swallowed their pride and joined, as some did), and unskilled workers like the woolcombers and dyers. In Florence in 1378 these groups demanded the right to form their own guilds, and there were similar movements of protest in Siena and Bologna.
Guilds were also patrons of art, commissioning paintings for guildhalls, contributing to the fabric fund of cathedrals and collaborating on collective projects like the statues for Orsanmichele at Florence. The guilds were not equal. In Florence, the 7 'Greater Guilds', including such prestigious occupations as judges and bankers, outranked the 14 'Lesser Guilds', and in general the guild hierarchy was reflected in the order of precedence in processions. The great age of the guilds was the 13th and 14th centuries. The economic recession after 1348 meant fewer opportunities for journeymen to become masters, and greater hostility between master and man. The shift from trade to land in the 15th and 16th centuries meant a decline in the social standing of the crafts. In some towns, such as Brescia and Vicenza, guild membership actually became a disqualification instead of a qualification for municipal office. The guilds lost their independence and became instruments of state control. In 16th century Venice, for example, they were made responsible for supplying oarsmen for the galleys of the state.
Dutch painters who worked in The Hague between 1860 and 1900, producing renderings of local landscapes and the daily activities of local fisherman and farmers in the style of Realism. In this they extended the traditional focus on genre of the 17th-century Dutch masters with the fresh observation of their contemporary French counterparts, the Barbizon school. The group included Jozef Israëls; Hendrik Willem Mesdag; Jan Hendrik Weissenbruch; Jacob Maris, Matthijs Maris, and Willem Maris; Johannes Bosboom; and Anton Mauve.
In a drawing, print or painting, a series of close parallel lines that create the effect of shadow, and therefore contour and three-dimensionality In crosshatching the lines overlap.
the study of the meaning of emblems and coats of arms, with the rules governing their use.
The heretical movements affecting Italy between the mid-12th and the mid-14th century had their main impact in an area covering the north-west of the peninsula and southern France: it is not possible to speak of distinct Italian and meridional French movements. The authentically Christian movements which were expelled from the Catholic Church must in the first instance be distinguished from Catharism, which represented an infiltration by the originally non-Christian dualist system of Manichaeanism; from the start, the Cathars were an anti-church. By contrast, the Waldensian, Spiritual and Joachimite movements appeared initially as vital manifestations of Catholicism; only after their condemnation by the ecclesiastical authorities do they seem to have developed notably eccentric doctrines and to have described themselves as the true Church in opposition to the institutional Church; they had a recognizable kinship with movements that remained within the pale of orthodoxy.
These Christian heresies had in common an attachment to the ideal of apostolic poverty, which came to be seen by the ecclesiastical authorities as a challenge to the institutionalized Church. The Waldensians or Valdesi (not to be confused with Valdesiani, the followers of Juan de Valdes, d. 1541) took their origin from the Poor Men of Lyons, founded by Peter Valdes or Waldo in the 1170s. They were distinguished by a strong attachment to the Bible and a desire to imitate Christ's poverty. At first approved by the Papacy as an order of laymen, they were condemned in 1184. Likewise condemned was the rather similar Lombard movement of the Humiliati. One stream of these remained as an approved order within the Catholic Church, while others merged with the Waldensians. The Waldensians came to teach that the sacraments could be administered validly only by the pure, i.e: only by Waldensian superiors or perfecti practising evangelical poverty. Alone among the heretical sects existing in Italy they were organized as a church, and regarded themselves as forming, together with brethren north of the Alps, one great missionary community. They spread all over western and central Europe but in the long term they came to be largely confined to the Rhaetian and Cottian Alps (the Grisons and Savoy). The Italian Waldensians in the 16th century resisted absorption by Reformed Protestantism.
The early Franciscans might be regarded as a movement, similar in character to the Poor Men of Lyons, which was won for the cause of Catholic orthodoxy. However, divisions within the order over the issue of poverty led to religious dissidence. The Spirituals held up the ideal of strict poverty as obligatory for Franciscans and, indeed, normative for churchmen; following the Papacy's recognition of the Franciscan order as a property-owning body in 1322-23, their position became one of criticism of the institutional Church as such. Their heresies came to incorporate the millenarian doctrines of the 12th century abbot Joachim of Fiore. He had prophesied a coming age of the Holy Spirit ushered in by Spiritual monks; his heretical followers prophesied a new Spiritual gospel that would supersede the Bible. Joachimite Spiritualists came to see the pope, head of the 'carnal Church', as Antichrist. The main impact of the movement upon the laity was in southern France; in Italy it was an affair of various groups of fraticelli de paupere vita (little friars of the poor life), mainly in the south.
A courtesan of ancient Greece. There may have been one or two hetaira called Lais in ancient Corinth. One was the model of the celebrated painter Apelles.
prepared throne, Preparation of the Throne, ready throne or Throne of the Second Coming is the Christian version of the symbolic subject of the empty throne found in the art of the ancient world. In the Middle Byzantine period, from about 1000, it came to represent more specifically the throne prepared for the Second Coming of Christ, a meaning it has retained in Eastern Orthodox art to the present.
Painting concerned with the representation of scenes from the Bible, history (usually classical history), and classical literature. From the Renaissance to the 19th century it was considered the highest form of painting, its subjects considered morally elevating.
a representation of the Virgin and Child in a fenced garden, sometimes accompanied by a group of female saints. The garden is a symbolic allusion to a phrase in the Song of Songs (4:12): 'A garden enclosed is my sister, my spouse'.
group of American landscape painters, working from 1825 to 1875. The 19th-century romantic movements of England, Germany, and France were introduced to the United States by such writers as Washington Irving and James Fenimore Cooper. At the same time, American painters were studying in Rome, absorbing much of the romantic aesthetic of the European painters. Adapting the European ideas about nature to a growing pride in the beauty of their homeland, for the first time a number of American artists began to devote themselves to landscape painting instead of portraiture. First of the group of artists properly classified with the Hudson River school was Thomas Doughty; his tranquil works greatly influenced later artists of the school. Thomas Cole, whose dramatic and colourful landscapes are among the most impressive of the school, may be said to have been its leader during the group's most active years. Among the other important painters of the school are Asher B. Durand, J. F. Kensett, S. F. B. Morse, Henry Inman, Jasper Cropsey, Frederick E. Church, and, in his earlier work, George Inness.
philosophical movement which started in Italy in the mid-14th century, and which drew on antiquity to make man the focal point. In humanism, the formative spiritual attitude of the Renaissance, the emancipation of man from God took place. It went hand in hand with a search for new insights into the spiritual and scientific workings of this world. The humanists paid particular attention to the rediscovery and nurture of the Greek and Latin languages and literature. To this day the term denotes the supposedly ideal combination of education based on classical erudition and humanity based on observation of reality. | <urn:uuid:8fa305ad-c183-4ce9-8443-401b5e3befd9> | CC-MAIN-2013-20 | http://www.wga.hu/database/glossary/glossar2.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.967775 | 7,788 | 3.015625 | 3 |
Because the imminent demise or depletion of commercially usable natural forests can be so readily foreseen in many Pacific Island countries (Watt 1980, 297), governments and development agencies have in several places promoted either some form of restocking or enrichment of commercially logged areas or the establishment of forest plantations on degraded grassland sites. Not all these efforts can be classified as agroforestry, strictly speaking; but in the Pacific context, as in most of the tropical world, the traditional, if transient, shift of land use back and forth between forest and agriculture on any particular site makes it relevant to consider what at first glance appear to be purely forestry projects.
Many of the timber species institutionally promoted have been exotics such as Caribbean pine (Pinus caribaea), West Indian mahogany (Swietenia macrophylla), cordia (Cordia alliodora), and Eucalyptus spp., although some indigenous Pacific species such as Albizia falcataria, Agathis spp., Araucaria spp., and Endospermum spp. have been successfully established, often as exotics in areas beyond their natural range. Many other species - including West Indian cedar (Cedrela odorata), the silky oak (Grevillea robusta), teak (Tectona grandis), mahogany (Swietenia mahagoni), toon tree (Toona australis), cadamba (Anthocephalus chinensis), and Albizia lebbeck, along with several indigenous trees - have also been the subject of trials and planted to various degrees throughout the islands.
Firewood and multi-purpose species that have been successfully introduced include Leucaena leucocephala, Erythrina spp., Casuarina spp., and Gliricidia sepium, and, to a lesser extent, Securinega samoana and Adenanthera pavonina. Other species, all of which have been planted experimentally and which seem to grow successfully, but which have not yet become so well established, include Cassia, Acacia, and Calliandra spp. Apart from timber and fuel wood, the major multi-purpose objectives of such plantings are site reclamation and amelioration, erosion control, wind protection, shade, multi-purpose construction and handicrafts, nurse cropping, fodder, green manure, and food.
The indigenous casuarinas, particularly Casuarina equisetifolia, have also shown considerable promise for reforestation programmes, and have been planted in Tonga in land reclamation projects, in the Cook Islands for the rehabilitation of degraded lands, and on atolls as sources of fuel wood and to protect coconut plantations from saltwater damage. C. oligodon and C. papuana are traditionally used for reforestation and to enrich fallow land in Papua New Guinea, and are now promoted in some areas for land rehabilitation and as shade plants for coffee.
Pine planting in relation to agroforestry
Of the total area of timber plantations in the Pacific, well over 50 per cent is accounted for by Caribbean pine (Pinus caribaea). The largest area of pine planting is in Fiji, where that country's Pine Commission, together with the Forestry Department, has established over 50,000 ha of plantation since 1960, mostly on degraded anthropogenic grasslands (Drysdale 1988a, 110; Watt 1980, 301). Some pine timber is used locally, but the wood is intended mainly for export, and a wood-chipping mill is now in operation. In the mid-1960s, under a programme now discontinued, woodlots of Pinus caribaea on smallholder sugar-cane farms were promoted by the colonial government.
Sized from 0.4 to 2 ha, these woodlots were planted on steeper non-cane areas of farms to control erosion, to provide on-farm supplies of timber and fuel wood, and to allow undergrazing by farm animals (Eaton 1988b, personal communication). Apart from this woodlot grazing and the grazing of cattle in association with larger pine plantations (described below), there has been no institutional support for any form of intercropping or other agroforestry activities in pine plantations (Drysdale 1988b).
Similarly, in the limited areas of pine planting in New Caledonia, Western Samoa, Tonga, and the Cook Islands, there has been little or no link to agroforestry in such programmes, with the main focus being on creating a timber resource, land improvement, erosion control, and employment creation in rural areas.
In highland Papua New Guinea large areas of degraded grassland have been planted with pines (Pinus spp.) and Araucaria spp. Intercropping activities are few, consisting of coffee and cardamon grown on a trial and demonstration basis (Howcroft 1983).
In Vanuatu, P. caribaea var. hondurensis is the main species planted in forest plantations on seasonally dry and highly degraded sites on the southern islands of Aneityum and Erromango, where some 550 ha had been established up to April 1985. The commercial viability of such plantings is still uncertain, however, owing to poor access to markets and high transport costs. On Erromango, the high cost of clearing land of the indigenous pioneering species Acacia spirorbis has stopped the development of pine plantations (Neil 1986a).
Non-pine forestry in relation to agroforestry
To judge from programmes in Papua New Guinea, the Solomon Islands, Vanuatu, Tonga, and Western Samoa, there seems to be greater promise and greater institutionalized promotion of intercropping with other, primarily broadleaved evergreen, species than has been the case with pines.
In Papua New Guinea, where extensive areas of Eucalyptus deglupta have been planted, cocoa and coffee have been successfully grown at 4 m x 4 m and 3 m x 3 m spacing, respectively, in conjunction with E. deglupta planted at 10 m x 10 m (Jacovelli and Neil 1984, 10).
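As a rough illustrative calculation (the source reports spacings only, not densities), a 10 m x 10 m grid corresponds to about 100 E. deglupta stems per hectare (10,000 m² divided by 100 m² per tree), while the 4 m x 4 m cocoa and 3 m x 3 m coffee spacings correspond to roughly 625 and 1,100 plants per hectare respectively - an indication of how lightly stocked the timber overstorey is relative to the intercrops beneath it.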
Also in Papua New Guinea, severe environmental degradation resulting from rapid urban expansion and associated subsistence gardening and "fuel-wood mining" prompted the cities of Lae and Port Moresby to institute fuel-wood-planting programmes. In Lae, in 1978, it was decided to plant 200 ha of sloping land (20°-30°) in Leucaena leucocephala for firewood and to intercrop fuel-wood species with annual food crops in zones designated for subsistence food gardening. The project, which was allocated K250,000 (US$275,000) over six years, had a management component coupled with a public education programme and a team of local government rangers to control gardening and to police the area (King 1987). Follow-on projects were planned but not carried out because of lack of funding. By 1988 the project had ceased to operate, and the original plantings of some 100 ha of L. leucocephala, Acacia auriculiformis, and Eucalyptus spp. and 5 ha of "agroforestry plantings" of fuel-wood species with food crops had been cut down or removed completely (King 1987).
In Vanuatu, Cordia alliodora, a hardwood native to Central America, has been the main commercial silvicultural species since the mid-1970s, with over 1,000 ha planted on 12 islands as of 1984 (Neil 1984). Cordia was first planted on various islands in 5-10-ha blocks called Local Supply Plantations (LSP). As the potential contribution of forestry to rural and national development became evident, larger, export-oriented Industrial Forest Plantations (IFP) were established on the islands of Pentecost, Erromango, and Aneityum (Jacovelli and Neil 1984). The rapid expansion of IFPs, sometimes with plantings of up to 200 ha per year on single sites, led to unprecedented demands for land and aroused fears among landowners, especially on Pentecost, that these silvicultural activities would make land unavailable for planting subsistence and commercial crops. This prompted the Vanuatu Forest Service to establish, on Pentecost in 1984, demonstration plots growing a wider range of subsistence and cash crops within forestry plantations of Cordia alliodora (Jacovelli and Neil 1984).
Crops established between line plantings of Cordia alliodora included 8 sweet potato cultivars, 6 cassava cultivars, 13 aroid cultivars from Colocasia esculenta, Xanthosoma sagittifolium, and Alocasia macrorrhiza, 12 yam cultivars, kava (Piper methysticum), and trials with coffee (Arabica and Robusta), cocoa, and cardamon. In addition to these trials, subsistence gardens have also been established under Cordia alliodora by both local landowners and forest workers alike (Jacovelli and Neil 1984, 8).
Because C. alliodora may be severely attacked by root rot (Phellinus noxius) in some conditions, and does not perform well on some sites, other species currently being tried in Vanuatu include Terminalia brassii, T. calamansanai, Eucalyptus deglupta, Swietenia macrophylla, Toona australis, and Cedrela odorata. However, the barks of both T. brassii and E. deglupta are palatable to cattle (Jacovelli and Neil 1984, 10; MacFarlane 1980). The species showing greatest potential as an alternative to C. alliodora may be S. macrophylla; if grown with nurse species to reduce pest problems, intercropping should be possible during the early years of the rotation (Neil 1986b).
Several other systematic experiments on tree species, both exotic and indigenous, have been carried out in Vanuatu in a search for species especially suitable for fuel wood, timber, or pulpwood, but none of this research was connected with agroforestry. Research on agroforestry has focused almost exclusively on "cash crops which appear to have great potential, particularly coffee and cocoa, and possibly kava and cocoa" (Jacovelli and Neil 1984, 11).
In Fiji, some 22,953 ha of tropical hardwood forests have been planted as of mid-1986. Of these, 14,987 ha are West Indian mahogany (Swietenia macrophylla), 3,058 ha are Cordia alliodora, 2,963 ha are cadamba (Anthocephalus chinensis), 928 ha are Maesopsis eminii, 438 ha are Eucalyptus deglupta, and 202 ha are the indigenous species Endospermum macrophyllum (ADAB 1986). Despite such considerable silvicultural activity in both hardwoods and pines, the planting is essentially monocultural; as the General Manager of the Fiji Pine Commission has stated: "Institutionalized agrosilviculture is non-existent in Fiji at present" (Drysdale 1988b, personal communication).
Tonga's silvicultural activities are more diverse, some being significantly agrosilvicultural. More purely silvicultural activities include a major reforestation programme begun on the island of Eua in the mid-1960s. Over 40 ha of mixed exotic species including Toona australis, Cedrela odorata, Cordia alliodora, Grevillea robusta, Agathis robusta, Pinus caribaea, and Eucalyptus spp., as well as suitable indigenous species, such as Casuarina equisetifolia, Terminalia catappa, and Dysoxylum tongense, were planted on the Eua Forest Farm. Tests of seed stock from throughout the world were also carried out on the farm. Larger areas were subsequently planted, with 104 ha alone being planted in 1979 (Thaman 1984e, 3).
The species most commonly planted in 1984 were Eucalyptus saligna, E. tereticornis, Toona australis, and Pinus caribaea. Seedling pro auction for these species and other timber species, such as Cupressus lusitanica, amounted to 77,491 seedlings (42,427 of which were planted) in 1979 (MAFF 1985, 100-102). Reforestation continues, as the small areas of remaining indigenous forest on Eua are exploited, with the local mill "approaching the end of its productive life as the local hardwood timber supply is cut out and cannot be replaced from the Forest Farm for at least another 10 years" (MAFF 1985, 99). The only truly agroforestry aspect of the Eua silvicultural activities, a taungya system of combined tree-planting and temporary gardens, was phased out because "it has greatly increased pressures for settlement of unsuitable land, and is thus clearly not in the national interest" (MAFF 1985, 100).
A second and continuing agroforestry activity has been the Forestry Extension Programme, which began in the 1960s to produce seedlings for distribution to smallholder farmers for planting in small woodlots or as windbreaks around their agricultural allotments (see chapter 5 on Tongan agroforestry). The major species distributed included Casuarina equisetifolia, Grevillea robusta, Cedrela odorata, Eucalyptus spp., Agathis spp., and Gmelina arborea (Thaman 1984e, 3).
With the establishment of the Extension Nursery at Mataliku on the main island of Tongatapu in 1978, the programme was expanded to include the propagation and distribution of a wide range of timber trees, "cultural" species, and species providing food, medicine, and ornamentation. The considerable interest shown by the people for planting on both rural and town allotments led to a "blossoming of forest extension work" to the point that, in 1978, the nursery could not cope with the demand, which exceeded 8,000 trees per month (MAFF 1979, 99).
According to programme records, as of 1984, at least 155 species had been tested and/or propagated for distribution on Eua and Tongatapu. Of these, 66 were timber species, 45 ornamentals, 32 "cultural" plants of particular importance to the Tongan society, 11 food plants, 6 plants used for coastal protection or land reclamation, 4 for living fences or hedgerows, 3 medicinal plants, and 2 each for windbreaks and firewood. Among the most popular nontimber species were Casuarina equisetifolia (planted as an ornamental, living fence, or wind-break); culturally important sacred or fragrant plants, known locally as akau kakala, such as heilala (Garcinia sessilis), langakali (Aglaia saltatorum), sandalwood, or ahi (Santalum yasi), pua (Fagraea berteriana), pipi (Parinari glaberrima), huni (Phalaria disperma), perfume tree, or mohokoi (Cananga odorata), allspice (Pimenta doica), and Pandanus cultivars; fruit-trees, such as mango, Malay apple (Syzygium malaccense), and macadamia nut (Macadamia integrifolia); and ornamental or shade plants, such as flamboyant, or poinciana (Delonix regia), hibiscus (Hibiscus rosa-sinensis), Cordyline fruticosa, copperleaf, or beefsteak, plant (Acalypha amentacea), bougainvillea (Bougainvillea spp.), poinsettia (Euphorbia pulcherrima), gardenia (Gardenia spp.), and the hedge panaxes (Polyscias spp.) (Thaman 1984e).
The final major area of activity has been the testing and establishment of trees for land reclamation, such as the project to rehabilitate low-lying areas at Sopu to the west of the capital of Nuku'alofa on Tongatapu. Reclamation work at Sopu began in the 1960s, with the planting of Casuarina equisetifolia to stabilize the area, and has continued to the present with extensive plantings of Lumnitzera littorea, Rhizophora mangle, Bruguiera gymnorhiza, Xylocarpus granatum, and other selected species. As recently as 1980, 6 acres of Lumnitzera littorea, 4 acres of Terminalia catappa, and 3 acres of Queensland kauri (Agathis robusta) were planted. The vegetation has reportedly been well-established, with the operation becoming more maintenance than reclamation.
Grazing, usually of cattle, with commercial tree cropping and silviculture consists mainly of the widespread practice of grazing cattle under coconuts or commercial timber species, and the limited grazing of cattle under Leucaena leucocephala or other fuel-wood or multipurpose species.
Livestock under coconuts
The grazing of cattle (primarily beef, but also dairy cattle) under coconuts (in some cases with pasture improvement) is by far the most widespread practice. It has been encouraged throughout the Islands since colonial times, particularly on large coconut estates. In addition to providing meat and dairy products, cattle are seen as effective weed control and fertilization agents, thus facilitating plantation management and the collection of fallen nuts.
Although primarily promoted on large, often foreign or state controlled estates or plantations, some governments, such as those in the Solomon Islands, Tonga, and Niue, have encouraged smallholder grazing of cattle under coconuts and other trees. In the case of Tonga, smallholder agriculturalists have been encouraged to fence limited portions of their 3.3 ha bush allotments to graze cattle, and sometimes horses, under coconuts and other tree crops and protected trees, or, alternatively, to tether animals to trees and graze on a rotational basis.
The practice has been particularly important in Vanuatu (both before and after independence in 1980) and New Caledonia, where beef cattle production is a major activity. Beef cattle production became so important in Vanuatu, prior to independence, that some plantations were turned into cattle properties. The importance of cattle grew in the 1950s, when steeply rising labour costs made planters increasingly dependent on cattle to keep their plantations clean. At one period in the 1950s, herds became larger than the plantations could support, especially during dry spells, and by the end of the decade, town butcheries had opened in both Port Vila and Luganville, the two main towns. By the end of the 1960s, copra production had become no more than a sideline on a number of plantations (Brookfield with Hart 1971, 164165).
In Fiji, in 1973, 10.5 per cent of the local beef requirements were supplied by the 9.9 per cent of the cattle population grazed under coconuts (MAF 1973; Manner 1983). This is particularly significant given the large proportion of range-fed cattle raised on extensive large-scale developments in the dry zones of Fiji. Papua New Guinea, the Solomon Islands, Vanuatu, and New Caledonia in Melanesia, and Western Samoa and French Polynesia have also actively encouraged cattle under coconuts with trials having been conducted on optimum stocking rates and pasture improvement. Much of the Western Samoa Trust Estates (WSTEC) Mulifanua Copra Plantation, reportedly one of the largest copra plantations in the world (Carter 1984), is undergrazed by cattle.
The potential for the formal promotion of large-scale grazing of cattle under coconuts is greatest on the larger islands of Melanesia and Polynesia. On smaller islands, such as those in Tonga and the Cook Islands, where high population densities and land scarcity make more extensive agrosilvipastoral developments less relevant, small-scale rotational undergrazing of tethered animals is more appropriate. In Nine, where population density is low because of emigration to New Zealand, there have been problems of overgrazing and lack of fodder during times of drought- for example, during the severe drought of 1977-1978, when hay had to be imported from New Zealand.
Richardson (1983, 59) cautions that grazing under coconuts can create problems of soil compaction and, especially in the case of free grazing, preclude intercropping, which should take precedence in areas with limited land resources. As shown by studies in Papua New Guinea and elsewhere, smallholder beef cattle production can have harmful impacts on subsistence cropping (Grossman 1981). Where cash cropping or subsistence production is feasible, Richardson (1983, 59) argues that intercropping should take precedence over grazing under coconuts.
Cattle under timber species
The grazing of cattle under commercial timber species has been actively promoted in Papua New Guinea, the Solomon Islands, Vanuatu, and Fiji. In Papua New Guinea, reforestation projects in both the highlands and lowlands offer opportunities for beef production, and cattle have been actively promoted to control weeds and reduce fire danger by consuming the fuel. Pinus caribaea planting has also been encouraged in order to provide shade for cattle in open grasslands (Watt 1980, 308). The introduction of pasture legumes into timber plantations and surrounding areas has also been actively encouraged, and the development of pastures, followed by grazing, has been more or less standard practice in a number of forest plantations in Papua New Guinea, where klinki and hoop pine (Araucaria spp.), Pinus caribaea, and Eucalyptus spp. are grown. Government forest plantations are made available to local Braziers who establish adequate fencing and pastures and follow acceptable range management and stocking practices (Howcroft 1974; 1983).
In the Solomon Islands, where there is a "Cattle Under Trees" (CUT) project, cattle have been grazed under Eucalyptus deglupta in forest plantations established by the government in logged forest (Macfarlane and Whiteman 1983; Schirmer 1983, 101; Watt 1980, 308) and in Vanuatu under both "Local Supply Plantations" and "Industrial Supply Plantations" of Cordia alliodora, as well as under Pinus caribaea on Aneityum, Erromango, Pentecost, and Santo (Jacovelli and Neil 1984, 8). Grazing under pines in Vanuatu is seen as a means of reducing the significant fire threat in plantations (Neil 1986a).
It is in Fiji that the practice has probably been tried most exten sively, owing to research undertaken by the Fiji Pine Commission (FPC), a statutory body with the objective of facilitating and developing "an industry based on the growing, harvesting, preserving and marketing of pine and other species of trees grown in Fiji" (CPO 1980, 141). The FPC is responsible for managing over 45,000 ha of Pinus caribaea out of an envisioned gross estate of 80,000 ha on the highly degraded talasiga (sunburnt) soils of the drier leeward grasslands of the two largest islands of Fiji. The relatively infertile and eroded areas are vegetated with a grassland sub-climax of presumed anthropogenic origin, including species such as Pennisetum polystachyon, Pteridium esculentum, Gleichenia liners, Psidium guajava, Dodonaea viscose, and Casuarina equisetifolia. On moister slopes, Miscanthus floridulus forms almost impenetrable thickets. These grasslands are subject to frequent and unauthorized burning.
The FPC undertook research into cattle grazing for two reasons: to examine the effects of cattle grazing on reducing fuel in high fire-risk zones; and to test the use of cattle as a site-preparation tool for clearing the land of Miscanthus floridulus, which proved difficult to eradicate by more conventional means such as slashing and burning (Drysdale 1982). Research has yielded variable results. Vincent (1971) concluded that grazing of cattle under 5- and 6-year-old pine plantations in poor soils had a detrimental effect on the incremental growth of pines, whereas grazing trials in the Nausori Highlands to determine the effect on fire hazard reduction resulted in a reduction in fuel from 2,500 kg per hectare to 800 kg per hectare, an average cattle weight gain of 0.24 kg per day, and no pasture deterioration despite heavy stocking rates (Gregor 1972). At Nawaicoba, Partridge (1977) reported weight gains twice this, when trees were planted at 2 m x 3 m spacing, with two rows in every five missing. In variable spacing trials, Bell (1981) found slight bark damage to trees less than one year old because of trampling, when the trees were spaced 3 m apart within rows and 2.5, 3, 3.5, and 4 m apart between rows, the cattle being introduced into the plantation when the pines were 54 cm high.
In 1982, the FPC reviewed various research projects on cattle under pines and concluded that given "the high overhead and general costs of FPC operations, commercial cattle grazing of unimproved pasture under pines, is an unlikely prospect" (Drysdale 1982, 4). Although fuel loadings were considerably reduced, the cost of using cattle for fuel reduction was "considered unacceptably high compared with alternatives such as burning" (Drysdale 1982, 3). In contrast, the use of cattle as a site-preparation tool where Miscanthus predominates was termed an "outstanding success" (Drysdale 1982, 8) because other methods of clearing the giant grass gave incomplete results, were impractical, or cost too much.
Because of the high cost of fencing, the long-term and extensive grazing of cattle under pines has been found to be an uneconomic proposition for the Fiji Pine Commission, although some 480 cattle are allowed to graze under pines free of charge at Drasa and Tavaka-bo, and some cattle owners unofficially graze their cattle in Fiji Pine Commission forests. Native landowners are also allowed to graze cattle under their own pine plantings, subject to certain restrictions. But cattle owners also are unlikely to find fencing a profitable venture. Open-range grazing with night-time penning may be a possibility. In addition, the economics of cattle grazing on improved pastures under trees in Fiji still needs to be ascertained.
Other silvipastoral activities
Trees such as Leucaena leucocephala are used as fodder in Tonga and Papua New Guinea, where they are browsed by cattle as a dietary supplement (Watt 1980, 308). There is perhaps some scope for the grazing of other animals such as pigs, goats, and chickens on improved legume pastures or fallows under coconuts, commercial timber species, or other trees (Quartermain 1980; Richardson 1983).
In the Pacific, as elsewhere, interest in agroforestry has recently grown rapidly among scientists, land-use experts, conservationists, and the development professionals of national governments and international agencies. As already noted, systems of commercial production that would now be classified as agroforestry were initiated early in the Pacific's colonial past, particularly in the form of multistorey arrangements of coconut palms with other crops or with cattle. With regard to agroforestry systems in the subsistence sphere, this book has sought to demonstrate their prevalence and antiquity in the Pacific Islands. As Yen (1980b, 91) comprehensively expressed it in his discussion of "Pacific Production Systems," there is nothing new about multi-storey cropping even though it has often been suggested to smallholders as an innovative technique they might adopt.
In fact native systems have always involved such techniques in village gardens with descending storeys of palms, trees, productive vines, shrubs, herbaceous root crops, and vegetable plants and ornamentals. Similarly, in swiddens, mixed species and variety plantings are themselves multi-storey. In this case such plantings also take on a successional aspect, for following the root crops, some cultigens such as banana and longer-term plants such as breadfruit and other fruit and nut trees, industrial shrubs, and vines, prolong the production of these gardens.
Geographers and anthropologists who have studied these sorts of indigenous systems find ironic some of the attempts made to introduce institutional agroforestry into the Pacific context. On the other hand, in a time of deforestation and agrodeforestation, it is apt to encourage both of the approaches to agroforestry described in chapter 1- the institutional approach, which generally seeks to introduce commodity-focused systems devised on the basis of modern forms of analysis, and the cultural-ecological approach, which is concerned more with long-standing indigenous systems, empirically devised and deeply embedded in the cultural landscape. Whether or not the two approaches can be usefully meshed remains open to question, although some forms of "progressing with the past" do seem possible (Clarke 1978).
When attention is turned to the future of institutional agroforestry in the Pacific, it can be clearly forecast that if individual smallholders are to benefit over the long term from the introduction of an unfamiliar institutionalized agroforestry system, they will need to receive an ongoing package of inputs and information, which suggests the need for some sort of extension service. Unfortunately, it is acknowledged that extension work in many Pacific countries is generally poor, and extension services often have only secondary ranking within ministries or departments (Hau'ofa et al. 1980, 188-189). How to remedy this deficiency raises several complex but pervasive issues, which have been dealt with at length in a large literature and which can only be superficially treated here.
With regard to the initial introduction of a new agroforestry system, it is easy - given the current popularity of agroforestry in the development world to find funding for workshops and projects, but these by their nature lack continuity, and they are often administered by staff unfamiliar with local agroforestry traditions. The Pacific is littered with projects advanced in support of all sorts of good causes their collapsed remnants remain, like the military paraphernalia rust ing on beaches after World War II. One way to incorporate continuity into projects and to move beyond reliance on inadequate extension services is to form a centralized management system for smallholders (sometimes referred to as a plantation mode of management). Such a system has been successful in several instances, notably the efficient smallholder production of sugar so important in Fiji's economy and also in tobacco production in that same country (Eaton 1988a). Some other attempts have been less successful. The pros and cons of the approach have been cogently summed up by Hardaker et al. (1984a; 1984b) and Ward (1984).
Aside from problems common to any project-based introduction, a specific constraint to the full realization of the potential of agroforestry by institutional means relates to the disciplinary compart-mentalization that characterizes institutions concerned with land use, whereby - as the Director of ICRAF commented - "agriculture and forestry normally fall under different ministries or, if they are under the same ministry, under separate departments,' (Lundgren 1987, 44). Writing specifically of the forestry sector in the South Pacific, Watt (1980, 302-303) noted that "the separation of agricultural and forestry extension services encourages the impression that agriculture and forestry are mutually exclusive alternatives rather than complementary land uses." Following on from and related to this sectoral compartmentalization is each institution's imperative to maximize the individual component that is the focus of that institution. In contrast, as has often been observed:
The subsistence land user's strategy and aims are to use his labour and land resources to optimize, with minimum risk, the production of various products and services required to satisfy all his basic needs. The fundamental inadequacy of conventional-discipline-oriented institutions lies in the failure to acknowledge and understand these basic facts, strategies and aims, and in the inability to adapt to them. The aims, infrastructure, rationale and philosophy of these institutions, as well as the training of their experts, are geared to the maximization of individual components, be they food crops, cash crops, animals or trees. There is little understanding that the land user needs to share out his resources for the production of other commodities or services (Lundgren 1987, 46).
When maximization is aimed at commercial products, as it most frequently is in the Pacific, a set of sometimes contradictory processes comes into play. For example, attempts to produce cash crops while continuing to meet subsistence needs may bring agricultural involution if land is limited, or it may result in an extension of cropping onto marginal sloping lands as cash crops or cattle take over better lands. A specialization in commercial products may not be accompanied by any concomitant increase in labour availability or extension advice (often restricted to larger producers) on how to increase subsistence production (Ward 1986; Yen 1980b).
Even the Fiji-German Forestry Project, which commenced in the mid1980s, appears mainly focused toward facilitating export cash cropping, although its terms of reference suggest a broader approach that includes "providing ecologically sound advisory assistance in the fields of forestry and agroforestry in line with the social, cultural and economic requirements of target groups" (Tuyll 1988, 3). Consultants to the Fiji-German Forestry Project have also made holistic and wide-ranging recommendations, but the Project's current activities, as described earlier in this chapter, are concentrated on improving the production of ginger as a cash crop by introducing exotic trees to prevent erosion and replace artificial fertilizer.
This accomplishment is not to be decried, but the approach, distinguished by its introduction of and experimentation with exotic trees alley-cropped with a cash crop, does little to preserve existing agroforestry systems or to maintain a balance between commercial agroforestry activities and activities that could protect the existing subsistence base. One consultant recommended to the Project that "agroforestry and forestry extension should not attempt to remain with or return to pure forms of subsistence economy but focus on including profitable cash crops at low risks" (von Maydell 1987, 35). This recommendation does indicate an appreciation of the need to minimize risk, but both it and all the other consultants' recommendations to the Project fail to support strongly the maintenance of a viable subsistence base. Another consultant, who had been selected to identify suitable sites for demonstration plots for the Project, was asked to comment on the idea of putting greater emphasis on the subsistence aspects of agroforestry and of analysing existing local agroforestry systems as demonstration plots into which selected improvements could be introduced. He responded that it was quite unrealistic to expect either the Fiji Government or the German funding agency to support such an emphasis in place of an emphasis on using agroforestry as a way to improve monocultural cash cropping.
In summary, export crops, timber trees, and grazing under coconuts have been the continuing focus of almost all official agroforestry activities for the past century. Regardless of whether it has been the colonial or post-colonial agricultural and forestry departments or, re cently, international aid agencies, the focus has been almost exclusively on monocultural, often large-scale production for export or, in the case of timber and fuel-wood production, for import substitution. Even the intercrops are usually cash crops for export or local sale. Consequently, most indigenous wild species and the wide range of traditional cultivars have received little official promotion and have been the focus of only limited research. Few technical experts or development entrepreneurs know enough about traditional mixed agricultural systems and their component plants to be willing or able to promote their expansion or maintenance. It is not only projects intended to develop commercial agriculture and forestry that may displace or degrade traditional agroforestry systems; modern institutional agroforestry projects may themselves play the same role.
Agencies and educational institutions promoting agroforestry
However, there are also movements in support of traditional systems. The growing popularization and recognition worldwide of the value of the "wisdom of the elders" (Knudtson and Suzuki 1992) may motivate increased institutional attention to indigenous polycultural systems of agroforestry in the Pacific. This section provides information on several examples of such attention and on the institutions involved; mention has been made earlier of some of these, but they will be referred to here briefly again to provide a coherent single account.
All the major universities within the Pacific region (University of Guam, both of Papua New Guinea's universities, the University of the South Pacific in Fiji and its School of Agriculture in Western Samoa, University of Hawaii, and the developing francophone institutions in New Caledonia and Tahiti) support staff with interests in traditional matters, including agriculture, agroforestry, and the management of soil and vegetation. Rather than attempt a full listing of course offerings relevant to agroforestry to at least some degree, we note here only that, on the basis of current information at hand, the courses most directly focused on agroforestry are found within the Geography Department at the University of the South Pacific in Suva, Fiji, and the Department of Agronomy and Soil Science at the University of Hawaii in Honolulu. To the best of our knowledge, the University of Hawaii is distinguished by being the only university in the region to have a named Professor of Agroforestry, who is located in the Department of Agronomy and Soil Science. The Col lege of Micronesia in Pohnpei also has staff with active and direct interests in indigenous agroforestry.
Agroforestry promotion by the Fiji-German Forestry Project, a bilateral agency, has been described in the previous section. A different approach is followed by the South Pacific Forestry Development Programme, which is a multilateral 5-year project funded by UNDP, executed by FAO, and now based in Suva, Fiji. The Programme is concerned with forests and trees in 15 countries, so far particularly with forests in the larger countries, but atoll countries are making enquiries about coconuts and other multi-purpose trees. The role of the Programme is to stimulate activities and provide technical advice, not to operate activities itself. For instance, it facilitated the import of seeds of superior rattan from Malaysia for planting in Pacific forests in order to increase their non-timber production capability. Aside from technical advice, the Programme acts as a focal point for information about forests and trees and publishes the quarterly South Pacific Forestry Newsletter. It is also trying to organize the documentation of local knowledge on indigenous agroforestry, with studies planned or under way in Pohnpei, Fiji, Kiribati, Tuvalu, Tonga, and other island countries.
The Programme has worked cooperatively with the international NGO The Foundation of the Peoples of the South Pacific (FSP) on a project intended to develop sustainable forestry in local areas while slowing down or stopping rapid conversion of forests by large-scale industrial logging. This objective is based in part on selling small mobile sawmills to rural entrepreneurs and community groups so that they may develop small-scale but profitable and locally utilitarian logging, carried out in ways that avoid major environmental damage and that maintain the essential structure of the forest for traditional uses and ecological services.
A US Government project based in Hawaii is carrying out work related to several aspects of agroforestry in Hawaii, American Micronesia, and American Samoa. Called Agricultural Development in the American Pacific (ADAP), the project has provided agroforestry educational materials to all the public (land grant) colleges and universities in the American-affiliated Pacific. In association with the US Department of Agriculture and the US Forest Service, ADAP is also developing training programmes in agroforestry.
The Environment and Policy Institute of the East-West Center in Hawaii maintained a strong programme of research, seminars, and publication on agroforestry for several years during the 1980s (e.g., Djogo 1992; Nair 1984). Although agroforestry is no longer a principal focus of its work, the Institute remains a repository of a large volume of published and unpublished material on the topic.
Mentioned at the beginning of this chapter was the report (Clements 1988) of a technical meeting on agroforestry in tropical islands held at the Institute for Research, Extension and Training in Agriculture (IRETA), which is part of the University of the South Pacific's School of Agriculture in Western Samoa. IRETA is also involved in research projects to improve or strengthen atoll agroforestry in Kiribati.
In the Melanesian countries, with their comparatively larger natural forests, forest-resource inventories are under way or planned, generally as a cooperative, aid-funded project between the local Forestry Department and overseas technical personnel. The inventories are intended to provide the information base necessary for effective land-use planning and management, but now, unlike some past forest assessments, the inventory process includes collection of data on watershed vulnerability and on the indigenous ethnobotanical value of forest plants, as in the forest-resource inventory now being completed by the Vanuatu Forestry Department with technical assistance from the Queensland (Australia) Forest Service and the Division of Tropical Crops and Pastures of the (Australian) Commonwealth Scientific and Industrial Research Organization (CSIRO).
Finally, mention should be made of the work of ORSTOM, the French organization that promotes French scientific research in the third world, mainly in the tropics. With centres in the Pacific in Nouméa and Tahiti, ORSTOM has sponsored work not only related to many aspects of modern development but also to traditional cultural-ecological matters, for example, with specific relevance to agroforestry, the work on the cultivars of kava (Piper methysticum) in Vanuatu (Lebot and Cabalion 1986).
Contents - Previous - Next | <urn:uuid:81474da6-e269-428c-ad49-51c9523b96f6> | CC-MAIN-2013-20 | http://archive.unu.edu/unupress/unupbooks/80824e/80824E0k.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.936156 | 8,882 | 3.421875 | 3 |
PLAGUE ON OUR SHORES
City at War
THE GREAT CHINATOWN FIREOn New Year's Eve one hundred years ago, the first of a number of controlled fires were set in Chinatown as a way of defending Honolulu from bubonic plague, known in history as Black Death. Next to the Pearl Harbor attack, the outbreak of plague was the greatest public-safety disaster in Hawaiian history. The government was determined to do anything to save the city -- even burn it to the ground. Last week we began a four-part series by describing the discovery of plague in Honolulu and the quarantine system set up to contain it. Today's installment chronicles the attitudes that inspired the controlled burning that preceded the Great Chinatown Fire. The series concludes tomorrow.
PART I | II | III | IV | EpilogueBy Burl Burlingame
IT may have been simple bad luck, or it may have have been a white-dominated business conspiracy, or more likely it fell between the two extremes, but the Chinese residents of teeming Chinatown felt unfairly targeted by health authorities when Black Death erupted in Honolulu at the cusp of the century.
Although thousands of Hawaiian and Japanese were uprooted as the Board of Health methodically began to burn out plague infestations in the quarantine zone, it was Chinese-owned businesses that absorbed the brunt of property damage.
Chinese immigration to the island kingdom climbed steadily until the political coup in 1893 that unseated Liliuokalani. By the mid 1890s, one in five residents of Hawaii was of Chinese descent, and they put down firm roots, establishing schools, newspapers, cemeteries, temples and clan societies. Unlike some other groups of immigrants, however, the Chinese did not assimilate into Hawaiian culture, preferring instead to form a separate society.
This sense of separation was expressed in the Honolulu district known as "Chinatown" where small businesses operated by Chinese ex-plantation workers began to flourish in the 1860s. It is roughly the area bordered by Nuuanu, Beretania and King streets. The area was chockablock with Chinese restaurants, Chinese groceries, Chinese dry-goods shops and other small Chinese industries.
In 1886, sparks from a restaurant ignited an enormous fire that leveled most of the district. Excited by the urban clean slate, the Hawaiian government declared new structures had to follow sanitary constraints, were to be made of stone or brick, and considered widening and consolidating the streets. The Advertiser declared they had turned "a national disaster into an ultimate blessing."
It didn't happen. In the 14 years following the fire, Chinatown landowners allowed ramshackle, quickly constructed boomtown wooden buildings to blossom in the area, looming over the narrow dirt streets and overwhelmingly primitive sanitation facilities. The lessons of the 1886 fire were largely ignored.
In 1898, concerned about the swelling tide of Chinese immigration, the Republic of Hawaii evoked the restrictions of the Chinese Exclusion Act of 1882 even though Hawaii was not yet a territory of the United States.
More than 7,000 lived in Chinatown's 50 acres at the turn of the century, in an era when no building rose above two stories. Many were Japanese immigrants, jammed in structures controlled by Chinese landlords, who in turn paid Hawaiian and haole landowners.
And Chinatown had become the center of another kind of Asian-controlled business as well. The census taken in December 1899 revealed the area was brimming with organized prostitution, a niche business that provided economic entre for new immigrants. In 1900, 84 percent of known prostitutes in Honolulu were Japanese, and nearly 100 percent of the pimps were Japanese.
Despite the filthy squalor of living conditions, and the disdain with which Chinatown was viewed by the rest of Honolulu, it was an economic engine, pumping money into the pockets of landowners like Bishop Estate. At a time when a plantation worker made about $18 a month, Japanese prostitutes were making hundreds of dollars .
Although maintaining the status quo was lucrative, the overcrowded living conditions in Chinatown, coupled with a complete lack of urban planning for the area, created a neighborhood that ran with rats and insects, that had sewage and garbage lying unattended in the streets. Other residents of Honolulu turned up their noses at Chinatown, both literally and figuratively, while the residents of Chinatown had little choice but to stay where they were. The Advertiser called the district a "pestilential slum."
When the city finally started to build a sewer line through Chinatown in 1899, workers discovered they were digging through compacted layers of fermenting garbage. The intense odor caused diggers to slow to a near-halt.
With the onset of Black Death, a hastily organized troupe of health inspectors went on field trips into Chinatown as if it were a foreign country, and returned horrified. The district, full to bursting with shanty buildings, boarding houses, livestock corrals and chicken coops, reeking outdoor toilets and backyard cesspools, was swarming with rats, maggots, flies, lice and cockroaches. The only solution, it was argued, was a repeat of the cleansing fire of 1886, but this time applied in scientific manner, coupled with military discipline.
The military model was much admired at the time, following the triumph of American forces over the Spanish, and the new conflict involving Great Britain and the Boers was closely followed in Honolulu newspapers. Virtually all contemporary coverage of Honolulu's plague outbreak refers to the "campaign" against the bacillus as "war." And indeed it was -- a fight to the death.
It was in this atmosphere of indignant public opinion that the notion of burning Chinatown for the public good began to take root. What was missing was a legal excuse. An argument on Smith street provided it. A National Guard soldier stabbed a Japanese civilian in the thigh with his bayonet, and fallout from the incident forced the police and the military to determine their jurisdictions.
As Pvt. Hunt explained it, the Japanese attempted to run the blockade; others claimed Hunt had been prodding the man along. The slight wound triggered a reorganization of civil authority, with far-ranging consequences.
At the bottom line was the question of whether Hunt was legally responsible for his actions, whether civil or martial law reigned. After questioning witnesses, police officials decided martial law had not been declared, and the military was called out to assist the police in carrying out civil statutes. In this scenario, both soldiers and police had authority to use force to enforce the quarantine, but that did not give soldiers permission to commit assaults within the quarantine zone.
When this opinion was presented to the National Guard's Maj. Ziegler, however, the commander decided the military, once called to active duty, cannot be interfered with by civil authorities. The military's authority over the quarantine, and over the Honolulu police, was absolute.
Within hours, all Honolulu police were withdrawn from the quarantine zone, and all questions of authority routed to the National Guard. Although martial law had not been officially declared, soldiers were allowed to proceed as if it had. This made it easier to ignore the rules of civil law during the medical emergency that gripped Honolulu. The Board of Health, civilians appointed by President Sanford Dole, lame-duck head of a temporary republic, had absolute power over questions of life and death.
Chinese residents trapped by the city quarantine feared they were being singled out both in life and after death. Chinese immigrants believed if they died overseas, their bones must be returned to China. The Board of Health's solution to plague deaths -- quick cremation -- left no remains for shipping. Horrified Chinese began to hide their ill friends and relatives from authorities. This practice not only exacerbated contagion, but likely obscured the true numbers of plague victims.
A large delegation of Chinese merchants and Chinese consul Yang Wei Pin and Vice Consul Goo Kim met with Henry Cooper, president of the Board of Health, who insisted any decisions regarding cremation would be made by the board. The Chinese claimed the board was discriminating in favor of Japanese, and Cooper responded no Japanese have been diagnosed with plague, and the body of a white teenager had also been hurridly cremated. Cooper suggested they collect the ashes in urns for shipment back to China.
As Honolulu became a city at war, the battle lines of bureaucracy were being drawn. As the Evening Bulletin editorialized, lacking a clear chain of command while details of the new government were being hammered out, President Dole had the authority to appropriate funds to battle the plague. "Let there be no delay," the paper insisted. "This is a time for action, prompt energetic action. The people are prepared to support the vigorous measures which money will forward and which must be set on foot if the battle against black plague is to be short, sharp and decisive."
Burning was the apparent immediate answer. A committee of businessmen was formed to find warehouse space for goods removed from Chinatown stores that were being burned down, and during the first three weeks of January, 1900, buildings were torched nearly every day.
A photographer hired by the government recorded pictures of each building, and then it was set alight. Honolulu firemen bookended the flames with streams of water; soldiers and police kept crowds in line and watched for looters.
The newspapers kept track with maps and marveled at the "military" precision of the assault on Black Death. Lists of the dead were daily updated like box scores; by late January, dozens had passed away. The new crematorium on Quarantine Island blazed day and night.
Then five plague deaths within a couple of days occurred near the corner of Nuuanu and Beretania. Clearly, this was a hot spot for pestilence and the government decided to burn it out on the morning of Jan. 20. Four fire engines and every fireman in Honolulu were on the scene, but about an hour into the controlled burning, the wind scattered embers across neighboring rooftops. The wooden roof of Kaumakapili Church with its twin spires, the tallest building in the area, erupted into flame beyond the hoses of firemen.
Helpless, they watched flaming embers, carried on a sudden wind, fly unchecked onto the wooden buildings of Chinatown.
Click for online
calendars and events. | <urn:uuid:ec21091c-22a2-4733-bdc3-e4698ee40350> | CC-MAIN-2013-20 | http://archives.starbulletin.com/2000/01/31/features/story1.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.975172 | 2,113 | 3.234375 | 3 |
- published: 12 Feb 2012
- views: 396745
- author: musicisprettyneat
pretty great blocked in germany. irony Artist/Band: Kraftwerk Album: The Man-Machine Year: 1978 Genre: "Synthpop"/Electronic Wikipedia article: http://en.wik...
Man-Machine may refer to:
|This disambiguation page lists articles associated with the same title.
If an internal link led you here, you may wish to change the link to point directly to the intended article.
A machine is a tool consisting of one or more parts that is constructed to achieve a particular goal. Machines are powered devices, usually mechanically, chemically, thermally or electrically powered, and are frequently motorized. Historically, a device required moving parts to classify as a machine; however, the advent of electronics technology has led to the development of devices without moving parts that are considered machines.
The word "machine" is derived from the Latin word machina, which in turn derives from the Doric Greek μαχανά (machana), Ionic Greek μηχανή (mechane) "contrivance, machine, engine" and that from μῆχος (mechos), "means, expedient, remedy". The meaning of machine is traced by the Oxford English Dictionary to an independently functioning structure and by Merriam-Webster Dictionary to something that has been constructed. This includes human design into the meaning of machine.
A simple machine is a device that simply transforms the direction or magnitude of a force, but a large number of more complex machines exist. Examples include vehicles, electronic systems, molecular machines, computers, television and radio.
|This section requires expansion.|
Perhaps the first example of a human made device designed to manage power is the hand axe, made by chipping flint to form a wedge. A wedge is a simple machine that transforms lateral force and movement of the tool into a transverse splitting force and movement of the workpiece.
The idea of a "simple machine" originated with the Greek philosopher Archimedes around the 3rd century BC, who studied the "Archimedean" simple machines: lever, pulley, and screw. He discovered the principle of mechanical advantage in the lever. Later Greek philosophers defined the classic five simple machines (excluding the inclined plane) and were able to roughly calculate their mechanical advantage. Heron of Alexandria (ca. 10–75 AD) in his work Mechanics lists five mechanisms that can "set a load in motion"; lever, windlass, pulley, wedge, and screw, and describes their fabrication and uses. However the Greeks' understanding was limited to the statics of simple machines; the balance of forces, and did not include dynamics; the tradeoff between force and distance, or the concept of work.
During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how much useful work they could perform, leading eventually to the new concept of mechanical work. In 1586 Flemish engineer Simon Stevin derived the mechanical advantage of the inclined plane, and it was included with the other simple machines. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche ("On Mechanics"). He was the first to understand that simple machines do not create energy, only transform it.
The classic rules of sliding friction in machines were discovered by Leonardo Da Vinci (1452–1519), but remained unpublished in his notebooks. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785).
|Simple machines||Inclined plane, Wheel and axle, Lever, Pulley, Wedge, Screw|
|Mechanical components||Axle, Bearings, Belts, Bucket, Fastener, Gear, Key, Link chains, Rack and pinion, Roller chains, Rope, Seals, Spring, Wheel|
|Clock||Atomic clock, Chronometer, Pendulum clock, Quartz clock|
|Compressors and Pumps||Archimedes' screw, Eductor-jet pump, Hydraulic ram, Pump, Trompe, Vacuum pump|
|Heat engines||External combustion engines||Steam engine, Stirling engine|
|Internal combustion engines||Reciprocating engine, Gas turbine|
|Heat pumps||Absorption refrigerator, Thermoelectric refrigerator, Regenerative cooling|
|Linkages||Pantograph, Cam, Peaucellier-Lipkin|
|Turbine||Gas turbine, Jet engine, Steam turbine, Water turbine, Wind generator, Windmill|
|Aerofoil||Sail, Wing, Rudder, Flap, Propeller|
|Electronic devices||Vacuum tube, Transistor, Diode, Resistor, Capacitor, Inductor, Memristor, Semiconductor, Computer|
|Robots||Actuator, Servo, Servomechanism, Stepper motor, Computer|
|Miscellaneous||Vending machine, Wind tunnel, Check weighing machines, Riveting machines|
The idea that a machine can be broken down into simple movable elements led Archimedes to define the lever, pulley and screw as simple machines. By the time of the Renaissance this list increased to include the wheel and axle, wedge and inclined plane.
An engine or motor is a machine designed to convert energy into useful mechanical motion. Heat engines, including internal combustion engines and external combustion engines (such as steam engines) burn a fuel to create heat which is then used to create motion. Electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air and others, such as wind-up toys use elastic energy. In biological systems, molecular motors like myosins in muscles use chemical energy to create motion.
An electrical machine is the generic name for a device that converts mechanical energy to electrical energy, converts electrical energy to mechanical energy, or changes alternating current from one voltage level to a different voltage level.
Electronics is the branch of physics, engineering and technology dealing with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible and is usually applied to information and signal processing. Similarly, the ability of electronic devices to act as switches makes digital information processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a working system.
Charles Babbage designed various machines to tabulate logarithms and other functions in 1837. His Difference engine is the first mechanical calculator. This machine is considered to be the forerunner of the modern computer though none of them were built in his lifetime.
Study of the molecules and proteins that are the basis of biological functions has led to the concept of a molecular machine. For example, current models of the operation of the kinesin molecule that transports vesicles inside the cell as well as the myocin molecule that operates against actin to cause muscle contraction; these molecules control movement in response to chemical stimuli.
Researchers in nano-technology are working to construct molecules that perform movement in response to a specific stimulus. In contrast to molecules such as kinesin and myosin, these nano-machines or molecular machines are constructions like traditional machines that are designed to perform in a task.
Machines are assembled from standardized types of components. These elements consist of mechanisms that control movement in various ways such as gear trains, transistor switches, belt or chain drives, linkages, cam and follower systems, brakes and clutches, and structural components such as frame members and fasteners.
Modern machines include sensors, actuators and computer controllers. The shape, texture and color of covers provide a styling and operational interface between the mechanical components of a machine and its users.
Assemblies within a machine that control movement are often called "mechanisms." Mechanisms are generally classified as gears and gear trains, cam and follower mechanisms, and linkages, though there are other special mechanisms such as clamping linkages, indexing mechanisms and friction devices such as brakes and clutches.
Controllers combine sensors, logic, and actuators to maintain the performance of components of a machine. Perhaps the best known is the flyball governor for a steam engine. Examples of these devices range from a thermostat that as temperature rises opens a valve to cooling water to speed controllers such the cruise control system in an automobile. The programmable logic controller replaced relays and specialized control mechanisms with a programmable computer. Servomotors that accurately position a shaft in response to an electrical command are the actuators that make robotic systems possible.
Design plays an important role in all three of the major phases of a product lifecycle:
The Industrial Revolution was a period from 1750 to 1850 where changes in agriculture, manufacturing, mining, transportation, and technology had a profound effect on the social, economic and cultural conditions of the times. It began in the United Kingdom, then subsequently spread throughout Western Europe, North America, Japan, and eventually the rest of the world.
Starting in the later part of the 18th century, there began a transition in parts of Great Britain's previously manual labour and draft-animal–based economy towards machine-based manufacturing. It started with the mechanisation of the textile industries, the development of iron-making techniques and the increased use of refined coal.
Mechanization or mechanisation (BE) is providing human operators with machinery that assists them with the muscular requirements of work or displaces muscular work. In some fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an un-geared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines.
Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. In the scope of industrialization, automation is a step beyond mechanization. Whereas mechanization provides human operators with machinery to assist them with the muscular requirements of work, automation greatly decreases the need for human sensory and mental requirements as well. Automation plays an increasingly important role in the world economy and in daily experience.
An automaton (plural: automata or automatons) is a self-operating machine. The word is sometimes used to describe a robot, more specifically an autonomous robot. An alternative spelling, now obsolete, is automation.
|Wikimedia Commons has media related to: Machines|
The World News (WN) Network, has created this privacy statement in order to demonstrate our firm commitment to user privacy. The following discloses our information gathering and dissemination practices for wn.com, as well as e-mail newsletters.
We do not collect personally identifiable information about you, except when you provide it to us. For example, if you submit an inquiry to us or sign up for our newsletter, you may be asked to provide certain information such as your contact details (name, e-mail address, mailing address, etc.).
We may retain other companies and individuals to perform functions on our behalf. Such third parties may be provided with access to personally identifiable information needed to perform their functions, but may not use such information for any other purpose.
In addition, we may disclose any information, including personally identifiable information, we deem necessary, in our sole discretion, to comply with any applicable law, regulation, legal proceeding or governmental request.
We do not want you to receive unwanted e-mail from us. We try to make it easy to opt-out of any service you have asked to receive. If you sign-up to our e-mail newsletters we do not sell, exchange or give your e-mail address to a third party.
E-mail addresses are collected via the wn.com web site. Users have to physically opt-in to receive the wn.com newsletter and a verification e-mail is sent. wn.com is clearly and conspicuously named at the point ofcollection.
If you no longer wish to receive our newsletter and promotional communications, you may opt-out of receiving them by following the instructions included in each newsletter or communication or by e-mailing us at michaelw(at)wn.com
The security of your personal information is important to us. We follow generally accepted industry standards to protect the personal information submitted to us, both during registration and once we receive it. No method of transmission over the Internet, or method of electronic storage, is 100 percent secure, however. Therefore, though we strive to use commercially acceptable means to protect your personal information, we cannot guarantee its absolute security.
If we decide to change our e-mail practices, we will post those changes to this privacy statement, the homepage, and other places we think appropriate so that you are aware of what information we collect, how we use it, and under what circumstances, if any, we disclose it.
If we make material changes to our e-mail practices, we will notify you here, by e-mail, and by means of a notice on our home page.
The advertising banners and other forms of advertising appearing on this Web site are sometimes delivered to you, on our behalf, by a third party. In the course of serving advertisements to this site, the third party may place or recognize a unique cookie on your browser. For more information on cookies, you can visit www.cookiecentral.com.
As we continue to develop our business, we might sell certain aspects of our entities or assets. In such transactions, user information, including personally identifiable information, generally is one of the transferred business assets, and by submitting your personal information on Wn.com you agree that your data may be transferred to such parties in these circumstances. | <urn:uuid:28a7c1f3-c595-43bb-b7f3-2960d5ccb10f> | CC-MAIN-2013-20 | http://article.wn.com/view/2013/03/07/Atoms_for_Peace_and_the_battle_between_man_and_machine/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.929192 | 2,997 | 2.5625 | 3 |
By Dr. Mercola
Junk food is contributing to skyrocketing rates of diabetes, high blood pressure, and even strokes -- and not just among adults.
Food and beverage companies spend $2 billion a year promoting unhealthy foods to kids, and while ultimately it's the parents' responsibility to feed their children healthy foods, junk food ads make this much more difficult than it should be.
A new campaign, We're Not Buying It, is now underway to help expose deceptive marketing to children, debunk industry claims, and highlight the latest research, all in the hopes of ending this assault on today's youth. I'll explain how you can get involved, too, below.
Does Your Child Recognize the "Golden Arches"?
Most toddlers recognize the sign of McDonald's "golden arches" long before they are speaking in full sentences.
Because they are often raised on French fries, fast-food hamburgers and orange soda (or, if "raised" is a bit of a stretch, are taught that French fries, chicken fingers and soda are an acceptable meal). Have you noticed that even in "regular" restaurants the kids' menu options are almost always entirely junk food like pizza, macaroni and cheese or fried chicken strips?
Of course kids will probably prefer these foods if that's what they're offered; these foods are manufactured to taste good, and most kids aren't going to opt for a spear of broccoli over a French fry -- until they're old enough to understand the implications of the choice, and assuming you have taught them about the importance of eating healthy foods along the way.
In many ways society is set up against you on this one. As The Interagency Working Group on Foods Marketed to Children (IWG) reports:
- The fast-food industry spends more than $5 million every day marketing unhealthy foods to children.
- Kids watch an average of over 10 food-related ads every day (nearly 4,000/year).
- Nearly all (98 percent) of food advertisements viewed by children are for products that are high in fat, sugar or sodium. Most (79 percent) are low in fiber.
So even under the best circumstances, your kids will probably be exposed to the latest "cool" kid foods, and this is what marketers are banking on. Then, when you go to the grocery store, your child will have a meltdown if you don't give in and buy the cereal with their favorite cartoon character on the box, or the cookies with brightly colored chips. If you're a parent, it's certainly easier to just give in, but it's imperative to be strong as shaping your child's eating habits starts very early on …
Your Child's Taste Preferences are Created by Age 3
Research shows when parents fed their preschool-aged children junk foods high in sugar, salt and unhealthy fats, it had a lasting impact on their taste preferences. All of the children tested showed preferences for junk foods, and all (even those who were just 3 years old!) were also able to recognize some soda, fast food and junk food brands.
The researchers concluded what you probably already suspect: kids who were exposed to junk food, soda and fast food, via advertising and also because their parents fed them these foods, learned to recognize and prefer these foods over healthier choices. This does have an impact on their health, as nutrients from quality foods are critical in helping your child reach his or her fullest potential!
One study from British researchers revealed that kids who ate a predominantly processed food diet at age 3 had lower IQ scores at age 8.5. For each measured increase in processed foods, participants had a 1.67-point decrease in IQ.
As you might suspect, the opposite also held true, with those eating healthier diets experiencing higher IQ levels. For each measured increase in dietary score, which meant the child was eating more fruits and vegetables for instance, there was a 1.2-point increase in IQ.
The reality is, the best time to shape your kids' eating habits is while they're still young. This means starting from birth with breast milk and then transitioning to solid foods that have valuable nutrients, like egg yolk, avocado and sweet potatoes. (You can easily cross any form of grain-based infant cereal off of this list.)
From there, ideally you will feed your child healthy foods that your family is also eating -- grass-fed meats, organic veggies, vegetable juice, raw dairy and nuts, and so on. These are the foods your child will thrive on, and it's important they learn what real, healthy food is right from the get-go. This way, when they become tweens and teenagers, they may eat junk food here and there at a friend's house, but they will return to real food as the foundation of their diet -- and that habit will continue on with them for a lifetime.
This is What Happens When You Let Marketers Dictate Your Kid's Diet …
The state of most kids' diets in the United States is not easy to swallow. As IWG reported:
- Nearly 40% of children's diets come from added sugars and unhealthy fats.
- Only 21% of youth age 6-19 eat the recommended five or more servings of fruits and vegetables each day.
This is a veritable recipe for disease, and is a primary reason why today's kids are arguably less healthy than many prior generations. Obesity, type 2 diabetes, high blood pressure -- these are diseases that once appeared only in middle-age and beyond, but are now impacting children. The U.S. Centers for Disease Control and Prevention (CDC) estimates that by 2050, one in three U.S. adults will have diabetes -- one of them could be your child if you do not take steps to cancel out the messages junk-food marketers are sending and instead teach them healthy eating habits.
Make no mistake, the advertisers are doing all they can to lure your child in.
In fact, last year the food and beverage industry spent more than $40 billion, yes billion, lobbying Congress against regulations that would decrease the marketing of unhealthy foods to kids. You can do a lot of persuading with $40 billion, which may explain why food manufacturers are allowed to get away with so much -- like putting pictures of fruit all over product packaging when the product actually contains no fruit.
A 2011 study by the Prevention Institute even found that 84 percent of food packages that contain symbols specifically intended to help people choose healthier foods did not meet even basic nutritional standards! In fact, 57 percent of these "Better-for-You" children's foods were high in sugar, 95 percent contained added sugar, and 21 percent contained artificial colors. So you need to be very wary when buying any processed foods for your kids, even the "healthy" ones, as they will most certainly contain large amounts of fructose with very little to offer in the way of healthy nutrition.
Help Fight Back Against Junk-Food Marketers and Stand Up for Kids' Health
The Prevention Institute's "We're Not Buying It" campaign is petitioning President Obama to put voluntary, science-based nutrition guidelines into place for companies that market foods to kids. You can sign this petition now, but I urge you to go a step further and stop supporting the companies that are marketing junk foods to your children today.
Ideally, you and your family will want to vote with your pocketbook and avoid as much processed food as possible and use unprocessed raw, organic and/or locally grown foods as much as possible. Your children should be eating the same wholesome foods you are -- they don't need bright-blue juice or deep-fried "nuggets" any more than you do.
If you and your kids are absolutely hooked on fast food and other processed foods, you're going to need some help and most likely some support from friends and family if you want to kick the junk-food lifestyle. Besides surrounding yourself with supportive, like-minded people, you can also review my article "How to Wean Yourself Off Processed Foods in 7 Steps" or read the book I wrote on the subject, called Generation XL: Raising Healthy, Intelligent Kids in a High-Tech, Junk-Food World.
Finally, my nutrition plan offers a step-by-step guide to feed your family right, and I encourage you to read through it now. You need to first educate yourself about proper nutrition and the dangers of junk food and processed foods in order to change the food culture of your entire family. To give your child the best start at life, and help instill healthy habits that will last a lifetime, you must lead by example. Children will simply not know which foods are healthy unless you, as a parent, teach it to them first.
Something astonishing is happening in China. It is an unfolding story that, one Chinese friend told me, “could be a turning point in conservation and wild bird protection in China.”
On Sunday 11 November local people discovered many sick and dying ORIENTAL STORKS (Ciconia boyciana) at Beidagang Reservoir, Tianjin (just 30 minutes from Beijing by train). These globally endangered birds, which have a restricted range in East Asia, had been poisoned illegally by poachers using a chemical called carbofuran that, although banned in the EU, Canada and many other countries, is commonly available and used, legitimately, as a pesticide all over China.
The storks were possibly unintended victims of well-organised and, sadly, all-too-common poaching activity intended to catch swans, ducks and geese for the restaurant trade.
Carbofuran is mixed with cereal, or given to fish in small man-made pools. Birds lose consciousness after eating the bait, are caught by hand and injected with an antidote. The victims are then shipped – usually alive – to restaurants, primarily in southern China. The demand for wild birds is high and they are sold as a delicacy, with many consumers, particularly in southern cities like Guangzhou and Shenzhen, believing that wild birds taste better than farmed produce, and they are prepared to pay a premium. A wild goose or swan can fetch several hundred Yuan (100 Yuan = 10 GBP). The business is highly profitable.
The scale of this activity in China, and the range of methods used by poachers to catch wild birds, are covered in an excellent, but sobering, article in the most recent issue of Goose Bulletin. The authors estimate that between 80,000 and 120,000 ducks, swans and geese are caught illegally in China for the restaurant trade every year.
So what makes the recent case involving Oriental Storks at Beidagang such a big deal?
The answer is the incredible public reaction, led by local people and driven by social media.
The events unfolding at Beidagang, although desperately sad, could have been much worse were it not for some dedicated and brave individuals. Local birders, together with volunteers, officials from the Forestry Administration, police and even firemen have been working together to help catch, treat and care for these birds. They have set up 24/7 patrols to deter the poachers. All of this has been transmitted on social media and the coverage has gone viral. The Chinese micro-blogging service, Weibo, has over 500 million users (on a par with the global membership on Twitter) and activists have been providing regular updates that have been ‘re-tweeted’ by a growing band of followers. As I write this post, the latest update has been ‘re-tweeted’ over 900 times to more than a million users in less than one hour.
This is leading the traditional print and visual media. Already, we are seeing articles relating to this poisoning incident in Chinese and English language media, both local and national.
All of this follows a recent outcry against the illegal trapping and hunting of wild birds in China, also led by social media. Three weeks ago a brave undercover journalist released a shocking video about hunters using spotlights to confuse migrants in Hunan Province before gunning them out of the sky. The Chinese public was outraged and Weibo was alive with condemnation of the hunters and also criticism of the authorities for being slow to act. Shortly after this major outcry, local birders discovered over 2km of illegal mist nets at Beidagang, the site of the current Oriental Stork tragedy. Local activists, many of whom are now on site trying to save the storks, led a ‘day of action’ involving over 60 volunteers, and even the Chinese army, to take down illegal mist nets in the reedbed. This was covered by local and national TV as well as print media. Due to these two events, the number of articles relating to illegal bird trapping and hunting nationwide has exploded.
The campaign to eradicate the illegal hunting of birds is gaining momentum. And the scale of the reaction by ordinary Chinese people all over the country has been overwhelming, demonstrating clearly that the vast majority of Chinese people care deeply about their wild birds. It will be very hard for the authorities to ignore.
None of this would be happening without the incredible dedication, passion and energy of a small number of volunteers at Beidagang. There are many people involved but a special mention must go to Xunqiang Mo (aka “Nemo”), a local student, and Jingsheng Ma, who have personally led the effort to cut down the illegal nets and are now leading the ongoing operation to save the Oriental Storks. They are heroes in every respect.
Here is a personal account from yesterday evening, provided by Zhu Lei, a Beijing-based birder monitoring the situation:
“There is heart-breaking news. 8 more dead storks been found today, which raise the total number up to 21!
The ground team located 3 evidently man-made small water pools (around diameter of 1m, depth of 0.3m), one of them contained a big empty packing bag (900 g × 20 packets – although the scene is absolutely terrible, it does not necessarily mean the whole bag of poison has been used there) of pesticide. We suspect that the poachers have put the toxic chemical directly into the water in these pools or used the same methods as those 2 Jilin guys (filled the fish with toxic, then put into the pools) to poison the birds.
According to signs on the bag, the pesticide used in this massacre is nothing but Carbofuran. The bags were already taken by the police as potential evidence. Some tissue also been taken from the dead birds for further forensic tests. The cause of death will only be revealed as the test report is released (although everything points to it being poisoning with carbofuran).
The volunteer team (mostly from the local community and nearby Tianjin city) should be applauded for their hard work. Among them, a bicycle enthusiasts team is worthy of mention for they’ve taken the duty to patrol the dam which surrounds the wetland in daytime, and at least 3 of them have tried hard to wade into the muddy wetland searching for sick birds. Several local rich bird photographers (I think the guys who can afford the big Canon or Nikon big lenses and expensive cameras could be called ‘rich’) have provided financial support to cover spending such as other volunteers’ accommodation and food, etc.
People from government agencies also contributed to the action. Today, even a team of firemen was called to the spot, due to lack of proper equipment (e.g. waders, boats) to deal with the situation faced in the wetland. They just try to do what they can over there.
24h ground patrolling has been launched last night, and the patrol has been equipped with night-vision goggles donated by a businessman from Tianjin.
Tomorrow, the team will focus on locating more poisoned lure pools and will destroy them. A plan to provide safe food (mainly small fish) to the storks still at the wetland will be carried out tomorrow.
Special thanks to Nemo for his great devotion and efforts in saving those birds on-site, and kindly receiving my interview tonight. He is a real hero and deserves our highest respect.”
You can follow the latest developments with the Oriental Storks at Beidagang and the broader campaign to eradicate illegal mist-netting at this website. Already, many people have expressed their support for these brave and committed individuals and their comments are making a real difference to the volunteers. Knowing that there are people all over the world supporting their efforts is a real boon for them. If you haven’t already, please take a moment to comment to show your support. This could just be the decisive battle in the war against illegal trapping and hunting of wild birds in China.
Debt Ceiling to allow U.S. Debt to hit historic level in early 2013
While there has been, and continues to be, a significant amount of hand-wringing over the fiscal cliff, which takes effect on December 31, perhaps the REAL issue is coming very early in 2013: the U.S. Debt Ceiling.
The fiscal cliff is being discussed on every business report on television, radio, Internet blogs and print media. As you most likely know, “fiscal cliff” is the name given to the simultaneous expiration of the Bush-era tax cuts, the increase in the payroll tax and the immediate reduction of federal government spending. For reference, here are links to APMEX’s special reports on the fiscal cliff.
Fiscal Cliff is but the Beginning
While the sudden and significant impact of multiple changes in the economy is surely creating anxiety and uncertainty in both the personal lives and business of Americans, this is likely only the beginning of issues as the United States begins to respond to the “new normal” following the Great Recession.
The next increase in the federal debt ceiling – the maximum amount the U.S. may borrow, as set by Congress – will establish the maximum U.S. Federal Debt at about $18 trillion. While this is, of course, a huge level of debt and the largest debt of any country, the U.S. also has the world’s largest economy.
The question that each country must address is “How much debt can this country afford?” The answer depends on a number of factors and is often measured in the ratio of debt to Gross Domestic Product (GDP) of the borrowing country. Historically, for the U.S., this ratio has generally been between 30 percent and 65 percent, from 1950 until the beginning of the Great Recession in 2008.
U.S. Debt is at Historically High and Dangerous Levels
When the next debt ceiling is set by Congress, most likely in early 2013, presuming borrowing to the ceiling and low GDP growth, the U.S. Debt to U.S. GDP ratio will most likely be about 120 percent, a level more than double the historical levels since 1950.
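As a quick back-of-the-envelope check, the arithmetic can be sketched in a few lines of Python. The $18 trillion ceiling comes from the paragraph above; the GDP figure is an assumption chosen to reproduce the roughly 120 percent projection, not an official forecast.

```python
projected_ceiling = 18.0  # trillion USD: the projected next debt ceiling (from the text above)
assumed_gdp = 15.0        # trillion USD: an assumed low-growth GDP figure, not official data

print(f"Projected debt-to-GDP ratio: {projected_ceiling / assumed_gdp:.0%}")  # -> 120%
```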
How does this compare to other countries? Below is a table of several key countries around the world. Also, here is a complete list of countries with Debt to GDP levels provided by the International Monetary Fund.
The History and the Current Status of the U.S. Debt Ceiling
During World War I in 1917, the U.S. Congress passed a law requiring Congressional approval on the aggregate debt outstanding of the United States. Prior to this, Congress was required to approve each and every debt offering. Since 1950, there have been 95 changes to the debt ceiling; since 2000 there have been 13 changes, or about one per year. You can read about the History of the U.S. Debt Ceiling or see a listing of all changes to the U.S. Debt Ceiling, use Table 7.3.
Since 2000, the increases in the U.S. Debt Ceiling have been larger than in previous years as the United States borrowed more to finance the 2000 dot-com bust, the wars in Afghanistan and Iraq, and the Federal support of the Great Recession of 2007–2008.
The current status of the U.S. Public Debt and the Debt Limit is shown in the charts below. The U.S. Debt has increased by more than 15 percent since January 2011. The current U.S. Debt is very close to the U.S. Debt Ceiling of about $16.5 trillion and, accordingly, Congress will be required to take action very soon.
The U.S. Debt has increased by $2.1 trillion, or about 15 percent, in just two years since January 2011. Despite increases in the ceiling, the Federal Government has almost borrowed to the limit.
The U.S. Debt Ceiling must be raised in the very near future, most likely in a few months. As the chart below shows, at the end of October 2012, only about $172 billion remained available under the U.S. Debt Ceiling. In November 2011, federal borrowing increased by $119 billion, and if that were the borrowing rate for November 2012, almost all of the available U.S. Debt availability would be consumed.
Note: In an article in The Wall Street Journal on December 12, 2012, it was reported that the U.S. Treasury currently has only about $67 billion remaining in borrowing capacity.
The red line represents the total borrowing capacity of the United States that is above the current aggregate outstanding U.S. Debt. Since January 2012, U.S. borrowing has increased such that the remaining availability has declined each month, leaving the availability in November 2012 at just $172 billion. Here is the U.S. Treasury Monthly Statement of the Public Debt of the United States.
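Using the article’s own figures, the remaining runway can be roughed out the same way; the monthly pace here is an assumption borrowed from the November 2011 increase mentioned above.

```python
headroom = 172           # billion USD remaining under the ceiling at the end of October 2012
monthly_borrowing = 119  # billion USD per month, assuming November 2011's pace continues

print(f"About {headroom / monthly_borrowing:.1f} months of borrowing room left")  # ~1.4
```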
Gold and the U.S. Debt in 2012 and Beyond
With much debate on the fiscal cliff and future debate on the debt ceiling, the end result will be that the U.S. will most likely continue to be in a period of very high federal debt relative to the GDP. This relationship cannot be changed in a year and perhaps not even in five years.
The Europeans are ahead of the United States in addressing their debt to GDP issues with Greece, Portugal, Ireland and Italy. Spain will most likely become a problem as well. The solution in Europe has been the same as the solution in the U.S.: the Central Banks create more currency to keep the economy from falling even further.
In a recent Barron’s article titled “Is Bad News Still Good News for Gold?”, the author, Randall Forsyth, says in the last paragraph:
As long as authorities try to do whatever it takes to hold the system of fiat currencies and indebted governments from flying apart, paper money will continue to lose value relative to the traditional store of value, gold.
I like projects. I really liked this project. The pipe insulation roller coaster project is one of the most enjoyable projects I've ever used in class.
It was my second year teaching physics. During the unit on energy, the book we were using frequently used roller coasters in their problems. We even had a little "roller coaster" to use with photo gates. I thought we could do better.
My original idea was to get some flexible Hot Wheels tracks and make some loop-de-loops and hills. Turns out a class set of Hot Wheels track is pretty expensive. On an unrelated yet serendipitous visit to my local big box hardware store, I ran across the perfect (and cheap!) substitute: pipe insulation! For $1.30 or so you can get six feet of pipe insulation, which doubles nicely as a marble track[1] when you split the pipe insulation into two equal halves. It's really easy to cut pipe insulation with a sharp pair of scissors. Just be sure you don't buy the "self-sealing" pipe insulation, which has glue pre-applied; it's more expensive and it'd turn into a sticky mess.
At first I was planning to simply design a one-period long investigation using the pipe insulation (my original ideas morphed into the pre-activity for this project). As I started to think through the project more and more, I realized we could go way bigger. And thus, the pipe insulation roller coaster project was born.
Building the Coasters
In groups of three, students were given 24 feet of pipe insulation (4 pieces), a roll of duct tape[2], and access to a large pile of cardboard boxes[3]. All groups had to adhere to a few standard requirements:
- Construction requirements
- The entire roller coaster must fit within a 1.0 m x 2.0 m rectangle[4].
- There must be at least two inversions (loops, corkscrews, etc.).
- All 24 feet of pipe insulation must be used.
- The track must end 50 cm above the ground.
In addition to meeting the above requirements, students were required to utilize their understanding of the work-energy theorem, circular motion, and friction to do the following:
- Determine the average rolling friction, kinetic energy, and potential energy at 8 locations on their roller coaster.
- Determine the minimum velocities required for the marble to stay on the track at the top of all the inversions.
- Determine the g-forces the marble experiences through the inversions and at least five additional corners, hills, or valleys.
- The g-forces must be kept at "safe" levels[5].
- Rolling friction, kinetic energy, and potential energy
- The potential energy (PE = mgh) is easy enough to find after measuring the height of the track and finding the mass of the marble. The kinetic energy (KE = ½mv²) is trickier and can be found by filming the marble and doing some analysis with Tracker, but since the speed of the marble is likely to be a little too fast for most cameras to pick up clearly, it's probably easier (and much faster) to simply measure the time it takes the marble to travel a certain length of track. I describe how this can be done in a previous post, so check that out for more info. That post also includes how to calculate the coefficient of friction by finding how much work was done on the marble due to friction, so I'll keep things shorter here by not re-explaining that process.
- Pro-tip: Have students mark every 10 cm or so on their track with small pieces of tape before they start putting together their coasters. Since d in this case is the length of track the marble has rolled so far, marking the track makes finding the value for d much easier than trying to measure a twisting, looping roller coaster track.
- This is also called the critical velocity. That's fitting. If you're riding a roller coaster it's pretty critical that you make it around each loop. Also, you might be in critical condition if you don't. While falling to our death would be exciting, it also limits the ability to ride roller coasters in the future (and I like roller coasters). Since we're primarily concerned with what is happening to the marble at the top of the loop, consider the vertical forces acting on the marble there:
So just normal force (the track pushing on the marble) and gravitational force (the earth pulling on the marble). Since these forces are both acting towards the center of the loop, together they're equal to the radial force: F_N + F_g = F_radial.
When the marble is just barely making it around the loop (at the critical velocity), the normal force goes to zero. That is, the track stops pushing on the marble for just an instant at the top of the loop. If the normal force stays zero for any longer than that it means the marble is in free fall, and that's just not safe. So: F_g = F_radial.
Then when you substitute in masses and accelerations for the forces and do some rearranging: mg = mv²/r, which rearranges to v_critical = √(gr).
There you go. All you need to know is the radius of the loop, and that's easy enough to measure (there's a short calculation sketch in code at the end of this list). Of course, you'd want a little cushion above the critical velocity, especially because we're ignoring the friction that is constantly slowing down the marble as it makes its way down the track.
- An exciting roller coaster will make you weightless and in the next instant squish you into your seat. A really bad roller coaster squishes you until you pass out. This is awesomely known as G-LOC (G-force Induced Loss of Consciousness). With the proper training and gear, fighter pilots can make it to about 9g's before G-LOC. Mere mortals like myself usually experience G-LOC between 4 and 6g's.
As I mentioned, I set the limit for pipe insulation roller coasters at 30g's simply because it allowed more creative and exciting coaster designs. While this would kill most humans, it turns out marbles have a very high tolerance before reaching G-LOC.
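To make the number-crunching concrete, here's a minimal Python sketch of the calculations described in this list: potential energy, kinetic energy from a timed stretch of track, critical velocity, and centripetal g-forces. The marble mass, loop radius, and timing figures are made-up illustration values, not measurements from an actual coaster.

```python
import math

G = 9.8  # acceleration due to gravity (m/s^2)

def potential_energy(mass, height):
    """PE = m*g*h, with height in meters measured from the floor."""
    return mass * G * height

def speed_from_timing(track_length, elapsed_time):
    """Average marble speed over a marked stretch of track (m/s)."""
    return track_length / elapsed_time

def kinetic_energy(mass, speed):
    """KE = (1/2)*m*v^2."""
    return 0.5 * mass * speed**2

def critical_velocity(loop_radius):
    """Minimum speed at the top of a loop: v = sqrt(g*r)."""
    return math.sqrt(G * loop_radius)

def centripetal_g_force(speed, radius):
    """Centripetal acceleration through a loop, hill, or valley, in g's.
    Depending on your convention, add about 1 g at the bottom of a valley
    and subtract about 1 g at the top of a loop for what is actually felt."""
    return speed**2 / (radius * G)

# Illustration values (assumptions, not real measurements):
marble_mass = 0.005  # a 5-gram marble, in kg
loop_radius = 0.15   # a 15-cm loop, in m

v = speed_from_timing(0.5, 0.25)  # 0.5 m of track covered in 0.25 s -> 2.0 m/s
print(f"KE entering the loop: {kinetic_energy(marble_mass, v):.4f} J")
print(f"PE at a 1.2 m drop:   {potential_energy(marble_mass, 1.2):.4f} J")
print(f"Critical velocity:    {critical_velocity(loop_radius):.2f} m/s")   # ~1.21 m/s
print(f"Loop g-force:         {centripetal_g_force(v, loop_radius):.1f} g")  # ~2.7 g
```

Since friction keeps draining the marble's energy, you'd want the measured speed at the top of each inversion to come in comfortably above the critical value, and the g-forces to stay well under the 30g cap.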
Raise the Stakes
Students become fiercely proud of their roller coasters. They'll name them. Brag about them. Drag their friends in during lunch to show them off. Seeing this, I had students show off their creations to any teachers, parents, or administrators that I was able to cajole into stopping by for the official testing of the coasters. I even made up a fun little rubric (.doc file) for any observers to fill out for each coaster. This introduces some level of competition into the project, which gives me pause, though from day one students generally start some friendly smack talk about how their coaster is akin to the Millennium Force while all other coasters are more like the Woodstock Express. The students love to show off their coasters, and it seems the people being shown enjoy the experience as well.
Assessment is massively important. However, this post is already long, so the exciting conclusion will feature the assessment piece in Part 2: Pipe Insulation Roller Coaster Assessment.
The Pipe Insulation Roller Coaster Series
- Pipe Insulation Roller Coasters and Rolling Friction
- Pipe Insulation Roller Coasters
- Pipe Insulation Roller Coaster Assessment
- [1] The first day we played with pipe insulation in class I had students use some marble-sized steel balls. Unfortunately because the steel balls are so much heavier and the pipe insulation is spongy and flexible, there was just too much friction. When we switched to marbles the next day everything worked like a charm. (back)
- [2] Most groups typically use more than one roll of duct tape. My first couple years I bought the colored duct tape and gave each group a different color. That was a nice touch, but also a bit more expensive than using the standard silver. Whatever you decide, I highly recommend avoiding the cut-rate duct tape. The cheap stuff just didn't stick as well, which caused students to waste a lot of time fixing places where the duct tape fell and in the end used a lot more duct tape. (back)
- [3] I had an arrangement with our school's kitchen manager to set broken down boxes aside for me for a few weeks before we started the project. If that's not an option, I've also found if you talk to a manager of a local grocery store they're usually more than willing to donate boxes. (back)
- [4] I made it a requirement for groups to start by building a cardboard rectangle with the maximum dimensions. This served two functions: (1) It made it easy for the groups to see what space they had to work with, and (2) it allows the roller coasters to be moved around a little by sliding them across the floor. (back)
- [5] Originally I wanted students to keep g-forces below 10. Very quickly it became apparent that under 10g's was overly restrictive and I upped it to 30g's. That's not really safe for living creatures, but it would certainly make it more "exciting." (back)
Using the Congressional Serial Set for Genealogical Research
By Jeffery Hartley
(This article appeared in the Spring 2009 issue of Prologue. It has been excerpted and reprinted here with the permission of the author.
The Historical Documents section in GenealogyBank includes over 243,000 reports from the US Serial Set and the American State Papers).
Click here to search the American State Papers and US Congressional Serial Set in GenealogyBank.com
Genealogists use whatever sources are available to them in pursuit of their family history: diaries, family Bibles, census records, passenger arrival records, and other federal records. One set of materials that is often overlooked, however, is the Congressional Serial Set.
This large multivolume resource contains various congressional reports and documents from the beginning of the federal government, and its coverage is wide and varied. Women, African Americans, Native Americans, students, soldiers and sailors, pensioners, landowners, and inventors are all represented in some fashion. While a beginning genealogist would not use the Serial Set to begin a family history, it nevertheless can serve as a valuable tool and resource for someone helping to flesh out an ancestor's life, especially where it coincided with the interests of the U.S. federal government.
Since its inception, the U.S. government has gathered information, held hearings, compiled reports, and published those findings in literally millions of pages, the majority of which have been published by the Government Printing Office (GPO).
These publications include annual reports of the various executive branch agencies, congressional hearings and documents, registers of employees, and telephone directories. Their topics cover a wide range, from the Ku Klux Klan to child labor practices to immigration to western exploration.
In 1817, the Serial Set was begun with the intent of being the official, collective, definitive publication documenting the activities of the federal government. Following the destruction of the Capitol in 1814 by the British, Congress became interested in publishing their records to make them more accessible and less vulnerable to loss.
In the early Federal period, printing of congressional documents had been haphazard, and the Serial Set was an effort designed to rectify that situation. Although initially there were no regulations concerning what should be included, several laws and regulations were promulgated over the years. The contents, therefore, vary depending on the year in question.
In 1831, 14 years after the Serial Set was begun, the printers Gales & Seaton proposed that a compilation of the documents from the first Congresses be printed. The secretary of the Senate and the clerk of the House were to direct the selection of those documents, 6,278 of which were published in 38 volumes between 1832 and 1861. This collection was known as the American State Papers.
Because it was a retrospective effort, these 38 volumes were arranged chronologically within 10 subject areas: Foreign Relations, Indian Affairs, Finance, Commerce & Navigation, Military Affairs, Naval Affairs, Post Office, Public Lands, Claims, and Miscellaneous.
Although not technically a part of the Serial Set, the volumes were certainly related, and therefore the volumes were designated with a leading zero so that these volumes would be shelved properly, i.e. before the volumes of the Serial Set. (1)
The Congressional Serial Set itself includes six distinct series: House and Senate journals (until 1953), House and Senate reports, House and Senate documents, Senate treaty documents, Senate executive reports, and miscellaneous reports. The journals provide information about the daily activities of each chamber. The House and Senate reports relate to public and private legislation under consideration during each session.
Documents generally relate to other investigations or subjects that have come to the attention of Congress. Nominations for office and military promotion appear in the Senate Executive Reports. Miscellaneous reports are just that: widely varied in subject matter and content. With the possible exception of the treaty documents, any of these can have some relevance for genealogists.
The documents and reports in the Serial Set are numbered sequentially within each Congress, no matter what their subject or origin. The documents were then collected into volumes, which were then given a sequential number within the Serial Set. The set currently stands at over 15,000 volumes, accounting for more than 325,000 individual documents and 11 million pages.
The Serial Set amounts to an incredible amount of documentation for the 19th century. Agency annual reports, reports on surveys and military expeditions, statistics and other investigations all appear and thoroughly document the activities of the federal government.
In 1907, however, the Public Printing and Binding Act provided guidelines for what should be included, resulting in many of these types of reports no longer being included as they were also issued separately by the individual agencies. The number of copies was also trimmed. With that stroke, the value of the Serial Set was lessened, but it nevertheless stands as a valuable genealogical resource for the 19th century.
So what is available for genealogists? The following examples are just some of the types of reports and information that are available.
The Serial Set contains much information concerning land claims. These claims relate to bounty for service to the government as well as to contested lands once under the jurisdiction of another nation.
In House Report 78 (21-2), there is a report entitled “Archibald Jackson.” This report, from the House Committee on Private Land Claims, in 1831, relates to Jackson’s claim for the land due to James Gammons. Gammons, a soldier in the 11th U.S. Infantry, died on February 19, 1813, “in service of the United States.” The act under which he enlisted provided for an extra three month’s pay and 160 acres of land to those who died while in service to the United States. However, Gammons was a slave, owned by Archibald Jackson, who apparently never overtly consented to the enlistment but allowed it to continue. That Gammons was eligible for the extra pay and bounty land was not in dispute, but the recipient of that bounty was. Jackson had already collected the back pay in 1823 and was petitioning for the land as well. The report provides a decision in favor of Jackson, as he was the legal representative of Gammons, and as such entitled to all of his property. (2)
Land as bounty was one issue, and another was claims for newly annexed land as the country spread west. In 1838, the House of Representatives published a report related to Senate Bill 89 concerning the lands acquired through the treaty with Spain in 1819 that ceded East and West Florida to the United States. Claims to land between the Mississippi and the Perdido Rivers, however, were not a part of that treaty and had been unresolved since the Louisiana Purchase, which had taken the Perdido River as one of its limits. The report provides a background on the claims as well as lists of the claimants, the names of original claimants, the date and nature of the claim, and the amount of the land involved. (3)
Other land claims are represented as well. In 1820, the Senate ordered a report to be printed from the General Land Office containing reports of the land commissioners at Jackson Court House. These lands are located in Louisiana and include information that would help a genealogist locate their ancestor in this area. Included in this report is a table entitled “A List of Actual Settlers, in the District East of Pearl River, in Louisiana, prior to the 3d March, 1819, who have no claims derived from either the French, British, or Spanish, Governments.” The information is varied, but a typical entry reads: No. 14, present claimant George B. Dameson, original claimant Mde. Neait Pacquet, originally settled 1779, located above White’s Point, Pascag. River, for about 6 years. (4)
Among the reports in the Serial Set for the 19th century are the annual reports to Congress from the various executive branch agencies. Congress had funded the activities of these organizations and required that each provide a report concerning their annual activities. Many of these are printed in the Serial Set, often twice: the same content with both a House and a Senate document number. Annual reports in the 19th century were very different from the public relations pieces that they tend to be today.
Besides providing information about the organization and its activities, many included research reports and other (almost academic) papers. In the annual reports of the Bureau of Ethnology, for instance, one can find dictionaries of Native American languages, reports on artifacts, and in one case, even a genealogy for the descendants of a chief. (5)
These reports can often serendipitously include information of interest to the family historian. For instance, the annual report of the solicitor of the Treasury would not necessarily be a place one expects to find family information. The 1844 report, however, does have some useful material: pages 36 and 37 contain a “tabular list of suits now pending in the courts of the United States, in which the government is a part and interested.”
Many on the opposite side of the case were individuals. An example is the case of Roswell Lee, late a lieutenant in the U.S. Army, against whom there has been a judgment for over $5,000 in 1838. Lee was sued in a court in Massachusetts and in 1844 still owed over $4,000. In a letter dated May 5, 1840, the district attorney informed the office (6)
that Mr. Lee is not now a resident of the district of Massachusetts, and that whether he ever returns is quite uncertain; that nothing, however, will be lost by his absence, as the United States have now a judgment against him, which probably will forever remain unsatisfied.
Another set of annual reports that appear in the Serial Set are those for the Patent Office. The annual reports of the commissioner of patents often include an index to the patents that were granted that year, arranged by subject and containing the names of the invention and the patentee and the patent number. The report included a further description of the patent, and often a diagram of it as well. Each year’s report also included an index by patentee.
Unfortunately, the numbers of patents granted in later years, as well as their complexity, led to more limited information being included in later reports. The 1910 report, for instance, simply contains an alphabetical list of inventions, with the entries listing the patentee, number, date, and where additional information can be found in the Official Patent Office Gazette. (7)
The Civil War gave rise to a number of medical enhancements and innovations in battlefield medicine, and the annual report for 1865, published in 1867, contains a reminder of that in the patent awarded to G. B. Jewett, of Salem, Massachusetts, for “Legs, artificial.” Patent 51,593 was granted December 19, 1865, and the description of the patent on page 990 provides information on the several improvements that Jewett had developed. The patent diagram on page 760 illustrated the text. (8)
This annual report relates to a report from May 1866, also published in the Serial Set that same session of Congress, entitled “Artificial Limbs Furnished to Soldiers.” This report, dated May 1866, came from the secretary of war in response to a congressional inquiry concerning artificial limbs furnished to soldiers at the government’s expense. Within its 128 pages are a short list of the manufacturers of these limbs, including several owned by members of the Jewett family in Salem, Massachusetts, New York, and Washington, D.C., as well as an alphabetical list of soldiers, detailing their rank, regiment and state, residence, limb, cost, date, and manufacturer. Constantine Elsner, a private in B Company of the 20th Massachusetts living in Boston, received a leg made by G. B. Jewett at a cost of $75 on April 8, 1865. (9) This may have been an older version of the one that Jewett would have patented later in the year, or it may have been an early model of that one. Either way, a researcher would have some idea not only of what Elsner’s military career was like, but also some sense of what elements of life for him would be like after the war.
Congress also was interested in the activities of organizations that were granted congressional charters. Many of the charters included the requirement that an annual report be supplied to Congress, and these were then ordered to be printed in the Serial Set.
One such organization is the Daughters of the American Revolution (DAR). As one would expect, the DAR annual reports contain a great deal of genealogical and family history information. The 18th annual report is no exception. Among other things, it includes, in appendix A, a list of the graves of almost 3,000 Revolutionary War soldiers. The list includes not just a name and location, but other narrative information as well:
Abston, John. Born Jan. 2, 1757; died 1856. Son of Joshua Abston, captain of Virginia militia; served two years in War of the American Revolution. Enlisted from Pittsylvania County, Va.; was in Capt. John Ellis’ company under Col. Washington. The evening before the battle of Kings Mountain, Col. Washington, who was in command of the starving Americans at this point, sent soldiers out to forage for food. At a late hour a steer was driven into camp, killed, and made into a stew. The almost famished soldiers ate the stew, without bread, and slept the sleep of the just. Much strengthened by their repast and rest, the next morning they made the gallant charge that won the battle of Kings Mountain, one of the decisive battles of the American Revolution. Washington found one of the steer’s horns and gave it to Abston, a personal friend, who carried it as a powder horn the rest of the war. (10)
Another organization whose annual reports appear is the Columbia Institution for the Deaf and Dumb, which later became Gallaudet University. These reports, found in the annual reports of the secretary of the interior, contain much of what one would expect: lists of faculty and students, enrollment statistics, and other narrative. While that information can help to provide information about one’s ancestor’s time there, there are other parts of the narrative that include information one would not expect to find.
For instance, the 10th annual report for 1867 has a section entitled “The Health of the Institution.” It concerns not the fiscal viability of the institution but rather the occurrences of illness and other calamities. One student from Maryland, John A. Unglebower, was seized with gastric fever and died: “He was a boy of exemplary character, whose early death is mourned by all who knew him.” Two other students drowned that year, and the circumstances of their deaths recounted, with the hope that “they were not unprepared to meet the sudden and unexpected summons.” (11) Both the faculty and the student body contributed their memorials to these two students in the report.
Other organizations represented in the Serial Set are the Boy Scouts and Girl Scouts of America, Veterans of World War I of the United States, proceedings of the National Encampment, United Spanish War Veterans, the American Historical Association, and the National Convention of Disabled American Veterans.
Lists of Pensioners
The history of pensions provided by the federal government is beyond the scope of this article. However, the Serial Set is a source of information about who was on the rolls at various times. For instance, an 1818 letter from the secretary of war was published containing a list of the persons who had been added to the pension list since May 28, 1813. The list provides information on the likes of Susanna Coyle, certificate of pension no. 9, heiress of deceased soldier William Coyle, alias Coil, a private who received pay of four dollars per month. (12)
Sundry lists of pensions appeared in 1850, related to the regulation of Navy, privateer, and Navy hospital funds. The report included four lists: those placed in the invalid list who were injured while in the line of duty; those drawing pensions from wounds received while serving on private armed vessels; widows drawing pensions from their husbands who were engineers, firemen, and coal-heavers; and orphan children of officers, seamen, and marines pensioned under the act of August 11, 1848. (13)
One of the most widely consulted lists is that for 1883, “List of Pensioners on the Roll, January 1, 1883” (Senate Executive Document 84 [47-2]). This five-volume title, arranged by state and then county of residence, provides a list of each pensioner’s name, his post office, the monthly amount received, the date of the original allowance, the reason for the pension, and the certificate number.
An example is the case of Eli G. Biddle, who served in the 54th Massachusetts. Biddle can be found on page 439 of volume 5 of the “List,” and a researcher can learn several things without even having seen his pension file: his middle name is George, he was living in Boston in 1883, and he was receiving four dollars each month after having suffered a gunshot wound in the right shoulder. His pension certificate number, 99,053, is also provided, and with that one could easily order the appropriate records from the National Archives.
The Serial Set serves as a source of military registers and other lists of government personnel as well. Both Army and Navy registers appear after 1896. The Army registers for 1848–1860 and the Navy registers for 1848–1863 are transcripts of the lists that appeared the preceding January and include pay and allowances, with corrections to that earlier edition for deaths and resignations.
The Official Register, or “Blue Book,” a biannual register of the employees of the federal government, appears for 10 years, from 1883 to 1893. If one’s ancestors were employees at this time, their current location and position, place from which they were appointed, date of appointment, and annual compensation can be gleaned from this source.
The Serial Set often provides unexpected finds, and the area of registers is no exception. There is a great deal of material on the Civil War, from the 130 volumes of the Official Records of the War of the Rebellion to other investigations and the aforementioned registers and lists of pensions. There are not, however, large amounts of compiled unit histories.
One exception, however, is the report from the adjutant general of Arkansas. Shortly after the Civil War, the adjutant general offices of the various Union states prepared reports detailing the activities of the men from their states. The same was done in Arkansas, but the state legislature there, “under disloyal control,” declined to publish the report. Senator Henry Wilson of Massachusetts, chairman of the Senate Committee on Military Affairs, brought it to the committee in 1867, and it was ordered to be printed in the Serial Set so that the loyal activities of these 10,000 men would be recognized. (14) The report includes brief histories of each unit as well as a roster of the unit and rank, enlistment date, and other notes on each soldier.
Accessing Information in the Serial Set
The indexing for the Serial Set has long been troublesome to researchers. Various attempts have been made to provide subject access, with varying degrees of success. Many of the indexes in the volumes themselves are primarily title indexes to the reports from that Congress and session. The Checklist of United States Public Documents, 1789–1909, does provide information about what reports listed therein do appear in the Serial Set, but the researcher has to know the name of the issuing agency in order to access that information. The Document Index provides some subject indexing by Congress, and other efforts such as those by John Ames and Benjamin Poore can also be used, but none index the tables and contents of many of the reports that have been discussed in this article. (15)
The best comprehensive print index is the Congressional Information Service’s (CIS) U.S. Serial Set Index, produced in conjunction with their microfilming of the volumes through 1969 beginning in the mid-1970s. In this index, a two-volume subject index covers groups of Congresses, with a third volume providing an index to individual names for relief actions, as well as a complete numerical list in each report/document category. The index, however, does not index the contents of the documents. For instance, although the title given for the Archibald Jackson land claim includes James Gammons’s name, the latter does not appear in the index to private relief actions. In addition, users must often be creative in the terms applied in order to be sure that they have exhausted all possibilities. In the mid-1990s CIS released these indexes on CD-ROM, which makes them somewhat easier to use, although the contents are essentially the same.
The indexing problems have been rectified by the digitization of the Serial Set. At least two private companies, LexisNexis and Readex, have digitized it and made it full-text searchable.
[The Serial Set and American State Papers are available in GenealogyBank. Click here to search them online]
This article can only hint at some of the genealogical possibilities that can be found in the Congressional Serial Set. It has not touched on the land survey, railroad, western exploration, or lighthouse keeper’s reports or many of the private relief petitions and claims. Nonetheless, the reports and documents in the Serial Set provide a tremendous and varied amount of information for researchers interested in family history.
Jeffery Hartley is chief librarian for the Archives Library Information Center (ALIC). A graduate of Dickinson College and the University of Maryland’s College of Library and Information Services, he joined the National Archives and Records Administration in 1990.
1 For a more complete description of the American State Papers, and their genealogical relevance, see Chris Naylor, “Those Elusive Early Americans: Public Lands and Claims in the American State Papers, 1789–1837,” Prologue: Quarterly of the National Archives and Records Administration 37 (Summer 2005): 54–61.
2 H. Rept. 78 (21-2), 1831, “Archibald Jackson” (Serial 210).
3 H. Rept. 818 (25-2), 1838, “Land Claims between Perdido and Mississippi” (Serial 335).
4 S. Doc. 3 (16-2), 1820, “Reports of the Land Commissioners at Jackson Court House” (Serial 42).
5 H. Misc. Doc. 32 (48-2), 1882, “3rd Annual Report of the Bureau of Ethnology” (Serial 2317).
6 H. Doc. 35 (28-1), 1844, “Annual Report of Solicitor of the Treasury” (Serial 441), p. 37.
7 H. Doc. 1348 (61-3), 1911, “Annual Report of the Commissioner of Patents for the Year 1910” (Serial 6020).
8 H. Exec. Doc. 62 (39-1), 1867, “Annual Report of the Commissioner of Patents for the Year 1865″ (Serial 1257-1259).
9 H. Exec. Doc. 108 (39-1), 1866, “Artificial Limbs Furnished to Soldiers” (Serial 1263).
10 S. Doc. 392 (64-1), 1916, “Eighteenth Report of the National Society of the Daughters of the American Revolution, October 11, 1914, to October 11, 1915” (Serial 6924), p. 155.
11 H. Exec. Doc. 1 (40-2), “Tenth Annual Report of the Columbia Institution for the Deaf and Dumb” (Serial 1326), pp. 429–430.
12 H. Doc. 35 (15-1), 1818 (Serial 6), p. 17.
13 See H. Ex. Doc. 10 (31-2), 1850, “Sundry Lists of Pensioners” (Serial 597).
14 See S. Misc. Doc 53 (39-2), 1867, “Report of the Adjutant General for the State of Arkansas, for the Period of the Late Rebellion, and to November 1, 1866″ (Serial 1278).
15 A good discussion of how some of these indexes work can be found in Mary Lardgaard, “Beginner’s Guide to Indexes to the Nineteenth Century U.S. Serial Set,” Government Publications Review 2 (1975): 303–311.
The Labrador Retriever originated in Newfoundland, Canada. Small water dogs were used to retrieve birds and fish; they even pulled small boats through the water. Their strong desire to work, versatility, and waterproof coats impressed fishermen, one of whom brought a dog back to England with him. Lord Malmsbury saw this dog, then called a St. John's Dog, and imported several from Newfoundland. Lord Malmsbury is credited with having started to call the dogs Labradors, although the reason is lost to history. Eventually, the English quarantine stopped additional imports from coming into the country, and the Labradors already in England were cross-bred to other retrievers. However, breed fanciers soon put a stop to that, and the breed as we know it today was born.
The Labrador Retriever is probably the most popular dog breed in the world.
The Labrador Retriever is a medium-sized, strongly built breed that retains its hunting and working instincts. Standing between 21.5 and 24.5 inches tall and weighing between 55 and 80 pounds, with females smaller than males, the breed is compact and well-balanced. Labrador Retrievers have short, weather-resistant coats that can be yellow, black, or chocolate. The head is broad, the eyes are friendly, and the tail is otterlike. Grooming a Labrador Retriever is not difficult, although it is amazing how much the coat can shed at times. Shedding is worst in spring and fall, when the short, dense undercoat and coarser outer coat lose all the dead hair. Brushing daily during these times will lessen the amount of hair in the house.
Photo: Labrador Retriever puppies – Brown, Black and Yellow.
Labrador Retrievers do everything with vigor. When it’s time to play, they play hard. When it’s time to take a nap, they do that with enthusiasm, too. But this desire to play and instinct to work means that Labs need vigorous exercise every day and a job to do. They need to bring in the newspaper every morning, learn to pick up their toys, and train in obedience. Labrador Retrievers do very well in many canine activities, including agility, flyball, field tests and trials, tracking, search-and-rescue work, and therapy dog work. Labrador Retrievers still enjoy swimming, and if water is available, a swim is a great way to burn off excess energy.
Early socialization and training can teach a Labrador Retriever puppy household rules and social manners. Training should continue throughout puppyhood and into adulthood so that the Labrador Retriever’s mind is kept busy. The Labrador Retriever can learn advanced obedience, tricks, or anything else her owner wishes to teach her.
Labrador Retrievers are great family dogs. They will bark when people approach the house but are not watchdogs or protective. Labrador Retriever puppies are boisterous and rambunctious and need to be taught to be gentle with young children. Older kids will enjoy the Lab’s willingness to play. Most Labrador Retrievers are also good with other dogs and can learn to live with small pets, although interactions should be supervised. Health concerns include hip and elbow dysplasia, knee problems, eye problems, and allergies.
The Labrador Retriever loves to swim. However, as unlikely as it may seem, Labs do not come “out of the box” knowing how to swim. Furthermore, some Labs become truly nervous around water. That said, most Labs can be taught to swim quickly and easily, and a few simple lessons can lead to hours of enjoyment for both you and your dog. There are a number of reasons to teach your Lab to swim while he’s still a pup. For one thing, it’s easier on the dog: a large dog has a lot of body weight to manage in the water, and for a dog new to swimming, this can steepen the learning curve. Puppies, because of their small size, have an easier time.
See 7.5-week-old Labrador pups go to the water for the first time.
Even before you teach your Lab to swim, you can start off on the right foot by building his confidence around water. Take your dog for a walk around the local pond or lake. Encourage any interest your dog shows in the water with verbal praise. If he is willing to get his feet wet, encourage him to do so and praise him when he does. Simple preliminaries like this lay a strong foundation because they teach the dog that there is no reason to fear water. Remember that the primary goal here is to provide positive experiences for your Lab around and in the water. Making sure that the aquatic site you’ve chosen is safe goes a long way toward ensuring such experiences.
Labrador Retriever Videos
Training Labrador Retrievers: Training Labrador puppies is best started around two months of age, about the time the puppy is weaned from his mother. This life-long commitment is the beginning of a wonderful relationship between owner and dog. Here are some videos from dog training expert Melanie McLeroy to help you get started.
Teach your Labrador to learn and respond to their name.
Teach your Labrador to sit on command.
Teach your Labrador to stay on command.
Teach your Labrador to lie down on command.
Teach your Labrador to heel.
Teach your Labrador to come on command.
Before your Labrador training starts, you have to decide on the training method you intend to use. This method needs to be applied consistently, so the decision is one that requires some research. Many professional animal trainers use what’s called positive reinforcement, an approach based on the belief that animals are much better behaved and easier to train when they’re earning rewards and praise than when they’re being punished.
Do you ever feel like no matter what you do to get healthier or fit, you don’t seem to succeed? Have you tried every diet out there, only to find the restrictions too difficult to manage? Have you attempted in earnest to maintain a fitness program, only to find that, despite your best efforts, keeping a consistent routine is just as difficult?
With all these starts and stops over years and decades, you may think: What’s the point of trying? Or maybe you’ve given up and resolved to just accept poor health as part of your life. Well, the inability to “stick with it” has many facets, some of which you may not be able to control. In fact, research shows that the self-control needed to succeed in many of these cases may be a limited resource.
In 2006, Michael Inzlicht and colleagues at the University of Toronto Scarborough studied what happens in the brain when humans try to abstain from something they want. That is, when we try to use willpower to refrain from acting on our urges to do something specific.
Failure to control one’s behavior is found in all aspects of life. It includes acting out, saying mean things, stealing and drug abuse. It also encompasses not doing things that are good for you, like walking, eating healthy food and getting plenty of rest.
Inzlicht set up a study, published in the journal Psychological Science, which tested participants’ self-control over time. Participants were first asked to do something to deplete their “store” of willpower or behavioral control and then see how much they had left for another, unrelated task.
First, participants watched an emotionally upsetting movie and were asked to suppress their emotions and try not to cry during especially difficult scenes. Following this, participants completed what is called a Stroop task. The Stroop task is a psychological test that measures the reaction time needed to name the ink color of a color word when the two do not match: for example, saying “red” when the word “green” is printed in red ink. While this seems simple, if you try it you will see how much self-control it takes not to blurt out the printed word itself, suppressing that urge and replacing it with the correct response.
During both the watching of the film and the Stroop task, participants’ brain activity was measured by an EEG (electroencephalography) device. This records the electrical activity on the scalp to measure voltage changes within the brain’s neurons.
What the researchers discovered was intriguing. When participants had to restrain themselves and exert quite a bit of self-control (when not expressing emotions or when trying to say the names of colors), there was an increase of brain activity in the part of the brain’s frontal lobe known as the anterior cingulate cortex. This is the region of the brain involved in autonomic functions, like regulating blood pressure and heart rate, as well as rational cognitive functions, such as reward anticipation, decision-making and emotion.
The interesting finding in this study is that there was less frontal lobe activity with the Stroop task after watching the gut-wrenching film. In other words, when a fair amount of self-control was previously used on one task, the next time it was needed there was less available for use. These findings suggest that people may not have as much willpower or control over their behavior as time progresses and demands are placed on them to exert such control.
It is pretty discouraging to think that the human brain is capable only of providing a strong degree of self-control during a given time period. That might seem to leave most of us with little hope for change. Think about it: If we use self-control to not eat a sticky bun with breakfast and force ourselves to take that morning jog, then we will have less available control over our behavior when it comes to making lunch and dinner choices, or passing on the second round of drinks, or going to the gym or to yoga class. Is it any wonder why so many fail at diets and exercise routines time and time again?
Well, this needn’t be the case, and more information has recently been published on the issue. A study again headed by Inzlicht, this time with colleague Brandon Schmeichel of Texas A&M University, appeared in the September issue of the journal Perspectives on Psychological Science. In this further research, Inzlicht finds that the “limited resource” model of self-control is too narrow and does not explain the exceptions: the times when self-control stays in place and one is able to maintain the level necessary to effect positive change by making repeated good choices. This study shows that willpower is not a “use it or lose it” resource, as previously thought, but is more closely tied to motivation.
While previous research apparently pointed to a decrease in the amount of willpower available with each passing task requiring some form of self-control, this conclusion may be flawed because of the generic activity used in the studies. In other words, researchers had set up lab situations wherein subjects had no strong motivation influencing their behavior.
The more recent study indicates that mood, personal beliefs, positive reinforcement and motivation play a big role in exerting willpower. Inzlicht and Schmeichel propose that “engaging in self-control by definition, is hard work; it involved deliberation, attention, and vigilance.”
It’s not the case that resisting an extra piece of bacon at breakfast uses up our daily store of willpower, making self-control more difficult later in the day when it is needed. Rather, our motivation to exert willpower weakens as the day goes on: at that later time, we tend to want to reward ourselves for hard work already done.
In the end, as with everything else affecting health and well-being, you can divide your circumstances into things you can do to help reach your goals and things beyond your control.
In the case of self-control, you need long-term behavior modification for success. My experience has shown that trying to restrict too many things is what leads to failure. For example, trying to set new exercise goals, diet routines and sleep patterns all at the same time creates an overwhelming struggle.
Instead, making one change for a few weeks before adding another seems to allow the brain and behaviors to reshape and recondition to the new activity. Repetition over time turns a self-controlled behavior into a habit that then keeps taking place on autopilot. Once the first piece of the healthy behavior is under new control, add the second piece, and so on. In this way, you don’t run out of your willpower stores, you don’t deplete your motivation and you learn new healthier behaviors along the way. Without behavior modification, all programs for change will fail.
Think about times you have tried to make positive changes in your life and have fallen short or failed. Then think about how many things you were trying to control for at that time. Also consider the moments when you were on the path to success but allowed yourself an indulgence for work well done, and that indulgence set you back in your efforts.
If you analyze in light of the research on self-control, you can find the way forward. It reminds me of the old maxim: “Inch by inch, life is a cinch; yard by yard, it’s very hard.” Which leads to another appropriate maxim: “The journey of a thousand miles begins with the first step.”
Slow down your efforts to be healthier into manageable steps, and over time new behaviors will arise that make self-control easier overall and wellness restoration an achievable goal. | <urn:uuid:3ece9155-b933-4d67-b97e-036f5f903fb3> | CC-MAIN-2013-20 | http://easyhealthoptions.com/alternative-medicine/nutrition/the-key-to-changing-your-life/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.965235 | 1,607 | 2.921875 | 3 |
Snow leopard population discovered in Afghanistan
The Wildlife Conservation Society has discovered a surprisingly healthy population of rare snow leopards living in the mountainous reaches of northeastern Afghanistan's Wakhan Corridor, according to a new study. The discovery gives hope for the world's most elusive big cat, which makes its home among some of the world's tallest mountains. Between 4,500 and 7,500 snow leopards remain in the wild, scattered across a dozen countries in Central Asia.
The study, which appears in the June 29th issue of the International Journal of Environmental Studies, is by WCS conservationists Anthony Simms, Zalmai Moheb, Salahudin, Hussain Ali, Inayat Ali and Timothy Wood.
WCS-trained community rangers used camera traps to document the presence of snow leopards at 16 different locations across a wide landscape. The images represent the first camera trap records of snow leopards in Afghanistan. WCS has been conserving wildlife and improving local livelihoods in the region since 2006 with support from the U.S. Agency for International Development (USAID).
"This is a wonderful discovery – it shows that there is real hope for snow leopards in Afghanistan," said Peter Zahler, WCS Deputy Director for Asia Programs. "Now our goal is to ensure that these magnificent animals have a secure future as a key part of Afghanistan's natural heritage."
According to the study, snow leopards remain threatened in the region. Poaching for their pelts, persecution by shepherds, and the capture of live animals for the illegal pet trade have all been documented in the Wakhan Corridor. In response, WCS has developed a set of conservation initiatives to protect snow leopards. These include partnering with local communities, training of rangers, and education and outreach efforts.
Anthony Simms, lead author and the project's Technical Advisor, said, "By developing a community-led management approach, we believe snow leopards will be conserved in Afghanistan over the long term."
WCS-led initiatives are already paying off. Conservation education is now occurring in every school in the Wakhan region. Fifty-nine rangers have been trained to date. They monitor not only snow leopards but other species including Marco Polo sheep and ibex while also enforcing laws against poaching. WCS has also initiated the construction of predator-proof livestock corrals and a livestock insurance program that compensates shepherds, though initial WCS research shows that surprisingly few livestock fall to predators in the region.
In Afghanistan, USAID has provided support to WCS to work in more than 55 communities across the country and is training local people to monitor and sustainably manage their wildlife and other resources. One of the many outputs of this project was the creation of Afghanistan's first national park – Band-e-Amir – which is now co-managed by the government and a committee consisting of all 14 communities living around the park.
Snow leopards have declined by as much as 20 percent over the past 16 years and are considered endangered by the International Union for Conservation of Nature (IUCN).
WCS is a world leader in the care and conservation of snow leopards. WCS's Bronx Zoo became the first zoo in the Western Hemisphere to exhibit these rare spotted cats in 1903. In the past three decades, nearly 80 cubs have been born in the Bronx and have been sent to live at 30 zoos in the U.S. and eight countries in Europe, Asia, Australia, and North America.
Source: Wildlife Conservation Society
Woodrow Wilson, as described in the introductory section of the text, was the leader of the immediate post-war period and the architect of an internationalist vision for a new world order. Yet, as discussed in the paragraphs below, he was not able to persuade the other Allied leaders at the peace settlement negotiations in Paris to embrace his vision. But it was not just the opposition of Clemenceau and Lloyd George to some of his ideas that moved the conference away from Wilson's vision. Wilson became so blindingly caught up in his vision, thinking that everything he advocated was what democracy and justice demanded, that he completely alienated the other negotiators in Paris, and they stopped listening to him. Another historian points to a different problem: that Wilson himself stopped listening to his earlier vision, having become convinced that a harsh peace was justified and desirable. Even if that historical view is accurate, Wilson was probably still more moderate in his conception of a harsh peace than were Clemenceau and Lloyd George.

But as the conference dragged on and the departure from Wilsonianism became more and more pronounced, Wilson clung to his proposal for the League of Nations. In fact, he seemed to place all his faith in his pet project, believing it would solve all the evils the negotiators were unable to resolve during the conference. Unfortunately, Wilson made it clear that the League was his primary objective, and it came to be his only bargaining chip. He then compromised on numerous issues that had no corollary in his vision in order to maintain support for the creation of the League. Thus, though full of good intentions and a vision for a just and peaceful future, Wilson, through his arrogance and ineffective negotiating skills, largely contributed to the downfall of his own vision.

Finally, it must be mentioned that Wilson's inability to negotiate with the Senate during its consideration of the ratification of the Treaty of Versailles caused the Senate to reject the Treaty, leaving the United States noticeably absent from the newly created League of Nations and greatly undermining the effectiveness and importance of Wilson's principal goal. Nonetheless, Wilson was awarded the 1919 Nobel Peace Prize for his efforts to secure a lasting peace and his success in the creation of the League of Nations.
David Lloyd George, the British Prime Minister, entered the negotiations in Paris with the clear support of the British people, as evidenced by his convincing win in the so-called khaki election of December 1918. During the weeks leading up to the election, though, he had publicly committed himself to work for a harsh peace against Germany, including obtaining payments for war damages committed against the British. These campaign promises went against Lloyd George's personal convictions. Knowing that Germany had been Britain's best pre-war trading partner, he thought that Britain's best chance to return to its former prosperity was to restore Germany to a financially stable situation, which would have required a fairly generous peace with respect to the vanquished enemy.

Nonetheless, his campaign statements showed Lloyd George's understanding that the public did not hold the same convictions as he did, and that, on the contrary, the public wanted to extract as much as possible out of the Germans to compensate them for their losses during the war. So Lloyd George and Clemenceau were in agreement on many points, each one seeming to support the other in their nationalist objectives, and thereby scratching each other's back as the "game of grab" of Germany's power played itself out. But most historians do not attribute to Lloyd George a significant role in the Treaty negotiations.
In their defense, Clemenceau and Lloyd George were only following popular sentiment back home when they fought for harsh terms against Germany. It is clear from historical accounts of the time that, after seeing so many young men fail to return from the trenches of the Western Front, the French and British wanted to exact revenge against the Germans through the peace settlement, to ensure that their families would never again be destroyed by German aggression. In that respect, democracy was functioning exactly as a representative system intends. In fact, Lloyd George is the quintessential example of an elected leader serving the interests of his people, putting his personal convictions second to British public opinion. Yet it was that same public opinion (in France and Britain) that Wilson had believed would support his internationalist agenda, placing Germany in the context of a new and more peaceful world order that would prevent future aggression. Wilson's miscalculation was one of the single greatest factors leading to the compromise of his principles and the resulting harsh and, in the eyes of many, unjust treatment of Germany within the Treaty of Versailles.
[See also the biographies of the Big Three listed on the Links page.]
1. James L. Stokesbury, A Short History of World War I, 1981, p. 309.
2. Manfred F. Boemeke, "Woodrow Wilson's Image of Germany, the War-Guilt Question, and the Treaty of Versailles," in The Treaty of Versailles: A Reassessment After 75 Years, Ch. 25, Boemeke, Feldman & Glaser, eds., 1998, pp. 603-614.
3. Robert H. Ferrell, Woodrow Wilson and World War I: 1917-1921, 1985, p. 146.
4. Lawrence E. Gelfand, "The American Mission to Negotiate Peace: An Historian Looks Back," in The Treaty of Versailles: A Reassessment After 75 Years, Ch. 8, Boemeke, Feldman & Glaser, eds., 1998, p. 191.
5. See Ferrell, supra note 3, Ch. 10, "The Senate and the Treaty."
6. Information from this paragraph is taken from Ferrell, supra note 3, at 142, 144, 151.
7. Id. at 151.
8. Stokesbury, supra note 1, at 311-312. | <urn:uuid:54521255-4567-40ea-9b12-eccf47e11bd7> | CC-MAIN-2013-20 | http://faculty.virginia.edu/setear/students/sandytov/Big_Three.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.976382 | 1,231 | 4.1875 | 4 |
“Utopian” has almost become a put-down or a suggestion that one is being unrealistic, if not naive. But I would argue that socialists must be utopian, not in the sense of expecting fundamental change instantaneously, but in the sense of holding in their very being the deep desire for the realization of a world completely unlike our own. It is that for which generations have fought and it is that ideal that has kept many a freedom fighter going despite tremendous adversities.
What is especially interesting about the history of capitalism is that with its rise there also emerged the impulse towards alternatives. These alternatives were not necessarily elaborated as eloquently as were the theories behind capitalism and, specifically, democratic capitalism, but they were nevertheless important. The oppressive and often criminal nature of rising capitalism brought with it revolutionary movements that challenged either the system itself or components of the system. These revolts took various forms, such as the slave revolts that spanned the entire period of the African slave trade.
Peter Linebaugh and Marcus Rediker’s The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic offers a glimpse into the world of the North Atlantic and the development of capitalism. It was a world of significant resistance carried out by men and women; slaves and the free; mutinies and worker conspiracies. And in most cases there was a deep desire, sometimes elaborated, toward a not-always-defined freedom from the exploitation and oppression that accompany capitalism.
With this as a backdrop, one can see that the desire for a utopia has always been a component of progressive and revolutionary anti-capitalism. Utopia was not simply a dream, but it represented the ideological and spiritual outlines of the ideal alternative. It became something for which movements fought. For many, that utopia took the name “socialism.”
In the 19th century, there were two diametrically opposed approaches to the question of socialism. On the one hand, there were the formations of local communities based on ideal socialist principles, such as equality and shared work. These were generally referenced as examples of “utopian socialism.” These communities attempted to live side-by-side with capitalism, hoping to demonstrate a viable alternative. Yet in their failure to tackle the system itself, these communities were strangled by the ever-growing amoral beast of capitalism.
In contrast, there were revolutionary movements, initially based in Europe, that sought to gain power for workers through struggle. Karl Marx and Frederick Engels were only two of those associated with this approach. These movements also co-existed (and usually not very well) with revolutionary anarchists who envisioned the immediate end of not only capitalism, but any governmental/state system.
It was also during the 19th century that the first great experiment in the creation of a worker’s state took place during the short-lived Paris Commune of 1871. This urban uprising of the dispossessed shook the world and suggested that worker power was more than a slogan.
The 20th century was the moment for the great socialist experiments, beginning with the Russian/Soviet Revolution in October 1917, and continuing on with China, Vietnam, Cuba and numerous other locales. Time and space do not permit anything approaching an exhaustive look at the twists and turns of the socialist experiments of the last century and the many conclusions that we could draw. For the purposes of this essay, let us say that revolutionary transformation proved to be far more difficult than the overthrow of a particular state structure.
Among other things, capitalism is not simply about a ruling class of capitalists, but about toxic practices, many of them day-to-day, which people have learned over generations and, as the great Italian Marxist Antonio Gramsci would say, have come to be accepted as “common sense.” These practices and expectations operate like the ghostly hands of demons in a graveyard reaching out and placing often unexpected constraints on the ability to break free of such haunted spaces.
We also discovered that socialism was about far more than economics. It must be about the expansion of democracy and the actual control over the lives of working people by the workers themselves. This means that there will be mistakes, setbacks, and detours. But the people themselves need to take these on, since there is no omnipotent individual or organization that can ensure success in a process that knows no guarantees.
Socialism, then, is not a utopia but a step in a process that takes us in the direction of an ideal: a society free of all exploitation and oppression, with the elimination of all oppressing and oppressed classes. For me, it is summarized not in the text of a great socialist treatise but, ironically perhaps, in the words of a fictional character, Captain Jean-Luc Picard of the starship Enterprise, in the film Star Trek: First Contact. In explaining to someone from the 21st century the economics of the 24th century, he says, “The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves…and the rest of humanity.”
Such an era, however, is a very long way off, and humanity will have to earn admission to that new historical epoch through the trials and tribulations associated with transforming the way that we live our lives and the way that we treat the planet.
Each day we must struggle to get one step closer.
From “What Democracy? The case for abolishing the United States Senate,” by Richard N. Rosenfeld in the May 2004 Harper’s.
Americans believe in the idea of democracy. We fight wars in its name and daily pledge allegiance to its principles. Curiously, the fervor with which we profess our faith in democracy is matched only by the contempt with which we regard our politics and politicians. How interesting that we should so dislike the process that we claim to revere. Perhaps, however, our unhappiness with politics points to something significant; perhaps Americans dislike the daily reality of their political system precisely because it falls short of being a proper democracy. Indeed, in the last presidential election, we saw a man take office who did not win the popular vote. Money above all else shapes our political debate and determines its outcome, and in the realm of public policy, even when an overwhelming democratic majority expresses its preference (as for national health insurance), deadlocks, vetoes, filibusters, and “special interests” stand in the way. No wonder so few people vote in national elections; we have become a nation of spectators, not citizens.
The United States of America is not, strictly speaking, a democracy; indeed, the U.S. Constitution was deliberately designed to prevent the unfettered expression of the people’s will. Yet the Founders were not, as some imagine, of one mind concerning the proper shape of the new American union, and their disputes are instructive. The political dysfunction that some imagine to be a product of recent cultural decadence has been with us from the beginning. In fact, the document that was meant to prevent democracy in America has bequeathed the American people a politics of minority rule in which our leaders must necessarily pursue their unpopular aims by means of increasingly desperate stratagems of deceit and persuasion.
Yet hope remains, for if Americans have little real experience of democracy, they remain a nation convinced that the best form of government is by and for the people. Growing numbers of Americans suspect that all is not right with the American Way. Citizens, faced with the prospect of sacrificing the well-being of their children and grandchildren on the altar of supply-side economics, the prospect of giving up new schools and hospitals so that the colony in Iraq might have zip codes and modern garbage trucks, have begun to ask hard questions. Politics, properly understood as the deliberate exercise of citizenship by a free people, appears to be enjoying a renaissance, but the hard point must be made nonetheless that tinkering with campaign-finance reform is unlikely to be sufficient to the task. True reform becomes possible only if Americans are willing to return to the root of our political experiment and try again. And if democracy is our aim, the first object of our constitutional revision must be the United States Senate.
“We now have probably the most powerful upper house of any legislature,” Ritchie said. “Combine that with the inequality, and it creates some peculiar situations.” Not all small states are G.O.P. strongholds. (Hello, Vermont, Delaware and Rhode Island.) And it’s true that Obama won the 2008 nomination thanks in part to racking up caucus victories in states such as Idaho and Wyoming. But since Obama took office, senators from the wide-open spaces have asserted themselves against him over and over. Conrad opposed his plan to cut subsidies for wealthy farmers. Chuck Grassley (R-Iowa) pushed to focus transportation funding in the stimulus bill on rural areas and last week blocked the lifting of sugar tariffs to protect the ethanol industry. –“The Gangs of D.C.: In the Senate, small states wield outsize power. Is this what the Founders had in mind?” by Alec Macgillis, The Washington Post
Ezra Klein with Harper’s editor Luke Mitchell on the Leonard Lopate show today; insurance and citizenship; the leading cause of catastrophic injury in young women: cheerleading; a review of Why This World: A Biography of Clarice Lispector by “New Books” author Benjamin Moser.
According to plant pathologists, this killer round of blight began with a widespread infiltration of the disease in tomato starter plants. Large retailers like Home Depot, Kmart, Lowe’s and Wal-Mart bought starter plants from industrial breeding operations in the South and distributed them throughout the Northeast. (Fungal spores, which can travel up to 40 miles, may also have been dispersed in transit.) Once those infected starter plants arrived at the stores, they were purchased and planted, transferring their pathogens like tiny Trojan horses into backyard and community gardens. Perhaps this is why the Northeast was hit so viciously: instead of being spread through large farms, the blight sneaked through lots of little gardens, enabling it to escape the attention of the people who track plant diseases. It’s important to note, too, that this year there have been many more hosts than in the past as more and more Americans have taken to gardening… the explosion of home gardeners— the very people most conscious of buying local food and opting out of the conventional food chain— has paradoxically set the stage for the worst local tomato harvest in memory. –“You Say Tomato, I Say Agricultural Disaster,” Dan Barber, The New York Times
Suggestions for flavours range from Gooey Decimal System to Sh-sh-sh-sherbet. Woodworth writes on Facebook that the logic behind the scheme is that “libraries are awesome, Ben & Jerry’s ice-cream is tasty, therefore a library-themed Ben & Jerry’s ice-cream would be tasty awesome.” Gooey Decimal System could combine dark fudge alphabet letters with caramel swirls in hazelnut ice-cream, he suggests, while Dusty Stacks could be a layered ice-cream with speckles of cocoa in every layer. Li-Berry pie could mix lime sherbet with raspberry sauce and pie-crust pieces, and Overdue Fine as Fudge Chunk could drop fudge brownies and white chocolate coins into milk chocolate ice-cream swirled with caramel. The fine details of Sh-sh-sh-sherbet aren’t pinned down quite yet– it could be key lime, or possibly a vanilla/chocolate combination –“Book Fans Develop a Taste for Library-themed Ice-cream,” Alison Flood, The Guardian
A reading for Cultural Anthropology
by Walter Trobisch
(adapted from Readings in Missionary Anthropology II, edited by William Smalley. Used under the educational "fair use" provision of the 1976 U.S. Copyright Acts.)
"Jesus took the man aside, away from the crowd ... and said to him 'Be opened.' With that his ears were opened, and at the same time the impediment was removed." (Mark 7:33-35)
What we need is a message tailored for each individual. In a concrete situation, general principles alone are not enough. Let us therefore take three people aside -- away from the crowd. Let us try to help them and take responsibility for them as a congregation. All three of them are real persons. They come from three different African countries, thousands of miles apart, but I shall not tell you from which countries and I have changed their names. They have given me permission to use their cases as an example, so I am not breaking their confidence.
Joseph is a 26-year-old teacher at a mission school. I never met him, but we corresponded for almost three years. He wrote me after he had read my book I Loved a Girl:

"Three years ago I married a 15-year-old person. I have ten years of schooling, my wife only six. God blessed us one year ago with a baby. I purposely did not choose a girl with a higher level of education, for I intended to educate my wife in order that she become exactly as I wanted her to be in her work and cleanliness, in her whole life. But she does not satisfy me any more with her obedience. She does not do what I command her to do. If I insist, we quarrel. I ask you for a solution to save this young marriage."
In order to help Joseph we have to understand his way of thinking. For him, marriage is an alliance with an inferior being. For him, a woman is primarily a garden. Man is then primarily the bearer of the seed of life. Such is their mutual destiny. Their destiny decides their function. Their function defines their relationship. According to this conception the woman can never be as important as the man, any more than the soil can be as important as the seed. By her very nature, she is secondary, auxiliary. This is the root of all discrimination between man and woman that has shaped the history of mankind, not only in Africa, but also in Asia and -- until recently -- in Europe and America. This conception of marriage is not only based on a wrong and inaccurate biology. It is also not in accordance with the New Testament which conceives of husband and wife as equal partners before God.
My task was to change Joseph's image of marriage. Here is my answer:

"Joseph, you have not married a wife. You have married a daughter. You were looking for a maid, obedient to your commandments. She was 15 when you married her. Now she is 18. In these three years she has developed from a girl into a young woman. In addition, she has become a mother. This has changed her personality completely. She wants to be treated as a person. She wants to become your partner . . . It strikes me that your quarreling started after God gave you a baby. How long is the period of lactation in your tribe? Could it be that your quarreling has a deeper reason? It is not God's will for a married couple to abstain from physical union for such a long time."

Joseph's answer came quickly:

"You are exactly right . . . It is true that we abstain from sex relations for two years after the birth of a child . . . This habit is incorporated in us. Otherwise we are afraid of losing the baby, especially if the mother breast-feeds it and if it is a boy . . . My father-in-law pointed this out to me when our child was born."
The practice of abstaining from sex relations during the period of lactation presupposes a polygamous society. According to the biological conception of marriage, a man can have several gardens to be planted one after the other. A garden can have only one proprietor. Joseph wants to be a Christian. He has been taught by his church that polygamy is sin. But he has been left with this negative message. He has not received any positive advice on how to live with one wife as a partner, nor has he been told how to space his children.
It is interesting that Joseph did not confide his problem to his pastor. Evidently he did not expect any help from the pastor. Still, Joseph looks for a counselor. He may find that counselor in a questionable friend, maybe one who is not even a Christian, and he may be advised to do things which are poison for his marriage. The method our couple uses for spacing their children -- complete abstention -- will lead to estrangement, and husband and wife will slowly drift apart.
Let us imagine that Joseph had tried to solve his problem by taking a second wife. It is evident that refusing him communion as punishment for this action would have been the most inadequate answer to his problem of how to space his children. What is needed in Africa are not church disciplinarians, but marriage counselors.
Suppose, in this case, that Joseph had not simply gone ahead and taken a second wife, but had instead confided his intention to his pastor and explained his motive. Would the pastor have been able to help? Would the pastor have received enough training in this respect at the seminary? When a Christian takes a second wife, it is mostly because his congregation has not carried responsibility for him.
It is unkind and merciless if missionaries condemn polygamy as sin, but keep silent to Africans about methods of conception control which they themselves use. It is even more so because a missionary usually has powdered milk at his disposal while an African villager does not.
Let us imagine another possibility. Maybe Joseph did not take a second wife, but secretly had sexual relations with an unknown girl, or even the wife of another man. In other words, he had committed adultery. Now, since he wants to be a Christian, his conscience hurts him. What could he have done? Would he have found someone in your congregation to whom he could have gone, confessed his sin and received the absolution? If he had come to you, whether you are a pastor or not, would you have known what to do?
What is needed in Africa are not ex-communicators but confessors who keep the secret of confession absolute. What kind of training do our pastors receive in this respect? Here is the heart of the congregation's responsibility for the individual. The offer of private confession is probably the most helpful contribution the Lutheran Church could make to the African churches as a whole. Martin Luther said: "No one knows what private confession can do for him, except he who has struggled much with the devil. Yes, the devil would have slain me long ago, if the confession would not have sustained me."
It is also possible that Joseph would not have dared to confess, but maybe you would have heard anyway about his sin. Then it would have been your duty to go to him. Responsibility for the individual means to take the initiative. Just as God has taken the initiative in Jesus Christ and has spoken to us without our inviting Him, so we have to take the initiative and talk to our brother, even if he does not ask us. This is "church discipline" according to the New Testament. "Go ye therefore ..." not to put him out of the church but to win him back to Jesus Christ (Matthew 18:15; 2 Thessalonians 3:15; 2 Timothy 2:25). Church discipline means to go and to win, not to wait and to judge.
There is not time to report the case of Joseph in full. The relationship between him and his wife improved after I informed him about other methods of conception control. Later on a new problem arrived. The family moved from the village to town. While living in the village Joseph's wife had fed her family from that which she had grown in her own garden. But in town she did not have a garden. She had to go shopping. Joseph had to give her money, which had rarely happened before.
Here is Joseph's letter:

"Tell me how to make up a family budget and how to convince a woman -- however idiotic she may be -- to keep it. Most of the time my wife buys things which we don't need and then they spoil."

I made up a detailed monthly budget according to Joseph's income and included as one item, "pocket money for each one of you." Joseph wrote:

"My wife was very happy about it. After we had divided up the money, she was frank enough to tell me also the criticisms which she had in her heart about my spending habits. She was overwhelmed by joy to see the item, 'pocket money for each one of you.'"
This was, after almost three years of correspondence, the first time that Joseph had reported to me a reaction of his wife. The fact that he had shared my letter with her, that he even listened to her reproaches, but above all the fact that he gave her spending money, shows that his marriage had grown from a patriarchal pattern where the husband-father dominates his wife, into a marriage of partnership. A garden cannot rejoice and talk. One cannot listen to a garden. Joseph's wife had changed from a garden to a person. She had become a wife.
Formerly, the course of life was channeled. The individual made very few decisions on his own. The road was marked by customs and traditions. This had changed now. The individual has to make up his mind about many things which formerly were decided by the family and the group. But -- as the case of Joseph and his wife shows -- the individual is not trained to make these decisions. Counseling therefore becomes indispensable. It belongs to the responsibility of the congregation. It is the service which the Christian church must give in a situation of social change.
The work of the counselor can best be compared with "swimming." The time is past when a counselor could stand on a solid hilltop and give prefabricated rules and commandments to the counselee. The counselor has to descend from the hilltop and go into the water. Counselor and counselee have to swim together. The picture of "swimming" expresses the uncertainty of the situation. At the outset the counselor may be more in need of advice than his counselee. But he swims together with him, trying to make out beforehand the whirlpools and the rapids, the islands and the riverbanks. For a limited time, while exploring the situation for clarification and solutions, the counselor becomes the partner of his counselee. God is in this situation, and the counselor has to find his will together with the counselee. Only what the latter is able and willing to accept and put into practice will help him.
The development of Joseph's marriage during the time of our correspondence proves that marriage guidance by letter can be fruitful. It may even be easier to confide the most intimate problems to a complete stranger. Because of the long distances and the lack of trained counselors, marriage guidance by mail has great promise in Africa, all the more because a personal letter there is highly treasured. It gives the receiver the experience of "being taken aside, away from the crowd," to have his impediment removed.
Marriage guidance is not only a counseling task. It is also a missionary opportunity. Since marriage is part of practical Christian living, the Christian marriage counselor has the possibility of proclaiming the Gospel to non-Christians along with the advice he gives. Marriage has become the problem of life today. People of all confessions, religions, classes and races are interested in it. Every heathen, Muslim or Communist will listen to those who have something useful to say about marriage. As Christians, I believe we do have something useful to say. But do we say it? Or is the church in possession of a treasure of knowledge and wisdom, keeping it locked up instead of handing it out?
Elsie is a high school student and the daughter of a "minister of religion," as she calls it. I know her, too, only by letter. She wrote to me and asked: "How can I meet a Christian boy?" I advised her to attend church. There she could meet boys.
Here is her answer:

"The old people in our churches don't want boys to meet girls, not even to talk to them in their presence. Always the Sunday service begins by speaking against boys and girls. This has turned away most of the boys and girls from attending church. The other day the pastor said: 'If any boy has written to you a letter, return it to him and tell him never to write to you any letter.'"

I answered, but for a long time did not hear from Elsie. Later I learned that her school principal had confiscated my letter. I was not on the list of men with whom she was allowed to correspond. So my letter went to her parents, who lived in a small village hundreds of miles away from her school. It took three months before the permission came and my letter was handed over to Elsie.
Finally she wrote again:

"I have met a boy who is not of my tribe. He is a keen Christian and a student in a secondary school. It appears to me as if he would make a good husband according to the direction in your book I Loved a Girl. I went home and talked to my parents about him. They said they would not allow me to marry from any other tribe apart from mine. They claim that men from my boyfriend's tribe are going about with other women, even if they are married. I have tried to tell them that not all men from that tribe are bad, but they insist on my marrying someone from my own tribe. Since we are told that we should honor our parents, I cannot do something which is against their will. To make it worse: I do not live at home. I know very few boys from my own tribe. Seeing that this boy is interested in me, should I disregard my parents' advice?"
In my answer I advised Elsie to take her boyfriend home once and present him to her parents so that they could meet him as a person. If she is certain about God's will for her marriage, she should obey God more than men.
Elsie's answer:

"My parents have become impossible. They cannot approve the choice I have made. They say they have heard rumors that the man I have chosen was misbehaving at college. But ever since I met him, he has never showed me any nonsense. I have decided to remain single for the whole of my life, unless I can marry him."

Marriage between two Christians must be based on mutual trust and confidence. Confidence demands free choice. Free choice demands opportunities where young people can meet in a healthy atmosphere without suspicion. It belongs to the responsibility of the congregation to provide such opportunities. Many marriage problems in Africa have their root in the fact that the couple never had time and opportunity to really meet and get acquainted before marriage.
Many African boys and girls have a list with the names of a limited number of persons with whom they correspond. In a society where the meeting of the sexes is still difficult, also for outward reasons, we have to recognize that letter-writing can be a good means of establishing contact. Instead of intercepting mail, schools should rather teach criteria for evaluating a letter and give helpful instructions for answering.
Elsie's case reflects two areas of conflict. There is the conflict between the younger and older generations. Dealing with parents, uncles and grandparents is probably the thorniest problem of a marriage counselor in Africa. It has been overlooked that, in a fast-changing society, the education of the older generation is also a responsibility of the congregation. The church may have to speak out on the rules of exogamy (the tradition forcing a young man to find a bride outside a defined group of relatives) or endogamy (conversely, the rule that a bride can be found only within a close circle of relatives). Once a young African wrote me that he had 11,000 girls ("sisters") in his tribe whom he could not marry. Unfortunately, he had fallen in love with one of them.
There is also the conflict between individual freedom and the obligation to tradition and family. Elsie has new possibilities of choice, unknown to her parents. She is caught between (1) making use of this freedom and (2) submitting to rules originating from customs no longer relevant to her situation. Like Joseph, she is in need of personal counseling in her new freedom.
Her decision to renounce this freedom and the wish of her heart, even against the advice of her counselors, raises a number of questions:
- If you had been her counselor, what would you have advised her to do?
- Assuming that God called Elsie to stay single, would it be possible for her to put this' decision into practice?
- Does our church have a message for single girls?
- What would be the responsibility of her congregation for her?
- Is the decision against individual freedom and for submission to tradition always God's will?
- Where are the limitations of the fourth commandment?
- What is behind the attitude of her parents? (Her father is a pastor !)
- How far is the "biological" conception of marriage also at work here?
- Will they be pleased by her "obedience" or rather be shocked that their "garden" shall never be planted?
- What could be done to help her parents to better understand their daughter?
Elsie's case is an encouraging one. She has character. She proves that the oncoming generation of African girls is able to make up its own mind instead of being pushed around and dominated. She is on her way to mature womanhood. Africa's future will depend upon this growth. There will be no free nations unless there are free couples. There will be no free couples unless the wife grows into true partnership with her husband. It is the responsibility of the congregation to help toward such growth. It is the solution for Joseph's case as much as for Elsie's, and even for our next case.
On one trip, I worshipped in an African church where nobody knew me. After the service I talked to two boys.
"How many brothers and sisters do you have?" I asked the first one.
"Are they all from the same stomach?"
"Yes, my father is a Christian."
"How about you?" I addressed the other boy.
He hesitated. In his mind he was adding up. I knew immediately that he came from a polygamous family.
"We are nine," he finally said.
"Is your father a Christian?"
No," was the typical answer, "he is a polygamist."
"Are you baptized?"
"Yes, and my brothers and sister too," he added proudly.
"And their mothers?"
"They are all three baptized, but only the first wife takes communion."
"Take me to your father."
The boy led me to a compound with many individual houses. It breathed cleanliness, order and wealth. Each wife had her own house and her own kitchen. The father -- a middle-aged, good-looking man, tall, fat and impressive -- received me without embarrassment and with apparent joy. Omodo, as we shall call him, was well-educated, wide awake and intelligent, with a sharp wit and a rare sense of humor. From the outset, he made no apologies for being a polygamist. He was proud of it. Here is the essential content of our conversation, which lasted several hours.
"Welcome to the hut of a poor sinner!" The words were accompanied by good-hearted laughter.
"It looks like a rich sinner," I retorted.
"The saints come very seldom to this place," he said, "they don't want to be contaminated with sin."
"But they are not afraid to receive your wives and children. I just met them in church."
"I know. I give everyone a coin for the collection plate. I guess I finance half of the church's budget. They are glad to take my money, but they don't want me."
I sat in thoughtful silence.
After a while he continued, "I feel sorry for the pastor. By refusing to accept the polygamous men in town as church members, he has made his flock poor. They shall always be dependent upon subsidies from America. He has created a church of women whom he tells every Sunday that polygamy is wrong."
"Wasn't your first wife heart-broken when you took a second one?"
Omodo looked at me almost with pity. "It was her happiest day," he said finally.
"Tell me how it happened."
"Well, one day after she had come home from the garden and had fetched wood and water, she was preparing the evening meal, while I sat in front of my house and watched her. Suddenly she turned to me and mocked me. She called me a `poor man,' because I had only one wife. She pointed to our neighbor's wife who could care for her children while the other wife prepared the food."
"Poor man," Omodo repeated. "I can take much, but not that. I had to admit that she was right. She needed help. She had already picked out a second wife for me and they get along fine."
I glanced around the courtyard and saw a beautiful young woman, about 19 or 20, come out of one of the huts.
"It was a sacrifice for me," Omodo commented. "Her father demanded a very high bride price."
"Do you mean that the wife, who caused you to become a polygamist is the only one of your family who receives communion?"
"Yes, she told the missionary how hard it was for her to share her love for me with another woman. According to the church, my wives are considered sinless because each of them has only one husband. I, the father, am the only sinner in our family. Since the Lord's supper is not given to sinners, I am excluded from it. Do you understand that, pastor?"
I was entirely confused.
"And you see," Omodo continued, "they are all praying for me that I might be saved from sin, but they don't agree from which sin I must be saved."
"What do you mean?"
"Well, the pastor prays that I may not continue to commit the sin of polygamy. My wives pray that I may not commit the sin of divorce. I wonder whose prayers are heard first."
"So your wives are afraid that you become a Christian?"
"They are afraid that I become a church member. Let's put it that way. For me there is a difference. You see, they can only have intimate relations with me as long as I do not belong to the church. In the moment I would become a church member, their marriage relations with me would become sinful."
"Wouldn't you like to become a church member?"
"Pastor, don't lead me into temptation ! How can I become a church member if it means disobeying Christ? Christ forbade divorce, but not polygamy. The church forbids polygamy but demands divorce. How can I become a church member if I want to be a Christian? For me there is only one way: be a Christian without the church."
"Have you ever talked to your pastor about that?"
"He does not dare to talk to me, because he knows as well as I do that some of his elders have a second wife secretly. The only difference between them and me is that I am honest and they are hypocrites."
"Did a missionary ever talk to you?"
"Yes, once. I told him that with the high divorce rate in Europe, they have only a successive form of polygamy while we have a simultaneous polygamy. That did it. He never came back."
I was speechless. Omodo accompanied me back to the village. He evidently enjoyed being seen with a pastor.
"But tell me, why did you take a third wife?" I asked him.
"I did not take her. I inherited her from my later brother, including her children. Actually my older brother would have been next in line. But he is an elder. He is not allowed to sin by giving security to a widow."
I looked in his eyes. "Do you want to become a Christian?"
"I am a Christian." Omodo said without smiling.
What does it mean to take responsibility as a congregation for Omodo? I am sorry that I was not able to see Omodo again. Our conversation contains in a nutshell the main attitudes of polygamists toward the church. It is always healthy to see ourselves with the eyes of an outsider.
I asked myself: What would I have done if I were the pastor in Omodo's town? Let me share with you my thoughts and then ask for your criticism. They are based on many experiences in dealing with other polygamist families. Maybe you have better ideas than I have. Please, help me to help Omodo.
The trouble with Omodo is that, unlike Joseph or Elsie, he did not ask for help. But that does not mean that he is not in need of help. The fact that he did almost all the talking and hardly gave me a chance to speak proves his inner insecurity. His sarcasm showed me that deep down in his heart he was afraid of me.
In order to take this fear away, I accepted defeat. You will have noticed that I was a defeated person when I left him. If you want to win someone over, nothing better can happen to you than defeat. In the eyes of the world the cross of Jesus Christ was a defeat. Yet, God saved the world by this defeat. In talking with people we must remember this truth. We can easily win an argument, but lose a person. Our task is not to defend, but to witness.
Humble acceptance of defeat is often the most convincing testimony we can give for our humble Lord. It is the one thing which the other one does not expect. Counseling is not preaching at a short distance. It is ninety percent listening.
When I have a conversation like this I ask myself first of all, where is the other one right? I think Omodo is right in his criticism of contradictory church policies, which sometimes deny our own doctrines. We have made the church the laughing-stock of a potentially polygamous society. We have often acted according to the statement, "There are three things that last forever: faith, hope and love, but the greatest of them all is church order and discipline."
Some churches demand that a polygamous man separate himself from all his wives, some from all but one. Others demand that he keep the first one; others allow him to choose. Some allow that his wives stay with him under the condition that he has no intercourse with them.
Some do not even allow polygamists to enter the catechumen class. Others allow them, but do not baptize them. Again others baptize them, but do not give them communion. A few allow them full church membership, but forbid them to hold office.
The most generous solution was to baptize a polygamist only on his death bed. It happened to a Swedish missionary once that such a polygamist did not die but recovered after baptism. The church council decided, "Such things must not happen." They did not specify whether they referred to the recovery of the polygamist or to his baptism. We have made ourselves fools before the world with our policies. Let us admit honestly our helplessness first of all. We are facing a problem where we just do not know what to do.
Maybe our mistake is that we want to establish a general law for all cases. We want to be like God, knowing what is good and evil and have decided that monogamy is "good" and polygamy "evil" while the Word of God clearly does not say so. The Old Testament has no outspoken commandment against polygamy and the New Testament is conspicuously silent about it. Instead of dealing with polygamy, the Bible has a message for polygamists.
Therefore let us not deal with an abstract problem. Instead let us meet a concrete person. Let us meet Omodo. Is he a special case? Well, every one of us is a "special case" in one way or another. There are no two persons exactly alike. Still, if we can help in one case like that of Omodo's, we might find the key to deal with many others. So, what would I have done?
First of all, I would have gone back to visit him again. Church discipline, as the New Testament understands it, starts with me, not with the other one. If possible, I would have taken my wife along. I would have asked her to tell Omodo what she would think of me if I let her work all day in the garden, get wood and water, care for the children and prepare the food while I sit idly in the shade all day under the eaves of my hut and watch her work. I think she would have told him that he does not have three wives, but actually he has no wife at all. He is married to three female slaves. Consequently, he is not a real husband; he's just a married male. Only a real husband makes a wife a real wife.
In the meantime I would have talked to Omodo's first wife and told her precisely the same, that only a real wife makes a husband a real husband. I would have challenged her because she had not demanded enough from her husband. She had behaved like an overburdened slave trying to solve her problem by getting a second slave. Instead, she should have asked her husband to help her. She should have behaved like a partner and expected partnership.
She probably would have thought I was joking and not have understood at all. So I would have explained and we would have talked, visit after visit, week after week. Then finally I would have asked her why she ridiculed her husband. I am sure there was something deeper behind it, a concrete humiliation for which she took revenge, a hidden hatred.
At the same time, I would have continued to talk to Omodo -- not telling him anything which I had learned from his wife, but listening to his side of the story. I am sure I would have heard precisely the opposite of what his wife had said. I would have tried to make him understand his wife and to make his wife understand her husband. Then, maybe after months, I would have started to see them both together at the same time, possibly again accompanied by my wife.
The best way to teach marriage of partnership is by example. One day we were discussing this in our "marriage class," a one-year course I taught at Cameroon Christian College. The students were telling me that African women are just not yet mature enough to be treated as equal partners. While we were discussing this, rain was pouring down. Through the classroom window we watched the wife of the headmaster of our primary schools jump from her bicycle and seek refuge under the roof of the school building. After a little while a car drove up. Out stepped her husband. He handed her the car keys, and off she drove with the car, while he followed her on the bicycle, getting soaked in the rain. That settled the argument. It is up to the husband to make his wife a partner.
Then, one day, I would have taken up the case of the second wife. I can imagine her story. She probably was given into marriage with Omodo for a high bride price at a very young age. I would have tried to find out how she felt about her situation. Young and attractive as she was, I cannot imagine that she was so terribly excited by old fat Omodo. It is very likely that she had a young lover on the side. I have found that women in polygamous marriages often live in adultery, because their husbands, staying usually with one wife for a week at a time, are not able to satisfy them.
Solving the problem of the second wife would involve talks with her father and "fathers" and also with the young man she really loves. It would have been a hard battle, but I do not think a hopeless one. It is a question of faith. I would trust Jesus that He can do a miracle. I would ask some Christians in the congregation who understand the power of prayer to pray when I talk to the father. Every father wants to have a happy daughter. I would try to convince him to pay the bride price back to Omodo (or at least a part of it).
The first time I would have suggested to Omodo that he let his second wife go, he probably would have thrown me out the front door. So I would have entered again through the back door. I would have tried to tire him out with an unceasing barrage of love.
It is very important that by now a very deep personal contact and friendship is established between Omodo and me, a "partnership in swimming." In this partnership Jesus Christ becomes a reality between us even when His name is not mentioned in every conversation.
One day, I think, Omodo would have admitted that he did not take his second wife just out of unselfish love for his first one, but that he considered his first wife as dark bread when he had appetite for a piece of candy.
Now, we could start to talk meaningfully about sin. Not about the sinfulness of polygamy, but about concrete sins in his polygamous state.
To talk to a polygamist about the sinfulness of polygamy is of as little help as talking to a soldier about the sinfulness of war or to a slave about the sinfulness of slavery. Paul sent the slave Onesimus back to his slave master, while at the same time he proclaimed a message incompatible with slavery which finally caused its downfall. Paul broke the institution of slavery from inside, not from outside. This is a law in God's kingdom which can be called the "law of gradual infiltration." It took centuries until slavery was outlawed. God is very patient. Why are we so impatient?
So I would have talked to Omodo about his selfishness. I would have talked to his first wife about her hatred, lies and hypocrisy, and to the second one about her adultery. The minute they began to see how these things separate them from God, it would not have been difficult to make them aware of their need of forgiveness. Then we could have talked about reconciliation with God. This reconciliation would have happened through the absolution. "He has enlisted us in this service of reconciliation." (2 Corinthians 5:18)
After the experience of the absolution, we would have tried together to find the will of God for each person involved. Would the separation of Omodo from his second wife be a divorce? It depends upon whether we consider polygamy also as a form of marriage.
Parenthetically, I believe we may have to. Let us be fair. Polygamy is not "permanent adultery" as a missionary once tried to tell me. Adultery is never permanent. It is a momentary relationship in secrecy with no responsibility involved. Polygamy, on the other hand, is a public state, often based on a legally valid contract. It involves life-long responsibility and obligations. If polygamy is marriage, separation is divorce.
If we compare marriage with a living organism, husband and wife can be compared with the two essential organs, the head and the heart. In all more highly developed organisms, one head corresponds to one heart. Only primitive organisms are just a plurality of cells, as for example the alga Volvox globator. Parts of such an organism are relatively independent of the whole. A tapeworm can be cut apart and the parts are still able to live. One could compare polygamy with a primitive organism, which has not yet reached the state in which one head corresponds to one heart. Still, a tapeworm is one organism, as much as polygamy is marriage.
Our dilemma is that we want monogamy and we do not want divorce. Yet, in a polygamous situation we cannot have one without the other. There are situations in life where we have the choice between two sins and where the next step can only be taken in counting on the forgiveness of our crucified Savior. It is in such situations where Luther gave the advice in all evangelical freedom, "Sin bravely!" being guided by the love for your neighbor and by what is most helpful to that neighbor. For me, there is no doubt that in Omodo's case the most helpful solution for his second wife would be to marry the man she loves.
The case of Omodo's third wife, whom he had inherited from his late brother, is probably the most difficult one. In Omodo's case it was especially difficult, because she was blind. I would have gathered the elders of the church and explored possibilities with them on how to support her through congregational help in case she wanted to live independently. The way a congregation treats its widows is the best test of its willingness and ability to carry responsibility for the individual.
One question is still open: When would I have baptized Omodo? I do not know. One cannot answer this question theoretically. I hope you understand that what I have just described is not the work of an afternoon, but of months, maybe years. Under the condition that this work is done, the moment chosen for baptism is not of decisive importance. There are no chronological laws in the process of salvation.
I would not have baptized Omodo before he had an experience of private confession and absolution. But then, someplace along the way, I would have done it, asking God for guidance together with the congregation for the right moment.
We should get away from considering church discipline as a matter of sin and righteousness, but rather put it on the basis of faith and unbelief. Faith is not a nothing and the use of the sacraments is not a nothing. In case it would have taken years to find a solution for Omodo's wives, I would have expected such a solution as fruit of his baptism and not as a condition for it.
In the meantime, while working and praying for a solution, Omodo would have to "sin bravely," sensing his polygamous state more and more as a burden. As his brother in Christ, together with the congregation, I could only act then according to Galatians 6:2 which says: "Bear one another's burden and fulfill the law of Christ."
If we followed that course of action, would then the walls break and the church be flooded by polygamy? I do not believe so. For economic reasons, polygamy is on the retreat anyway in Africa. The current generation of Africans longs for a monogamous marriage of partnership. We overestimate ourselves if we always think we have to keep shoring up the walls so they won't break. The statement "God is a God of order" is not in the Bible. First Corinthians 14:33 reads, "God is not a God of disorder, but of peace."
Counseling the individual is putting congregational responsibility into practice. In the process of counseling, the unacceptable person is taken aside, away from the crowd, and unconditionally accepted. To help the individual in the name of the God of peace, we need both the rules and the exceptions. The counselor has to give himself into life with its many different situations and happenings and "swim" with his counselee. God is with them in the water.
By Julie Steenhuysen
CHICAGO (Reuters) - In the latest installment in the mammogram debate, a new study finds that getting a mammogram every other year instead of annually did not increase the risk of advanced breast cancer in women aged 50 to 74, even in women who use hormone therapy or have dense breasts, factors that increase a woman's cancer risk.
The findings, released on Monday by researchers at the University of California, San Francisco, support the conclusions of the U.S. Preventive Services Task Force, an influential government panel of health advisers, which in 2009 issued guidelines that said women should have mammograms every other year starting at age 50 rather than annual tests starting at age 40.
The controversial recommendations to reduce the frequency and delay the start of mammogram screening were based on studies suggesting the benefits of detecting cancers earlier did not outweigh the risk of false positive results, which needlessly expose women to the anguish of a breast cancer diagnosis and the ordeal of treatment.
The matter, however, is not settled. The American Cancer Society still recommends women be screened for breast cancer every year they are in good health starting at age 40, but the group is closely watching studies such as this.
"I don't think any one study ought to change everything," Dr. Otis Brawley, chief medical officer of the American Cancer Society, said in a telephone interview. But he added, "This is one of several studies that are all pointing in the same direction over the last several years."
Brawley said he did not expect screening recommendations from professional organizations to change in the next year, but he does see doctors moving toward a more personalized approach over the next five years. There may be some women who need to be screened every six months and others every two years depending on their breast density, family history and genetic testing.
In the latest study, Dr. Karla Kerlikowske of the University of California, San Francisco, and colleagues wanted to see whether risk factors beyond a woman's age play a role in the decision of when to start mammogram screening.
In addition to age, the team considered whether women had dense breast tissue - which has a higher ratio of connective tissue to fat - or took combination estrogen and progesterone hormone therapy for more than five years, both of which can increase the risk of breast cancer.
"If you have these risk factors, would it help if you got screened annually vs. every two years?" said Kerlikowske, whose study was published online in JAMA Internal Medicine.
To study this, the team analyzed data from 11,474 women with breast cancer and 922,624 without breast cancer gathered from 1994 to 2008. Even after looking at these other factors, the team found no increased risk of advanced cancer in women 50 to 74 who got a mammogram every other year instead of every year.
"It didn't matter whether you screened that group every year or every two years, the risk of advanced disease or having a worse tumor was no different," Kerlikowske said.
More frequent screening in these women did result in more false-positive results. Women aged 50 to 74 who had annual mammograms had a 50 percent risk of having a false-positive result over a 10-year period, but a 31 percent risk when they were screened every other year.
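To see why screening half as often cuts the false-positive burden so sharply, it helps to note that the risk compounds across repeated exams. The sketch below is our own illustration, not the study's method: the roughly 7 percent false-positive rate per mammogram and the assumption that exams are independent are ours, chosen because they reproduce numbers close to those reported.

```python
# Illustrative only: cumulative false-positive risk over a decade,
# assuming a ~7% false-positive rate per mammogram and independent
# exams (both assumptions are ours, not taken from the study).

PER_EXAM_FP_RATE = 0.07

def cumulative_fp_risk(per_exam_rate: float, n_exams: int) -> float:
    """Probability of at least one false positive across n screens."""
    return 1 - (1 - per_exam_rate) ** n_exams

annual = cumulative_fp_risk(PER_EXAM_FP_RATE, 10)   # 10 exams in 10 years
biennial = cumulative_fp_risk(PER_EXAM_FP_RATE, 5)  # 5 exams in 10 years

print(f"annual screening:   {annual:.0%}")    # ~52%, near the reported 50%
print(f"biennial screening: {biennial:.0%}")  # ~30%, near the reported 31%
```

Under those assumptions, halving the number of exams drops the ten-year false-positive probability from about 52 percent to about 30 percent, broadly in line with the study's figures.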
Studies suggest a false positive can have lasting psychological effects. A March study in the Annals of Family Medicine said, "Three years after a false-positive finding, women experience psychosocial consequences that range between those experienced by women with a normal mammogram and those with a diagnosis of breast cancer."
Breast density was a factor in younger women, however. When the team looked at screening frequency in women 40 to 49, they found those with extremely dense breasts who were screened every other year had a higher risk of having a more advanced cancer than those who got screened every year. Younger women also were far more likely to have false-positive results and undergo unnecessary procedures.
Without getting a mammogram in their 40s, Kerlikowske said, "women aren't going to know if they have extremely dense breasts."
Among women in their 40s, about 12 percent to 15 percent have extremely dense breast tissue. For these women, Kerlikowske said she recommends a mammogram if they have other risk factors for breast cancer, including a first-degree relative, such as a mother or a sister, with the disease.
"Once we see their breast density is high, we will offer annual mammography," she said.
The American College of Radiology and the Society of Breast Imaging, groups that represent radiologists, said the study's methodology was flawed because it used early and late breast cancers to determine the outcomes of breast screening rather than more refined measures of tumor size, nodal status and cancer stage, which could determine whether screening detected cancers at an earlier stage.
It also faulted the study for not being a closely controlled, randomized clinical trial. The study used data from the Breast Cancer Surveillance Consortium, a national mammography screening database that gathers information from community mammography clinics on millions of women.
"We're never going to have a randomized study. This is the best in terms of the type of study anyone can actually hope for," said Brawley, whose group monitors scores of breast studies from around the world each year. He said such a study would take decades and would be prohibitively expensive.
Catching cancers earlier does not always translate into lives saved, according to a November study published in the New England Journal of Medicine by Dr. Gilbert Welch of the Dartmouth Institute for Health Policy & Clinical Practice in New Hampshire.
That study suggested that as many as a third of cancers detected through routine mammograms may not be life-threatening, contradicting the deeply ingrained belief that cancer screening is always good.
Kerlikowske said the strength of her study is that it allows researchers to consider other risk factors, such as breast density, allowing doctors to offer women personalized choices about when to start breast cancer screening.
"We're trying to move it away from this idea that it all should be based on age. There should be some thoughtfulness to it," she said.
(Reporting by Julie Steenhuysen; Additional reporting by Bill Berkrot; Editing by Jilian Mincer and Douglas Royalty)
INDO-CHINA: The New Frontier
TIME, Monday, May 29, 1950
The U.S. now has a new frontier and a new ally in the cold war. The place is Indo-China, a Southeast Asian jungle, mountain and delta land that includes the Republic of Viet Nam and the smaller neighboring Kingdoms of Laos and Cambodia, all parts of the French Union.
For more than three years this land, in prewar times the rich French colony of Indo-China, has been suffering, on a lesser scale, the ruinous kind of civil war which won China for Communism. The Mao Tse-tung of the Indo-Chinese is a frail, but enduring comrade, who looks like a shriveled wizard; his nom de guerre is Ho Chi Minh (or One Who Shines). Chiang Kai-shek has no counterpart in Indo-China. The initial brunt of the Red attack has been borne by French soldiers. Meanwhile, the job of rallying native anti-Communist forces falls mainly on the meaty shoulders of the Emperor Bao Dai (or Guardian of Greatness), who now bears the official title of Chief of State of Viet Nam.
While the dust of the Chinese civil war was settling before the bemused eyes of the State Department, the U.S. paid scant attention to the Indo-Chinese struggle. It seemed largely a local affair between the French and their subjects. Since the dust has settled in China, Asia’s Communism is thrusting southward. Indo-China stands first on the path to Singapore, Manila and the Indies (see map).
Last January, led by Peking and Moscow, the world’s Communist bloc recognized Ho Chi Minh’s “Democratic Republic.” It was more than the Kremlin had ever done for the Communist rebels of Greece. Over the past several weeks, arms and other supplies were reported passing from Russia and China to the comrades in Indo-China. The stakes in Southeast Asia were big—as big as the global struggle between Communism and freedom.
A fortnight ago in Paris, U.S. Secretary of State Dean Acheson drew a line in the dust that has so long beclouded U.S. diplomacy. He implicitly recognized that the war in Indo-China is no local shooting match. He pledged U.S. military and economic aid to the French and Vietnamese. The U.S. thus picked up the Russian gauntlet.
What kind of frontier and what kind of ally had history chosen for the U.S.?
A Golden Asset. Unlike China, where U.S. traders and missionaries began a fruitful acquaintance more than a century ago, Indo-China has had little contact with Americans, either commercial, cultural or diplomatic.* The last comprehensive U.S. book on the country was published in 1937. Among other things, its author observed: "Indo-China lies too far off the main scene of action to play any but a secondary role in the Pacific drama."
In the pre-French past, most of Indo-China had been conquered by the Chinese, who had left their culture indelibly behind.† Through the last half of the 19th Century, the French converted Indo-China into a tight, profitable colonial monopoly. They explored its fever-laden jungles, lofty ranges, great river valleys. They discovered its antiquities, including the majestic 10th Century towers of Angkor Wat in northern Cambodia. They wrote about its mandarins, its Buddhist temples and Confucian family life.
The French invested $2 billion, built up Indo-China’s rice and rubber production; before World War II, the colony, along with Siam and Burma, was one of the world’s three leading rice exporters. Its surplus went to rice-short China, a fact of great significance these days in Communist China’s support of Communist Ho Chi Minh. All the raw rubber France needed came from Indo-China. There were other lucrative items: coal, wolfram, pepper, opium (which, to French shame, was sold to the natives through a state monopoly) and many jobs for a white bureaucracy. French politicians called the colony “our marvelous balcony on the Pacific.”
A Dangerous Liability. Indo-China is no longer a golden asset for France. As everywhere in the East, the old colonialism has died beneath the impact of Western nationalist, egalitarian ideas, a process greatly hastened by the Japanese march in World War II under the slogan “Asia for the Asiatics.” The French have bowed grudgingly to the times.
In an agreement signed March 8, 1949 with Bao Dai, they promised limited freedom for Viet Nam within the French Union. Under its terms, a Viet Nam cabinet has charge of internal affairs, the right to a national army. Paris keeps direct control of foreign policy, maintains military bases and special courts for Frenchmen, retains a special place for French advisers and the French language.
By that time the French were up to their necks in a costly campaign to crush Ho Chi Minh and his Communist bid for power. The civil war has cut rice production in half and disrupted the rest of Indo-China's economy. It has tied down 130,000 French troops, about half of the Fourth Republic's army, and thereby weakened the contribution France might make to Western Europe's defense. In lives, the Indo-China war has cost the French 50,000 casualties. In money, it has cost $2 billion—just about the sum of ECA aid to France.
Indo-China, in brief, has become a dangerous liability for France— nor does any realistic Frenchman think it can ever again be an asset. Why, therefore, spend more blood and treasure in thankless jungle strife? Why not pull out?
The answer is: more than French war weariness and prestige are at stake. If Indo-China falls to Communism, so, in all probability, will all of Southeast Asia.
For U.S. citizens, the first fact about their new frontier is that it will cost money to hold—much more than the French can pay alone, much more than the $15 million in arms and $23 million in economic aid thus far promised by Washington. The second fact is more compelling : the new frontier, if it is not to crumble, may need U.S. troops as well as French.
Otherwise, the U.S. might suffer another catastrophic defeat in the Far East.
A Question of Sympathy. The French have made more than the usual colonial mistakes. All too often, especially since they put the Foreign Legion and its German mercenaries to the work of restoring order after World War II, they have been arrogant and brutal toward the Indo-Chinese. They are paying for it now, for the bulk of Communist Ho Chi Minh’s support comes from anti- French, or anti-colonial Indo-Chinese. A sign over an Indo-Chinese village street tells the story; it reads “Communism, No. Colonialism, Never.”
But the issue of native sympathy is complex. The vast majority of the people are simple rice farmers, who want peace and order so they may tend their paddy-fields. Ho Chi Minh himself does not now preach Communism openly: his explanation is that his people have no understanding of the word. Besides a crude, hate-the-French appeal (including atrocity propaganda—see cut), he has another effective persuasion: terror. His guerrillas and underground operators stalk the countryside; his assassins and bomb-throwers terrorize the cities. Indefatigably he spreads the word that he is winning, as his comrades have won in China.
The result is that many are cowed into helping him, or at least staying out of the anti-Communist effort. Others, especially among the intelligentsia, sit on the fence, waiting to jump on the winning side. This is where Bao Dai comes in.
A Display of Strength. It is Bao Dai’s mission, and the U.S.-French hope, to rally his countrymen to the anti-Communist camp of the West. In this undertaking he needs time. “Nothing can be done overnight,” he says. He needs time to organize an effective native government, train an army and militia that can restore order in the villages, win over the doubting fence-sitters among the intelligentsia. Besides a military shield, he also needs a display of winning strength and patient understanding by his Western allies.
As a national leader, Bao Dai has his weaknesses, and largely because of them he does not enjoy the kind of popularity achieved by India’s Jawaharlal Nehru. But, as the lineal heir of the old monarchs of Annam, he is his nation’s traditional “father & mother,” its first priest (Buddhist) and judge. The French say that Bao Dai should act more decisively; whenever he does, there is impressive popular response.
Nehru’s government of India, trailed by Burma’s Thakin Nu, Indonesia’s Soekarno and even by the Philippines’ Elpidio Quirino, has so far refused to follow the major Western democracies in recognizing Bao Dai’s Viet Nam Republic. They look on him as a French puppet. But Bao Dai has shown a judgment on the crucial ideological conflict of his time that compares strikingly and favorably with the petulant, third-force position of Pandit Nehru.
Recently, for example, Bao Dai told a TIME correspondent about his impressions of Ho Chi Minh in 1946, when both leaders were cooperating with the French to establish a new Viet Nam regime.
“At first,” recalled Bao Dai, “we all believed the Ho government was really a nationalistic regime . . . I called Ho ‘Elder Brother’ and he called me ‘Younger Brother.’ . . .
“Then, I saw he was fighting a battle within himself. He realized that Communism was not best for our country. But it was too late. He could not overcome his own allegiance to Communism.”
A Royal Notion. Bao Dai is essentially a product of the old French colonialism—the best of it thwarted by some of the worst.
Born in 1913, the only son of the ailing Emperor Khai Dinh, he studied under Chinese tutors until nine. Then his father’s French advisers decided he should go to France for a Western education.
The emperor put on a parasol-shaped red velvet hat and a golden-dragon robe, accompanied his son on the first trip abroad for any of their dynasty. In Paris he put the prince under the tutelage of former Annam Governor Eugene Charles. "I bring you a schoolboy," said Khai Dinh. "Make of him what you will." Three years later, Khai Dinh died. He was buried in a splendid mausoleum at Hué; at the foot of his tomb lay his prized French decorations, toothbrush, Thermos bottles and "Big Ben" alarm clock. Bao Dai, who had come home for the funeral, was crowned the 13th sovereign of the Nguyen (pronounced New Inn) dynasty. He turned the throne over to a regent, and hurried back to Paris.
The young Emperor continued his Chinese lessons, studied Annamite chronicles, browsed through French history, literature and economics. He was especially fond of books on Henry IV, the dynast from Navarre who began the Bourbon rule in France with the cynical remark, “Paris is worth a Mass,” and the demagogic slogan, “Every family should have a fowl in the pot on Sunday.” Bao Dai put his money in Swiss banks (and thereby saved it from World War II’s reverses), collected stamps, practiced tennis with Champion Henri Cochet, learned ping-pong, dressed in tweeds and flannels, vacationed in the Pyrenees, scented himself heavily with Coty and Chanel perfumes.
Up to this point the Emperor had absorbed a good deal of the education of an intelligent, progressive French adolescent. He had high notions of applying what he had learned back home.
In 1932, at 19, Bao Dai formally took over the Dragon Throne at Hué; two years later he married beautiful Mariette-Jeanne Nguyen Huu Thi Lan, the daughter of a wealthy Cochin-Chinese merchant. The Empress Nam Phuong was a Roman Catholic, educated at Paris’ Convent “Aux Oiseaux.”
Bao Dai reigned but he did not rule. The French (Third Republic and Vichy) shrugged off his earnest pleas for social and economic reforms and more native political autonomy. Cleverly, as they thought, they encouraged the Emperor to devote himself to sport and pleasure.
Bao Dai was hunting tigers near his summer villa at Dalat when the Japanese, early in 1945, made their 1940 control of the colony official and complete. They surprised his party, took him prisoner, installed him as a puppet emperor—until their own capitulation to the Allies a few months later.
Agitator Ho. At this point, the lines of Bao Dai’s destiny first crossed those of his fellow Annamite Ho Chi Minh.
The two men made a dramatic contrast. The Emperor was young (then 32), plump, clean-shaven, bland-faced, fond of snappy Western sport clothes. Ho was aging (55), slight (hardly 5 ft. tall), goat-bearded, steely-eyed, usually seen in a frayed khaki tunic and cloth slippers. Ho Chi Minh, too, had gone to France for education. As a young man, he had been sent into exile by the French police of Indo-China because of his family's nationalist agitation. His father and a brother went to political prison for life. A sister received nine years of hard labor.
In Paris, Ho (then known as Nguyen Ai Quoc) became a photographer’s assistant, wrote anti-imperialist articles. He also joined the French Communist Party. He was sent to Moscow for training, became a Comintern functionary, re-emerged in 1925 at Canton, where he helped Russian Agent Borodin in Communism’s first attempt to seize China.
From Hong Kong in 1931 Ho Chi Minh organized the first Indo-Chinese Communist Party. The British clapped him into jail for a year. When he came out, he continued organizing Red cells in his country. Japan and World War II gave him his big chance.
Using popular front-tactics, Ho established the Viet Minh—League for the Independence of Viet Nam. It directed guerrilla war against both Vichy French and Japanese, enlisted the support of many Indo-Chinese nationalists. American OSS agents and arms were parachuted to Ho’s side.
“Uncle Ho.” By the time the French were ready to pick up the postwar strings again in Indo-China, Communist Ho was very much a popular hero, better known as “Uncle Ho.” He spoke a “soft” Communist line, talked more about freedom, democracy and reform. Bao Dai was in a different position. He had suffered in reputation because he had “gotten along” with Vichy French and Japanese.
The returning French began negotiations with the Viet Minh leader. There were polite hints that Bao Dai must go—he was too “unpopular.” Bao abdicated, and Ho was in the saddle.
Bao Dai stayed on in Indo-China for a while, as plain citizen Nguyen Vinh Thuy and Honorary Councilor to the Republic. Nobody had much use for him. He went abroad and flung himself into a reckless round of pleasure and sport.
Playboy. Most of his time he spent at Cannes, on the French Riviera, where he had bought the palatial Château de Thorenc (reported purchase price: $250,000). In his garage were a pale blue Lincoln convertible, a black Citroen limousine, a blue Simca "Gordini" one-seat racer, a sleek Italian two-seater, a Simca-8 sports model. He also kept several motorcycles. He insisted that every engine run "as accurately as a watch."
He dallied in the bars and casinos, chain-smoked cheap Gauloise cigarettes, treated hangers-on to champagne and caviar, played roulette for 10,000-franc chips (“His Majesty’s losses,” remarked a croupier, “befitted his rank”), sometimes conducted jazz bands, sent his secretary to open negotiations with the many women who caught his eye. (“My grandfather had 125 wives and 300 children,” Bao Dai once remarked to a journalist. “I have a few mistresses. What then?”) He played golf capably and bridge like a master. A crack shot with rifle or revolver, he often arranged target competitions with the château’s servants.
Meanwhile the French, back in Indo-China, had broken with Ho Chi Minh, were floundering in a Communist-led nationalist uprising. They appealed to Bao Dai to come home again and help rally his people against the Red menace. They promised to grant Viet Nam gradual independence within the new French Union. Bao was persuaded. On March 8, 1949, he signed the document creating the new Indo-Chinese Republic which he would head as chief of state. As he left the gaudy safety of the Riviera for the hazards of a country torn by civil war, he grinned and said: "I risk my skin." French Communists snarled: "Cet empereur des boîtes de nuit [this nightclub emperor]."
Behind him, at the Château de Thorenc, he left Empress Nam Phuong and their family of two boys and three girls.
Statesman. Bao Dai has been back in Indo-China about a year. He has made some progress, but it is slow and the difficulties are enormous. The French have promised his government more authority, but they are vague in making good and sometimes stupidly petty. One point of friction between Bao Dai and French High Commissioner Léon Pignon concerns the high commissioner’s residence in Saigon. It is the old imperial palace, and the symbol, in native eyes, of paramount place. Bao Dai wants it for his own use, and he stays away from the city lest he lose face by residing elsewhere. The French, with bureaucratic pigheadedness, have refused to part with it, though there are reports that they will soon do so.
Another disappointment has been Bao Dai’s effort to enlist capable ministers and lower-echelon administrators. Partly this is because so many Vietnamese are fence-sitters or fear the terror of Viet Minh agents. Partly it is a consequence of French failure, in the past and at present, to train enough natives to take over the government. Bao Dai seems to be counting on U.S. pressure to loosen up the French in this respect.
Most serious failure is the sluggish pace in recruiting a Viet Nam army. Bao Dai’s government has thus far assembled only four battalions, about 4,000 men.
Field of Decision. Though Ho Chi Minh’s forces (70,000 regulars with equipment as good as the French, plus 70,000 well-trained guerrillas and an unknown number of poorly equipped village militia) have been pushed back from such centers as Hanoi in northern Viet Nam, French officers report that “the situation steadily grows no better.”
French Commander in Chief Marcel Carpentier aims to sweep Ho Chi Minh’s men from the lower, heavily populated Mekong and Red River valleys. These are the best rice-producing areas and consequently the best source of rebel supply. By airlift and truck convoy, the French maintain a line of forts at the Chinese border, where aid could flow in for Ho.
It is rugged hit & run fighting in forest and swamp terrain well suited for guerrilla tactics. By day the French control about half the countryside; and if they want to, they can penetrate where they will, though ambush takes its toll. At night, however, the French draw into their forts and garrisoned centers. Then Ho Chi Minh's men steal forth, terrorize peasants, collect taxes (two-fifths of a farmer's rice harvest), and run the countryside almost everywhere.
The French insist that the military problem is the No. 1 problem, and that Western men and arms must lick it. Given sufficient U.S. equipment, up to $150 million a year or more, they think they can crush Ho Chi Minh within three years. Lacking such support, they may be facing a debacle within one year; and, of course, down in the wreckage would go Bao Dai.
The Piecemeal Approach. All in all, the new U.S. ally in Southeast Asia is a weak reed. And the alliance is as ironic as anything in history. For the same U.S. Government which abandoned the Chinese Nationalists because they were not good enough was committed by last fortnight’s decision to defend a playboy emperor and the worst and almost the last example of white man’s armed imperialism in Asia.
Nevertheless, Indo-China had to be defended—if it could be defended. So had Formosa, last stand of China’s Nationalists, which has advantages not to be found in Indo-China—a strong government, a well-trained defending fighting force, and easily defensible tactical position. The U.S. decision to go into such a doubtful project as the defense of Indo-China was the result of an idea that it ought to do something, somehow, to stop the Communists in Southeast Asia. But the U.S. policy in Indo-China was a piecemeal operation. Not until it saw the Southeast Asia problem whole, until it went to the help of all threatened governments, would the U.S. be making soldier’s or statesman’s sense.
* There was one abortive attempt to get acquainted in the 1830s, when President Andrew Jackson sent Envoy Edmund Roberts of New Hampshire to draw up a treaty with Emperor Ming Mang. Reported Roberts: "The insulting formalities required as preliminaries to the treaty . . . left me no alternative save that of terminating a protracted correspondence marked . . . by duplicity and prevarication in the official servant of the Emperor." Roberts was told to 1) make five kowtows, 2) beg for "deep condescension," 3) change a sentence in President Jackson's letter to the Emperor from "I pray to God" to "I pray to the gods of heaven." He refused.
† The Chinese invasions took place between 213 B.C. and 186 A.D. From the latter date until the 10th Century the Chinese governed the country. Then the Annamites threw off the Chinese yoke; it was clapped on again for a brief span in the 15th Century. French missionaries and traders (preceded by the Portuguese and Dutch) came to Indo-China in the 17th Century. In 1802, a French East India Company expedition helped establish the Nguyen dynasty.
World Water Day is on Monday, March 22nd, and it's a crucial moment in the fight against the global sanitation and water crisis that’s killing thousands of people every single day. No matter where you are in the world you can celebrate World Water Day! Check out some of the ways below:
The World’s Longest Toilet Queue is a mass mobilization event and Guinness World Record attempt bringing together thousands of campaigners from across the world to demand real change.
Ever seen water flow uphill? Without the help of petrol or electricity? Meet the hydraulic ram, a robust and simple water-powered water pump. The ram pump uses the power of water falling through a height difference in a spring, stream or river to lift a fraction of that water up to 200 meters vertically, and sometimes to pump it over a kilometre or two to where it is needed. No fuel or electricity required. The ram pump holds great potential for rural drinking water and irrigation supply in hilly and mountainous areas, such as Afghanistan, Colombia, Nepal, and the Philippines.
Photo above: Children surrounding a hydraulic ram produced by AIDFI on the island Negros, the Philippines.
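For readers who want a feel for the numbers, a ram's output can be estimated with a rule of thumb: delivered flow is roughly the drive flow, times the ratio of fall to lift, times an efficiency factor. The sketch below is a back-of-the-envelope illustration only; the formula is the standard approximation, and all figures in it are hypothetical rather than specs of AIDFI's pumps.

```python
# Back-of-the-envelope estimate for a hydraulic ram pump.
# Rule of thumb: delivered flow ~= drive flow * (fall / lift) * efficiency.
# All numbers here are illustrative, not specs of any real installation.

def ram_delivery_lpm(drive_flow_lpm: float, fall_m: float,
                     lift_m: float, efficiency: float = 0.6) -> float:
    """Estimate delivered flow in litres per minute.

    drive_flow_lpm: stream water fed through the ram
    fall_m:         vertical drop that powers the ram
    lift_m:         height the water must be pumped to
    efficiency:     typically 0.5-0.7 for a well-tuned ram
    """
    if fall_m <= 0 or lift_m <= fall_m:
        raise ValueError("A ram needs a positive fall and a lift above it.")
    return drive_flow_lpm * (fall_m / lift_m) * efficiency

# Example: 100 L/min falling 5 m can lift about 6 L/min up 50 m.
print(f"{ram_delivery_lpm(100, 5, 50):.1f} L/min delivered uphill")
```

In other words, the ram trades volume for height: most of the water powers the pump and flows on downstream, while a small fraction is pushed far uphill.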
If you are in the Los Angeles area, come celebrate World Water Day with a Night of Generosity at SKYBAR on March 22nd at 8pm.
Water is big business. Just five beverage companies consume enough water over the course of a year to satisfy the daily water needs of every person on the planet. Of course, we may not be able to control how much water is put in a can of soda or a beer (less water, more alcohol, please) or the amount it takes to make paper, but we can control our own use at the workplace and even influence those who manage supplies.
It may not be our nickel that gets spent on the utility bill at work, but the gains are certainly ours when we reduce the corporate water footprint on the planet. Water prices are poised to rise due to increased water stress, and corporate growth is expected to be impeded as resources dwindle. Make no mistake, all of this comes out of our paychecks in one way or another.
At this very moment, millions of women are carrying 40 pounds of water on the return leg of their average 3.5-mile daily trek.
So today, on International Women's Day, I want to pay tribute to the resiliency of these women, and highlight the collective possibility they embody -- if freed from the back-breaking and time-consuming burden of collecting water.
Providing women with access to a nearby source of clean water frees up their days to earn an income or engage in other more productive activities – which can help significantly elevate their status in the community.
photo by Beth Harper via Creative Commons.
Outdoors is where we as residents tend to use huge amounts of water. In some parts of the country, mostly out in the arid West, 70 percent or more of residential water is used for lawn irrigation.
Something is seriously wrong with this picture. Pink flamingos and fountains aside, decorative lawns that need lots of care and lots of water are scourges. It may be that suburbia is making the wells run dry. Indeed, homeowners use an average of 120 gallons of water each day for things outside.
Think about that for a second: "things outside" -- where rain should be able to do the job nicely -- if we stick with the vegetation that grows naturally in our locale, that is. Irrigation, my dear water-freak neighbor, was invented to keep our fields of food alive, not your imported turf.
UN Secretary General Ban Ki-Moon and former U.S. President Bill Clinton unveiled a new United Nations program to raise money to help fight HIV/AIDS, malaria, tuberculosis, and more. Through the program, called MASSIVEGOOD, travelers can donate a minimum of $2 on top of their airfare to support an international UN health financing initiative by clicking on MASSIVEGOOD. Five clicks, the equivalent of $10, will buy an insecticide-treated bed net, while 25 clicks will pay for a year's worth of HIV medication for one child.
Around the world, most boreholes are drilled with big, heavy equipment which arrives by truck, makes a lot of noise, and gets the job done in a short time, at a cost of about $5,000 to $20,000 per borehole. But there is a growing interest in doing it in a different way -- drilling by hand. It takes longer, it is heavy work, but it also gets the job done. Why are people getting interested? A hand-drilled borehole costs about $500 or less.
Läkarmissionen is one of our NGO partners in Sweden and is helping us to fight the global water crisis. Their operation began in 1958 to support a Swedish church-related mission hospital in South Africa. That is what gave them the name Läkarmissionen - the Swedish Medical Mission Foundation.
Läkarmissionen’s intention is to make it possible for marginalized people to gain improved quality of life. Our experiences indicate that circumstances can be changed, and that sustainable results can be achieved, as we include the people in need in the process of change.