Taking Play Seriously
By ROBIN MARANTZ HENIG
Published: February 17, 2008
On a drizzly Tuesday night in late January, 200 people came out to hear a psychiatrist talk rhapsodically about play -- not just the intense, joyous play of children, but play for all people, at all ages, at all times. (All species too; the lecture featured touching photos of a polar bear and a husky engaging playfully at a snowy outpost in northern Canada.) Stuart Brown, president of the National Institute for Play, was speaking at the New York Public Library's main branch on 42nd Street. He created the institute in 1996, after more than 20 years of psychiatric practice and research persuaded him of the dangerous long-term consequences of play deprivation. In a sold-out talk at the library, he and Krista Tippett, host of the public-radio program ''Speaking of Faith,'' discussed the biological and spiritual underpinnings of play. Brown called play part of the ''developmental sequencing of becoming a human primate. If you look at what produces learning and memory and well-being, play is as fundamental as any other aspect of life, including sleep and dreams.''
The message seemed to resonate with audience members, who asked anxious questions about what seemed to be the loss of play in their children's lives. Their concern came, no doubt, from the recent deluge of eulogies to play. Educators fret that school officials are hacking away at recess to make room for an increasingly crammed curriculum. Psychologists complain that overscheduled kids have no time left for the real business of childhood: idle, creative, unstructured free play. Public health officials link insufficient playtime to a rise in childhood obesity. Parents bemoan the fact that kids don't play the way they themselves did -- or think they did. And everyone seems to worry that without the chance to play stickball or hopscotch out on the street, to play with dolls on the kitchen floor or climb trees in the woods, today's children are missing out on something essential.
The success of ''The Dangerous Book for Boys'' -- which has been on the best-seller list for the last nine months -- and its step-by-step instructions for activities like folding paper airplanes is testament to the generalized longing for play's good old days. So were the questions after Stuart Brown's library talk; one woman asked how her children will learn trust, empathy and social skills when their most frequent playing is done online. Brown told her that while video games do have some play value, a true sense of ''interpersonal nuance'' can be achieved only by a child who is engaging all five senses by playing in the three-dimensional world.
This is part of a larger conversation Americans are having about play. Parents bobble between a nostalgia-infused yearning for their children to play and fear that time spent playing is time lost to more practical pursuits. Alarming headlines about U.S. students falling behind other countries in science and math, combined with the ever-more-intense competition to get kids into college, make parents rush to sign up their children for piano lessons and test-prep courses instead of just leaving them to improvise on their own; playtime versus résumé building.
Discussions about play force us to reckon with our underlying ideas about childhood, sex differences, creativity and success. Do boys play differently than girls? Are children being damaged by staring at computer screens and video games? Are they missing something when fantasy play is populated with characters from Hollywood's imagination and not their own? Most of these issues are too vast to be addressed by a single field of study (let alone a magazine article). But the growing science of play does have much to add to the conversation. Armed with research grounded in evolutionary biology and experimental neuroscience, some scientists have shown themselves eager -- at times perhaps a little too eager -- to promote a scientific argument for play. They have spent the past few decades learning how and why play evolved in animals, generating insights that can inform our understanding of its evolution in humans too. They are studying, from an evolutionary perspective, to what extent play is a luxury that can be dispensed with when there are too many other competing claims on the growing brain, and to what extent it is central to how that brain grows in the first place.
Scientists who study play, in animals and humans alike, are developing a consensus view that play is something more than a way for restless kids to work off steam; more than a way for chubby kids to burn off calories; more than a frivolous luxury. Play, in their view, is a central part of neurological growth and development -- one important way that children build complex, skilled, responsive, socially adept and cognitively flexible brains.
Their work still leaves some questions unanswered, including questions about play's darker, more ambiguous side: is there really an evolutionary or developmental need for dangerous games, say, or for the meanness and hurt feelings that seem to attend so much child's play? Answering these and other questions could help us understand what might be lost if children play less.
Take the complexity of technology and stir in the complexity of the legal system and what do you get? Software licenses! If you've ever attempted to read one you know how true this is, but you have to know a little about software licensing even if you can't parse all of the fine print.
By: Chris Peters
March 10, 2009
A software license is an agreement between you and the owner of a program which lets you perform certain activities which would otherwise constitute an infringement under copyright law. The software license usually answers questions such as:
The price of the software and the licensing fees, if any, are sometimes discussed in the licensing agreement, but usually they're described elsewhere.
If you read the definitions below and you're still scratching your head, check out Categories of Free and Non-Free Software which includes a helpful diagram.
Free vs Proprietary:
When you hear the phrase "free software" or "free software license," "free" is referring to your rights and permissions ("free as in freedom" or "free as in free speech"). In other words, a free software license gives you more rights than a proprietary license. You can usually copy, modify, and redistribute free software without paying a fee or obtaining permission from the developers and distributors. In most cases "free software" won't cost you anything, but that's not always the case – in this instance the word free is making no assertion whatsoever about the price of the software. Proprietary software puts more restrictions and limits on your legal permission to copy, modify, and distribute the program.
Free, Open-Source or FOSS?
In everyday conversation, there's not much difference between "free software," "open source software," and "FOSS (Free and Open-Source Software)." In other words, you'll hear these terms used interchangeably, and the proponents of free software and the supporters of open-source software agree with one another on most issues. However, the official definition of free software differs somewhat from the official definition of open-source software, and the philosophies underlying those definitions differ as well. For a short description of the difference, read Live and Let License. For a longer discussion from the "free software" side, read Why Open Source Misses the Point of Free Software. For the "open-source" perspective, read Why Free Software is Too Ambiguous.
Public domain and copyleft.
These terms refer to different categories of free, unrestricted licensing. A copyleft license allows you all the freedoms of a free software license, but adds one restriction. Under a copyleft license, you have to release any modifications under the same terms as the original software. In effect, this blocks companies and developers who want to alter free software and then make their altered version proprietary. In practice, almost all free and open-source software is also copylefted. However, technically you can release "free software" that isn't copylefted. For example, if you developed software and released it under a "public domain" license, it would qualify as free software, but it isn't copyleft. In effect, when you release something into the public domain, you give up all copyrights and rights of ownership.
Shareware and freeware.
These terms don't really refer to licensing, and they're confusing in light of the discussion of free software above. Freeware refers to software (usually small utilities at sites such as Tucows.com) that you can download and install without paying. However, you don't have the right to view the source code, and you may not have the right to copy and redistribute the software. In other words, freeware is proprietary software. Shareware is even more restrictive. In effect, shareware is trial software. You can use it for a limited amount of time (usually 30 or 60 days) and then you're expected to pay to continue using it.
End User Licensing Agreement (EULA).
When you acquire software yourself, directly from a vendor or retailer, or directly from the vendor's Web site, you usually have to indicate by clicking a box that you accept the licensing terms. This "click-through" agreement that no one ever reads is commonly known as a EULA. If you negotiate a large purchase of software with a company, and you sign a contract to seal the agreement, that contract usually replaces or supersedes the EULA.
Most major vendors of proprietary software offer some type of bulk purchasing and volume licensing mechanism. The terms vary widely, but if you order enough software to qualify, the benefits in terms of cost and convenience are significant. Also, not-for-profits sometimes qualify for it with very small initial purchases.
Some of the benefits of volume licensing include:
Lower cost. As with most products, software costs less when you buy more of it.
Ease of installation. Without volume licenses, you usually have to enter a separate activation code (also known as a product key or license key) for each installed copy of the program. On the other hand, volume licenses provide you with a single, organisation-wide activation code, which makes it much easier to find when you need to reinstall the software.
Easier tracking of licenses. Keeping track of how many licenses you own, and how many copies you've actually installed, is a tedious, difficult task. Many volume licensing programs provide an online account which is automatically updated when you obtain or activate a copy of that company's software. These accounts can also coordinate licensing across multiple offices within your organisation.
To learn more about volume licensing from a particular vendor, check out some of the resources below:
Qualified not-for-profits and libraries can receive donated volume licenses for Microsoft products through TechSoup. For more information, check out our introduction to the Microsoft Software Donation Program, and the Microsoft Software Donation Program FAQ. For general information about the volume licensing of Microsoft software, see Volume Licensing Overview.
If you get Microsoft software from TechSoup or other software distributors who work with not-for-profits, you may need to go to the eOpen Web site to locate your Volume license keys. For more information, check out the TechSoup Donation Recipient's Guide to the Microsoft eOpen Web Site.
Always check TechSoup Stock first to see if there's a volume licensing donation program for the software you're interested in. If TechSoup doesn't offer that product or if you need more copies than you can find at TechSoup, search for "volume licensing not-for-profits software" or just "not-for-profits software." For example, when we have an inventory of Adobe products, qualifying and eligible not-for-profits can obtain four individual products or one copy of Creative Suite 4 through TechSoup. If we're out of stock, or you've used up your annual Adobe donation, you can also check TechSoup's special Adobe donation program and also Adobe Solutions for Nonprofits for other discounts available to not-for-profits. For more software-hunting tips, see A Quick Guide to Discounted Software Programs.
Pay close attention to the options and licensing requirements when you acquire server-based software. You might need two different types of license – one for the server software itself, and a set of licenses for all the "clients" accessing the software. Depending on the vendor and the licensing scenario, "client" can refer either to the end users themselves (for example, employees, contractors, clients, and anyone else who uses the software in question) or their computing devices (for example, laptops, desktop computers, smartphones, PDAs, etc.). We'll focus on Microsoft server products, but similar issues can arise with other server applications.
Over the years, Microsoft has released hundreds of server-based applications, and the licensing terms are slightly different for each one. Fortunately, there are common license types and licensing structures across different products. In other words, while a User CAL (Client Access License) for Windows Server is distinct from a User CAL for SharePoint Server, the underlying terms and rights are very similar. The TechSoup product pages for Microsoft software do a good job of describing the differences between products, so we'll focus on the common threads in this article.
Moreover, Microsoft often lets you license a single server application in more than one way, depending on the needs of your organisation. This allows you the flexibility to choose the licenses that best reflect your organisation's usage patterns and thereby cost you the least amount of money. For example, for Windows Server and other products you can acquire licenses on a per-user basis (for example, User CALs) or per-device basis (for example, Device CALs).
The license required to install and run most server applications usually comes bundled with the software itself. So you can install and run most applications "out of the box," as long as you have the right number of client licenses (see the section below for more on that). However, when you're running certain server products on a computer with multiple processors, you may need to get additional licenses. For example, if you run Windows Server 2008 DataCenter edition on a server with two processors, you need a separate license for each processor. SQL Server 2008 works the same way. This type of license is referred to as a processor license. Generally you don't need client licenses for any application that's licensed this way.
Client Licenses for Internal Users
Many Microsoft products, including Windows Server 2003 and Windows Server 2008, require client access licenses for all authenticated internal users (for example, employees, contractors, volunteers, etc.). On the other hand, SQL Server 2008 and other products don't require any client licenses. Read the product description at CTXchange if you're looking for the details about licensing a particular application.
User CALs: User CALs allow each user access to all the instances of a particular server product in an organisation, no matter which device they use to gain access. In other words, if you run five copies of Windows Server 2008 on five separate servers, you only need one User CAL for each person in your organisation who accesses those servers (or any software installed on those servers), whether they access a single server, all five servers, or some number in between. Each user with a single CAL assigned to them can access the server software from as many devices as they want (for example, desktop computers, laptops, smartphones, etc.). User CALs are a popular licensing option.
Device CALs: Device CALs allow access to all instances of a particular server application from a single device (for example, a desktop computer, a laptop, etc.) in your organisation. Device CALs only make sense when multiple employees use the same computer. For example, in 24-hour call centres different employees on different shifts often use the same machine, so Device CALs make sense in this situation.
Choosing a licensing mode for your Windows Server CALs: With Windows Server 2003 and Windows Server 2008, you use a CAL (either a User CAL or a Device CAL) in one of two licensing modes: per seat or per server. You make this decision when you're installing your Windows Server products, not when you acquire the CALs. The CALs themselves don't have any mode designation, so you can use either a User CAL or a Device CAL in either mode. Per seat mode is the default mode, and the one used most frequently. The description of User CALs and Device CALs above describes the typical per seat mode. In "per server" mode, Windows treats each license as a "simultaneous connection." In other words, if you have 40 CALs, Windows will let 40 authenticated users have access. The 41st user will be denied access. However, in per server mode, each CAL is tied to a particular instance of Windows Server, and you have to acquire a new set of licenses for each new server you build that runs Windows. Therefore, per server mode works for some small organisations with one or two servers and limited access requirements.
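To make the per seat versus per server distinction concrete, here is a minimal sketch in Python. It only models the counting rules described above; the class names, methods, and numbers are hypothetical illustrations and have nothing to do with any real Microsoft tooling.

```python
# Minimal sketch of the two CAL modes described above. Class names, numbers and
# methods are hypothetical illustrations, not a real Microsoft API.

class PerSeatLicensing:
    """Per seat mode: each User CAL covers one person across every server."""
    def __init__(self, user_cals):
        self.user_cals = user_cals                 # people covered, organisation-wide

    def people_covered(self, number_of_servers):
        # Adding servers does not consume extra CALs in per seat mode.
        return self.user_cals


class PerServerLicensing:
    """Per server mode: each CAL is a simultaneous connection tied to one server."""
    def __init__(self, cals_on_this_server):
        self.cals_on_this_server = cals_on_this_server

    def connection_allowed(self, current_connections):
        # With 40 CALs, the 41st simultaneous authenticated user is denied access.
        return current_connections < self.cals_on_this_server


if __name__ == "__main__":
    per_seat = PerSeatLicensing(user_cals=40)
    print(per_seat.people_covered(number_of_servers=5))            # 40, regardless of servers

    per_server = PerServerLicensing(cals_on_this_server=40)
    print(per_server.connection_allowed(current_connections=40))   # False: the 41st user
```

The sketch also shows why per server mode tends to suit only small organisations: every additional server would need its own separate pool of per-server CALs.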
You don't "install" client licenses the way you install software. There are ways to automate the tracking of software licenses indirectly, but the server software can't refuse access to a user or device on licensing grounds. The licenses don't leave any "digital footprint" that the server software can read. An exception to this occurs when you license Windows Server in per server mode. In this case, if you have 50 licenses, the 51st authenticated user will be denied access (though anonymous users can still access services).
Some key points to remember about client licensing:
The licensing scenarios described in this section arise less frequently, and are too complex to cover completely in this article, so they're described briefly below along with more comprehensive resources.
You don't need client licenses for anonymous, unauthenticated external users. In other words, if someone accesses your Web site, and that site runs on Internet Information Server (IIS), Microsoft's Web serving software, you don't need a client license for any of those anonymous users.
If you have any authenticated external users who access services on your Windows-based servers, you can obtain CALs to cover their licensing requirements. However, the External Connector License (ECL) is a second option in this scenario. The ECL covers all use by authenticated external users, but it's a lot more expensive than a CAL, so only get one if you'll have a lot of external users. For example, even if you get your licenses through the CTXchange donation program, an ECL for Windows Server 2008 has a £76 administrative fee, while a User CAL for Windows Server 2008 carries a £1 admin fee. If only a handful of external users access your Windows servers, you're better off acquiring User CALs. Also, an ECL only applies to external users and devices. In other words, if you have an ECL, you still have to get a CAL for all employees and contractors.
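The trade-off above is just arithmetic, so a short, hedged calculation may help. It uses only the two CTXchange administrative fees quoted in the previous paragraph; commercial pricing will differ, and the function below is purely illustrative.

```python
# Break-even comparison using the donation-program fees quoted above:
# £76 for one External Connector License vs £1 per external User CAL.

ECL_FEE = 76.0        # covers every authenticated external user
USER_CAL_FEE = 1.0    # covers one authenticated external user

def cheaper_option(external_users):
    cal_total = external_users * USER_CAL_FEE
    if ECL_FEE < cal_total:
        return "ECL", ECL_FEE
    return "User CALs", cal_total

for n in (5, 50, 100, 500):
    option, cost = cheaper_option(n)
    print(f"{n:>3} external users -> {option} (£{cost:.0f})")
# At these fees the ECL only pays off once you expect more than about 76 external users.
```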
Even though Terminal Services (TS) is built into Windows Server 2003 and 2008, you need to get a separate TS CAL for each client (i.e. each user or each device) that will access Terminal Services in your organisation. This TS license is in addition to your Windows Server CALs.
Microsoft's System Centre products (a line of enterprise-level administrative software packages) use a special type of license known as a management license (ML). Applications that use this type of licensing include System Center Configuration Manager 2007 and System Center Operations Manager 2007. Any desktop or workstation managed by one of these applications needs a client management license. Any server managed by one of these applications requires a server management license, and there are two types of server management licenses – standard and enterprise. You need one or the other but not both. There are also special licensing requirements if you're managing virtual instances of Windows operating systems. For more information, see TechSoup's Guide to System Center Products and Licensing and Microsoft's white paper on Systems Center licensing.
Some Microsoft server products have two client licensing modes, standard and enterprise. As you might imagine, an Enterprise CAL grants access to more advanced features of a product. Furthermore, with some products, such as Microsoft Exchange, the licenses are additive. In other words, a user needs both a Standard CAL AND an Enterprise CAL in order to access the advanced features. See Exchange Server 2007 Editions and Client Access Licenses for more information.
With virtualisation technologies, multiple operating systems can run simultaneously on a single physical server. Every time you install a Microsoft application, whether on a physical hardware system or a virtual hardware system, you create an "instance" of that application. The number of "instances" of particular application that you can run using a single license varies from product to product. For more information see the Volume Licensing Briefs, Microsoft Licensing for Virtualization and the Windows Server Virtualization Calculator. For TechSoup Stock products, see the product description for more information.
There are a lot of nuances to Microsoft licensing, and also a lot of excellent resources to help you understand different scenarios.
About the Author:
Chris is a former technology writer and technology analyst for TechSoup for Libraries, which aims to provide IT management guidance to libraries. His previous experience includes working at Washington State Library as a technology consultant and technology trainer, and at the Bill and Melinda Gates Foundation as a technology trainer and tech support analyst. He received his M.L.S. from the University of Michigan in 1997.
Originally posted here.
Copyright © 2009 CompuMentor. This work is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.
The latest version of Microsoft Office Professional Plus is an integrated collection of programs, servers, and services designed to work together to enable optimised information work.
Hold the salt: UCLA engineers develop revolutionary new desalination membrane
Process uses atmospheric pressure plasma to create filtering 'brush layer'
Desalination can become more economical and used as a viable alternate water resource.
By Wileen Wong Kromhout
Originally published in UCLA Newsroom
Researchers from the UCLA Henry Samueli School of Engineering and Applied Science have unveiled a new class of reverse-osmosis membranes for desalination that resist the clogging which typically occurs when seawater, brackish water and waste water are purified.
The highly permeable, surface-structured membrane can easily be incorporated into today's commercial production system, the researchers say, and could help to significantly reduce desalination operating costs. Their findings appear in the current issue of the Journal of Materials Chemistry.
Reverse-osmosis (RO) desalination uses high pressure to force polluted water through the pores of a membrane. While water molecules pass through the pores, mineral salt ions, bacteria and other impurities cannot. Over time, these particles build up on the membrane's surface, leading to clogging and membrane damage. This scaling and fouling places higher energy demands on the pumping system and necessitates costly cleanup and membrane replacement.
The new UCLA membrane's novel surface topography and chemistry allow it to avoid such drawbacks.
"Besides possessing high water permeability, the new membrane also shows high rejection characteristics and long-term stability," said Nancy H. Lin, a UCLA Engineering senior researcher and the study's lead author. "Structuring the membrane surface does not require a long reaction time, high reaction temperature or the use of a vacuum chamber. The anti-scaling property, which can increase membrane life and decrease operational costs, is superior to existing commercial membranes."
The new membrane was synthesized through a three-step process. First, researchers synthesized a polyamide thin-film composite membrane using conventional interfacial polymerization. Next, they activated the polyamide surface with atmospheric pressure plasma to create active sites on the surface. Finally, these active sites were used to initiate a graft polymerization reaction with a monomer solution to create a polymer "brush layer" on the polyamide surface. This graft polymerization is carried out for a specific period of time at a specific temperature in order to control the brush layer thickness and topography.
"In the early years, surface plasma treatment could only be accomplished in a vacuum chamber," said Yoram Cohen, UCLA professor of chemical and biomolecular engineering and a corresponding author of the study. "It wasn't practical for large-scale commercialization because thousands of meters of membranes could not be synthesized in a vacuum chamber. It's too costly. But now, with the advent of atmospheric pressure plasma, we don't even need to initiate the reaction chemically. It's as simple as brushing the surface with plasma, and it can be done for almost any surface."
In this new membrane, the polymer chains of the tethered brush layer are in constant motion. The chains are chemically anchored to the surface and are thus more thermally stable, relative to physically coated polymer films. Water flow also adds to the brush layer's movement, making it extremely difficult for bacteria and other colloidal matter to anchor to the surface of the membrane.
"If you've ever snorkeled, you'll know that sea kelp move back and forth with the current or water flow," Cohen said. "So imagine that you have this varied structure with continuous movement. Protein or bacteria need to be able to anchor to multiple spots on the membrane to attach themselves to the surface — a task which is extremely difficult to attain due to the constant motion of the brush layer. The polymer chains protect and screen the membrane surface underneath."
Another factor in preventing adhesion is the surface charge of the membrane. Cohen's team is able to choose the chemistry of the brush layer to impart the desired surface charge, enabling the membrane to repel molecules of an opposite charge.
The team's next step is to expand the membrane synthesis into a much larger, continuous process and to optimize the new membrane's performance for different water sources.
"We want to be able to narrow down and create a membrane selection system for different water sources that have different fouling tendencies," Lin said. "With such knowledge, one can optimize the membrane surface properties with different polymer brush layers to delay or prevent the onset of membrane fouling and scaling.
"The cost of desalination will therefore decrease when we reduce the cost of chemicals [used for membrane cleaning], as well as process operation [for membrane replacement]. Desalination can become more economical and used as a viable alternate water resource."
Cohen's team, in collaboration with the UCLA Water Technology Research (WaTeR) Center, is currently carrying out specific studies to test the performance of the new membrane's fouling properties under field conditions.
"We work directly with industry and water agencies on everything that we're doing here in water technology," Cohen said. "The reason for this is simple: If we are to accelerate the transfer of knowledge technology from the university to the real world, where those solutions are needed, we have to make sure we address the real issues. This also provides our students with a tremendous opportunity to work with industry, government and local agencies."
A paper providing a preliminary introduction to the new membrane also appeared in the Journal of Membrane Science last month.
Published: Thursday, April 08, 2010
This section provides primary sources that document how Indian and European men and one English and one Indian woman have described the practice of sati, or the self-immolation of Hindu widows.
Although they are all critical of self-immolation, Francois Bernier, Fanny Parks, Lord William Bentinck, and Rev. England present four different European perspectives on the practice of sati and what it represents about Indian culture in general, and the Hindu religion and Hindu women in particular. They also indicate increasing negativism in European attitudes toward India and the Hindu religion in general. It would be useful to compare the attitudes of Bentinck and England as representing the secular and sacred aspects of British criticism of sati. A comparison of Bentinck’s minute with the subsequent legislation also reveals differences in tone between private and public documents of colonial officials. Finally, a comparison between the Fanny Parks and the three men should raise discussion on whether or not the gender and social status of the writer made any difference in his or her appraisal of the practice of self-immolation.
The three sources by Indian men and one by an Indian woman illustrate the diversity of their attitudes toward sati. The Marathi source illuminates the material concerns of relatives of the Hindu widow who is urged to adopt a son, so as to keep a potentially lucrative office within the extended family. These men are willing to undertake intense and delicate negotiations to secure a suitably related male child who could be adopted. This letter also documents that adoption was a legitimate practice among Hindus, and that Hindu women as well as men could adopt an heir. Ram Mohan Roy’s argument illustrates a rationalist effort to reform Hindu customs with the assistance of British legislation. Roy illustrates one of the many ways in which Indians collaborate with British political power in order to secure change within Indian society. He also enabled the British to counter the arguments of orthodox Hindus about the scriptural basis for the legitimacy of self-immolation of Hindu widows. The petition of the orthodox Hindu community in Calcutta, the capital of the Company’s territories in India, documents an early effort of Indians to keep the British colonial power from legislating on matters pertaining to the private sphere of Indian family life. Finally, Pandita Ramabai reflects the ways in which ancient Hindu scriptures and their interpretation continued to dominate debate. Students should consider how Ramabai’s effort to raise funds for her future work among child widows in India might have influenced her discussion of sati.
Two key issues should be emphasized. First, both Indian supporters and European and Indian opponents of the practice of self-immolation argue their positions on the bodies of Hindu women, and all the men involved appeal to Hindu scriptures to legitimate their support or opposition. Second, the voices of Indian women were filtered through the sieve of Indian and European men and a very few British women until the late 19th century.
- How do the written and visual sources portray the Hindu women who commit self-immolation? Possible aspects range from physical appearance and age, motivation, evidence of physical pain (that even the most devoted woman must suffer while burning to death), to any evidence of the agency or autonomy of the Hindu widow in deciding to commit sati. Are any differences discernible, and if so, do they seem related to gender or nationality of the observer or time period in which they were observed?
- How are the brahman priests who preside at the self-immolation portrayed in Indian and European sources? What might account for any similarities and differences?
- What reasons are used to deter Hindu widows from committing sati? What do these reasons reveal about the nature of family life in India and the relationships between men and women?
- What do the reasons that orthodox Hindus provide to European observers and to Indian reformers reveal about the significance of sati for the practice of the Hindu religion? What do their arguments reveal about orthodox Hindu attitudes toward women and the family?
- How are Hindu scriptures used in various ways in the debates before and after the prohibition of sati?
- What is the tone of the petition from 800 Hindus to their British governor? Whom do they claim to represent? What is their justification for the ritual of self-immolation? What is their attitude toward the Mughal empire whose Muslim rulers had preceded the British? What is their characterization of the petitioners toward those Hindus who support the prohibition on sati? How do the petitioners envision the proper relationship between the state and the practice of religion among its subjects?
- Who or what factors do European observers, British officials, and Indian opponents of sati hold to be responsible for the continuance of the practice of sati?
- What were the reasons that widows gave for committing sati? Were they religious, social or material motives? What is the evidence that the widows were voluntarily committing sati before 1829? What reasons did the opponents of sati give for the decisions of widows to commit self-immolation? What reasons did opponents give for widows who tried to escape from their husbands’ pyres?
- What are the reasons that Lord Bentinck and his Executive Council cite for their decision to declare the practice of sati illegal? Are the arguments similar to or different from his arguments in his minute a month earlier? What do these reasons reveal about British attitudes toward their role or mission in India? Do they use any of the arguments cited by Ram Mohan Roy or Pandita Ramabai?
- What do these sources, both those who oppose sati and those who advocate it, reveal about their attitudes to the Hindu religion in particular and Indian culture in general?
March 30, 2012
CDC Releases New Report on Autism Prevalence in U.S.
Researchers at the Johns Hopkins Bloomberg School of Public Health contributed to a new Centers for Disease Control and Prevention (CDC) report that estimates the prevalence of Autism Spectrum Disorders (ASD) as affecting 1 in 88 U.S. children overall, and 1 in 54 boys.
This is the third such report by the CDC’s Autism and Developmental Disabilities Monitoring Network (ADDM), which has used the same surveillance methods for more than a decade. Previous ADDM reports estimated the rate of ASDs at 1 in 110 children in the 2009 report that looked at data from 2006, and 1 in 150 children in the 2007 report, which covered data from 2002. The current prevalence estimate, which analyzed data from 2008, represents a 78 percent increase since 2002, and a 23 percent increase since 2006.
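As a rough check on how these "1 in N" figures relate to the percent increases quoted, the short sketch below converts each ratio to a per-1,000 prevalence. Note that the CDC computes its 78 percent and 23 percent figures from exact prevalence estimates rather than from these rounded ratios, so the recomputed numbers only come out approximately the same.

```python
# Convert the rounded "1 in N" ratios to per-1,000 prevalence and compare years.
# These are approximations; the published 78% and 23% increases are based on the
# CDC's exact prevalence estimates, not on the rounded ratios.

def per_1000(one_in_n):
    return 1000.0 / one_in_n

prev_2002 = per_1000(150)   # ~6.7 children per 1,000
prev_2006 = per_1000(110)   # ~9.1 children per 1,000
prev_2008 = per_1000(88)    # ~11.4 children per 1,000

print(f"2008 vs 2002: about {100 * (prev_2008 / prev_2002 - 1):.0f}% higher")
print(f"2008 vs 2006: about {100 * (prev_2008 / prev_2006 - 1):.0f}% higher")
```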
ASDs include diagnoses of autistic disorder, Asperger disorder, and Pervasive Developmental Disorder-Not Otherwise Specified (PDD-NOS). ASDs encompass a wide spectrum of conditions, all of which affect communication, social and behavioral skills. The causes of these developmental disorders are not completely understood, although studies show that both environment and genetics play an important and complex role. There is no known cure for ASDs, but studies have shown that behavioral interventions, particularly those begun early in a child’s life, can greatly improve learning and skills.
The latest CDC report, “Prevalence of Autism Spectrum Disorders – Autism and Developmental Disabilities Monitoring Network, 14 Sites, United States, 2008,” provides autism prevalence estimates from different areas of the United States, including Maryland. The purpose of the report is to provide high-quality data on the extent and distribution of ASDs in the U.S. population, to promote better planning for health and educational services, and to inform the further development of research on the causes, progression, and treatments.
“We continue observing increases in prevalence since the inception of the project in 2000,” said Li-Ching Lee, PhD, a psychiatric epidemiologist with the Bloomberg School’s Departments of Epidemiology and Mental Health and the principal investigator for the prevalence project’s Maryland site. “In Maryland, we found 27 percent of children with ASDs were never diagnosed by professionals. So, we know there are more children out there and we may see the increase continue in coming years.”
The new report, which focuses on 8-year-olds because that is an age by which most children with ASD have been identified, shows that the number of those affected varies widely among the 14 participating states, with Utah having the highest overall rate (1 in 47) and Alabama the lowest (1 in 210). Across all sites, nearly five times as many boys as girls are affected. Additionally, growing numbers of minority children are being diagnosed, with a 91 percent increase among black non-Hispanic children and a 110 percent increase for Hispanic children. Researchers say better screening and diagnosis may contribute to those increases among minority children.
The overall rate in Maryland is 1 in 80 children; 1 in 49 boys and 1 in 256 girls. In Maryland, the prevalence has increased 85 percent from 2002 to 2008. The increase was 41 percent between 2004 and 2008, and 35 percent between 2006 and 2008.
The data were gathered through collaboration with the Maryland State Department of Education and participating schools in Anne Arundel, Baltimore, Carroll, Cecil, Harford and Howard counties, as well as clinical sources such as Kennedy Krieger Institute, Mt. Washington Pediatric Hospital, and University of Maryland Medical System.
While the report focuses on the numbers, its authors acknowledge that the reasons for the increase are not completely understood and that more research is needed. They note that the increase is likely due in part to a broadened definition of ASDs, greater awareness among the public and professionals, and the way children receive services in their local communities. “It’s very difficult, if not impossible, to tease these factors apart to quantify how much each of these factors contributed to the increase,” Dr. Lee said.
But whatever the cause, “This report paints a picture of the magnitude of the condition across our country and helps us understand how communities identify children with autism. One thing the data tell us with certainty – there are more children and families that need help,” said CDC Director Thomas Frieden, MD, MPH.
Researchers also identified the median age of ASD diagnosis, documented in records. In Maryland, that age was 5 years and 6 months, compared with 4 years, 6 months nationally. Across all sites, children who have autistic disorder tend to be identified earlier, while those with Asperger Disorder tend to be diagnosed later. Given the importance of early intervention, ADDM researchers carefully track at what age children receive an ASD diagnosis.
“Unfortunately, most children still are not diagnosed until after they reach age 4. We’ve heard from too many parents that they were concerned long before their child was diagnosed. We are working hard to change that,” said Coleen Boyle, PhD, MSHyg, director of CDC’s National Center on Birth Defects and Developmental Disabilities.
To see the full report: http://www.cdc.gov/mmwr/preview/mmwrhtml/ss6103a1.htm?s_cid=ss6103a1_w
To the Community Report with state statistics: http://www.cdc.gov/ncbddd/autism/documents/ADDM-2012-Community-Report.pdf
Media contact for Johns Hopkins Bloomberg School of Public Health: Natalie Wood-Wright at 410-614-6029 or email@example.com
Throughout life there are many times when outside influences change or influence decision-making. The young child has inner motivation to learn and explore, but as he matures, finds outside sources to be a motivating force for development, as well. Along with being a beneficial influence, there are moments when peer pressure can overwhelm a child and lead him down a challenging path. And, peer pressure is a real thing – it is not only observable, but changes the way the brain behaves.
For the young adult, observational learning plays a part in development through observing and then doing. A child sees another child playing a game in a certain way and having success, so the observing child tries the same behavior. Albert Bandura was a leading researcher in this area. His famous Bobo doll studies found that the young child is greatly influenced by observing others' actions. When a child sees something that catches his attention, he retains the information, attempts to reproduce it, and then feels motivated to continue the behavior if it is met with success.
Observational learning and peer pressure are two different things – one being the observing of behaviors and then the child attempting to reproduce them based on a child’s own free will. Peer pressure is the act of one child coercing another to follow suit. Often the behavior being pressured is questionable or taboo, such as smoking cigarettes or drinking alcohol.
Peer Pressure and the Brain
Recent studies find that peer pressure influences the way our brains behave, which leads to a better understanding of the impact of peer pressure on the developing child. According to studies from Temple University, peer pressure has an effect on brain signals involved in risk and reward processing, especially when the teen's friends are around. Compared to adults in the study, teenagers were much more likely to take risks they would not normally take on their own when with friends. Brain signals were more activated in the reward center of the brain, firing most strongly during at-risk behaviors.
Peer pressure can be difficult for young adults to deal with, and learning ways to say "no" or avoid pressure-filled situations can become overwhelming. Resisting peer pressure is not just about saying "no," but how the brain functions. Children who have stronger connections among regions in their frontal lobes, along with other areas of the brain, are better equipped to resist peer pressure. During adolescence, the frontal lobes of the brain develop rapidly, causing axons in the region to have a coating of fatty myelin, which insulates them and causes the frontal lobes to communicate more effectively with other brain regions. This helps the young adult to develop the judgment and self-control needed to resist peer pressure.
Along with the frontal lobes, other studies find that the prefrontal cortex plays a role in how teens respond to peer pressure. As in the previous study, children who were not exposed to peer pressure had greater connectivity within the brain, as well as a greater ability to resist peer pressure.
Working through Peer Pressure
The teenage years are exciting years. The young adult is often going through physical changes due to puberty, adjusting to new friends and educational environments, and learning how to make decisions for themselves. Adults can offer a helping and supportive hand to young adults when dealing with peer pressure by considering the following:
Separation: Understanding that this is a time for the child to separate and learn how to be his own individual is important. It is hard to let go and allow the child to make mistakes for himself, especially when you want to offer input or change plans and actions, but allowing the child to go down his own path is important. As an adult, offering a helping hand if things go awry and being there to offer support is beneficial.
Talk it Out: As an adult, take a firm stand on rules and regulations with your child. Although you cannot control whom your child selects as friends, you can take a stand on your control of your child. Setting specific goals, rules, and limits encourages respect and trust, which must be earned in response. Do not be afraid to start talking with your child early about ways to resist peer pressure. Focus on how it will build your child’s confidence when he learns to say “no” at the right time and reassure him that it can be accomplished without feeling guilty or losing self-confidence.
Stay Involved: Keep family dinner as a priority, make time each week for a family meeting or game time, and plan family outings and vacations regularly. Spending quality time with kids models positive behavior and offers lots of opportunities for discussions about what is happening at school and with friends.
If at any time there are concerns a child is becoming involved in questionable behavior due to peer pressure, ask for help. Understand that involving others in helping a child cope with peer pressure, such as a family doctor, youth advisor, or other trusted friend, does not mean that the adult is not equipped to properly help the child, but that including others in assisting a child, that may be on the brink of heading down the wrong path, is beneficial.
By Sarah Lipoff. Sarah is an art educator and parent. Visit Sarah’s website here.
Vol. 17 Issue 6
One-Legged (Single Limb) Stance Test
The One-Legged Stance Test (OLST)1,2 is a simple, easy and effective method to screen for balance impairments in the older adult population.
You may be asking yourself: how can standing on one leg provide any information about balance? After all, we do not go around standing on one leg for extended periods of time.
True, as a rule we are a dynamic people, always moving, our world always in motion, but there are instances where we do need to maintain single limb support. The most obvious times are when we are performing our everyday functional activities.
Stepping into a bath tub or up onto a curb would be difficult, if not impossible to do without the ability to maintain single limb support for a given amount of time. The ability to switch from two- to one-leg standing is required to perform turns, climb stairs and dress.
As we know, the gait cycle requires a certain amount of single limb support in order to be able to progress ourselves along in a normal pattern. When the dynamics of the cycle are disrupted, loss of balance leading to falls may occur.
This is especially true in older individuals whose gait cycle is altered due to normal and potentially abnormal changes that occur as a result of aging.
The One-Legged Stance Test measures postural stability (i.e., balance) and is more difficult to perform due to the narrow base of support required to do the test. Along with five other tests of balance and mobility, the reliability of the One-Legged Stance Test was examined in 45 healthy females 55 to 71 years old and found to have "good" intraclass correlation coefficients (ICC range = 0.95 to 0.99). Within-rater ICCs ranged from 0.73 to 0.93.3
To perform the test, the patient is instructed to stand on one leg without support of the upper extremities or bracing of the unweighted leg against the stance leg. The patient begins the test with the eyes open, practicing once or twice on each side with his gaze fixed straight ahead.
The patient is then instructed to close his eyes and maintain balance for up to 30 seconds.1
The number of seconds that the patient/client is able to maintain this position is recorded. Termination or a failed test is recorded if 1) the foot touches the support leg; 2) hopping occurs; 3) the foot touches the floor; or 4) the arms touch something for support.
Normal ranges with eyes open are: 60-69 yrs/22.5 ± 8.6s, 70-79 yrs/14.2 ± 9.3s. Normal ranges for eyes closed are: 60-69 yrs/10.2 ± 8.6s, 70-79 yrs/4.3 ± 3.0s.4 Briggs and colleagues reported balance times on the One-Legged Stance Test in females age 60 to 86 years for dominant and nondominant legs.
Given the results of this data, there appears to be some difference in whether individuals use their dominant versus their nondominant leg in the youngest and oldest age groups.
When using this test, having patients choose what leg they would like to stand on would be appropriate as you want to record their "best" performance.
It has been reported in the literature that individuals increase their chances of sustaining an injury due to a fall by two times if they are unable to perform a One-Legged Stance Test for five seconds.5 Other studies utilizing the One-Legged Stance Test have been conducted in older adults to assess static balance after strength training,6 performance of activities of daily living and platform sway tests.7
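Since the test reduces to timing a stance, applying a few termination rules, and comparing the result against age norms and the five-second threshold, a short sketch can summarize the scoring logic. The figures come from the eyes-closed norms and the injury-risk finding cited above; the function and variable names are simply illustrative, and this is not a substitute for clinical judgment.

```python
# Illustrative scoring helper for the One-Legged Stance Test, using the
# eyes-closed norms (Bohannon et al.) and the five-second injury-risk threshold
# cited above. Names and structure are hypothetical.

EYES_CLOSED_NORMS = {          # age band -> (mean seconds, SD)
    "60-69": (10.2, 8.6),
    "70-79": (4.3, 3.0),
}

TERMINATION_RULES = (
    "foot touches the support leg",
    "hopping occurs",
    "foot touches the floor",
    "arms touch something for support",
)

def record_trial(seconds_held):
    """Seconds held before a termination rule applied, capped at the 30-second ceiling."""
    return min(seconds_held, 30.0)

def elevated_injury_risk(best_time):
    # Inability to hold the stance for five seconds roughly doubles injurious-fall risk.
    return best_time < 5.0

def versus_age_norm(best_time, age_band):
    mean, _sd = EYES_CLOSED_NORMS[age_band]
    return best_time - mean        # seconds above (+) or below (-) the age-group mean

if __name__ == "__main__":
    best = max(record_trial(t) for t in (6.5, 3.0))   # record the patient's best performance
    print(best, elevated_injury_risk(best), round(versus_age_norm(best, "70-79"), 1))
```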
Interestingly, subscales of other balance measures such as the Tinetti Performance Oriented Mobility Assessment8 and Berg Balance Scale9 utilize unsupported single limb stance times of 10 seconds and 5 seconds respectively, for older individuals to be considered to have "normal" balance.
Thirty percent to 60 percent of community-dwelling elderly individuals fall each year, with many experiencing multiple falls.10 Because falls are the leading cause of injury-related deaths in older adults and a significant cause of disability in this population, prevention of falls and subsequent injuries is a worthwhile endeavor.11
The One-Legged Stance Test can be used as a quick, reliable and easy way for clinicians to screen their patients/clients for fall risks and is easily incorporated into a comprehensive functional evaluation for older adults.
1. Briggs, R., Gossman, M., Birch, R., Drews, J., & Shaddeau, S. (1989). Balance performance among noninstitutionalized elderly women. Physical Therapy, 69(9), 748-756.
2. Anemaet, W., & Moffa-Trotter, M. (1999). Functional tools for assessing balance and gait impairments. Topics in Geriatric Rehab, 15(1), 66-83.
3. Franchignoni, F., Tesio, L., Martino, M., & Ricupero, C. (1998). Reliability of four simple, quantitative tests of balance and mobility in healthy elderly females. Aging (Milan), 10(1), 26-31.
4. Bohannon, R., Larkin, P., Cook, A., & Singer, J. (1984). Decrease in timed balance test scores with aging. Physical Therapy, 64, 1067-1070.
5. Vellas, B., Wayne, S., Romero, L., Baumgartner, R., et al. (1997). One-leg balance is an important predictor of injurious falls in older persons. Journal of the American Geriatric Society, 45, 735-738.
6. Schlicht, J., Camaione, D., & Owen, S. (2001). Effect of intense strength training on standing balance, walking speed, and sit-to-stand performance in older adults. Journal of Gerontological Medicine and Science, 56A(5), M281-M286.
7. Frandin, K., Sonn, U., Svantesson, U., & Grimby, G. (1996). Functional balance tests in 76-year-olds in relation to performance, activities of daily living and platform tests. Scandinavian Journal of Rehabilitative Medicine, 27(4), 231-241.
8. Tinetti, M., Williams, T., & Mayewski, R. (1986). Fall risk index for elderly patients based on number of chronic disabilities. American Journal of Medicine, 80, 429-434.
9. Berg, K., et al. (1989). Measuring balance in the elderly: Preliminary development of an instrument. Physio Therapy Canada, 41(6), 304-311.
10. Rubenstein, L., & Josephson, K. (2002). The epidemiology of falls and syncope. Clinical Geriatric Medicine, 18, 141-158.
11. National Safety Council. (2004). Injury Facts. Itasca, IL: Author.
Dr. Lewis is a physical therapist in private practice and president of Premier Physical Therapy of Washington, DC. She lectures exclusively for GREAT Seminars and Books, Inc. Dr. Lewis is also the author of numerous textbooks. Her Website address is www.greatseminarsandbooks.com. Dr. Shaw is an assistant professor in the physical therapy program at the University of South Florida dedicated to the area of geriatric rehabilitation. She lectures exclusively for GREAT Seminars and Books in the area of geriatric function.
APTA Encouraged by Cap Exceptions
New process grants automatic exceptions to beneficiaries needing care the most
Calling it "a good first step toward ensuring that Medicare beneficiaries continue to have coverage for the physical therapy they need," Ben F Massey, Jr, PT, MA, president of the American Physical Therapy Association (APTA), expressed optimism that the new exceptions process will allow a significant number of Medicare patients to receive services exceeding the $1,740 annual financial cap on Medicare therapy coverage. The new procedure, authorized by Congress in the recently enacted Deficit Reduction Act (PL 109-171), will be available to Medicare beneficiaries on March 13 under rules released this week by the Centers for Medicare and Medicaid Services (CMS).
"APTA is encouraged by the new therapy cap exceptions process," Massey said. "CMS has made a good effort to ensure that Medicare beneficiaries who need the most care are not harmed by an arbitrary cap."
As APTA recommended, the process includes automatic exceptions and also grants exceptions to beneficiaries who are receiving both physical therapy and speech language pathology (the services are currently combined under one $1,740 cap).
"We have yet to see how well Medicare contractors will be able to implement and apply this process. Even if it works well, Congress only authorized this new process through 2006. Congress must address this issue again this year, and we are confident that this experience will demonstrate to legislators that they must completely repeal the caps and provide a more permanent solution for Medicare beneficiaries needing physical therapy," Massey continued.
The therapy caps went into effect on Jan. 1, 2006, limiting Medicare coverage on outpatient rehabilitation services to $1,740 for physical therapy and speech therapy combined and $1,740 for occupational therapy.
The American Physical Therapy Association is a national professional organization representing more than 65,000 members. Its goal is to foster advancements in physical therapy practice, research and education.
New Mouthwash Helps With Pain
Doctors in Italy are studying whether a new type of mouthwash will help alleviate pain for patients suffering from head and neck cancer who were treated with radiation therapy, according to a new study (International Journal of Radiation Oncology*Biology*Physics, Feb. 1, 2006).
Fifty patients suffering from various forms of head and neck cancer who received radiation therapy were observed during the course of their radiation treatment. Mucositis, or inflammation of the mucous membrane in the mouth, is the most common side effect, yet no additional therapy has been identified that successfully reduces the pain.
This study sought to discover if a mouthwash made from the local anesthetic tetracaine was able to alleviate the discomfort associated with head and neck cancer and if there would be any negative side effects of the mouthwash. The doctors chose to concoct a tetracaine-based mouthwash instead of a lidocaine-based version because it was found to be four times more effective, worked faster and produced a prolonged relief.
The tetracaine was administered by a mouthwash approximately 30 minutes before and after meals, or roughly six times a day. Relief of oral pain was reported in 48 of the 50 patients. Sixteen patients reported that the mouthwash had an unpleasant taste or altered the taste of their food. | <urn:uuid:f8131c7f-1b2a-41bd-9eaa-951dad06e313> | CC-MAIN-2013-20 | http://physical-therapy.advanceweb.com/Article/One-Legged-Single-Limb-Stance-Test.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.919898 | 2,250 | 3.078125 | 3 |
Problems of Philosophy
Chapter 5 - Knowledge by Acquaintance and Knowledge by Description
After distinguishing two types of knowledge, knowledge of things and knowledge of truths, Russell devotes this fifth chapter to an elucidation of knowledge of things. He further distinguishes two types of knowledge of things, knowledge by acquaintance and knowledge by description. We have knowledge by acquaintance when we are directly aware of a thing, without any inference. We are immediately conscious and acquainted with a color or hardness of a table before us, our sense-data. Since acquaintance with things is logically independent from any knowledge of truths, we can be acquainted with something immediately without knowing any truth about it. I can know the color of a table "perfectly and completely when I see it" and not know any truth about the color in itself. The other type of knowledge of things is called knowledge by description. When we say we have knowledge of the table itself, a physical object, we refer to a kind of knowledge other than immediate, direct knowledge. "The physical object which causes such-and-such sense-data" is a phrase that describes the table by way of sense-data. We only have a description of the table. Knowledge by description is predicated on something with which we are acquainted, sense-data, and some knowledge of truths, like knowing that "such- and-such sense-data are caused by the physical object." Thus, knowledge by description allows us to infer knowledge about the actual world via the things that can be known to us, things with which we have direct acquaintance (our subjective sense-data).
According to this outline, knowledge by acquaintance forms the bedrock for all of our other knowledge. Sense-data is not the only instance of things with which we can be immediately acquainted. For how would we recall the past, Russell argues, if we could only know what was immediately present to our senses. Beyond sense-data, we also have "acquaintance by memory." Remembering what we were immediately aware of makes it so that we are still immediately aware of that past, perceived thing. We may therefore access many past things with the same requisite immediacy. Beyond sense-data and memories, we possess "acquaintance by introspection." When we are aware of an awareness, like in the case of hunger, "my desiring food" becomes an object of acquaintance. Introspective acquaintance is a kind of acquaintance with our own minds that may be understood as self-consciousness. However, this self-consciousness is really more like a consciousness of a feeling or a particular thought; the awareness rarely includes the explicit use of "I," which would identify the Self as a subject. Russell abandons this strand of knowledge, knowledge of the Self, as a probable but unclear dimension of acquaintance.
Russell summarizes our acquaintance with things as follows: "We have acquaintance in sensation with the data of the outer senses, and in introspection with the data of what may be called the inner sense—thoughts, feelings, desires, etc.; we have acquaintance in memory with things which have been data either of the outer senses or of the inner sense. Further, it is probable, though not certain, that we have acquaintance with Self, as that which is aware of things or has desires towards things." All these objects of acquaintance are particulars, concrete, existing things. Russell cautions that we can also have acquaintance with abstract, general ideas called universals. He addresses universals more fully later in chapter 9.
Russell allocates the rest of the chapter to explaining how the complicated theory of knowledge by description actually works. The most conspicuous things that are known to us by description are physical objects and other people's minds. We approach a case of having knowledge by description when we know "that there is an object answering to a definite description, though we are not acquainted with any such object." Russell offers several illustrations in the service of understanding knowledge by description. He claims that it is important to understand this kind of knowledge because our language uses depends so heavily on it. When we say common words or proper names, we are really relying on the meanings implicit in descriptive knowledge. The thought connoted by the use of a proper name can only really be explicitly expressed through a description or proposition.
Bismarck, or "the first Chancellor of the German Empire," is Russell's most cogent example. Imagine that there is a proposition, or statement, made about Bismarck. If Bismarck is the speaker, admitting that he has a kind of direct acquaintance with his own self, Bismarck might have voiced his name in order to make a self-referential judgment, of which his name is a constituent. In this simplest case, the "proper name has the direct use which it always wishes to have, as simply standing for a certain object, and not for a description of the object." If one of Bismarck's friends who knew him directly was the speaker of the statement, then we would say that the speaker had knowledge by description. The speaker is acquainted with sense-data which he infers corresponds with Bismarck's body. The body or physical object representing the mind is "only known as the body and the mind connected with these sense-data," which is the vital description. Since the sense-data corresponding to Bismarck change from moment to moment and with perspective, the speaker knows which various descriptions are valid.
Still more removed from direct acquaintance, imagine that someone like you or I comes along and makes a statement about Bismarck that is a description based on a "more or less vague mass of historical knowledge." We say that Bismarck was the "first Chancellor of the German Empire." In order to make a valid description applicable to the physical object, Bismarck's body, we must find a relation between some particular with which we have acquaintance and the physical object, the particular with which we wish to have an indirect acquaintance. We must make such a reference in order to secure a meaningful description.
To usefully distinguish particulars from universals, Russell posits the example of "the most long-lived of men," a description which wholly consists of universals. We assume that the description must apply to some man, but we have no way of inferring any judgment about him. Russell remarks, "all knowledge of truths, as we shall show, demands acquaintance with things which are of an essentially different character from sense-data, the things which are sometimes called 'abstract ideas', but which we shall call 'universals'." The description composed only of universals gives no knowledge by acquaintance with which we might anchor an inference about the longest-lived man. A further statement about Bismarck, like "The first Chancellor of the German Empire was an astute diplomatist," is a statement that contains particulars and asserts a judgment that we can only make in virtue of some acquaintance (like something heard or read).
Statements about things known by description function in our language as statements about the "actual thing described;" that is, we intend to refer to that thing. We intend to say something with the direct authority that only Bismarck himself could have when he makes a statement about himself, something with which he has direct acquaintance. Yet, there is a spectrum of removal from acquaintance with the relevant particulars: from Bismarck himself, "there is Bismarck to people who knew him; Bismarck to those who only know of him through history" and at a far end of the spectrum "the longest lived of men." At the latter end, we can only make propositions that are logically deducible from universals, and at the former end, we come as close as possible to direct acquaintance and can make many propositions identifying the actual object. It is now clear how knowledge gained by description is reducible to knowledge by acquaintance. Russell calls this observation his fundamental principle in the study of "propositions containing descriptions": "Every proposition which we can understand must be composed wholly of constituents with which we are acquainted."
Indirect knowledge of some particulars seems necessary if we are to expressively attach meanings to the words we commonly use. When we say something referring to Julius Caesar, we clearly have no direct acquaintance with the man. Rather, we are thinking of such descriptions as "the man who was assassinated on the Ides of March" or "the founder of the Roman Empire." Since we have no way of being directly acquainted with Julius Caesar, our knowledge by description allows us to gain knowledge of "things which we have never experienced." It allows us to overstep the boundaries of our private, immediate experiences and engage a public knowledge and public language.
This knowledge by acquaintance and knowledge by description theory was a famous epistemological problem-solver for Russell. Its innovative character allowed him to shift to his moderate realism, a realism ruled by a more definite categorization of objects. It is a theory of knowledge that considers our practice of language to be meaningful and worthy of detailed analysis. Russell contemplates how we construct a sense of meaning about objects remote from our experience. The realm of acquaintance offers the most secure references for our understanding of the world. Knowledge by description allows us to draw inferences from our realm of acquaintance but leaves us in a more vulnerable position. Since knowledge by description also depends on truths, we are prone to error about our descriptive knowledge if we are somehow mistaken about a proposition that we have taken to be true.
Critics of this theory have held that Russell's hypothesis of knowledge by description is confusing. His comments when defining sense-data, that the physical world is unknowable to us, contradict his theory of knowledge by descriptions. He implies that "knowledge by description" is not really a form of knowledge since we can only know those things with which we are acquainted and we cannot be acquainted with physical objects. Russell's theory amounts to the proposition that our acquaintance with mental objects appears related in a distant way to physical objects and renders us obliquely acquainted with the physical world. Sense-data are our subjective representations of the external world, and they negotiate this indirect contact.
While innovative, Russell's theory of knowledge by description is not an attractive theory of knowledge. It is clearly unappealing because our impressions of the real world, on his view, are commensurate with muddy representations of reality. Though we have direct access to these representations, it seems impossible to have any kind of direct experience of reality. Reality, rather, consists in unconscious, inferential pieces of reasoning.
Readers' Notes allow users to add their own analysis and insights to our SparkNotes—and to discuss those ideas with one another. Have a novel take or think we left something out? Add a Readers' Note! | <urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e> | CC-MAIN-2013-20 | http://www.sparknotes.com/philosophy/problems/section5.rhtml | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.963978 | 2,196 | 3.21875 | 3 |
When he shot President Lincoln, John Wilkes Booth was 26 years old, and one of the nation’s most famous actors. (Charles DeForest Fredericks/National Portrait Gallery)
John Wilkes Booth, a Maryland native, spent the war performing in theatrical productions. But the conflict was never far from his mind. In a letter to his mother, he expressed chagrin that he hadn’t joined the Confederate army, writing, “I have … begun to deem myself a coward, and to despise my own existence.” He was outraged by the reelection of Lincoln, whom he viewed as the instigator of all the country’s woes. The month after the inauguration, Booth learned that Lincoln would be attending a performance at Ford’s Theatre on April 14. That night, he crept into Lincoln’s theater box and shot him in the back of the head. It was the first time a president had been murdered. “Wanted” posters were issued for Booth, and on April 26, he was cornered in a tobacco barn and shot by a federal sergeant, acting against orders to bring him in alive.
Several months later, Charles Creighton Hazewell, a frequent contributor, sought to make sense of the assassination—speculating that the plot may have been hatched in Canada (where a number of secessionist schemes had originated) and hinting at evidence that the plan had been endorsed at the highest levels of the Confederate government.—Sage Stossel
The assassination of President Lincoln threw a whole nation into mourning … Of all our Presidents since Washington, Mr. Lincoln had excited the smallest amount of that feeling which places its object in personal danger. He was a man who made a singularly favorable impression on those who approached him, resembling in that respect President Jackson, who often made warm friends of bitter foes, when circumstances had forced them to seek his presence; and it is probable, that, if he and the honest chiefs of the Rebels could have been brought face to face, there never would have been civil war,—at least, any contest of grand proportions; for he would not have failed to convince them that all that they had any right to claim, and therefore all that they could expect their fellow-citizens to fight for, would be more secure under his government than it had been under the governments of such men as Pierce and Buchanan, who made use of sectionalism and slavery to promote the selfish interests of themselves and their party … Ignorance was the parent of the civil war, as it has been the parent of many other evils,—ignorance of the character and purpose of the man who was chosen President in 1860–61, and who entered upon official life with less animosity toward his opponents than ever before or since had been felt by a man elected to a great place after a bitter and exciting contest …
That one of the most insignificant of [the secessionists’] number should have murdered the man whose election they declared to be cause for war is nothing strange, being in perfect keeping with their whole course. The wretch who shot the chief magistrate of the Republic is of hardly more account than was the weapon which he used. The real murderers of Mr. Lincoln are the men whose action brought about the civil war. Booth’s deed was a logical proceeding, following strictly from the principles avowed by the Rebels, and in harmony with their course during the last five years. The fall of a public man by the hand of an assassin always affects the mind more strongly than it is affected by the fall of thousands of men in battle; but in strictness, Booth, vile as his deed was, can be held to have been no worse, morally, than was that old gentleman who insisted upon being allowed the privilege of firing the first shot at Fort Sumter. Ruffin’s act is not so disgusting as Booth’s; but of the two men, Booth exhibited the greater courage,—courage of the basest kind, indeed, but sure to be attended with the heaviest risks, as the hand of every man would be directed against its exhibitor. Had the Rebels succeeded, Ruffin would have been honored by his fellows; but even a successful Southern Confederacy would have been too hot a country for the abode of a wilful murderer. Such a man would have been no more pleasantly situated even in South Carolina than was Benedict Arnold in England. And as he chose to become an assassin after the event of the war had been decided, and when his victim was bent upon sparing Southern feeling so far as it could be spared without injustice being done to the country, Booth must have expected to find his act condemned by every rational Southern man as a worse than useless crime, as a blunder of the very first magnitude. Had he succeeded in getting abroad, Secession exiles would have shunned him, and have treated him as one who had brought an ineffaceable stain on their cause, and also had rendered their restoration to their homes impossible. The pistol-shot of Sergeant Corbett saved him from the gallows, and it saved him also from the denunciations of the men whom he thought to serve. He exhibited, therefore, a species of courage that is by no means common; for he not only risked his life, and rendered it impossible for honorable men to sympathize with him, but he ran the hazard of being denounced and cast off by his own party … All Secessionists who retain any self-respect must rejoice that one whose doings brought additional ignominy on a cause that could not well bear it has passed away and gone to his account. It would have been more satisfactory to loyal men, if he had been reserved for the gallows; but even they must admit that it is a terrible trial to any people who get possession of an odious criminal, because they may be led so to act as to disgrace themselves, and to turn sympathy in the direction of the evil-doer … Therefore the shot of Sergeant Corbett is not to be regretted, save that it gave too honorable a form of death to one who had earned all that there is of disgraceful in that mode of dying to which a peculiar stigma is attached by the common consent of mankind.
Whether Booth was the agent of a band of conspirators, or was one of a few vile men who sought an odious immortality, it is impossible to say. We have the authority of a high Government official for the statement that “the President’s murder was organized in Canada and approved at Richmond”; but the evidence in support of this extraordinary announcement is, doubtless for the best of reasons, withheld at the time we write. There is nothing improbable in the supposition that the assassination plot was formed in Canada, as some of the vilest miscreants of the Secession side have been allowed to live in that country … But it is not probable that British subjects had anything to do with any conspiracy of this kind. The Canadian error was in allowing the scum of Secession to abuse the “right of hospitality” through the pursuit of hostile action against us from the territory of a neutral …
That a plan to murder President Lincoln should have been approved at Richmond is nothing strange; and though such approval would have been supremely foolish, what but supreme folly is the chief characteristic of the whole Southern movement? If the seal of Richmond’s approval was placed on a plan formed in Canada, something more than the murder of Mr. Lincoln was intended. It must have been meant to kill every man who could legally take his place, either as President or as President pro tempore. The only persons who had any title to step into the Presidency on Mr. Lincoln’s death were Mr. Johnson, who became President on the 15th of April, and Mr. Foster, one of the Connecticut Senators, who is President of the Senate … It does not appear that any attempt was made on the life of Mr. Foster, though Mr. Johnson was on the list of those doomed by the assassins; and the savage attack made on Mr. Seward shows what those assassins were capable of. But had all the members of the Administration been struck down at the same time, it is not at all probable that “anarchy” would have been the effect, though to produce that must have been the object aimed at by the conspirators. Anarchy is not so easily brought about as persons of an anarchical turn of mind suppose. The training we have gone through since the close of 1860 has fitted us to bear many rude assaults on order without our becoming disorderly. Our conviction is, that, if every man who held high office at Washington had been killed on the 14th of April, things would have gone pretty much as we have seen them go, and that thus the American people would have vindicated their right to be considered a self-governing race. It would not be a very flattering thought, that the peace of the country is at the command of any dozen of hardened ruffians who should have the capacity to form an assassination plot, the discretion to keep silent respecting their purpose, and the boldness and the skill requisite to carry it out to its most minute details: for the neglect of one of those details might be fatal to the whole project. Society does not exist in such peril as that.
john wilkes booth, a Maryland native, spent the war performing in theatrical productions. But the conflict was never far from his mind. In a letter to his mother, he expressed chagrin that he hadn’t joined the Confederate army, writing, “I have … begun to deem myself a coward, and to despise my own existence.” He was outraged by the reelection of Lincoln, whom he viewed as the instigator of all the country’s woes.
The month after the inauguration, Booth learned that Lincoln would be attending a performance at Ford’s Theatre on April 14. That night, he crept into Lincoln’s theater box and shot him in the back of the head. It was the first time a president had been murdered. “Wanted” posters were issued for Booth, and on April 26, he was cornered in a tobacco barn and shot by a federal sergeant, who acted against orders to bring him in alive.
Several months later, Charles Creighton Hazewell, a frequent Atlantic contributor, sought to make sense of the assassination—speculating that the plot may have been hatched in Canada (where a number of secessionist schemes had originated) and hinting at evidence that the plan had been endorsed at the highest levels of the Confederate government.
Read the full text of this article here.
This article available online at: | <urn:uuid:b48891ec-4670-49b3-85a7-ec1a2ad95bf5> | CC-MAIN-2013-20 | http://www.theatlantic.com/magazine/print/2012/02/assassination/308804/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.986341 | 2,194 | 3.8125 | 4 |
On January 9th, citizens living in southern Sudan will vote on a referendum to secede from the northern part of the country. A clock in the town of Juba, the political center of southern Sudan, counts down to this referendum, symbolical of the locals’ excitement to part from the hegemonic north. Nearby, the Darfur genocide crisis that continues to plague the area is not an isolated event. It’s all related, part of two brutal civil wars that have been for decades tearing the nation apart; as of late, literally.
Sudan has traditionally been seen by many as the bridge between the Arab and the African worlds—one not particularly easy to cross. The north and the south of Sudan are just about as culturally and religiously different from each other as you could possibly imagine. In the north, Arab culture dominates, and the majority religion is Islam. In the south, the predominant culture is more traditionally sub-Saharan African, and the primary religions are animist belief systems and Christianity. Ever since the country gained independence from Britain in 1956, the cultural and religious systems of the north have been heavily imposed on the whole of Sudan, resulting in southern resistance and the ongoing strife.
In particular, this imposition of a differing set of beliefs can in large part be attributed to the current Sudanese president, Omar al-Bashir. Al-Bashir arose to power in 1989 through a bloodless coup, and this past April, won the first ostensibly democratic election the nation has held in 24 years. I hesitate to call the election democratic because many believe that al-Bashir, who is notorious for his corruption, rigged it in his favor. While there is no proof, it is generally not unsafe to consider that leaders who are in power through a coup have significant sway in any following elections. Whether he is rightfully in power or not, al-Bashir has imposed northern ideals throughout the whole nation, a primary cause of the Sudanese civil wars. Many attribute the Darfur genocide, just a single episode of the extensive bloodshed since Sudan’s independence, to al-Bashir. Because of these accusations, he is currently on trial for war crimes, the only current head of state in such a predicament. To drive home his impositional tendencies further, al-Bashir has said that if the south secedes, he will impose Shari’a in the north, in an effort to make northern Sudan officially an Islamic state.
My first response to this situation was wondering: How did two peoples so immensely different from one another end up together in the first place? This is not the same as the American Civil War, where regional differences led to ideological differences, which in turn led to secession. In the Sudanese case, ideological and cultural differences existed long before the country gained independence. Thus, one should look to colonialism as the primary cause of Sudan’s problems. It seems to me that Sudan’s independence process was dangerously arbitrary; occurring at the time of mass European decolonization in Africa. It’s as if Britain backed out of the region and drew a national border at random. And now, after over half a century, the people want that to change.
Despite the referendum on schedule for next month, the potential new border still has not been set. Money, of course, is a factor. Sudan is one of the most oil-rich nations of Africa, but most of the country’s oil is found in the south. On the one hand, the north might not want to draw a new boundary where the south gets all of the resource wealth, a potential cause for even more strife. On the other hand, some see oil as a potential area that could keep the two sides friendly if they do end up splitting. Mutual desire for the oil wealth may bring the two sides together diplomatically if the split ends up happening peacefully.
As you can see, this situation is extremely complex, far more so than the south simply saying “we want to secede” and secession then happening. To better understand the context, one needs to consider the past, but one should also consider the future: what will happen if the current nation of Sudan does in fact split? I am wondering particularly about those who have their roots in the south but live in the north. Since the referendum was announced, many of these people have moved back to the south, but a fair number still remain in the north. What will happen to these primarily non-Muslim people (and Muslims alike) if the north does in fact impose Shari’a on al-Bashir’s whim? Al-Bashir will go from an imposer of northern Arab and Islamic values to being completely intolerant of this significant minority in his newly allotted half of Sudan, and the results would be tragic.
What message would a Sudanese split portray to the rest of Africa, the rest of the world? The African Union fears that a Sudanese split would incite other secessionists around the continent. Other nations undergoing similar domestic, regional conflicts of interest may feel not only that they have a right to secede, but may even feel encouraged to do so. Is this kind of outright division the right answer to such a complicated historical struggle?
Is there even a right answer? Experts seem to agree that the nation will inevitably split. Whether this bifurcation happens via a timely, democratic, and peaceful referendum or through continuing bloodshed is a matter that only time will tell. I will certainly be following this issue in the coming weeks, and I wrote this article before the scheduled referendum in the hope to spark more interest on the issue. I urge you to follow it in the news; the results affect a much wider area than simply Sudan.
Stay tuned for my next column, where I will compare and contrast two leaders in South America on opposite sides of the political spectrum and compare their respective political systems to that of the United States.
Latest posts by David Klayton (see all)
- Should Turkey be a part of "Europe?" - February 26, 2011
- Moderately Extreme: Ideological Flexibility in Latin American Politics - January 27, 2011
- When One Nation Becomes Two - December 31, 2010 | <urn:uuid:775f924c-1e42-4c9d-96ef-70a47ee35ac0> | CC-MAIN-2013-20 | http://www.wupr.org/2010/12/31/when-one-nation-becomes-two/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.96128 | 1,284 | 2.796875 | 3 |
Ki Tisa (Mitzvot)
For more teachings on this portion, see the archives to this blog, below at March 2006.
This week’s parasha is best known for the dramatic and richly meaningful story of the Golden Calf and the Divine anger, of Moses’ pleading on behalf of Israel, and the eventual reconciliation in the mysterious meeting of Moses with God in the Cleft of the Rock—subjects about which I’ve written at length, from various aspects, in previous years. Yet the first third of the reading (Exod 30:11-31:17) is concerned with various practical mitzvot, mostly focused on the ritual worship conducted in the Temple, which tend to be skimmed over in light of the intense interest of the Calf story. As this year we are concerned specifically with the mitzvot in each parasha, I shall focus on this section.
These include: the giving by each Israelite [male] of a half-shekel to the Temple; the making of the laver, from which the priests wash their hands and feet before engaging in Divine service; the compounding of the incense and of the anointing oil; and the Shabbat. I shall focus here upon the washing of the hands.
Hand-washing is a familiar Jewish ritual: it is, in fact, the first act performed by pious Jews upon awakening in the morning (some people even keep a cup of water next to their beds, so that they may wash their hands before taking even a single step); one performs a ritual washing of the hands before eating bread; before each of the daily prayers; etc. The section here dealing with the laver in the Temple (Exod 30:17-21) is also one of the four portions from the Torah recited by many each morning, as part of the section of the liturgy known as korbanot, chapters of Written and Oral Torah reminiscent of the ancient sacrificial system, that precede Pesukei de-Zimra.
Sefer ha-Hinukh, at §106, explains the washing of hands as an offshoot of the honor due to the Temple and its service—one of many laws intended to honor, magnify, and glorify the Temple. Even if the priest was pure and clean, he must wash (literally, “sanctify”) his hands before engaging in avodah. This simple gesture of purification served as a kind of separation between the Divine service and everyday life. It added a feeling of solemnity, of seriousness, a sense that one was engaged in something higher, in some way separate from the mundane activities of regular life. (One hand-washing by kohanim, in the morning, was sufficient, unless they left the Temple grounds or otherwise lost the continuity of their sacred activity.) Our own netilat yadaim, whether before prayer or breaking bread, may be seen as a kind of halakhic carryover from the Temple service, albeit on the level of Rabbinic injunction.
What is the symbolism of purifying one’s hands? Water, as a flowing element, as a solvent that washes away many of the things with which it comes in contact, is at once a natural symbol of both purity, and of the renewal of life. Mayim Hayyim—living waters—is an age old association. Torah is compared to water; water, constantly flowing, is constantly returning to its source. At the End of Days, “the land will be filled with knowledge of the Lord, like waters going down to the sea.” A small part of this is hinted in this simple, everyday gesture.
“See that this nation is Your people”
But I cannot pass over Ki Tisa without some comment on the incident of the Golden Calf and its ramifications. This week, reading through the words of the parasha in preparation for a shiur (what Ruth Calderon, founder of Alma, a secularist-oriented center for the study of Judaism in Tel Aviv, called “barefoot reading”—that is, naïve, without preconceptions), I discovered something utterly simple that I had never noticed before in quite the same way.
At the beginning of the Calf incident, God tells Moses, who has been up on the mountain with Him, “Go down, for your people have spoiled” (32:7). A few verses later, when God asks leave of Moses (!) to destroy them, Moses begs for mercy on behalf of the people with the words “Why should Your anger burn so fiercely against Your people…” (v. 11). That is, God calls them Moses’ people, while Moses refers to them as God’s people. Subsequent to this exchange, each of them refers to them repeatedly in the third person, as “the people” or “this people” (העם; העם הזה). Neither of them refers to them, as God did in the initial revelation to Moses at the burning bush (Exodus 3:7 and passim) as “my people,” or with the dignified title, “the children of Israel”—as if both felt a certain alienation, of distance from this tumultuous, capricious bunch. Only towards the end, after God agrees not to destroy them, but still states “I will not go up with them,” but instead promises to send an angel, does Moses says “See, that this nation is Your people” (וראה כי עמך הגוי הזה; 33:13).
What does all this signify? Reading the peshat carefully, there is one inevitable conclusion: that God wished to nullify His covenant with the people Israel. It is in this that there lies the true gravity, and uniqueness, of the Golden Calf incident. We are not speaking here, as we read elsewhere in the Bible—for example, in the two great Imprecations (tokhahot) in Lev 26 and Deut 28, or in the words of the prophets during the First Temple—merely of threats of punishment, however harsh, such as drought, famine, pestilence, enemy attacks, or even exile and slavery. There, the implicit message is that, after a period of punishment, a kind of moral purgation through suffering, things will be restored as they were. Here, the very covenant itself, the very existence of an intimate connection with God, hangs in the balance. God tells Moses, “I shall make of you a people,” i.e., instead of them.
This, it seems to me, is the point of the second phase of this story. Moses breaks the tablets; he and his fellow Levites go through the camp killing all those most directly implicated in worshipping the Calf; God recants and agrees not to destroy the people. However, “My angel will go before them” but “I will not go up in your midst” (33:2, 3). This should have been of some comfort; yet this tiding is called “this bad thing,” the people mourn, and remove the ornaments they had been wearing until then. Evidently, they understood the absence of God’s presence or “face” as a grave step; His being with them was everything. That is the true importance of the Sanctuary in the desert and the Tent of Meeting, where Moses speaks with God in the pillar of cloud (33:10). God was present with them there in a tangible way, in a certain way continuing the epiphany at Sinai. All that was threatened by this new declaration.
Moses second round of appeals to God, in Exod 33:12-23, focuses on bringing God, as it were, to a full reconciliation with the people. This is the significance of the Thirteen Qualities of Mercy, of what I have called the Covenant in the Cleft of the Rock, the “faith of Yom Kippur” as opposed to that of Shavuot (see HY I: Ki Tisa; and note Prof. Jacob Milgrom’s observation that this chapter stands in the exact center, in a literary sense, of the unit known as the Hextateuch—Torah plus the Book of Joshua).
But I would add two important points. One, that this is the first place in the Torah where we read about sin followed by reconciliation. After Adam and Eve ate of the fruit of the Garden, they were punished without hope of reprieve; indeed, their “punishment “ reads very much like a description of some basic aspects of the human condition itself. Cain, after murdering Abel, was banished, made to wander the face of the earth. The sin of the brothers in selling Joseph, and their own sense of guilt, is a central factor in their family dynamic from then on, but there is nary a word of God’s response or intervention. It would appear that God’s initial expectation in the covenant at Sinai was one of total loyalty and fidelity. The act of idolatry was an unforgivable breach of the covenant—much as adultery is generally perceived as a fundamental violation of the marital bond.
Moses, in persuading God to recant of His jealousy and anger, to give the faithless people another chance, is thus introducing a new concept: of a covenant that includes the possibility of even the most serious transgressions being forgiven; of the knowledge that human beings are fallible, and that teshuvah and forgiveness are essential components of any economy of men living before a demanding God.
The second, truly astonishing point is the role played by Moses in all this. Moshe Rabbenu, “the man of God,” is not only the great teacher of Israel, the channel through which they learn the Divine Torah, but also, as it were, one who teaches God Himself. It is God who “reveals His Qualities of Mercy” at the Cleft of the Rock; but without Moses cajoling, arguing, persuading (and note the numerous midrashim around this theme), “were it not for my servant Moses who stood in the breach,” all this would not have happened. It was Moses who elicited this response and who, so to speak, pushed God Himself to this new stage in his relation with Israel—to give up His expectations of perfection from His covenanted people, and to understand that living within a covenant means, not rigid adherence to a set of laws, but a living relationship with real people, taking the bad with the good. (Again, the parallel to human relationships is obvious) | <urn:uuid:c4c19472-691a-44c6-a55b-21fbb183475b> | CC-MAIN-2013-20 | http://hitzeiyehonatan.blogspot.com/2008_02_01_archive.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.966594 | 2,269 | 2.671875 | 3 |
America's oil and natural gas industry is committed to protecting the environment and to continuously improving its hurricane preparation and response plans. After any hurricane or tropical storm, the goal is to return to full operations as quickly and as safely as possible. For the 2012 hurricane season, the industry continues to build upon critical lessons learned from 2008's major hurricanes, Gustav and Ike, as well as other powerful storms, such as 2005's Katrina and Rita and 2004's Ivan.
API plays two primary roles for the industry in preparing for hurricanes. First, it helps the industry gain a better understanding of the environmental conditions in and around the Gulf of Mexico during hurricane or tropical storm activity and then assists industry in using that knowledge to make offshore and onshore facilities less vulnerable. Second, API collaborates with member companies, other industries and with federal, state and local governments to prepare for hurricanes and return operations as quickly and as safely as possible.
API member companies also independently work to improve preparedness for hurricanes and other natural or manmade disasters. They have, for example, reviewed and updated emergency response plans, established redundant communication paths and made pre-arrangements with suppliers to help ensure they have adequate resources during an emergency.
The API Subcommittee on Offshore Stuctures, the International Association of Drilling Contractors, and the Offshore Operators Committee, serve as a liaison to regulatory agencies, coordinate industry review of critical design standards and provide a forum for sharing lessons learned from previous hurricanes.
These combined efforts are critical since the Gulf of Mexico accounted for about 23 percent of the oil and 8 percent of total natural gas produced in the United States (approximately 82 percent of the oil supply comes from deepwater facilities), and the Gulf Coast region is home to almost half of the U.S. refining capacity.
Upstream (Exploration and Production)
During the major 2005 hurricanes, waves were higher and winds were stronger than anticipated in deeper parts of the Gulf so the industry moved away from viewing it as a uniform body of water. Evaluating the effects of those and other storms, helped scientists discover that the Central Gulf of Mexico was more prone to hurricanes because it acts as a gathering spot for warm currents that can strengthen a storm.
The revised wind, wave and water current measurements ("metocean" data) prompted API to reassess its recommended practices (RPs) for industry operations in the region.
- The upstream segment continues to integrate the updated environmental (metocean) data on how powerful storms affect conditions in the Gulf of Mexico into its offshore structure design standards. This effort led to the publication in 2008 of an update to RP 2SK, Design and Analysis of Stationkeeping Systems for Floating Structures, that provides guidance for design and operation of Mobile Offshore Drilling Unit (MODU) mooring systems in the Gulf of Mexico during the hurricane season. API RP 95J, Gulf of Mexico Jack-up Operations for Hurricane Season, which recommends locating jack-up rigs on more stable areas of the sea floor, and positioning platform decks higher above the sea surface, was also updated.
API publications are available at our (Search and Order
API in the past six years also has issued a number of bulletins to help better prepare for and bring production back online after Gulf hurricanes. These include:
Production and Hurricanes (steps industry takes to prepare for and return after a storm)
- Bulletin 2TD, Guidelines for Tie-downs on Offshore Production Facilities for Hurricane Season, which is aimed at better-securing separate platform equipment.
- Bulletin 2INT-MET, Interim Guidance on Hurricane Conditions in the Gulf of Mexico, which provides updated metocean data for four regions of the Gulf, including wind velocities, deepwater wave conditions, ocean current information, and surge and tidal data.
- Bulletin 2INT-DG, Interim Guidance for Design of Offshore Structures for Hurricane Conditions, which explains how to apply the updated metocean data during design.
- Bulletin 2INT-EX, Interim Guidance for Assessment of Existing Offshore Structures for Hurricane Conditions, which assists owners/operators and engineers with existing facilities.
- Bulletin 2HINS, Guidance on Post-hurricane Structural Inspection of Offshore Structures, which provides guidance on determining if a structure sustained hurricane-induced damage that affects the safety of personnel, the primary structural integrity, or its ability to perform the purpose for which it was intended.
Refineries and Pipelines
- Days in advance of a tropical storm or hurricane moving toward or near their drilling and production operations, companies will evacuate all non-essential personnel and begin the process of shutting down production.
- As the storm gets closer, all personnel will be evacuated from the drilling rigs and platforms, and production is shut down. Drillships may relocate to a safe location. Operations in areas not forecast to take a direct hit from the storm often will be shut down as well because storms can change direction with little notice.
- After a storm has passed and it is safe to fly, operators will initiate "flyovers" of onshore and offshore facilities to evaluate damage from the air. For onshore facilities, these "flyovers" can identify flooding, facility damage, road or other infrastructure problems, and spills. Offshore "flyovers" look for damaged drilling rigs, platform damage, spills, and possible pipeline damage.
- Many offshore drilling rigs are equipped with GPS locator systems, which allow federal officials and drilling contractors to remotely monitor the rigs' location before, during and after a hurricane. If a rig is pulled offsite by the storm, locator systems allow crews to find and recover the rig as quickly and as safely as possible.
- Once safety concerns are addressed, operators will send assessment crews to offshore facilities to physically assess the facilities for damage.
- If facilities are undamaged, and ancillary facilities, like pipelines that carry the oil and natural gas, are undamaged and ready to accept shipments, operators will begin restarting production. Drilling rigs will commence operations.
Despite sustaining unprecedented damage and supply outages during the 2005 and 2008 hurricanes, the industry quickly and safely brought refining and pipeline operations back online, delivering to consumers near-record levels of gasoline and record levels of distillate (diesel and heating oil) in 2008. The oil and oil-product pipelines operating on or near the Gulf of Mexico continue to review their assets and operations to minimize the potential impacts of storms and shorten the time it takes to recover. While there have been some shortages caused by hurricanes, supply disruptions have been temporary despite extensive damage to supporting infrastructure, such as electric power generation and distribution, production shut-ins and refinery shutdowns. Pipelines need a steady supply of crude oil or refined products to keep product flowing to its intended destinations.
To prepare for future severe storms, refiners and pipeline companies have
Refineries and hurricanes (steps industry takes to prepare for and return after a storm)
- Worked with utilities to clarify priorities for electric power restoration critical to restarting operations and to help minimize significant disruptions to fuel distribution and delivery.
- Secured backup power generation equipment and worked with federal, state and local governments to ensure that pipelines and refineries are considered "critical" infrastructure for back-up power purposes.
- Established redundant communications systems to support continuity of operations and locate employees.
- Worked with vendors to pre-position food, water and transportation, and updated emergency plans to secure other emergency supplies and services.
- Provided additional training for employees who have participated in various exercises and drills.
- Reexamined and improved emergency response and business continuity plans.
- Strengthened onshore buildings and elevated equipment where appropriate to minimize potential flood damage.
- Worked with the states and local emergency management officials to provide documentation and credentials for employees who need access to disaster sites where access is restricted during an emergency.
- Participated in industry conferences to share best practices and improvement opportunities.
Pipelines and hurricanes (steps industry takes to prepare for and return after a storm)
- Refiners, in the hours before a large storm makes landfall, will usually evacuate all non-essential personnel and begin shutting down or reducing operations.
- Operations in areas not forecast to take a direct hit from the storm often are shut down or curtailed as a precaution because storms can change direction with little notice.
- Once safe, teams come in to assess damage. If damage or flooding has occurred, it must be repaired and dealt with before the refinery can be brought back on-line.
- Other factors that can cause delays in restarting refineries include the availability of crude oil, electricity to run the plant and water used for cooling the process units.
- Refineries are complex. It takes more than a flip of a switch to get a refinery back up and running. Once a decision has been made that it is safe to restart, it can take several days before the facility is back to full operating levels. This is because the process units and associated equipment must be returned to operation in a staged manner to ensure a safe and successful startup.
- If facilities are undamaged or necessary repairs have been made, and ancillary facilities - like pipelines that carry the oil and natural gas - are undamaged and ready to accept shipments, operators will begin restarting production.
- Pipeline operations can be impacted by storms, primarily through power outages, but also by direct damage.
- Offshore pipelines damaged require the hiring of divers, repairs and safety inspections before supplies can flow. Damaged onshore pipelines must be assessed, repaired and inspected before resuming operations.
- Without power, crude oil and petroleum products cannot be moved through pipelines. Operators routinely hold or lease back-up generators but need time to get them onsite.
- If there is no product put into pipelines because Gulf Coast/Gulf of Mexico crude or natural gas production has been curtailed, or because of refinery shutdowns, the crude and products already in the pipelines cannot be pushed out the other end.
- Wind damage to above ground tanks at storage terminals can also impact supplies into the pipeline.
: The 2008 hurricane season was very active, with 16 named storms, of which eight became hurricanes and five of those were major hurricanes. For the U.S. oil and natural gas industry, the two most serious storms of 2008 were Hurricane Ike, which made landfall in mid-September near Baytown, Texas, and Hurricane Gustav, which made landfall on September 1 in Louisiana.
Hurricane Gustav, a strong Category 2 storm, kept off-line oil and natural gas delivery systems and production platforms that had not yet been fully restored from a smaller storm two weeks earlier, and brought significant flooding as far north as Baton Rouge. Hurricane Ike, another strong Category 2 hurricane, caused significant portions of the production, processing, and pipeline infrastructure along the Gulf Coast in East Texas and Louisiana to shut down. Ike caused significant destruction to electric transmission and distribution lines, and these damages delayed the restart of major processing plants, pipelines, and refineries. As many as 3.7 million customers were without electric power following the storm, with about 2.5 million in Texas alone.
At the peak of disruptions, more than 20 percent of total U.S. refinery capacity was idled. The Minerals Management Service - now called Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE)
estimated that 2,127 of the 3,800 total oil and natural gas production platforms in the Gulf of Mexico were exposed to hurricane conditions, with winds greater than 74 miles per hour, from Hurricanes Gustav and Ike. A total of 60 platforms were destroyed as a result of Hurricanes Gustav and Ike. Some platforms which had been previously reported as having extensive damage were reassessed and determined to be destroyed. The destroyed platforms produced 13,657 barrels of oil and 96.5 million cubic feet of natural gas daily or 1.05 percent of the oil and 1.3 percent of the natural gas produced daily in the Gulf of Mexico.
: The 2005 hurricane season was the most active in recorded history, shattering previous records. According to the Department of Energy, refineries in the path of hurricanes Katrina and Rita accounting for about 29 percent of U.S. refining capacity were shut down at the peak of disruptions. Offshore, the Minerals Management Service (MMS) estimated 22,000 of the 33,000 miles of pipelines and 3,050 of the 4,000 platforms in the Gulf were in the direct paths of the two Category 5 storms. Together the storms destroyed 115 platforms and damaged 52 others.
Even so, there was no loss of life among industry workers and contractors. An MMS report found "no accounts of spills from facilities on the federal Outer Continental Shelf that reached the shoreline; oiled birds or mammals; or involved any discoveries of oil to be collected or cleaned up".
: Hurricane Ivan was the strongest hurricane of the 2004 season and among one of the most powerful Atlantic hurricanes on record. It moved across the Gulf of Mexico to make landfall in Alabama. Ivan then looped across Florida and back into the Gulf, regenerating into a new tropical system, which moved into Louisiana and Texas.
The MMS estimated approximately 150 offshore facilities and 10,000 miles of pipelines were in the direct path of Ivan. Seven platforms were destroyed and 24 others damaged. The oil and natural gas industry submitted numerous damage reports to MMS, including for mobile drilling rigs, offshore platforms, producing wells, topside systems including wellheads and production and processing equipment, risers, and pipeline systems that transport oil and gas ashore from offshore facilities. | <urn:uuid:5a1087ae-92e7-46db-8ae2-20172a204f5d> | CC-MAIN-2013-20 | http://www.api.org/news-and-media/hurricane-information/hurricane-preparation.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95195 | 2,794 | 2.546875 | 3 |
History of Initiative & Referendum in Arizona
The History of Initiative & Referendum in Arizona began when the state acquired statewide initiative, referendum, and recall rights at the time of statehood in 1912. The first initiative in the state was for women's suffrage. It was a landslide victory, passing by a margin of greater than two to one on Nov. 5, 1912.
Then, in 1914, Arizona saw 15 qualified initiatives, a record that stood until 2006, when 19 initiatives were proposed. Four of the 1914 initiatives passed because of the efforts of organized labor. One prohibited blacklisting of union members; a second established an "old age and mothers' pension"; another established a state government contract system; and a fourth limited businesses' employment of non-citizens. Lastly, the voters in 1914 passed an initiative that barred the governor and legislature from amending or repealing initiatives.
In response, the legislature tried to pass a constitutional amendment that would make it more difficult to pass initiatives. Because this amendment needed the approval of voters, the Arizona Federation of Labor waged a campaign against the measure. The amendment was narrowly defeated in 1916.
- This chart includes all ballot measures to appear on the Arizona ballot in the year indicated, not just initiated measures. See also Arizona ballot measures.
Year | Propositions on ballot | How many were approved? | How many were defeated?
Arizonans owe many of their reforms to John Kromko. Kromko, like most Arizonans, is not a native; he was born near Erie, Pennsylvania, in 1940 and moved to Tucson in the mid-1960s. He was active in protests against the Vietnam War, and in the 1970s and 1980s he was elected to the lower house of the state legislature several times. By night, he was a computer-programming instructor; by day, he was Arizona’s "Mr. Initiative."
Kromko’s first petition was a referendum drive to stop a Tucson city council ordinance banning topless dancing; he argued the ban infringed on free speech. In 1976 Kromko was among the handful of Arizonans who, in cooperation with the People’s Lobby Western Bloc campaign, succeeded in putting on the state ballot an initiative to phase out nuclear power. The initiative lost at the polls, but Kromko’s leadership on the issue got him elected to his first term in the legislature.
Repealing the sales tax on food
Once elected, Kromko set his sights on abolishing the sales tax on food, a "regressive" tax that hits the poor hardest. Unsuccessful in the legislature, Kromko launched a statewide initiative petition and got enough signatures to put food tax repeal on the ballot. The legislature, faced with the initiative, acted to repeal the tax.
After the food tax victory, Kromko turned to voter registration reform. Again the legislature was unresponsive, so he launched an initiative petition. He narrowly missed getting enough signatures in 1980, and he failed to win re-election that year.
Undaunted, he revived the voter registration campaign and turned to yet another cause: Medicaid funding. Arizona in 1981 was the only state without Medicaid, since the legislature had refused to appropriate money for the state's share of this federal program.
In 1982, with an initiative petition drive under way and headed for success, the legislature got the message and established a Medicaid program. Kromko and his allies on this issue, the state’s churches, were satisfied and dropped their petition drive.
Motor Voter initiative
The voter registration initiative, now under the leadership of Les Miller, a Phoenix attorney, and the state Democratic Party, gained ballot placement and voter approval. In the ensuing four years, this "Motor Voter" initiative increased by over 10 percent the proportion of Arizona’s eligible population who were registered to vote.
Late legislative career
Kromko, re-elected to the legislature in 1982, took up his petitions again in 1983 to prevent construction of a freeway in Tucson that would have smashed through several residential neighborhoods. The initiative was merely to make freeway plans subject to voter approval, but Tucson officials, seeing the campaign as the death knell for their freeway plans, blocked its placement on the ballot through various legal technicalities. Kromko and neighborhood activists fighting to save their homes refused to admit defeat. They began a new petition drive in 1984, qualified their measure for the ballot, and won voter approval for it in November 1985.
Arizona’s moneyed interests poured funds into a campaign to unseat Kromko in 1986. Kromko not only survived but also fought back by supporting a statewide initiative to limit campaign contributions, sponsored by his colleague in the legislature, Democratic State Representative Reid Ewing of Tucson. Voters passed the measure by a two to one margin.
Kromko’s initiative exploits have made him the most effective Democratic political figure, besides former governor Bruce Babbitt, in this perennially Republican-dominated state. And Babbitt owes partial credit for one of his biggest successes - enactment of restrictions on the toxic chemical pollution of drinking water - to Kromko. Early in 1986 Kromko helped organize an environmentalist petition drive for an anti-toxic initiative, while Babbitt negotiated with the legislature for passage of a similar bill. When initiative backers had enough signatures to put their measure on the ballot, the legislature bowed to the pressure and passed Babbitt's bill. Even today, Kromko is still active in politics, writing letters to the editor about immigration policies.
Petition drive problems in 2008
2008 was a tough year for ballot initiatives in Arizona. Nine citizen initiatives filed signatures to qualify for the November 2008 Arizona ballot by the state's July 3 petition drive deadline. In the end, only six of the initiatives were certified, with three initiatives disqualified as a result of a historically high number of problems with flawed petition signatures. When the November vote was held, only one of the six that qualified for the ballot was approved.
Criticisms of process
After 19 were proposed in 2006, legislators worried about "ballot fatigue," or overuse of the initiative system. This led legislators to consider steps to limit or otherwise exert more control over the initiative process. Ironically, any attempt to alter the initiative and referendum process would require an amendment to the state constitution, and would thus itself have to be put to the voters as a referendum.
This article is significantly based on an article published by the Initiative & Referendum Institute, and is used with their permission. Their article, in turn, relies on research in David Schmidt's book, Citizen Lawmakers: The Ballot Initiative Revolution.
Also portions of this article were taken from Wikipedia, the free encyclopedia under the GNU license.
- ↑ Arizona Daily Star, "'Clown' takes some serious initiative", July 20, 2007
- ↑ Arizona Republic, "'Flawed' election petitions face review", September 13, 2008
- ↑ Phoenix New Times, "Citizen initiatives have been kicked off the ballot this year in record numbers, and the problems could go much deeper than invalid signatures", August 21, 2008
- ↑ Legislators seeking more control over initiatives, Arizona Republic, Feb. 13, 2007
- ↑ History of Arizona's initiative
- ↑ Citizen Lawmakers: The Ballot Initiative Revolution Temple University Press, 352 pp., ISBN-10: 0877229031, October 1991
General Chemistry/Periodicity and Electron Configurations
Blocks of the Periodic Table
The Periodic Table does more than just list the elements. The word periodic means that in each row, or period, there is a pattern of characteristics in the elements. This is because the elements are listed in part by their electron configuration. The Alkali metals and Alkaline earth metals have one and two valence electrons (electrons in the outer shell) respectively. These elements lose electrons to form bonds easily, and are thus very reactive. These elements are the s-block of the periodic table. The p-block, on the right, contains common non-metals such as chlorine and helium. The noble gases, in the column on the right, almost never react, since they have eight valence electrons, which makes them very stable. The halogens, directly to the left of the noble gases, readily gain electrons and react with metals. The s and p blocks make up the main-group elements, also known as representative elements. The d-block, which is the largest, consists of transition metals such as copper, iron, and gold. The f-block, on the bottom, contains rarer metals including uranium. Elements in the same Group or Family have the same configuration of valence electrons, making them behave in chemically similar ways.
Causes for Trends
There are certain phenomena that cause the periodic trends to occur. You must understand them before learning the trends.
Effective Nuclear Charge
The effective nuclear charge is the amount of positive charge acting on an electron. It is the number of protons in the nucleus minus the number of electrons in between the nucleus and the electron in question. For example, sodium's single outer electron is screened by ten inner electrons, so it feels an effective nuclear charge of roughly 11 - 10 = +1. Basically, the nucleus attracts an electron, but other electrons in lower shells repel it (opposites attract, likes repel).
Shielding Effect
The shielding (or screening) effect is similar to effective nuclear charge. The core electrons repel the valence electrons to some degree. The more electron shells there are (a new shell for each row in the periodic table), the greater the shielding effect is. Essentially, the core electrons shield the valence electrons from the positive charge of the nucleus.
Electron-Electron Repulsions
When two electrons are in the same shell, they will repel each other slightly. This effect is mostly canceled out due to the strong attraction to the nucleus, but it does cause electrons in the same shell to spread out a little bit. Lower shells experience this effect more because they are smaller and allow the electrons to interact more.
Coulomb's Law
Coulomb's law is an equation that determines the amount of force with which two charged particles attract or repel each other. It is F = kq₁q₂/r², where q₁ and q₂ are the amounts of charge (+1e for protons, -1e for electrons), r is the distance between them, and k is a constant. You can see that doubling the distance would quarter the force. Also, a large number of protons would attract an electron with much more force than just a few protons would.
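To make the scaling concrete, here is a minimal Python sketch of the calculation. It is an illustration added here, not part of the original text; the nuclear charge of 11 protons and the distance of about one angstrom are arbitrary example values.

```python
# Coulomb's law: F = k * q1 * q2 / r^2
K = 8.99e9     # Coulomb constant in N*m^2/C^2
E = 1.602e-19  # elementary charge in coulombs

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges, in newtons."""
    return K * abs(q1 * q2) / r ** 2

r = 1.0e-10  # roughly one angstrom, a typical atomic distance
near = coulomb_force(11 * E, -E, r)     # electron at distance r from 11 protons
far = coulomb_force(11 * E, -E, 2 * r)  # same electron at twice the distance
print(near / far)  # ~4: doubling the distance quarters the force
```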
Trends in the Periodic table
Most of the elements occur naturally on Earth. However, all elements beyond uranium (number 92) are called trans-uranium elements and never occur outside of a laboratory. Most of the elements occur as solids or gases at STP. STP is standard temperature and pressure, which is 0° C and 1 atmosphere of pressure. There are only two elements that occur as liquids at STP: mercury (Hg) and bromine (Br).
Bismuth (Bi) is the last stable element on the chart. All elements after bismuth are radioactive and decay into more stable elements. Some elements before bismuth are radioactive, however.
Atomic Radius
Leaving out the noble gases, atomic radii are larger on the left side of the periodic chart and are progressively smaller as you move to the right across the period. Conversely, as you move down the group, radii increase.
Atomic radii decrease along a period due to greater effective nuclear charge. Atomic radii increase down a group due to the shielding effect of the additional core electrons, and the presence of another electron shell.
Ionic Radius
For nonmetals, ions are bigger than atoms, as the ions have extra electrons. For metals, it is the opposite.
Extra electrons (negative ions, called anions) cause additional electron-electron repulsions, making them spread out farther. Fewer electrons (positive ions, called cations) cause fewer repulsions, allowing them to be closer.
Ionization Energy

Ionization energy is the energy required to strip an electron from the atom (when in the gas state).
Ionization energy is also a periodic trend within the periodic table organization. Moving left to right within a period or upward within a group, the first ionization energy generally increases. As the atomic radius decreases, it becomes harder to remove an electron that is closer to a more positively charged nucleus.
Ionization energy decreases going left across a period because there is a lower effective nuclear charge keeping the electrons attracted to the nucleus, so less energy is needed to pull one out. It decreases going down a group due to the shielding effect. Remember Coulomb's Law: as the distance between the nucleus and electrons increases, the force decreases at a quadratic rate.
It is considered a measure of the tendency of an atom or ion to surrender an electron, or the strength of the electron binding; the greater the ionization energy, the more difficult it is to remove an electron. The ionization energy may be an indicator of the reactivity of an element. Elements with a low ionization energy tend to be reducing agents and form cations, which in turn combine with anions to form salts.
Electron Affinity
Electron affinity is the opposite of ionization energy. It is the energy released when an electron is added to an atom.
Electron affinity is highest in the upper right, lowest in the lower left. However, electron affinity is actually negative for the noble gases. They already have a complete valence shell, so there is no room in their orbitals for another electron. Adding an electron would require creating a whole new shell, which takes energy instead of releasing it. Several other elements have extremely low electron affinities because they are already in a stable configuration, and adding an electron would decrease stability.
Electron affinity follows these trends for the same reasons as ionization energy.
Electronegativity

Electronegativity is how much an atom attracts electrons within a bond. It is measured on a scale with fluorine at 4.0 and francium at 0.7. Electronegativity decreases from upper right to lower left.
Electronegativity decreases because of atomic radius, shielding effect, and effective nuclear charge in the same manner that ionization energy decreases.
Metallic Character
Metallic elements are shiny, usually gray or silver colored, and good conductors of heat and electricity. They are malleable (can be hammered into thin sheets), and ductile (can be stretched into wires). Some metals, like sodium, are soft and can be cut with a knife. Others, like iron, are very hard. Non-metallic atoms are dull, usually colorful or colorless, and poor conductors. They are brittle when solid, and many are gases at STP. Metals give away their valence electrons when bonding, whereas non-metals take electrons.
The metals are towards the left and center of the periodic table—in the s-block, d-block, and f-block. Poor metals and metalloids (somewhat metal, somewhat non-metal) are in the lower left of the p-block. Non-metals are on the right of the table.
Metallic character increases from right to left and top to bottom. Non-metallic character is just the opposite. This is because of the other trends: ionization energy, electron affinity, and electronegativity.
Per Square Meter
Warm-up: Relationships in Ecosystems (10 minutes)
1. Begin this lesson by presenting the powerpoint, “Per Square Meter”.
2. After the presentation, ask students to think of animal relationships that correspond to each of the following types: Competition, Predation, Parasitism, and Mutualism.
a. For example, two animals that compete for food are lions and cheetahs (they compete for zebras and antelopes)
3. Record the different types of relationships on the board.
Activity One: My Own Square Meter (30 minutes)
1. Have students go outside and pick a small area (about a square meter each) to explore. It is preferable that this area be grassy or ‘natural’. The school playground might be a good spot.
2. Each student should keep a list of both the living organisms and man-made products found in their area (e.g. grass, birds, insects, flowers, sidewalk, etc.). Students are allowed to collect a few specimens from this area to show to the class. If students do not have jars, they can draw their observations. *See Reproducible #1
Activity Two: Who lives in our playground? (10 minutes)
1. After listing, collecting, and drawing specimens, students should return to the classroom and present their findings.
a. Have the students sit in a circle. Each student should read his or her list of findings out loud. If they collected specimens or drew observations, have them present them to the class.
2. Make a list of these findings on the board. Only write repeated findings once (to avoid writing grass as many times as there are students). Keep one list of living organisms and one list of man-made products.
3. For now, focus on the list of living organisms. As a class, help students name possible relationships between the organisms. See if they can find one of each type of relationship. For example, a bee on a flower is an example of mutualism because the nectar from the flower nourishes the bee and in return, the bee pollinates the flower.
Activity Three: Humans and the Environment: Human Effect on one Square Meter (15 minutes)
1. Now that students have focused on the animal relationships of their square meter, it is time to examine the effect of humans on the natural environment. Focus on the human-made product list. Ask students to consider the possible relationships between the human-made products and the environment. Prompt a brief class discussion on the effects of man-made products on the environment. Use the following questions as guidelines.
a. What is the effect of an empty drink bottle (or any other piece of trash) in a grassy field? Will it decompose? Will it be used by an animal as a habitat or food?
Answer: Trash is an invasive man-made product. Most trash is non-biodegradable and is harmful to the environment and to eco-system relations. Therefore, it is a harmful addition to the square meter.
b. Who left the bottle there? Do you think they are still thinking about it? Did they leave it there on purpose? Why did they leave it there?
Answer: Most people litter thoughtlessly. They are not thinking about their actions and how they may affect the environment or eco-systems. It is important that people recognize that litter has a major effect on the environment.
c. What about a bench? Does a park bench have the same effect on the environment as a piece of trash?
Answer: A park bench can be considered a positive human-made product. A park bench has little negative effect on the environment and even helps humans further appreciate eco-systems. The park bench may even provide shelter or a perch for the eco-system's living organisms.
d. Is there a difference between positive human-made products and negative ones? What are some examples of each?
Answer: Yes, there is a difference between positive and negative human-made products. Positive products have minimal effect on the functioning of eco-systems whereas negative products have major effects on eco-systems. An example of a positive human-made product would be a solar powered house. An example of a negative human-made product would be a car that produces a lot of pollution.
Wrap Up: Our Classroom Eco-Web (20-30 minutes)
1. Have students create classroom artwork by illustrating the relationships between their eco-systems.
2. Each student should draw at least two components of his or her square meter.
3. After everyone has finished their illustrations, create a web relating the illustrations. Draw arrows between illustrated components with written indications of the type of relationship exemplified.
4. Post the finished product in the classroom so that students can see the interconnectedness of the earth’s eco-systems.
Extension: Exploring Aquatic Eco-Systems (On-going Activity)
Students can explore another type of eco-system by creating a classroom aquarium or terrarium. The supplies for both of these mini eco-systems can be found at your local pet store. Students should help set up and maintain the aquarium or terrarium throughout the year. Periodically, students should observe how the mini-ecosystem is progressing, note changes, and assess the relationships between the organisms of the eco-system. This way, students are able to directly participate in the functioning of a natural system.
Another related activity might be to take your students on a field trip to a different eco-system from that of your school. If you live near a river, lake, or ocean take them there to explore different ecological relations. If you live in a city, examples of diverse eco-systems can be found at the local zoo or aquarium.
From the time of Aristotle (384-322 BC) until the late 1500’s, gravity was believed to act differently on different objects.
- Drop a metal bar and a feather at the same time… which one hits the ground first?
- Obviously, common sense will tell you that the bar will hit first, while the feather slowly flutters to the ground.
- In Aristotle’s view, this was because the bar was being pulled harder (and faster) by gravity because of its physical properties.
- Because everyone sees this when they drop different objects, it wasn’t questioned for almost 2000 years.
Galileo Galilei was the first major scientist to refute (prove wrong) Aristotle’s theories.
- In his famous (at least to Physicists!) experiment, Galileo went to the top of the leaning tower of Pisa and dropped a wooden ball and a lead ball, both the same size, but different masses.
- They both hit the ground at the same time, even though Aristotle would say that the heavier metal ball should hit first.
- Galileo had shown that the different rates at which some objects fall is due to air resistance, a type of friction.
- Get rid of friction (air resistance) and all objects will fall at the same rate.
- Galileo said that the acceleration of any object (in the absence of air resistance) is the same.
- To this day we follow the model that Galileo created.
ag = g = 9.81m/s2
ag = g = acceleration due to gravity
Since gravity is just an acceleration like any other, it can be used in any of the formulas that we have used so far.
- Just be careful about using the correct sign (positive or negative) depending on the problem.
Examples of Calculations with Gravity
Example 1: A ball is thrown up into the air at an initial velocity of 56.3m/s. Determine its velocity after 4.52s have passed.
In the question the velocity upwards is positive, and I’ll keep it that way. That just means that I have to make sure that I use gravity as a negative number, since gravity always acts down.
vf = vi + at
= 56.3m/s + (-9.81m/s2)(4.52s)
vf = 12.0 m/s
This value is still positive, but smaller. The ball is slowing down as it rises into the air.
Example 2: I throw a ball down off the top of a cliff so that it leaves my hand at 12m/s. Determine how fast it is going 3.47 seconds later.
In this question I gave a downward velocity as positive. I might as well stick with this, but that means I have defined down as positive. That means gravity will be positive as well.
vf = vi + at
= 12m/s + (9.81m/s2)(3.47s)
vf = 46 m/s
Here the number is getting bigger. It’s positive, but in this question I’ve defined down as positive, so it’s speeding up in the positive direction.
Example 3: I throw up a ball at 56.3 m/s again. Determine how fast it is going after 8.0s.
We’re defining up as positive again.
vf = vi + at
= 56.3m/s + (-9.81m/s2)(8.0s)
vf = -22 m/s
Why did I get a negative answer?
- The ball reached its maximum height, where it stopped, and then started to fall down.
- Falling down means a negative velocity.
There’s a few rules that you have to keep track of. Let’s look at the way an object thrown up into the air moves.
As the ball is going up…
- It starts at the bottom at the maximum speed.
- As it rises, it slows down.
- It finally reaches its maximum height, where for a moment its velocity is zero.
- This is exactly halfway through the flight time.
As the ball is coming down…
- The ball begins to speed up, but downwards.
- When it reaches the same height that it started from, it will be going at the same speed as it was originally moving at.
- It takes just as long to go up as it takes to come down.
Example 4: I throw my ball up into the air (again) at a velocity of 56.3 m/s.
a) Determine how much time it takes to reach its maximum height.
- It reaches its maximum height when its velocity is zero. We’ll use that as the final velocity.
- Also, if we define up as positive, we need to remember to define down (like gravity) as negative.
a = (vf - vi) / t
t = (vf - vi) / a
= (0 - 56.3m/s) / -9.81m/s2
t = 5.74s
b) Determine how high it goes.
- It’s best to try to avoid using the number you calculated in part (a), since if you made a mistake, this answer will be wrong also.
- If you can’t avoid it, then go ahead and use it.
vf2 = vi2 + 2ad
d = (vf2 - vi2) / 2a
= (0 - 56.3²) / 2(-9.81m/s2)
d = 1.62e2 m
c) Determine how fast it is going when it reaches my hand again.
- Ignoring air resistance, it will be going as fast coming down as it was going up.
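The arithmetic in these examples is easy to check with a few lines of code. Here is a minimal Python sketch of my own, using the same "up is positive" convention (so g is entered as -9.81 m/s2), that reproduces Examples 1, 4a and 4b.

```python
G = -9.81  # acceleration due to gravity in m/s2, negative because "up" is positive

def velocity_after(vi, t, a=G):
    """vf = vi + a*t"""
    return vi + a * t

def time_to_peak(vi, a=G):
    """Solve vf = vi + a*t for t, with vf = 0 at the top of the flight."""
    return (0 - vi) / a

def peak_height(vi, a=G):
    """Solve vf^2 = vi^2 + 2*a*d for d, with vf = 0 at the top of the flight."""
    return (0 - vi ** 2) / (2 * a)

print(velocity_after(56.3, 4.52))  # ~12.0 m/s  (Example 1)
print(time_to_peak(56.3))          # ~5.74 s    (Example 4a)
print(peak_height(56.3))           # ~162 m     (Example 4b)
```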
You might have heard people in movies say how many "gee’s" they were feeling.
- All this means is that they are comparing the acceleration they are feeling to regular gravity.
- So, right now, you are experiencing 1g… regular gravity.
- During lift-off the astronauts in the space shuttle experience about 4g’s.
- That works out to about 39m/s2.
- Gravity on the moon is about 1.7m/s2 = 0.17g
March 30, 2011 by Valerie Elkins
The short answer is keizu. The longer answer is not so easy. There are several reasons why it is difficult for those of Japanese ancestry living outside of Japan to trace their lineage. One of the main reasons is a lack of understanding of the language. I am not going to sugar coat it, learning Japanese is hard, BUT learning how to pronounce it is not.
There are 5 basic vowel sounds in Japanese. They are always pronounced the same, unlike in English! Vowel lengths are all uniformly short:
- a ~ as in ‘father’
- e ~ as in ‘bet’
- i ~ as in ‘beet’
- u ~ as in ‘boot’
- o ~ as in ‘boat’
You do not need to know everything in Japanese but learning some genealogical terms is helpful.
Glossary of Japanese genealogical terms to begin building your vocabulary.
- koseki ~ household register, includes everyone in a household under the head of house (who usually was male)
- koseki tohon ~ certified copy which recorded everything from the original record.
- koseki shohon ~ certified copy which recorded only parts from the original.
- joseki ~ expired register in which all persons originally entered have been removed because of death, change of residence, etc. A joseki file is ordinarily available for 80 years after its expiration.
- kaisei genkoseki ~ revised koseki
- honseki ~ permanent residence or registered address (i.e. person may move to Tokyo but their records remain in hometown city hall).
- genseki ~ another name for honseki
- kakocho ~ Buddhist death register
- kaimyo ~ Buddhist name given to deceased person and recorded in kakocho.
- homyo ~ Buddhist name given to living converts, similar to kaimyo.
- kuni ~ country or nation
- ken ~ prefecture
- shi ~ city
- gun ~ county
- to ~ metropolitan prefecture (Tokyo-to). Similar to ken.
- do ~ urban prefecture (Hokkaido). Similar to ken.
- fu ~ urban prefecture (Kyoto-fu, Osaka-fu) similar to ken.
- ku ~ ward in some large cities (Sapparo, Sendai, Tokyo) divided in to town (cho).
- cho ~ town
- aza ~unorganized district
- machi ~ town within a city (cho) or ward (ku), town within a county (gun).
- chome ~ smaller division of a town (cho) in some neighborhoods.
- mura or son ~ village within a county (gun).
- koshu or hittousha or setainushi ~ head of household, the head of the family
- zen koshu ~ former head of household
- otto ~ husband
- tsuma ~ wife
- chichi or fu ~ father
- haha or bo ~ mother
- sofu ~ grandfather
- sobo ~ grandmother
- otoko or dan or nan ~ male, man, son
- onna or jo ~ female, woman, daughter
- ani or kei or kyou ~ older brother
- otouto or tei ~ younger brother
- ane or shi ~ older sister
- imouto or mai ~ younger sister
- mago or son ~ grandchild
- himago or souson ~ great-grandchild
- oi ~ nephew
- mei ~ niece
- youshi ~ adopted child or son
- youjo ~ adopted daughter
- muko youshi ~ a man without sons may adopt his eldest daughter’s husband as his own son and the young man will take his wife’s surname and be listed on her family’s koseki
- seimei or shime ~ full name, family name
- shussei or shusshou ~ birth
- shibou ~ deceased
- nen or toshi ~ year
- gatsu, getsu or tsuki ~ month
- hi or nichi or ka ~ day
- ji or toki ~ hour, time
- sai or toshi ~ age
- issei ~ person born in Japan who later emigrated elsewhere
- nisei ~ child/generation of issei and born outside of Japan
- sansei ~ child/generation of nisei and born outside of Japan
- yonsei ~ child/generation of sansei and born outside of Japan
- gosei ~ child/generation of yonsei and born outside of Japan
There is another Japanese term you really need to know. It is ganbatte, which means ‘hang in there’ or ‘do your best’; either one will work.
Cleopatra, queen of Egypt and lover of Julius Caesar and Mark Antony, takes her life following the defeat of her forces against Octavian, the future first emperor of Rome.
Cleopatra, born in 69 B.C., was made Cleopatra VII, queen of Egypt, upon the death of her father, Ptolemy XII, in 51 B.C. Her brother was made King Ptolemy XIII at the same time, and the siblings ruled Egypt under the formal title of husband and wife. Cleopatra and Ptolemy were members of the Macedonian dynasty that governed Egypt since the death of Alexander the Great in 323 B.C. Although Cleopatra had no Egyptian blood, she alone in her ruling house learned Egyptian. To further her influence over the Egyptian people, she was also proclaimed the daughter of Re, the Egyptian sun god. Cleopatra soon fell into dispute with her brother, and civil war erupted in 48 B.C.
Rome, the greatest power in the Western world, was also beset by civil war at the time. Just as Cleopatra was preparing to attack her brother with a large Arab army, the Roman civil war spilled into Egypt. Pompey the Great, defeated by Julius Caesar in Greece, fled to Egypt seeking solace but was immediately murdered by agents of Ptolemy XIII. Caesar arrived in Alexandria soon after and, finding his enemy dead, decided to restore order in Egypt.
During the preceding century, Rome had exercised increasing control over the rich Egyptian kingdom, and Cleopatra sought to advance her political aims by winning the favor of Caesar. She traveled to the royal palace in Alexandria and was allegedly carried to Caesar rolled in a rug, which was offered as a gift. Cleopatra, beautiful and alluring, captivated the powerful Roman leader, and he agreed to intercede in the Egyptian civil war on her behalf.
In 47 B.C., Ptolemy XIII was killed after a defeat against Caesar's forces, and Cleopatra was made dual ruler with another brother, Ptolemy XIV. Julius and Cleopatra spent several amorous weeks together, and then Caesar departed for Asia Minor, where he declared "Veni, vidi, vici" (I came, I saw, I conquered), after putting down a rebellion. In June 47 B.C., Cleopatra bore a son, whom she claimed was Caesar's and named Caesarion, meaning "little Caesar."
Upon Caesar's triumphant return to Rome, Cleopatra and Caesarion joined him there. Under the auspices of negotiating a treaty with Rome, Cleopatra lived discretely in a villa that Caesar owned outside the capital. After Caesar was assassinated in March 44 B.C., she returned to Egypt. Soon after, Ptolemy XIV died, likely poisoned by Cleopatra, and the queen made her son co-ruler with her as Ptolemy XV Caesar.
With Julius Caesar's murder, Rome again fell into civil war, which was temporarily resolved in 43 B.C. with the formation of the second triumvirate, made up of Octavian, Caesar's great-nephew and chosen heir; Mark Antony, a powerful general; and Lepidus, a Roman statesman. Antony took up the administration of the eastern provinces of the Roman Empire, and he summoned Cleopatra to Tarsus, in Asia Minor, to answer charges that she had aided his enemies.
Cleopatra sought to seduce Antony, as she had Caesar before him, and in 41 B.C. arrived in Tarsus on a magnificent river barge, dressed as Venus, the Roman goddess of love. Successful in her efforts, Antony returned with her to Alexandria, where they spent the winter in debauchery. In 40 B.C., Antony returned to Rome and married Octavian's sister Octavia in an effort to mend his strained alliance with Octavian. The triumvirate, however, continued to deteriorate. In 37 B.C., Antony separated from Octavia and traveled east, arranging for Cleopatra to join him in Syria. In their time apart, Cleopatra had borne him twins, a son and a daughter. According to Octavian's propagandists, the lovers were then married, which violated the Roman law restricting Romans from marrying foreigners.
Antony's disastrous military campaign against Parthia in 36 B.C. further reduced his prestige, but in 34 B.C. he was more successful against Armenia. To celebrate the victory, he staged a triumphal procession through the streets of Alexandria, in which he and Cleopatra sat on golden thrones, and Caesarion and their children were given imposing royal titles. Many in Rome, spurred on by Octavian, interpreted the spectacle as a sign that Antony intended to deliver the Roman Empire into alien hands.
After several more years of tension and propaganda attacks, Octavian declared war against Cleopatra, and therefore Antony, in 31 B.C. Enemies of Octavian rallied to Antony's side, but Octavian's brilliant military commanders gained early successes against his forces. On September 2, 31 B.C., their fleets clashed at Actium in Greece. After heavy fighting, Cleopatra broke from the engagement and set course for Egypt with 60 of her ships. Antony then broke through the enemy line and followed her. The disheartened fleet that remained surrendered to Octavian. One week later, Antony's land forces surrendered.
Although they had suffered a decisive defeat, it was nearly a year before Octavian reached Alexandria and again defeated Antony. In the aftermath of the battle, Cleopatra took refuge in the mausoleum she had commissioned for herself. Antony, informed that Cleopatra was dead, stabbed himself with his sword. Before he died, another messenger arrived, saying Cleopatra still lived. Antony had himself carried to Cleopatra's retreat, where he died after bidding her to make her peace with Octavian. When the triumphant Roman arrived, she attempted to seduce him, but he resisted her charms. Rather than fall under Octavian's domination, Cleopatra committed suicide on August 30, 30 B.C., possibly by means of an asp, a poisonous Egyptian serpent and symbol of divine royalty.
Octavian then executed her son Caesarion, annexed Egypt into the Roman Empire, and used Cleopatra's treasure to pay off his veterans. In 27 B.C., Octavian became Augustus, the first and arguably most successful of all Roman emperors. He ruled a peaceful, prosperous, and expanding Roman Empire until his death in 14 A.D. at the age of 75.
N'kisi & the N'kisi Project
N'kisi (pronounced "in-key-see") is a captive-bred eight or nine-year-old hand-raised African Grey Parrot whose owner, Aimée Morgana, thinks he uses language. She doesn't think he just sounds out words. She thinks he communicates with her in language, which would in effect make N'kisi a rational parrot. For example, N'kisi utters "pretty smell medicine" when he wants to describe the aromatherapy oils that Aimée uses.* Furthermore, Aimée says her parrot has a fine sense of humor and knows how to laugh. Imagine having conversations with a humorous parrot. Think of all the things you could talk and joke about, besides aromatherapy. You could discuss the fame that would come to anyone who had a parrot that can think and converse in intelligent discourse, like pretty smell medicine and look at my pretty naked body.* And when some nasty skeptic makes fun of you, the two of you can joke about it.
I'm afraid that this story stretches the boundaries of reasonable credibility, though stories of rational parrots go back at least to the 17th century. John Locke, for example, relates a tale of a Portuguese-speaking parrot of some note in his Essay Concerning Human Understanding (II.xxvii.8). These cases are more likely cases of self-deception, delusion, and gullibility than of language-using parrots. Listen to this audio clip of N'kisi, Aimée, and a toy that "talks" when a button is pushed. First listen without reading the transcript. Some of it is intelligible, especially after the fourth or fifth repetition, but it is difficult to understand the "conversation," especially with the toy making its sounds as Aimée stimulates her parrot. Some of the tape sounds like gibberish until you are told what to listen for. When you listen while reading the transcript something amazing happens: you can hear just what you're reading. Why is that? The same thing happens when you listen to audio tapes played backward. When you just listen without anyone telling you what to listen for, you usually don't understand anything intelligible. But as soon as someone shows or tells you what to listen for, you can hear the message. Such is the power of suggestion and the way of audio perception. Hearing is a constructive process, like vision, in that bits of sensory data are "filled in" by the brain to produce a visual or auditory perception that is clear and distinct, and in accord with your expectations. Consider the following from an interview with Dr. Irene Pepperberg, Morgana's inspiration, who has been studying Alex, an African Grey Parrot, for many years:
We were doing demos at the Media Lab [at MIT] for our corporate sponsors; we had a very small amount of time scheduled and the visitors wanted to see Alex work. So we put a number of differently colored letters on the tray that we use, put the tray in front of Alex, and asked, "Alex, what sound is blue?" He answers, "Ssss." It was an "s", so we say "Good birdie" and he replies, "Want a nut."
Well, I don't want him sitting there using our limited amount of time to eat a nut, so I tell him to wait, and I ask, "What sound is green?" Alex answers, "Ssshh." He's right, it's "sh," and we go through the routine again: "Good parrot." "Want a nut." "Alex, wait. What sound is orange?" "ch." "Good bird!" "Want a nut." We're going on and on and Alex is clearly getting more and more frustrated. He finally gets very slitty-eyed and he looks at me and states, "Want a nut. Nnn, uh, tuh."
Not only could you imagine him thinking, "Hey, stupid, do I have to spell it for you?" but the point was that he had leaped over where we were and had begun sounding out the letters of the words for us. This was in a sense his way of saying to us, "I know where you're headed! Let's get on with it," which gave us the feeling that we were on the right track with what we were doing.*
Dr. Pepperberg thinks the bird is responding cognitively to her questions rather than simply responding to a stimulus. She thinks the bird is getting frustrated, but she has stipulated earlier in the interview:
I never claim that Alex has full-blown language; I never would. I'm not going to be able to put Alex on a "T" stand and have you interview him the way you interview me.
So, whereas you or I might say "give me the nut or this interview is over" were we parrots with intentionality and language, the parrot's movements and sounds have to be less direct and more complex, so that they have to be interpreted for us by Pepperberg. In her view, Alex is "clearly getting more frustrated" and his frustration culminates with a "very slitty-eyed" expression. But this is Pepperberg's interpretation, as is her hearing the bird sound out the letters of the word 'nut'. It could have been a stutter for all we know, but Pepperberg is facilitating Alex's communication by telling us what she hears. The final paragraph indicates that Pepperberg is having a hard time drawing the line between imagining what a parrot might be thinking and projecting those thoughts into the parrot's movements and sounds. She's also having a hard time getting grant money (NIH turned her down), so she started her own private foundation, the Alex Foundation.
When news of N'kisi broke on the pages of BBC online, there was no mention in the article by Alex Kirby of the parrot having conversations with people other than Aimée Morgana. (The story was originally told in USA Today in the February 12, 2001, edition.) Despite the headline "Parrot's oratory stuns scientists," there was no evidence given that the parrot had stunned anyone during a conversation. It seems that Aimée is to her parrot what the facilitator is to her client in facilitated communication, except that the parrot is actually providing data to interpret and is more like clever Hans, the horse that responded to unconscious movements of his master, than a disabled human who may not be providing any content or direction at all to the facilitator. It is Aimée who gives intentionality to the parrot's sounds. She is the one who attributes 'laughter' to his shrieks and conscious awareness to his responses, though those responses could be due to any one of many stimuli, consciously or unconsciously provided by Aimée or items in the immediate environment. Nevertheless, Dr. Jane Goodall, who studies chimpanzees, met N'kisi and said that he provides an "outstanding example of interspecies communication." There is some evidence, however, that much of the work with language-using primates also mistakes subjective validation by scientists for complex linguistic abilities of their animal subjects (Wallman 1992).
According to Mr. Kirby, N'kisi not only uses language but has been tested for telepathy and he passed the test with flying colors:
In an experiment, the bird and his owner were put in separate rooms and filmed as the artist opened random envelopes containing picture cards.
Analysis showed the parrot had used appropriate keywords three times more often than would be likely by chance.
Kirby doesn't provide any details about the experiment, so a reader might misinterpret this claim as implying that this parrot did about twice as well as people did in the ganzfeld telepathy experiments. In those experiments, subjects in separate rooms were monitored as one tried to telepathically send information from a picture or video to the other. Typically, there was a 20% chance of guessing what the item was but results as high as 38% were reported in some meta-analyses. If the parrot scored three times better than chance, then he would have gotten 60% correct. The odds of a parrot randomly blurting out words that match up 60% of the time with pictures being looked at simultaneously in another room are so high that there is virtually no way that this could happen by chance. However, as you might suspect, Kirby's claim is a bit misleading.
I assume that Kirby was writing about an experiment that was part of the N'kisi project, a joint effort by Morgana and Rupert Sheldrake to test not only the parrot's language-using abilities but his telepathic talents as well. Sheldrake has already validated the telepathic abilities of a dog and thinks the "findings [of this experiment] are consistent with the hypothesis that N'kisi was reacting telepathically to Aimée's mental activity."*
The full text of Sheldrake's study published in the peer reviewed Journal of Scientific Exploration is available online. The title of the paper would send most journal editors to their grave, killed by laughter: "Testing a Language-Using Parrot for Telepathy." Fortunately for Sheldrake and his associates there will always be a sympathetic editor for another story like that of J. B. Rhine and the telepathic horse, "Lady Wonder." At least Sheldrake's protocols show some measure of sophistication, unlike Rhine's. Even so, as the editor at the Journal of Scientific Exploration commented: "once again, we have suggestive results, a level of statistical significance that is less than compelling, and the devout wish that further work with refined protocols will ensue."* So, we'll just have to wait and see whether further study of N'kisi supports the telepathic hypothesis.
Anyway, here is how Sheldrake set up the experiment. He first compiled a list of 30 words from the bird's vocabulary that "could be represented by visual images." A package of 167 photos from a stock supplier was used for the test. Since only 20 of the photos corresponded to words on the list, the word list was reduced to 20. The word 'camera' was then removed from the list because N'kisi "used it so frequently to comment on the cameras used in the tests themselves." Thus, they were left with 19 words.
During the tests, N’kisi remained in his cage in Aimée’s apartment in Manhattan, New York. There was no one in the room with him. Meanwhile, Aimée went to a separate enclosed room on a different floor. N’kisi could not see or hear her, and in any case, Aimée said nothing, as confirmed by the audio track recorded on the camera that filmed her continuously. The distance between Aimée and N’kisi was about 55 feet. Aimée could hear N’kisi through a wireless baby monitor, which she used to gain ‘‘feedback’’ to help her to adjust her mental state as image sender.
Both Aimée and N’kisi were filmed continuously throughout the test sessions by two synchronized cameras on time-coded videotape. The cameras were mounted on tripods and ran continuously without interruption throughout each session. N’kisi was also recorded continuously on a separate audio tape recorder. (Sheldrake and Morgana 2003)
According to Sheldrake:
We conducted a total of 147 two-minute trials. The recordings of N’kisi during these trials were transcribed blind by three independent transcribers....He scored 23 hits: the key words he said corresponded to the target pictures....If N’kisi said a key word that did not correspond to the photograph, that was counted as a miss, and if he said a key word corresponding to the photograph, that was a hit. (Sheldrake and Morgana 2003)
However, sixty of the trials were discarded because in those trials N'kisi either was silent or uttered things that were not key words, i.e., showed no signs of telepathy. A few other trials were discarded because the transcribers did not agree on what N'kisi said. In short, Sheldrake's statistical conclusions are based on the results of 71 of the trials. I'll let the reader decide whether it was proper to omit 40% of the data because the parrot didn't utter a word on the key word list during those trials. Some might argue that those sessions should be counted as misses and that by ignoring so much data where the parrot clearly did not indicate any sign of telepathy is strong evidence that Sheldrake was more interested in confirming his biases than in getting at the truth.
N'kisi's misses were listed at 94. Ten of the 23 hits were on the picture that corresponded to the word 'flower', which N'kisi uttered 23 times during the trials. The flower image, selected randomly, was used in 17 trials. The image corresponding to water was used in 10 of the trials. The bird said 'water' in twelve trials and got 2 hits. It seems oddly biased that almost one-third of the images and more than half the hits came from just 2 of the 19 pictures.
One of the peer reviewers thought that the fact that the flower word and picture played so heavy a role in the outcome that the paper's results were distorted and that the paper should not be published. The other reviewer accepted Sheldrake's observation that even if you throw out the flower data, you still get some sort of statistical significance. This may be true. However, since the bird allegedly had a vocabulary of some 950 words at the time of the test, omitting sessions where the bird said nothing or said something not on the key list, is unjustifiable. Furthermore, there is no evidence that it is reasonable to assume that when the parrot is by itself uttering words that it is trying to communicate telepathically with Morgana. Or are we to accept Sheldrake's assumption that the parrot turns his telepathic interest off and on, and it was on only when he uttered a word on the key list? That assumption is no more valid that Morgana's belief that the telepathy doesn't work as well when she makes an effort to send a telepathic message to her parrot. In any case, I wonder why Sheldrake didn't do a baseline study, where the parrot was videotaped for two-minutes at a time while Morgana was taking an aromatherapy bath or meditating or doing something unrelated to the key word pictures. Had he made several hundred such clips, he could then have randomly selected 71 and compared them to the 71 clips he used for his analysis. If there was no significant difference between the randomly selected clips and the ones that emerged during the experiment, then the telepathy hypothesis would not be supported. On the other hand, if he found a robust statistically significant difference, then the telepathy hypothesis would be supported. I suggest he do something along these lines when he attempts to replicate his parrot telepathy test.
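To make the suggested baseline comparison concrete, here is a rough sketch of how the scoring could be compared once baseline clips existed; this is my own illustration, not anything from Sheldrake's paper, and the baseline hit count used below is invented purely for demonstration.

```python
import random

def permutation_test(experimental, baseline, n_perm=10000, seed=0):
    """Estimate how often a hit-count difference this large would arise by chance alone."""
    rng = random.Random(seed)
    pooled = experimental + baseline
    n = len(experimental)
    observed = sum(experimental) - sum(baseline)
    at_least_as_extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if sum(pooled[:n]) - sum(pooled[n:]) >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_perm  # approximate one-sided p-value

# 1 = the two-minute clip contained a key word matching the target picture, 0 = it did not.
experimental = [1] * 23 + [0] * 48  # the 23 hits in 71 scored experimental clips
baseline = [1] * 10 + [0] * 61      # invented baseline hit count, for illustration only
print(permutation_test(experimental, baseline))
```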
In some trials, N’kisi repeated a given key word. For example, in one trial N’kisi said ‘‘phone’’ three times, and in another he said ‘‘flower’’ ten times, and in the tabulation of data the numbers of times he said these words are shown in parentheses as: phone (3); flower (10). For most of the statistical analyses, repetitions were ignored, but in one analysis the numbers of words that were said more than once in a given trial were compared statistically with those said only once for both hits and misses. For each trial, the key word or words represented in the photograph were tabulated. Some images had only one key word, but others had two or more. For example, a picture of a couple hugging in a pool of water involved two key words, ‘‘water’’ and ‘‘hug.’’ (Sheldrake and Morgana 2003)
He calculated 51 hits and 126 misses when repetitions were included. I'm not going to bother with any more detail because by now the overall picture should be clear. Once the statisticians went to work on the data, they were able to provide support for the claim that the data were consistent with the telepathic hypothesis. But nowhere in Sheldrake's paper can I find a claim that the parrot did three times better than expected by chance. In any case, I have to agree with the editor who published Sheldrake's parrot paper: the results have a statistical significance that is less than compelling. However, unlike that editor, my devout wish is that when such studies as these are published in the future, responsible journalists continue to ignore them and recognize them for the rubbish they are. On the other hand, if you happen to think your parrot is psychic, drop Dr. Sheldrake a line. He's set up a page just for you.
Sheldrake has responded to this article. His comments and my responses are posted here.
books and articles
Grey parrots use reasoning where monkeys and dogs can’t: "Christian Schloegl and his team at the University of Vienna, let six parrots choose between two containers, one containing a nut. Both containers were shaken, one eliciting a rattling sound and the other nothing. The parrots preferred the container that rattled, even if only the empty container was shaken....Thus, grey parrots seem to possess ape-like reasoning skills...."
This is the second book written by Lee Lehman, and it presents the astrological dignities in a very detailed manner. It was published in 1989 by Whitford Press.
In Chapter 1 - Two Unsung Revolutions in Astrology the author explains how the Copernican Revolution changed the way astrologers understand dignities. On page 18 one can find a table with traditional and modern essential dignities.
Chapter 2 - Using Traditional Rulerships
Here you'll find many practical examples of charts analyzed using traditional dignities. There are five countries (Confederate States of America, Italy, Iran, Switzerland, USSR), five corporations (General Motors, Ford, Chrysler, Coca-Cola, Pepsi), five individuals (Jane Austen, Lewis Carroll, Arthur Conan Doyle, Niccolo Machiavelli, Mark Twain) and one horary chart.
Of course, it is always nice to see how the theory applies in practice, but I expected these examples to emphasize the different results that appear when analyzing the charts with traditional versus modern dignities. Unfortunately, this is not what happens: the charts are analyzed using only traditional dignities.
In Chapter 3 - The Origin of Rulerships: A Botanical Interlude you can find out which planet or sign rules every plant. You'll see that onion is ruled by Mars, beans by Venus, holly by Saturn etc. Also, there is a table with the medicinal uses of Jupiter-ruled plants. I didn't test these, but they may be helpful.
Chapter 4 - Modern “Rulerships”: Do They Work?
The author tries to show that modern rulerships don't work well and looks for arguments to support that view. She points out that:
“when modern astrologers discuss the modern rulerships the criterion appears to be: Which body (planet, asteroid or comet) has qualities which most resembles the sign in question?”
So, modern rulerships are assigned by asking whether a planet's qualities are similar to the sign's qualities, not by looking at the planet's strength in a sign. See another quotation:
“We haven't any evidence that the ancients thought that Pisces and Jupiter were synonymous. It was a question of the strength of Jupiter in Pisces, not the similarity of Jupiter and Pisces.”
Now, I think the idea is pretty clear. I must say that I totally agree with this point of view.
Then the charts of Marie Curie, Jiddu Krishnamurti, Adolf Hitler and the Death of Dracula are analyzed. This time, Lee Lehman compares the chart interpretations obtained with modern and traditional rulerships. The results are pretty good and the reading enjoyable.
Only one problem, from my point of view. One of the charts analyzed is "Death of Dracula", where Lee writes things like: "I have been fascinated by charts of people who are, so to speak, energy sucks", "Scorpio Sun (life of the vampire)", etc. Hey, I am from Romania and I tell you there is no vampire. Dracula is just a myth attached to a Romanian prince, Vlad III of Wallachia. It is true that he was cruel and liked to kill people by impaling them on a sharp pole, but everything else is imagination.
Chapter 5 – The Meaning of Each of the Essential Dignities
In this chapter you'll find some general characteristics for the five essential dignities: ruler, exaltation, triplicity, term and face. On page 127 is a table with key words associated with these dignities. Starting from these key words Lee Lehman gives many descriptive explanations for the dignities, but it just seems too much! The same things are explained over and over again, which seemed pretty boring to me.
In Chapter 6 – A Statistical Interlude the author tries to determine the influence of terms (both Chaldean and Egyptian) by running a few tests. She selected a number of charts from different categories (suicides, scientists, sport champions) and counted the terms for each planet.
In the end, we can see that the planet that rules the category (for example, Mars for sport champions) obtained more points than it usually would on a normal pattern. Even though the results apparently validate the importance of terms, I won't give too much credit to such a test. Why? Because I don't see terms as important enough to determine whether a person belongs to one category or another. For example, more points in the term of Saturn won't drive you to suicide, because there can be many other (not even major) aspects that change this influence.
Probably I just don't believe terms are that important, and if Lee Lehman is running those tests, it seems she has doubts as well.
Chapter 7 – Detriment, Falls and Peregrines consists of several pages where you can find short descriptions of every planet's detriment and fall.
In Chapter 8 – Conclusions there are the final words.
MY EVALUATION: 6
Conclusion. If I had to sum up my first impression of this book in a few words, they would be: "too much noise for nothing". But then, if you think for a moment, you realize you can't say "for nothing", because dignities are a very important part of astrology and one could write a whole interesting book about this subject.
So, back to my reasoning, why this impression? Why “too much noise for nothing?”. Maybe, because this book presents shortly the five dignities associated with some main characteristics, ideas repeated in different chapters, but the rest of the book is somewhat near the subject.
You can read about history, botany, statistics, all connected with dignities, but the book doesn't seem to touch the essential points. It is a surface play. It doesn't have those clear, rational statements that gives you a better understanding of the subject.
If a medium astrologer reads this book I don't think will have much to learn and to integrate in his astrological system. Maybe I am a little too harsh, but it is my purpose here to criticize and to present a clear point of view about the astrological books I read. My evaluation is 6. | <urn:uuid:3a9f6d3f-436b-449d-865e-ed292dd45fc0> | CC-MAIN-2013-20 | http://astrologycritics.com/essential-dignities.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.954156 | 1,295 | 2.671875 | 3 |
Mali has been engrossed in civil war since January 2012, when separatists in Mali’s northern Azawad region began demanding independence from the southern, Bamako-based government. After forcing the Malian military from the north, however, the separatist forces soon became embroiled in a conflict of their own, between the original Mouvement National pour la Libération de l’Azawad (MNLA) and extremist Islamist splinter factions closely linked with Al-Qaeda. On 11 January 2013, France responded to Mali’s urgent request for international assistance and initiated ‘Operation Serval’ to aid the recapture of Azawad and defeat the extremist group. From the 18th, West African states began reinforcing French forces with at least 3,300 extra troops.
In a BBC ’From Our Own Correspondent’ editorial, Hugh Schofield wrote of ‘la Francafrique’, or France’s considerable interests in West Africa held over from the end of formal empire. In fits and spurts, France has sought to extract itself from la Francafrique and to seek a new relationship with the continent. But in the complex world of post-colonial relationships, such a move is difficult. France retains strong economic, political, and social links with West Africa. Paris, Marseille, and Lyon are home to large expatriate African communities. Opinions at l’Elycée Palace, too, have wildly shifted over the years. Jacques Chirac, at least according to Schofield, was ‘a dyed-in-the-wool Guallist’, and an ideological successor to a young François Mitterand who, in 1954, defiantly pronounced that ‘L’Algérie, c’est la France’. Nicolas Sarkozy, on the other hand, dramatically distanced himself both from Chirac and from the la Francafrique role.
The problem is, at least in part, topographical in nature. West Africa’s geography is dangerous, vast, and difficult to subordinate. On the eve of much of West Africa’s independence from France in 1961, R J Harrison Church spoke of the so-called Dry Zone, the area running horizontally from southern Mauritania across central Mali and Niger, as the great “pioneer fringe” of the region’s civilization. David Hilling, in his 1969 Geographical Journal examination, added that by “taming” the Saharan interior, France gained an important strategic advantage over their British rivals in the early twentieth century, enjoying access to resources unavailable along the coast.
But, as A T Grove discussed in his 1978 review, “colonising” West Africa was much easier said than done, and the French left a West Africa mired in dispute, open to incursions, and still heavily reliant on the former imperial power. The French relationship with the region’s extreme geography was difficult at best; political boundaries were similar to those of the Arabian Peninsula and the Rub ‘al-Khali in particular: fluid, ill-defined, and not always recognised by local peoples. European-set political boundaries only exacerbated tensions between indigenous constituencies who had little or no say in the border demarcations.
French and African efforts to dam the Niger River, for instance, were hampered by high costs, arduous terrain, and political instability well into the 1960s. On independence, the French left what infrastructure they could, mostly in West Africa’s capital and port cities; the vast interiors were often left to their own devices. As a result of these events, France has maintained a large military, economic, and social presence in the region ever since. The difficulty is that such areas under weak political control, such as the Malian, Somalian, and Sudanese deserts, have become havens for individuals who wish to operate outside international and national law.
R J Harrison Church, 1961, ‘Problems and Development of the Dry Zone of West Africa‘, The Geographical Journal 127 187-99.
David Hilling, 1969, ‘The Evolution of the Major Ports of West Africa‘, The Geographical Journal 135 365-78.
A T Grove, 1978, ‘Geographical Introduction to the Sahel‘, The Geographical Journal 144 407-15.
Ieuan Griffiths, 1986, ‘The Scramble for Africa: Inherited Political Boundaries‘, The Geographical Journal 152 204-16.
‘Le Mali attend le renfort des troupes ouest-africaines‘, Radio France Internationale, 19 January 2013, accessed 19 January 2013.
Hugh Schofield, ‘France and Mali: An “ironic” relationship’, BBC News, 19 January 2013, accessed 19 January 2013. | <urn:uuid:e1a169c6-9906-46a6-bb7d-724095f8ebef> | CC-MAIN-2013-20 | http://blog.geographydirections.com/tag/france/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.933288 | 1,015 | 2.75 | 3 |
Karuk Tribe: Learning from the First Californians for the Next California
Editor's Note: This is part of series, Facing the Climate Gap, which looks at grassroots efforts in California low-income communities of color to address climate change and promote climate justice.
This article was published in collaboration with GlobalPossibilities.org.
The three sovereign entities in the United States are the federal government, the states and indigenous tribes, but according to Bill Tripp, a member of the Karuk Tribe in Northern California, many people are unaware of both the sovereign nature of tribes and the wisdom they possess when it comes to issues of climate change and natural resource management.
“A lot of people don’t realize that tribes even exist in California, but we are stakeholders too, with the rights of indigenous peoples,” says Tripp.
Tripp is an Eco-Cultural Restoration specialist at the Karuk Tribe Department of Natural Resources. In 2010, the tribe drafted an Eco-Cultural Resources Management Plan, which aims to manage and restore “balanced ecological processes utilizing Traditional Ecological Knowledge supported by Western Science.” The plan addresses environmental issues that affect the health and culture of the Karuk tribe and outlines ways in which tribal practices can contribute to mitigating the effects of climate change.
Before climate change became a hot topic in the media, many indigenous and agrarian communities, because of their dependence upon and close relationship to the land, began to notice troubling shifts in the environment such as intense drought, frequent wildfires, scarcer fish flows and erratic rainfall.
There are over 100 government recognized tribes in California, which represent more than 700,000 people. The Karuk is the second largest Native American tribe in California and has over 3,200 members. Their tribal lands include over 1.48 million acres within and around the Klamath and Six Rivers National Forests in Northwest California.
Tribes like the Karuk are among the hardest hit by the effects of climate change, despite their traditionally low-carbon lifestyles. The Karuk, in particular have experienced dramatic environmental changes in their forestlands and fisheries as a result of both climate change and misguided Federal and regional policies.
The Karuk have long depended upon the forest to support their livelihood, cultural practices and nourishment. While wildfires have always been a natural aspect of the landscape, recent studies have shown that fires in northwestern California forests have risen dramatically in frequency and size due to climate related and human influences. According to the California Natural Resources Agency, fires in California are expected to increase 100 percent due to increased temperatures and longer dry seasons associated with climate change.
Some of the other most damaging human influences to the Karuk include logging activities, which have depleted old growth forests, and fire suppression policies created by the U.S. Forest Service in the 1930s that have limited cultural burning practices. Tripp says these policies have been detrimental to tribal traditions and the forest environment.
“It has been huge to just try to adapt to the past 100 years of policies that have led us to where we are today. We have already been forced to modify our traditional practices to fit the contemporary political context,” says Tripp.
Further, the construction of dams along the Klamath River by PacifiCorp (a utility company) has impeded access to salmon and other fish that are central to the Karuk diet. Fishing regulations have also had a negative impact.
Though the Karuk’s dependence on the land has left them vulnerable to the projected effects of climate change, it has also given them and other indigenous groups incredible knowledge to impart to western climate science. Historically, though, tribes have been largely left out of policy processes and decisions. The Karuk decided to challenge this historical pattern of marginalization by formulating their own Eco-Cultural Resources Management Plan.
The Plan provides over twenty “Cultural Environmental Management Practices” that are based on traditional ecological knowledge and the “World Renewal” philosophy, which emphasizes the interconnectedness of humans and the environment. Tripp says the Plan was created in the hopes that knowledge passed down from previous generations will help strengthen Karuk culture and teach the broader community to live in a more ecologically sound way.
“It is designed to be a living document…We are building a process of comparative learning, based on the principals and practices of traditional ecological knowledge to revitalize culturally relevant information as passed through oral transmission and intergenerational observations,” says Tripp.
One of the highlights of the plan is to re-establish traditional burning practices in order to decrease fuel loads and the risk for more severe wildfires when they do happen. Traditional burning was used by the Karuk to burn off specific types of vegetation and promote continued diversity in the landscape. Tripp notes that these practices are an example of how humans can play a positive role in maintaining a sound ecological cycle in the forests.
“The practice of utilizing fire to manage resources in a traditional way not only improves the use quality of forest resources, it also builds and maintains resiliency in the ecological process of entire landscapes” explains Tripp.
Another crucial aspect of the Plan is the life cycle of fish, like salmon, that are central to Karuk food traditions and ecosystem health. Traditionally, the Karuk regulated fishing schedules to allow the first salmon to pass, ensuring that those most likely to survive made it to prime spawning grounds. There were also designated fishing periods and locations to promote successful reproduction. Tripp says regulatory agencies have established practices that are harmful this cycle.
“Today, regulatory agencies permit the harvest of fish that would otherwise be protected under traditional harvest management principles and close the harvest season when the fish least likely to reach the very upper river reaches are passing through,” says Tripp.
The Karuk tribe is now working closely with researchers from universities such as University of California, Berkeley and the University of California, Davis as well as public agencies so that this traditional knowledge can one day be accepted by mainstream and academic circles dealing with climate change mitigation and adaptation practices.
According to the Plan, these land management practices are more cost effective than those currently practiced by public agencies; and, if implemented, they will greatly reduce taxpayer cost burdens and create employment. The Karuk hope to create a workforce development program that will hire tribal members to implement the plan’s goals, such as multi-site cultural burning practices.
The Plan has a long way to full realization and Federal recognition. According to the National Indian Forest Resources Management Act and the National Environmental Protection Act, it must go through a formal review process. Besides that, the Karuk Tribe is still solidifying funding to pursue its goals.
The work of California’s environmental stewards will always be in demand, and the Karuk are taking the lead in showing how community wisdom can be used to generate an integrated approach to climate change. Such integrated and community engaged policy approaches are rare throughout the state but are emerging in other areas. In Oakland, for example, the Oakland Climate Action Coalition engaged community members and a diverse group of social justice, labor, environmental, and business organizations to develop an Energy and Climate Action Plan that outlines specific ways for the City to reduce greenhouse gas emissions and create a sustainable economy.
In the end, Tripp hopes the Karuk Plan will not only inspire others and address the global environmental plight, but also help to maintain the very core of his people. In his words: “Being adaptable to climate change is part of that, but primarily it is about enabling us to maintain our identity and the people in this place in perpetuity.”
Dr. Manuel Pastor is Professor of Sociology and American Studies & Ethnicity at the University of Southern California where he also directs the Program for Environmental and Regional Equity and co-directs USC’s Center for the Study of Immigrant Integration. His most recent books include Just Growth: Inclusion and Prosperity in America’s Metropolitan Regions (Routledge 2012; co-authored with Chris Benner) Uncommon Common Ground: Race and America’s Future (W.W. Norton 2010; co-authored with Angela Glover Blackwell and Stewart Kwoh), and This Could Be the Start of Something Big: How Social Movements for Regional Equity are Transforming Metropolitan America (Cornell 2009; co-authored with Chris Benner and Martha Matsuoka). | <urn:uuid:003baaf4-69c7-4ee7-b37f-468bf9b55842> | CC-MAIN-2013-20 | http://www.resilience.org/stories/2012-10-19/karuk-tribe-learning-from-the-first-californians-for-the-next-california | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945849 | 1,714 | 3.296875 | 3 |
John Langley Howard was a revolutionary regionalist painter known for depicting labor and industry in California as well as his reverence for the natural world. Howard took a strong stance on social and environmental issues and used his art to communicate his strong emotional response toward each of his subjects.
Table of Contents
John Langley Howard was born in 1902 into a respected family of artists and architects. His father, John Galen Howard relocated the family to California in 1904 to become campus architect of the University of California, Berkeley. It was only after attending the very same campus his father helped to create, that Howard suddenly decided he wanted to pursue a career as an artist and not an engineer as previously planned. Following this decision, Howard enrolled in the California Guild of Arts and Crafts in Oakland and then transferred to the Arts Students’ League in New York City.
At the school, he met Kenneth Hayes Miller who supported Howard’s attitude because the “taught the bare rudiments of painting and composition, and stressed the cultivation of the ultra-sensitive, intuitive approach” (Hailey 56). After saving his money, Howard travelled to Paris for six months to seek out his own artistic philosophy. However, it quickly became apparent to Howard that he placed more value on pure talent than professional training. In 1924, Howard left art school to pursue his career and marry his first wife, Adeline Day. He had his first one-person exhibition at the Modern Gallery in San Francisco in 1927. Shortly after, he attempted portraiture.
Following the start of the Depression, Howard found himself appalled by the social conditions and began to follow “his own brand of Marxism.” Howard and his wife began to attend meetings of the Monterey John Reed Club, discussing politics and social concerns. Soon, the artist became determined to communicate society’s needs for the betterment of the future. His landscapes began to include industry and its effects to the surrounding region. In 1934, Howard was hired through the New Deal Public Works Art Project to create a mural for the inside of Coit Tower on Telegraph Hill in San Francisco depicting California industry. The project called for twenty-seven artists to be hired to paint frescos inside the newly erected monument funded by philanthropist Lillie Hitchcock Coit. Each artist was to depict a scene central to California living, including industry, agriculture, law, and street scenes of San Francisco.
Howard’s completed fresco drew notorious attention for showing an unemployed worker reading Marxist materials, a gathered group of unemployed workers, and a man panning for gold while watching a wealthy couple outside of their limousine. In a nearby mural by Bernard Zakheim (1896-1985), Howard himself was used as a model. He is shown crumpling a newspaper and grabbing a Marxist book from a library shelf. This soon led to the artists being linked to a local group of striking dock workers. They were accused of attempting to lead a Communist revolution. Howard’s murals as well as the work of Clifford Wight (1900-1960) and Zakheim became highly scrutinized, and the uproar over the works led to a delay in opening Coit Tower. In order to protect their work from being defaced or completely destroyed, the muralists chose to sleep outside the tower. The SF Art Commission ultimately cancelled the opening of Coit Tower as a result of the controversy and did not open it until months later.
During this time, Howard relocated his family to Santa Fe, New Mexico citing his son’s health concerns for almost two years before returning to Monterey in 1940. Following the onset of World War II, he had a renewed interest in landscape and soon ceased to include social commentary within his work, thus removing the human figure from his paintings. The artist divorced his first wife in 1949. In 1951, Howard’s art took another turn when the artist painted The Rape of the Earth which rallied against the destruction of nature by technology, making Howard one of the first “eco-artists.” During the same year he also married sculptor Blanche Phillips (1908-1976). He began illustrating for Scientific American Magazine and used this medium to refine his technique.
Howard’s landscapes began turning to “magic realism” or “poetic realism” as Howard preferred to call it. This method is described as the use of naturalistic images and forms “to suggest relationships that cannot always be directly described in words” (Aldrich 184). His aim was to communicate a poetic and spiritual connection with the landscape depicted. Overall, Howard lived in more than 20 different locations during his career.
In 1997, Howard attended the dedication of Pioneer Park at Coit Tower and was the only surviving member of the twenty-seven muralists included in the original project. The murals were restored by the City of San Francisco in 1990 after water damage and age dictated the need for restoration. Howard died at the age of 97 in his sleep at his Potrero Hill home in 1999.
II. AN ANALYSIS OF THE ARTIST'S WORK
“I think of painting as poetry and I think of myself as a representational poet. I want to describe my subject minutely, but I also way to describe my emotional response to it…what I’m doing is making a self-portrait in a peculiar kind of way.” – John Langley Howard
John Langley Howard was widely considered a wanderer and a free spirit. While Howard did receive academic training from the California Guild of Arts and Crafts in Oakland and the Arts Students’ League in New York City, he chose to align himself with instructors whose opinions of art education matched his predetermined beliefs. These teachers included Kenneth Hayes Miller (1876-1952) who valued an analytical, bare bones approach to art instruction and supported greater personal development of intuitive talent. Howard expressed this viewpoint stating that:
“I want everything to be meaningful in a descriptive way. I want expression and at the same time I want to control it down to a gnat’s eyebrow. I identify with my subject. I empathize with my subject” (Moss 62).
In the 1920s, Howard became known as a Cezanne-influenced landscape artist and portraitist. Tempera, oil, and etching became his primary media while his subject matter turned to poetic and often spiritually infused imagery which would resurface later in his career. Earth tones and very small brushstrokes were utilized, allowing Howard to refine his images.
Howard exhibited frequently with his brothers Charles Howard (1899-1978) and Robert Howard (1896-1983). Critic Jehanne Bietry wrote of their joint Galerie Beaux Arts show that: “of (the Howard brothers), John Langley is the poet, the mystic and the most complex…there predominates in his work a certain quality, an element of sentiment that escapes definition but is the unmistakable trait by which one recognizes deeper art” (Hailey 60). It is significant that a critic would accurately take note of Howard artistic aims at such an early stage because what Bietry describes ultimately became the primary focus of Howard’s career.
Howard experienced a dramatic change in medium when he was commissioned to paint a mural for the Coit Tower WPA project in 1934. The project was Howard’s first and only mural and provided the artist with an outlet for his newly discovered Marxist social beliefs. While Howard supported a political agenda rather explicitly in his image, his focus on deeper subject matter permeates throughout the work. Most important to Howard is “the idea of human conflict that [he] pictorializes and deplores – man’s tragic flaw manifest again in this particular situation” (Nash 79). Howard’s work had progressed steadily into the realm of social realism until the backlash against the Coit Tower murals led him in a new direction.
Howard abandoned explicit statements of social commentary and returned to his roots as a landscape painter. However, this did not prevent the artist from illustrating important issues because he then became one of the first “eco-artists.” Through his painting, Howard investigated the role of technology on the environment and used the San Francisco Bay Area as well as Monterey to demonstrate his point of view. He continued following his original artistic tendencies by delving into “magic realism” or “poetic realism” which utilized the spiritual connection that Howard sought to find within his work. Art critic Henrietta Shore recognized the balance that Howard achieved within his work, stating that he “is modern in that he is progressive, yet his work proves that he does not discard the traditions from which all fine art has grown” (Hailey 65). Overall, Howard’s career presents a unique portrait of individual expression and spiritual exploration.
1902 Born in Montclair, New Jersey
1920 Enrolls as an Engineering major at UC Berkeley
1922 Realizes he wants to be an artist
1923-24 Attends Art Students’ League in New York
1924 Leaves art school
1924 Marries first wife, Adeline Day
1927 First one-person exhibition held at The Modern Gallery, San Francisco
1928 First child, Samuel born
1930 Daughter Anne born
1934 Commissioned to Paint Coit Tower mural, San Francisco
1940 Studies ship drafting and worked as a ship drafter during World War II
1942 Serves as air raid warden in Mill Valley, CA
1949 Divorces his first wife
1950 Teaches at California School of Fine Arts, San Francisco
1951 Marries second wife, sculptor Blanche Phillips
1951 Moves to Mexico
1951 Paints The Rape of the Earth communicating his eco-friendly stance
1953-1965 Illustrates for Scientific American magazine
1958 Teaches at Pratt Institute Art School, Brooklyn, NY
1965 Moves to Hydra, Greece
1967 Moves to London
1970 Returns to California
1979 Blanche Phillips dies
1980 Marries Mary McMahon Williams
1999 Died in his sleep at home San Francisco, California
California Palace of the Legion of Honor, CA
City of San Francisco, CA
IBM Building, New York, NY
The Oakland Museum, CA
The Phillips Collection, Washington D.C.
San Francisco Museum of Modern Art, CA
Security Pacific National Bank Headquarters, Los Angeles, CA
Springfield Museum of Fine Arts
University of Utah, UT
1927 Modern Gallery, San Francisco, CA
1928 Beaux Arts Gallery, San Francisco, CA
1928 East-West Gallery, San Francisco, CA
1928-51 San Franciso Art Association, CA
1935 Paul Elder Gallery, San Francisco, CA
1936 Cincinnati Art Museum, OH
1936 Museum of Modern Art, San Francisco, CA
1939 Golden Gate International Exposition, Department of Fine Arts, Treasure Island, CA
1939 Museum of Modern Art, San Francisco, CA
1941 Carnegie Institute, Pittsburgh, PA
1943 Corcoran Gallery, Washington D.C.
1943 M. H. de Young Memorial Museum, San Francisco, CA
1946-47 Whitney Museum, NY
1947 Rotunda Gallery, City of Paris, San Francisco, CA
1952 Carnegie Institute, Pittsburgh, PA
1956 Santa Barbara Museum of Art, CA
1973 Capricorn Asunder Gallery, San Francisco, CA
1974 Lawson Galleries, San Francisco, CA
1976 de Saisset Art Gallery and Museum, CA
1982 San Francisco Museum of Modern Art Rental Gallery, San Francisco, CA
1983 California Academy of Sciences, CA
1983 Monterey Museum of Art, CA
1986 Charles Campbell Gallery, San Francisco, CA
1987 Martina Hamilton Gallery, NY
1988 Oakland Museum, CA
1989 Tobey C. Moss Gallery, CA
1991 M. H. de Young Memorial Museum, San Francisco, CA
1992 Tobey C. Moss Gallery, CA
1993 Tobey C. Moss Gallery, CA
California Society of Mural Painters’ and Writers’ and Artists’ Union
Carmel Art Association
Club Beaux Arts
San Francisco Art Association
Society of Mural Painters
Marin Society of Artists
Monterey John Reed Club
Anne Bremer Memorial Award for Painting, San Francisco Art Association
First Prize, Pepsi-Cola Annual “Portrait of America”
First Prize, San Francisco Art Association
Award, City of San Francisco Art Festival
Citation for Merit, Society of Illustrators, New York
- 1. Aldrich, Linda. “John Langley Howard.” American Scene Painting: California, 1930s and 1940s. Irvine, Westphal Publishing: 1991.
- 2. Hailey, Gene. “John Langley Howard…Biography and Works.” California Art Research Monographs, v. 17, p.54-92. San Francisco: Works Progress Administration: 1936-1937.
- 3. Moss, Stacey. The Howards, First Family of Bay Area Modernism. Oakland Museum: 1988.
- 4. Nash, Steven A. Facing Eden: 100 Years of Landscape Art in Bay Area. University of California Press: 1995.
IX. WORKS FOR SALE BY THIS ARTIST | <urn:uuid:b1ad8cf1-1721-4d74-824a-229b6b70a91b> | CC-MAIN-2013-20 | http://www.sullivangoss.com/johnlangley_Howard/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95768 | 2,757 | 3.625 | 4 |
This tutorial shows how to send modifications of code in the right way: by using patches.
The word developer is used here for someone having a KDE SVN account.
We suppose that you have modified some code in KDE and that you are ready to share it. First a few important points:
Now you have the modification as a source file. Sending the source file will not be helpful, as probably someone else has done other modifications to the original file in the meantime. So your modified file could not replace it.
That is why patches exist. Patches list the modifications, the line numbers and a few other useful information to be able to put that patch back into the existing code. (This process is called "patching" or also "applying a patch.")
The main tool for creating patches is a tool called diff, which makes the difference between two files. This tool has a mode called unified diff, which KDE developers use. Unified diffs have not just the difference between the file but also the neighborhood around the differences. That allows to patch even if the line numbers are not the same anymore.
The most simple patch is created between the modified file (here called source.cpp) and the non-modified version of the file (here called source.cpp.orig.)
diff -u -p source.cpp.orig source.cpp
That lists the difference between the two files in the unified diff format (and with function name information if possible.) However it only displays it to screen, which is of course not the goal. So you need to redirect the output.
diff -u -p source.cpp.orig source.cpp > ~/patch.diff
~/patch.diff is here an example and you can create the file where you prefer with the name that you prefer. (You will soon find out that it is probably not a good idea to create a patch where the source is.)
But normally, you do not just change one file and you do not keep the original version around to be able to make the difference later. But here too, there is a solution.
The program svn, which is used on the command line interact with the SVN server, has a diff function too: svn diff.
You can run it like this and it will give you the difference of the current directory and all sub-directories below it. Of course, here too, you want to redirect the output.
svn diff > ~/patch.diff
There are useful variants too (shown here without redirection)
Note: even if svn can make the difference of another directory (svn diff mydirectory), it is not recommended to do it for a patch that should be applied again. (The problem is that the person that will apply the patch will have to be more careful about how he applies it.)
Note: for simple diff, like those shown in the examples above, svn diff can be used offline, therefore without an active connection to the KDE SVN server. This is possible, as svn keeps a copy of the original files locally. (This feature is part of the design of SVN.)
By default, svn diff does not have a feature like the -p parameter of diff. But svn allows that an external diff program is called, so you can call diff:
svn diff --diff-cmd diff --extensions "-u -p"
The procedures described above work very well with text files, for example C++ source code. However they do not work with binary files, as diff is not made to handle them. And even if SVN can internally store binary differences, svn diff is not prepared to do anything similar yet, mainly because it currently uses the unified diff format only, which is not meant for binary data.
Therefore, unfortunately, there is little choice but to attach binary files separately from the patch, of course attached in the same email.
First, you need to make svn aware of files you have added.
svn add path/to/new/file /path/to/another/new/file
Then run svn diff as before.
Note that if you do svn revert, for example, the files you created will NOT be deleted by svn - but svn will no longer care about them (so they won't show up when you do svn diff, for example). You will have to rm them manually.
(TODO: are there any other issues with adding new files if you don't have commit access?)
Now you are ready to share the patch. If your patch fixes a bug from KDE Bugs, then the easiest way is to attach it there, see next section.
The main way of sharing a patch is to email to a mailing list. But be careful not to send big patches to a mailing list, a few 10KB is the limit.
If you find that the patch is too big to send to a mailing list, the best is to create a bug report in KDE Bugs and to attach the patch there, after having created the bug report.
Another possibility, however seldom used, is to post the patch on a public Web server (be it by HTTP or FTP) and to send an email to the mailing list, telling that the patch is waiting there.
Another variant is to ask on the mailing list which developer is ready to get a big patch. (Try to give its size and ask if you should send it compressed, for example by bzip2.)
A last variant, if you know exactly which developer will process the patch and that you know or that you suppose that he currently has time, is to send the patch to a developer directly. (But here too, be careful if your patch is big. Some KDE developers have still analog modems.)
In this section we assume that you have chosen to add your patch to an existing KDE bug or that you have created a bug report just for your patch.
Even if this tutorial is more meant to send patches to a mailing list, most of it can be applied to adding a patch to KDE Bugs.
You have two ways to do it:
To send an email to a bug report, you can use an email address of the form firstname.lastname@example.org where 12345 is the bug number. Please be sure to attach your patch and not to have it inlined in your text. (If it is inlined, it would be corrupted by KDE Bugs, as HTML does not respect spaces.)
Note: if you send an email to KDE Bugs, be careful to use as sender the same email address as your login email address in KDE Bugs. Otherwise KDE Bugs will reject your email.
Note: if you create a new bug report just for your patch, be careful that you cannot attach a patch directly when creating a new bug. However as soon as the new bug is created, you can then attach files, one-by-one, therefore also patches.
Warning: sometimes your patch will be forgotten because the developers do not always closely monitor the bug database. In this case, try sending your patch by email as described below. If that also does not help, you can always talk to the developers on IRC
Assuming that you have chosen to send the patch to a mailing list, you might ask yourself: to which one?
The best destination for patches is the corresponding developer mailing list.
In case of doubt, you can send any patch for KDE to the kde-devel mailing list. (However with an increased risk that you would miss the right developer.)
Of course, if you know exactly which developer will process the patch and that you know or that you suppose that he currently has time, then you can send the patch to him directly.
Now you have a patch redirected into a file (for this example called patch.diff), you are ready to send it by email. But the first question: where?
Now that you have entered an email address, a good practice is to attach the patch to your file before writing anything else in the email. So you will not forget to attach it.
A little note here: yes, in KDE (unlike for the Linux Kernel for example), we prefer to have the patches sent as attachments.
Now you are ready to write the rest of the email. Please think of a title that matches your patch. (Think of having to find it again in the archives in a few months or even years.) A good habit is to precede the title by [PATCH]. So for example a title could be [PATCH] Fix backup files.
As for the body of the email, please tell to which file or directory your patch applies. For example for a file: The attached patch applies to the file koffice/kword/kwdoc.cpp or for a directory: The attached patch applies to the directory koffice/kword. This help the developers to have an overview of which code has been modified. Also tell for which branch it is meant, for example for trunk.
Then tell what your patch does. If it fixes a bug, then please give the bug number too. If the bug was not registered in KDE Bugs, then please describe instead the bug that is fixed. Similarly, if you know that the patch fixes a bug introduced from a precise SVN revision, please add the revision number.
Tell also what could be useful to the developers, for example if you could not completely test the patch (and why), if you need help to finish fixing the code or if it is a quick&dirty solution that should be fixed better in long-term.
Now check the email again to see if you have not forgotten anything (especially to attach the patch) and you can send the email.
One popular way of submitting patches is KDE's reviewboard. A big advantage over using the bugtracker of KDE is that the patches are less likely to be forgotten here. Also, the reviewboard allows inline review of diffs and other gimmicks.
First you need to check if the project you've created the patch for is actually using reviewboard. For this, go to the groups section and see if the project's group is listed there. If it is listed there, you should use the reviewboard, otherwise send the patch by other means.
For sending a patch, you first need to register. Then simply click New Review Request and fill out the form. The most important parts of the form are:
After you completed the form, a notification mail will be sent to the developers and they will answer you.
Now you have to wait that a developer reacts on your patch. (If you are not subscribed to the mailing lists where you have sent the patch, then monitor the mailing list archives] for such a message.)
The reaction is normally one of the following:
The first case is when nobody has answered. That perhaps means that you have chosen the wrong mailing list. Perhaps you have not explained correctly what the patch fixes or you have given a title that is not precise enough. If this happens, the developer might have overlooked the patch. Perhaps the developer that should have answered has not any time currently. (That too happens unfortunately.) The best is to try to work a little more on the patch, make a better description and try again a second time, perhaps to another mailing list or to use KDE Bugs instead.
If the developer tells you that your patch conflicts with changes that he is currently doing, you could probably not do much against it. Maybe you can discuss with him how you can effectively work with him on this piece of code.
If your patch was not accepted, you could work further on it. Probably you should discuss the problem on the mailing list to know in which direction you should work further.
If a developer wants a few changes, then work on the code to make the changes according to the critic. If you need help because you do not understand how to do the needed change, then ask it on the mailing list.
If your patch was accepted, congratulations! :) | <urn:uuid:b1579d04-7a6b-420c-9fe2-a0b676d91ec3> | CC-MAIN-2013-20 | http://techbase.kde.org/index.php?title=Contribute/Send_Patches&oldid=40759 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.959777 | 2,482 | 3.0625 | 3 |
Started conversation May 4, 2004
It's incorrect to say that "gyratory system" is just the name for a roundabout that has got too big for its boots. In fact it's the original term which was soon displaced when they became common.
I'm not sure if the term "gyratory circus" was ever really used. There are a number of junctions in London known as circuses after the circular range of buildings around them. This was a particularly 18th century fashion, so of course the most famously beautiful one is in Bath.
The OED says of "gyratory":
Applied to a system of directing road traffic round a roundabout or through a system of one-way streets to avoid the need for one line of traffic to intersect another.
1909 Westm. Gaz. 7 Aug. 4/2 The gyratory principle, by which vehicles are directed into circular lines ingeniously devised to avoid intersection. 1926 Rep. Comm. Police Metropolis, 1925 16 in Parl. Papers (Cmd. 2660) XV. 239 Gyratory systems for the circulation of traffic, after years of discussion, reached the point of practical demonstration this year. 1928 Observer 5 Feb. 13/7 Now that every week dedicates a new bunch of streets to the Gyratory System. 1966 Guardian 8 Sept. 5/4 A new gyratory road system to ease traffic congestion..is to be built..at Stretford.
And of "roundabout":
A junction at which traffic moves one way round a central island. Cf. RONDPOINT b, ROTARY n. 3.
1927 Glasgow Herald 3 Jan. 7/2 There is only one draw~back to the roundabout, and that is the inconvenience caused to pedestrians. 1937 Times 13 Apr. (British Motor No.) p. viii/1 Roundabouts..have the advantage of keeping vehicles on the move. 1947 Daily Mail 22 May 3/4 Removal of the Mansion House to make room for a big round-about. 1955 Times 2 Aug. 9/7 Makeshift tactics are particularly evident in the proposed treatment at Hyde Park Corner which includes an extremely complicated roundabout. 1967 Listener 28 Sept. 398/1 People make only occasional use of their speedometer..on such critical occasions as the approach to roundabouts. 1977 Belfast Tel. 14 Feb. 5/9, 12 shots were fired at an armoured police vehicle near the roundabout at Narrow~water Castle.
And of "circus":
A circular range of houses. Also, a traffic roundabout. Often in proper names as Oxford Circus, Regent Circus.
1714 POPE Rape Lock IV. 117 Sooner shall Grass in Hide-Park Circus grow. 1766 ANSTEY Bath Guide II. ix. 57 To breathe a purer Air In the Circus or the Square. 1771 SMOLLETT Humph. Cl. 23 Apr., The same artist who planned the Circus has likewise projected a crescent [at Bath]. Ibid. The Circus is a pretty bauble..and looks like Vespasian's amphitheatre turned outside in. 1794 Looker-on No. 89 The squares and circuses are no longer the only scenes of dignified dissipation. 1898 Tit-Bits 15 Jan. 300/3 Bridges, of light and tasty design, across all the main thoroughfares, and at the various ‘circuses’ and cross roads.
Posted Sep 2, 2004
I'm a little confused. The definitions you gave seem to support the idea that there *is* a difference between a roundabout and a gyratory.
Those definitions say that a roundabout is a junction, while a gyratory is a particular kind of one way system.
Of course a roundabout could be viewed as a very small one way system, so you could argue that all roundabouts are gyratories. But not all gyratories are really roundabouts according to the definitions you produced. (Specifically, a gyratory consisting of more than one junction is, by definition, not a roundabout.)
But in practice, isn't the common usage pretty much as I suggested? I'm only actually aware of two road systems commonly referred to as gyratories - one is the subject of this article, and the other is in Reading. And both of them are distinctly on the large side. In particular, they both have multiple junctions.
So the common usage of the word seems to be to describe overgrown roundabouts (and more specifically, multi-junction ones) in practice.
Can you point to any counterexamples in real use? I'm just going on the gyratories I know - I've not done any exhaustive research on gryatories across the nation.
Posted Aug 12, 2005
There is a gyratory system at Park Gate near Southampton that consists of a pair of roundabouts connected by a pair of one way carriage ways.
Posted Aug 12, 2005
I have to correct my previous entry because, while I have heard it called the Park Gate gyratory system, it is not recorded as a gyratory system on Hampshire's register of adopted roads. It is recorded as the following separate components... Botley Road roundabout, the Bridge Road dual carriageway and the Brook Lane roundabout.
However, I find there is a gyratory system just east of Park Gate on the A27 at Titchfield and this is a large roundabout with traffic lights and many junctions.
Complain about this post | <urn:uuid:ecec54f2-86e4-4767-b811-568614e9a17a> | CC-MAIN-2013-20 | http://www.h2g2.com/approved_entry/A303346/conversation/view/F37910/T416799 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.944392 | 1,153 | 2.59375 | 3 |
A notch or groove cut into a piece of material to allow two sections to be combined with a flush joint.
A woodcutting tool used to cut an L-shaped groove into a piece of material. see also Rabbet
A strong current in a stream or river.
1. An enclosed metal channel, usually fire-resistant, installed in a building to hold electrical wiring.
2. A chute that directs the flow of a material to a specific location in a device.
A channel holding electrical wiring that is designed to look like a piece of decorative trim or molding.
A channel holding electrical wiring designed to be installed on a floor. The unit has a low profile and sloping edges to facilitate walking over it.
The illegal practice of directing certain races away from some neighborhoods and into others.
1. A storage unit designed to hold various objects.
2. To cause a structure to shift so that it is out of plumb.
1. A force that causes a structure to shift so that it is out of plumb.
2. Installation of bricks or other masonry units so that each course is stepped back from the previous one.
Straight-line outward movement from a circle's center.
A power saw with a circular blade that is mounted on a moveable arm. The arm is lowered or raised to move the cutting blade to or away from the material to be cut.
A drill press with a moveable arm that can be swung to various positions on the work table.
An HVAC system with ductwork running outward from a central heating and/or cooling unit.
The surface of a log cut down the center.
Heating system where electrical or hot-water heating elements are installed in a concrete slab floor.
see Radiant heating
Use of radiation to generate heat such as with baseboard heating where the circulating hot water is radiated through conduction by thin metal fins at the bottom of the wall. The room is warmed by air circulating around the heating unit using convention.
Heating unit that is exposed and which transfers heat generated by hot eater or steam through conduction. When the air circulates around the radiator using convention, the room is heated.
The distance from the center of a circle to the circumference. One-half of the diameter of a circle.
A tool used for checking the radii of convex and concave surfaces.
Radioactive gas that seeps into some homes, from the ground, through sump pumps, cracks in the foundation, etc., it is considered a health hazard.
Any of the beams that slope from the ridge of a roof to the eaves to serve as support for the roof.
A metal fastener attached to the top plate of a wall to hold a rafter.
A rafter parallel to the gable end that projects out to form an overhang.
The end of a rafter extending beyond the line of a building's walls.
A guide used when cutting rafters.
The top plate of a building's walls. The rafters rest on the rafter plate.
The vertical cut made into a rafter so it will rest on the wall plate. see also Rafter Seat Cut
The horizontal cut made into a rafter so it will rest on the wall plate. see also Rafter Plumb Cut
Cutting a section off of the end of a rafter equal to one-half of the thickness of the ridge board (the rafter on the other side of the ridge board receives a similar cut).
Tables, often printed on a framing square, containing the data required to calculate angles and lengths of rafters for various roof types.
see Rafter Overhang
A horizontal structure used as a handhold or to block off a drop or other unsafe area.
1. Continuous metal bars on which wheeled vehicles travel (i.e. railroads).
2. The horizontal sections of a panel door.
3. The top and bottom sections of a window sash.
Waterproof cap, also called weatherheads, mast heads or entrance caps, which is placed at the upper part of an electrical mast at the point where the wires are run to the inside electrical meter. Wires hang from the pole to the entrance cap so that the entrance cap is not the low point in the downhill run from the pole because water will run to the low point before dripping to the ground. Wires enter the entrance cap at an upward angle through a tight insulator. Water is further stopped from getting through the entrance cap because of this entrance angle.
Wood where the fibers have swelled, usually because of becoming wet. Wood is often sanded with the grain raised to achieve an extremely smooth finished surface.
1. A fork-like tool used for gathering materials (i.e. leaves) or smoothing an area of soil.
2. A roof overhang on a building's gable end.
3. An angle between objects.
A masonry joint where a portion of mortar has been removed, creating a groove between masonry units. A raked joint if often used in brickwork.
Mortgage, most commonly used by the elderly who have substantial equity in their homes. A periodic payment is made to the borrower from the lender thus, increasing the loan balance, causing negative amortization.
A hydraulically powered piston used for driving a weight.
A sloping surface used to move from one elevation to another. | <urn:uuid:792f0695-cc94-4f2f-a4fe-56cf6b911ddf> | CC-MAIN-2013-20 | http://www.hrmls.com/awright/cgi-bin/aa.fcgi?+ZDFlN2Y2OTY0ZmE3ODBmM2IwM2EwZmFlYTg4MWRhNTISlI0ibvs8tuFLdidYcPWsbrYzAtzw0l6PsmvbEvsSvSHxLc7ZUWfuT%2Fq8jXxxMg2fZvJBI3GsCgqsqB4A6YU2OvgSbvU%3D | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.926264 | 1,123 | 2.984375 | 3 |
What Is Air Pollution?
in its great magnitude has existed in the 20th century from the
coal burning industries of the early century to the fossil burning technology in
the new century. The problems of
air pollution are a major problem for highly developed nations whose large
industrial bases and highly developed infrastructures generate much of the air
Every year, billions of tonnes of pollutants are released into the
atmosphere; the sources include power plants burning fossil fuels to the effects
of sunlight on certain natural materials. But
the air pollutants released from natural materials pose very little health
threat, only the natural radioactive gas radon poses any threat to health.
So much of the air pollutants being released into the atmosphere are all
results of man’s activities.
In the United Kingdom, traffic
is the major cause of air pollution in British cities. Eighty six percent of families own either one or two
vehicles. Because of the
high-density population of cities and towns, the number of people exposed to air
pollutants is great. This had led
to the increased number of people getting chronic diseases over these past years
since the car ownership in the UK has nearly trebled. These include asthma and respiratory complaints ranging
through the population demographic from children to elderly people who are most
at risk. Certainly those who are
suffering from asthma will notice the effects more greatly if living in the
inner city areas or industrial areas or even near by major roads.
Asthma is already the fourth biggest killer, after heart diseases and
cancers in the UK and currently, it affects more than three point four million
In the past, severe pollution in London during 1952 added with low winds
and high-pressure air had taken more than four thousand lives and another seven
hundred in 1962, in what was called the ‘Dark Years’ because of the dense
dark polluted air.
is also causing devastation for the environment; many of these causes are by man
made gases like sulphur dioxide that results from electric plants burning fossil
fuels. In the UK, industries and
utilities that use tall smokestacks by means of removing air pollutants only
boost them higher into the atmosphere, thereby only reducing the concentration
at their site.
These pollutants are often transported over the North Sea and produce
adverse effects in western Scandinavia, where sulphur dioxide and nitrogen oxide
from UK and central Europe are generating acid rain, especially in Norway and
Sweden. The pH level, or relative
acidity of many of Scandinavian fresh water lakes has been altered dramatically
by acid rain causing the destruction of entire fish populations.
In the UK, acid rain formed by subsequent sulphur dioxide atmospheric
emissions has lead to acidic erosion in limestone in North Western Scotland and
marble in Northern England.
In 1998, the
London Metropolitan Police launched the ‘Emissions Controlled Reduction’
scheme where by traffic police would monitor the amount of pollutants being
released into the air by vehicle exhausts.
The plan was for traffic police to stop vehicles randomly on roads
leading into the city of London, the officer would then measure the amounts of
air pollutants being released using a CO2 measuring reader fixed in
the owner's vehicle's exhaust. If the
exhaust exceeded the legal amount (based on micrograms of pollutants) the driver
would be fined at around twenty-five pounds.
The scheme proved unpopular with drivers, especially with those driving
to work and did little to help improve the city air quality.
In Edinburgh, the main causes of bad air quality were from the vast
number of vehicles going through the city centre from west to east.
In 1990, the Edinburgh council developed the city by-pass at a cost of
nearly seventy five million pounds. The
by-pass was ringed around the outskirts of the city where its main aim was to
limit the number of vehicles going through the city centre and divert vehicles
to use the by-pass in order to reach their destination without going through the
city centre. This released much of
the congestion within the city but did little very little in solving the
city’s overall air quality.
To further decrease the number of vehicles on the roads, the government
promoted public transport. Over two
hundred million pounds was devoted in developing the country's public transport
network. Much of which included the development of more bus lanes in
the city of London, which increased the pace of bus services.
Introduction of gas and electric powered buses took place in Birmingham
in order to decrease air pollutants emissions around the centre of the city.
Because children and the elderly are at most risk to chronic diseases,
such as asthma, major diversion roads were build in order to divert the vehicles
away from residential areas, schools and elderly institutions.
In some councils, trees were planted along the sides of the road in order
to decrease the amount of carbon monoxide emissions.
Other ways of improving the air quality included the restriction on the
amounts of air pollutants being released into the atmosphere by industries;
tough regulations were placed whereby if the air quality dropped below a certain
level around the industries area, a heavy penalty would be wavered against them.
© Copyright 2000, Andrew Wan. | <urn:uuid:ea6c54fe-1f6e-4a4c-bcb5-4f4c9e0fb6de> | CC-MAIN-2013-20 | http://everything2.com/user/KS/writeups/air+pollution | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948933 | 1,097 | 3.25 | 3 |
The formation of counties was one of the first matters attended to by the Lords Proprietors after they received their charter in 1663 from King Charles II for the vast tract of land in America he called the province of Carolina. In 1664 the Proprietors formed "all that parte of the province which lyeth on the north east side or starboard side entering of the river Chowan now named by us Albemarle River together with the Islands and Isletts within tenn leagues thereof" into a county that they named Albemarle County for George Monck, the duke of Albemarle, himself one of the Proprietors. This was the site of the first permanent settlement in Carolina. They then divided the new county into four precincts: Currituck, Perquimans, Pasquotank, and Chowan. Albemarle County was subsequently enlarged, and in 1696 the area south of Albemarle Sound was removed from Albemarle and made into a new county, named Bath, which in turn was divided into the precincts of Beaufort, Hyde, Craven, and Carteret.
The primary reason for establishing counties (or precincts) was to provide local seats of government where citizens could record documents, such as deeds or wills, and participate in court proceedings. At the same time, the sheriff was provided with a home base from which to fulfill his basic responsibilities of collecting taxes and maintaining law and order.
By 1738 Albemarle and Bath Counties had been dissolved and the 14 precincts then in existence became counties, a designation that has remained since the seventeenth century. Throughout the remainder of the colonial period, as settlement spread westward and population increased, older counties were divided and new ones formed. With statehood came an even greater rate of growth, and by 1800 the number had risen to 59 counties covering all of the state. In many cases, the dividing of counties caused heated political controversy, as eastern counties were often divided to maintain that region's majority in the state legislature against expanding representation from the piedmont and mountain regions. Shifts in population continued throughout the nineteenth century and into the twentieth century, resulting in even more counties. Larger counties were divided, and those in turn were sometimes divided yet again, until the seemingly magical figure of 100 was reached in 1911. (For a time, the number of counties was actually greater than 100, but some of these were ceded to Tennessee in 1789 and others were absorbed into other counties or never fully developed.) The number remained at 100, although in 1933 the General Assembly authorized the consolidation of existing counties subject to approval of the electorate. This could have resulted for the first time in a decrease from the 100 county figure, but as of the early 2000s there had been no such consolidations.
Initially, county government and judicial matters were in the hands of justices of the peace, who formed a body known as the Court of Pleas and Quarter Sessions. The justices were appointed by the governor, with strong input from the members of the colonial Assembly from the affected county, leaving the average citizen with no say as to who would run the government of the county in which he lived. At first the Court of Pleas and Quarter Sessions met wherever it was convenient to assemble a quorum, usually in a private home. A 1722 act of the Assembly instructed the justices to pick a site for a permanent seat of government for each precinct, where they were to buy an acre of land and build a courthouse. Whether in the early precinct days, or after the name of the local government entity was changed from precinct to county, the justices had the support of a sheriff for law enforcement, as well as a clerk of court and a register of deeds. Of the three, both the clerk of court and the register of deeds needed to remain in their offices in the courthouse, which left only the sheriff free to travel about the county. Accordingly, he was also designated tax collector, a position sheriffs continued to hold until the latter part of the twentieth century.
The general system of county government of the early colonial period, with the appointed members of the Court of Pleas and Quarter Sessions running things, was carried over into statehood, and little changed until the adoption of the North Carolina Constitution of 1868. The system called for by the new constitution, known as the Township and County Commissioner Plan, gave control of county government to five commissioners, to be elected at large by the county's voters. In addition, each county was divided into townships whose residents elected two justices to serve as the township's governing body, as well as a three-member school committee and a constable. The new system significantly reduced the General Assembly's control of county government, since the legislators no longer appointed the justices of the peace who made up the county court.
The Township and County Commissioner Plan, patterned after one previously adopted in Pennsylvania, did not prove universally popular in North Carolina and lasted less than a decade. At a constitutional convention in 1875, the General Assembly was authorized to change the system, and in the session of 1877 townships were reduced to little more than geographic and administrative subdivisions of the counties. This seriously reduced the authority of county commissioners.
The modern system of county government, in which an elected board of commissioners is responsible for managing a county's affairs, including setting the rate and collecting taxes and determining where funds should be expended, dates to the early twentieth century. Periodically after that, the General Assembly conferred additional authority and responsibility on the county commissioners, until at the end of the century they had been provided with such a wide range of "home-rule" statutes that many counties found it impossible to run their greatly expanded business without professional help. This led to the adoption by many counties of the County Manager Plan. Under this plan, commissioners employ a county manager to serve as a sort of chief executive of the county business (in some instances, the largest business in the county), with the manager having certain independent authority, including that of hiring and firing employees.
As with other matters, the state determines what sources the counties may tap for income. Traditionally, the real estate tax has been the primary revenue source for North Carolina counties. However, especially in the last half of the twentieth century, counties were able to prevail on the General Assembly to let them collect from a variety of other sources, among those favored being local sales taxes, land transfer taxes, meals taxes, and occupancy taxes.
1 January 2006 | Stick, David
These are polyaromatic compounds, insoluble in n-heptane, with more than 50 carbon atoms. The asphaltene content of a crude may cause deposits in heat exchangers and/or lines: blending a crude that has a high asphaltene content with a paraffinic crude can upset the equilibrium of the asphaltenes and precipitate them. A high asphaltene content indicates that the vacuum pitch will be suitable for producing asphalt.
ASTM D86 distillation is a test that measures the volatility of gasoline, kerosene and diesel.
Basic Sediment and Water (BSW)
BSW refers to the content of free (undissolved) water and sediments (mud, sand) in the crude. A low reading is important in order to avoid fouling and difficulties during crude processing, since steam produced by the free water can damage the furnace. It is reported as a percentage by volume of the crude.
This is the weight of the residue remaining after the combustion of a fuel sample. It represents the tendency of a heavy fuel to produce particles during combustion.
This is the mass of a unit volume. It is expressed in kilograms per liter, or grams per cubic centimeter. Density depends on temperature, as temperature affects the volume of substances.
Temperature at which a liquid stops flowing when cooled, through the precipitation of crystals of solid paraffin.
The draining temperature is very important as, in the unloading of paraffinic crudes using sea terminals with underwater pipelines of a certain length, the temperature of the crude can fall below the draining point, creating deposits of wax or solid paraffin in the pipelines, thus obstructing the flow.
This is the minimum temperature at which the vapors of a product flash or detonate momentarily when a flame is applied in controlled conditions. It represents the maximum temperature at which a product can be stored or transported in safe conditions.
This is the temperature at which the crystals formed during the cooling of a product sample disappear completely when the temperature is raised in a controlled way.
The metals content of a crude (vanadium and nickel) indicates their content in the heaviest products obtained in refining. This is important because, for example, metals in vacuum gas oil are poison for catalytic-cracking and hydrocracking catalysts. A high vanadium or metals content in fuel oil may cause furnace and boiler tube failures, because the metals form corrosive products during combustion.
Cetane number
This measures the ease with which diesel oil ignites spontaneously, using a standardized engine and reference fuels.
The cetane rating is determined by comparing the ignition delay of the fuel being examined with that of a mix of cetane (C16) and heptamethylnonane (C15) having the same delay time. The cetane rating measured is the percentage of cetane in the cetane/heptamethylnonane mix.
C16 has a cetane rating equal to 100 (it is an easily ignited straight-chain paraffin) and C15 has a cetane rating equal to 0 (a slow-igniting, highly branched compound).
A high cetane rating represents a high ignition quality or a short delay time between the fuel injection and the start of combustion.
The diesel engine uses a high compression ratio to produce spontaneous ignition of the diesel, instead of a spark as in a petrol (spark-ignition) engine. The temperature of the compressed air in the diesel engine is high enough to ignite the fuel.
Linear paraffins have a high cetane rating and therefore burn well; aromatics, on the other hand, have a low cetane rating and burn badly, producing carbon deposits and black smoke. For that reason, high-quality diesel should have an aromatic content compatible with the specified cetane rating.
The cetane rating can also be calculated from the volatility (the temperature at which 50% is distilled) and the density of the diesel, and is then called the calculated cetane rating. The formula is used because of the high cost of running the cetane engine.
Octane number (NOR)
The RVP and NOR are the most important parameters of gasoline quality. The NOR measures the resistance of the gasoline to self-ignition, or premature detonation, under engine operating conditions.
Self-ignition is recognised by the knocking noise produced when the gasoline ignites by itself, detonating before the cylinder has compressed the whole gasoline-air mixture, which wastes power. The detonation produces sound waves that are detected using special microphones.
The octane rating is measured by comparing the knocking noise made by a reference fuel mixture in a standardized engine with that made by the fuel being examined. The reference fuels are iso-octane (2,2,4-trimethylpentane), with an octane rating equal to 100 (high resistance to knocking), and n-heptane, which has an octane rating of zero (very low resistance). The octane rating determined is the percentage by volume of iso-octane in the iso-octane/heptane mixture.
Fuels with a high octane rating have greater resistance to premature detonation than those of a lower octane rating. In addition, fuels with a high octane rating can be used in engines with a high compression ratio, which are more efficient.
There are two types of engine for the determination of the octane rating of gasoline. One uses the Research method and the other the Motor method.
The Research method represents the behavior of an engine in cities at low and moderate speeds. The Motor method represents situations with fast acceleration, like climbing gradients or overtaking.
There is another way of expressing the octane rating of a gasoline, called Highway Octane. The Highway Octane rating is the sum of the Research octane and the Motor octane ratings divided by 2, often written (R + M)/2. The Highway Octane rating is used in the United States, while the Research method is used in Chile.
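To make the arithmetic concrete, here is a minimal Python sketch; it is my own illustration rather than part of the original glossary, and the function name and sample RON/MON values are hypothetical.

```python
def anti_knock_index(research_octane: float, motor_octane: float) -> float:
    """Highway ("pump") octane as described above: the average of the
    Research and Motor octane numbers, often written (R + M) / 2."""
    return (research_octane + motor_octane) / 2.0

# Hypothetical example: a gasoline rated 95 RON and 85 MON would be
# posted as 90 octane on a U.S. pump, whereas a pump quoting the
# Research method alone (as in Chile) would show 95.
print(anti_knock_index(95, 85))  # -> 90.0
```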
Reid Vapor Pressure (RVP)
The Reid vapor pressure is an empirical test that measures the pressure, in pounds per square inch (psi), exerted by the vapors or light components of the crude or of an oil product in a closed container at a temperature of 100 °F (38 °C).
A high vapor pressure in the crude tells us that light products are present and that they will be burned at the flare during processing if there is no suitable recovery system. In the case of an internal combustion engine, excessive vapor pressure will cause vapor lock, which impedes the flow of gasoline.
Crude oil contains salt (NaCl), which comes from the oil fields or from the sea water used as ballast by oil tankers. The salt must be extracted with desalting equipment before the crude oil enters the atmospheric distillation furnace, in order to avoid the corrosion produced in the upper part of the atmospheric tower when the salt decomposes into hydrochloric acid. It is expressed in grams of salt per cubic meter of crude.
The temperature at which some products ignite spontaneously in contact with air (without a flame), probably due to the heat that slow oxidation produces, which accumulates and raises the temperature to the ignition point. Fortunately, oil distillates have very high self-ignition temperatures, which are therefore difficult to reach; for gasoline it is about 450 °C. Oily rags, on the other hand, self-ignite easily and cause fires, so they should be disposed of properly.
The ratio of the weight of a substance to the weight of an equal volume of water at the same temperature. In the oil industry, API gravity is used; it is measured with hydrometers that float in the liquid, and the API degrees are read directly from the scale at the flotation line. The API scale arose because it allows the hydrometer stem to be graduated uniformly.
°API = 141.5 / (specific gravity) - 131.5
The °API determines whether the crude or product is light or heavy and enables us to calculate the tonnes unloaded. A light crude has an API gravity of 40-50, while a heavy one has 10-24.
Sulfur and the API are the properties with the greatest influence on the price of crude.
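As a minimal sketch of the conversion formula and the light/heavy ranges quoted above (the function names and the sample specific gravity are my own and purely illustrative):

```python
def api_gravity(specific_gravity: float) -> float:
    """Convert specific gravity (relative to water) to degrees API,
    using the formula quoted above: API = 141.5 / SG - 131.5."""
    return 141.5 / specific_gravity - 131.5

def crude_class(api: float) -> str:
    """Classify a crude using the rough ranges given in the text:
    about 40-50 API is light, about 10-24 API is heavy."""
    if 40 <= api <= 50:
        return "light"
    if 10 <= api <= 24:
        return "heavy"
    return "outside the quoted light/heavy ranges"

sg = 0.825  # hypothetical specific gravity of a crude sample
api = api_gravity(sg)
print(round(api, 1), crude_class(api))  # -> 40.0 light
```

Note that the scale is inverse: the denser (heavier) the crude, the lower its API gravity.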
This is the resistance to degradation through heat or oxidation of an oil product. Products containing olefinic material are unstable and susceptible to degradation.
The sulfur content allows us to foresee difficulties in meeting product and atmospheric-emission specifications, since treatment units are needed to meet them; sulfur is also a poison for some catalysts. It also tells us whether the plant metallurgy is suitable for processing the crude. It is expressed as a percentage by weight of sulfur.
Hydrogen sulfide (H2S)
A prior knowledge of the hydrogen sulfide content of the crude permits preventive actions and avoids accidents to people. Hydrogen sulfide is very dangerous because it anaesthetizes the olfactory nerve, which prevents people from being aware of the exposure, and it is lethal in small quantities. Personnel working in contact with the crude therefore have to wear protective equipment and personal hydrogen sulfide sensors.
This is the degree of resistance of a liquid to flow: the greater the viscosity, the greater the resistance. Viscosity is affected by temperature, decreasing as the temperature rises. It is measured using special viscometers and is expressed in Saybolt Universal Seconds, Saybolt Furol Seconds and centistokes.
Viscosity is important for fuel injection in engines and burners. It is also critical in the pumping of crude oil and products by pipeline. A higher viscosity than that designed for will reduce the desired flow and make a greater pump motor capacity necessary. The viscosity also affects measuring instrument factors, altering the readings.
The measurement of the ease with which a product vaporizes. Volatile products have a high vapor pressure and a low boiling point. Volatility is measured through the ASTM D86 test and is expressed as the temperatures at which given volumes are distilled.
Recent acts of violence alongside pending legislation and international pressure have brought to light the pressing need for lawmaking in support of LGBT rights in Chile. Together with protests for reforms in the education system, the public seems to be increasingly impatient about what the government is doing to protect LGBT rights. These demands are important beyond the scope of gay rights, because they have brought attention to the need for Chile to recognize, accept and protect the human rights of an evolving, heterogeneous culture as a fundamental prerequisite for continued prosperity.
This month's passage, by a close 58-56 vote in the Chamber of Deputies, of an antidiscrimination law that had remained unresolved for over seven years was a basic necessity for the country. The Chilean Movement for Sexual Minorities (MOVILH) notes that in 2011 gay, lesbian and transgender Chileans were increasingly outspoken in reporting abuse and discrimination based on sexual orientation and gender identity. However, this recently passed antidiscrimination law does not deal with hate crimes per se, but rather defines illegal discrimination. Furthermore, certain passages have yet to be finalized in a mixed commission of Senators and Deputies on May 2. The recent death of gay youth Daniel Zamudio points to precisely why legislating solely on discrimination does not suffice in this case, serving as an exceptionally violent example as to why hate crimes require specific punishment under the law.
Zamudio received not only the public’s sympathy, but also worldwide attention including a briefing note from the UN Office of the High Commissioner for Human Rights’ spokesman, Rupert Colville, urging Chile to enact hate crime legislation. In this regard, the MOVILH also argues that Chilean society is not opposed to legislating on issues of gay rights and antidiscrimination in its entirety, but there is a lack of bravery and willingness within Congress to approach these pending issues. The recent Inter-American Court of Human Rights’ overturning of a Chilean court ruling against lesbian Judge Karen Atala, who lost custody of her children because of her same-sex relationship, is further international pressure for Chile to meet requirements stipulated by international agreements it has signed onto.
Chile’s gay rights deficit is worrying as the country continues to be viewed as an example for continued economic growth despite global market volatility. President Sebastian Piñera’s administration is cautious about giving into all public demands, as Chile’s Minister of Finance Felipe Larraín recently said: “If we surrender to the temptation of appeasing demands by giving in to all of them, we will never get to our final goal [development].” However, most gay rights issues rely merely on political willingness rather than investment for social welfare. Furthermore, acting on gay rights is not the investment equivalent of reforming a public education system.
On the contrary, the lack of legislative initiative to protect gay Chileans is hindering the business community's opportunities. Private initiatives have been taken to reach out to gay customers, such as granting Banco de Chile customers in same-sex relationships access to mortgages and family insurance plans, proving that it is not only the public that is restless but also the private sector, which recognizes the positive effects of social inclusion on business. What could potentially prove threatening to Chile's continued economic success might just be the lack of recognition given to the longstanding need for social inclusion of minorities in the country's legal framework. In contrast, Chile's neighbor Argentina has clearly reaped the benefits of gay tourism and investment since legislating on gay marriage, while Chile continues to turn a blind eye to opportunity.
This is not to say that Piñera’s administration has not taken basic steps to promote the rights of gay couples, like for example Piñera’s public presentation of a civil unions bill in August 2011. However, this legislative project was not only perceived as a political tool in the midst of cabinet discussions with student protest leaders, but is also minimal when considering the prominent use of gay couples in campaign ads for Sebastian Piñera during his presidential campaign. For this reason, the inclusion of a question in the 2012 census that allows gay couples to state whether they live with a same-sex partner may become an important, confidential and unbiased figure that current and future administrations could use to advocate for legislation on gay unions.
This paradox persists despite the heightened quality of life of Chileans. Economic growth is creating an increasingly obvious deficit in basic human rights the country has yet to attain. In spite of continued development and well-regarded fiscal policy through the recession and into today’s administration, Chile continues to be in debt to not only its citizens and the country’s business community, but also to international human rights agreements on sexual orientation and gender identity. These international standards continue to be well above those demonstrated by the current state of gay rights legislation in Chile.
Eduardo Ayala is a guest blogger to AQ Online. He is Chilean-American and works at the Council of the Americas in New York. He graduated from The George Washington University, during which time he also completed coursework at The University of Chile and The Pontifical Catholic University of Chile in Santiago.
2012-2013 Service Learning Courses
Hispanic Literature in Translation—"Defiant Acts: Spanish and Latin American Theatre"
Isabel de Sena
This course will explore the full spectrum of theatre from the early modern period in Spain and colonial Spanish America to contemporary theatre on both sides of the Atlantic, including U.S. Latino playwrights. We will read across periods to identify preoccupations and generic characteristics as theatre evolves and moves between the street and the salon, the college yard and the court, enclosed theatres and theatre for the enclosed. In the process we will address a wide swath of ideas, on gender, class, freedom and totalitarianism, the boundaries of identity. Students will be introduced to some basic concepts and figures ranging from Lope de Vega’s brilliant articulation of “comedia” to Augusto Boal’s concept of an engaged theatre, and investigate the work of FOMMA (Fortaleza de la Mujer Maya) and similar contemporary collectives. And we will read plays as plays, as literature and as texts intended for performance on a stage. At the same time students will have the opportunity to explore creative practices, through engagement with different community organizations: schools, retirement homes, local theatre organizations, etc. Students are encouraged to apply concepts learned in class to their internships, and to bring their ideas and reflections on their weekly practices for discussion in class. Every other week, one hour will be devoted to discussing their work in the community. NO Spanish required, but students who are sufficiently fluent in the language may opt to work in a community where Spanish is the primary language of communication. NO expertise in theatre required, though theatre students are very welcome. Open to any interested student.
Fall & Spring
First Year Studies
Umuntu ngumuntu ngabantu
[Isizulu: A person is only a person through other persons]
How do the contexts in which we live influence our development? And how do these contexts influence the questions we ask about development, and the ways in which we interpret our observations? How do local, national and international policies impact the contexts in which children live? Should we play a role in changing some of these contexts? What are the complications of doing this?
In this course, we will discuss these and other key questions about child and adolescent development in varying cultural contexts, with a specific focus on the United States and sub-Saharan Africa. As we do so, we will discuss factors contributing to both opportunities and inequalities within and between these contexts. In particular, we will discuss how physical and psychosocial environments differ for poor and non-poor children and their families in rural Upstate New York, urban Yonkers, and rural and urban Malawi, Zimbabwe, South Africa, Kenya and Tanzania. We will also discuss individual and environmental protective factors that buffer some children from the adverse effects of poverty, as well as the impacts of public policy on poor children and their families. Topics will include health and educational disparities; environmental inequalities linked to race, class, ethnicity, gender, language and nationality; environmental chaos; children’s play and access to green space; cumulative risk and its relationship to chronic stress; and the HIV/AIDS pandemic and the growing orphan problem in sub-Saharan Africa. Readings will be drawn from both classic and contemporary research in psychology, human development, anthropology, sociology, and public health; memoirs and other first-hand accounts; and classic and contemporary African literature and film.
This course will also serve as an introduction to the methodologies of community based and participatory action research within the context of a service-learning course. As a class, we will collaborate with local high school students in developing, implementing and evaluating effective community based work in partnership with organizations in urban Yonkers and rural Tanzania. As part of this work, all students will spend an afternoon a week working in a local after-school program. In addition, we will have monthly seminars with local high school students during our regular class time.
Environment, Race and the Psychology of Place
This service learning course will focus on the experience of humans living within physical, social and psychological spaces. We will use a constructivist, multidisciplinary, multilevel lens to examine the interrelationship between humans and the natural and built environment, to explore the impact of racial/ethnic group membership on person/environment interactions, and to provide for a critical analysis of social dynamics in the environmental movement. The community partnership/ service learning component is an important part of this class - we will work with local agencies to promote adaptive person-environment interactions within our community.
Children’s Health in a Multicultural Context
This course offers, within a cultural context, an overview of theoretical and research issues in the psychological study of health and illness in children. We will examine theoretical perspectives in the psychology of health, health cognition, illness prevention, stress, and coping with illness and highlight research, methods, and applied issues. This class is appropriate for those interested in a variety of health careers. Conference work can range from empirical research to bibliographic research in this area. Community partnership/service learning work is encouraged in this class. A background in social sciences or education is recommended.
The “presidi” translates as “garrisons” (from the French word, “to equip”), as protectors of traditional food production practices
Monday, March 23, 2009
This past year, I have had rewarding opportunities to observe traditional food cultures in varied regions of the world. These are:
Athabascan Indian in the interior of Alaska (the traditional Tanana Chiefs Conference tribal lands) in July, 2008 (for more, read below);
Swahili coastal tribes in the area of Munje village (population about 300), near Msambweni, close to the Tanzania border in December, 2008-January, 2009 (for more, read below); and the Laikipia region of Kenya (January, 2009), a German canton of Switzerland (March, 2009), and the Piemonte-Toscana region of northern/central Italy (images only, February-March, 2009).
In Fort Yukon, Alaska, salmon is a mainstay of the diet. Yet, among the Athabascan Indians, threats to subsistence foods and stresses on household economics abound. In particular, high prices for external energy sources (as of July, 2008, almost $8 for a gallon of gasoline and $6.50 for a gallon of diesel, which is essential for home heating), as well as low Chinook salmon runs and moose numbers.
Additional resource management issues pose threats to sustaining village life – for example, stream bank erosion along the Yukon River, as well as uneven management in the Yukon Flats National Wildlife Refuge. People are worried about ever-rising prices for fuels and store-bought staples, and fewer and fewer sources of wage income. The result? Villagers are moving out from outlying areas into “hub” communities like Fort Yukon -- or another example, Bethel in Southwest Alaska – even when offered additional subsidies, such as for home heating. But, in reality, “hubs” often offer neither much employment nor relief from high prices.
In Munje village in Kenya, the Digo, a Bantu-speaking, mostly Islamic tribe in the southern coastal area of Kenya, enjoy the possibilities of a wide variety of fruits, vegetables, and fish/oils.
Breakfast in the village typically consists of mandazi (a fried bread similar to a doughnut), and tea with sugar. Lunch and dinner is typically ugali and samaki (fish), maybe with some dried cassava or chickpeas.
On individual shambas (small farms), tomatoes, cassava, maize, cowpeas, bananas, mangos, and coconut are typically grown. Ugali is consumed every day, as are cassava, beans, oil, fish -- and rice, coconut, and chicken, depending on availability.
Even with their own crops, villagers today want very much to enter the market economy and will sell products from their shambas to buy staples and the flour needed to make mandazis, which they in turn sell. Sales of mandazis (and mango and coconut, to a lesser extent) bring in some cash for villagers.
A treasured food is, in fact, the coconut. This set of pictures show how coconut is used in the village. True, coconut oil now is reserved only for frying mandazi. But it also is used as a hair conditioner, and the coconut meat is eaten between meals. I noted also that dental hygiene and health were good in the village. Perhaps the coconut and fish oils influence this (as per the work of Dr. Weston A. Price).
Photos L-R: Using a traditional conical basket (kikatu), coconut milk is pressed from the grated meat; Straining coconut milk from the grated meat, which is then heated to make oil; Common breakfast food (and the main source of cash income), the mandazi, is still cooked in coconut oil
Note: All photos were taken by G. Berardi
Thursday, February 19, 2009
Despite maize in the fields, it is widely known that farmers are hoarding stocks in many districts. Farmers are refusing the NCPB/government price of Sh1,950 per 90-kg bag. They are waiting to be offered at least the same amount of money as that which was being assigned to imports (Bii, 2009b). “The country will continue to experience food shortages unless the Government addresses the high cost of farm inputs to motivate farmers to increase production,” said Mr. Jonathan Bii of Uasin Gish (Bartoo & Lucheli, 2009; Bii, 2009a, 2009b; Bungee, 2009).
Pride and politics, racism and corruption are to blame for food deficits (Kihara & Marete, 2009; KNA, 2009; Muluka, 2009; Siele, 2009). Clearly, what are needed in Kenya are food system planning, disaster management planning, and protection and development of agricultural and rural economies.
Photos taken by G. Berardi
Cabbage, an imported food (originally), and susceptible to much pest damage.
Camps still remain for Kenya’s Internally Displaced Persons resulting from post-election violence forced migrations. Food security is poor.
Lack of sustained recent short rains have resulted in failed maize harvests.
Friday, January 16, 2009
Today I went to a lunch-time discussion of sustainability. This concept promotes development with an equitable eye to the triple bottom line - financial, social, and ecological costs. We discussed how it seemed relatively easier to talk about the connections between financial and ecological costs than between social costs and the others. Sustainable development often comes down to "green" designs that consider environmental impacts, or to critiques of the capitalist model of financing.
As I thought about sustainable development, or sustainable community management if you are a bit queasy with the feasibility of continuous expansion, I considered its corollaries in the field of disaster risk reduction. It struck me again that it is somewhat easier to focus on some components of the triple bottom line in relation to disasters.
The vulnerability approach to disasters has rightly brought into focus the fact that not all people are equally exposed to or impacted by disasters. Rather, it is often the poor or socially marginalized most at risk and least able to recover. This approach certainly brings into focus the social aspects of disasters.
The disaster trap theory, likewise, brings into focus the financial bottom line. This perspective is most often discussed in international development and disaster reduction circles. It argues that disasters destroy development gains and cause communities to de-develop unless both disaster reduction and development occur in tandem. Building a cheaper, non-earthquake-resistant school in an earthquake zone may make short-term financial sense. However, over the long term, this approach is likely to result in loss of physical infrastructure, human life, and learning opportunities when an earthquake does occur.
What seems least developed to me, though I would enjoy being rebutted, is the ecological bottom line of disasters. Perhaps it is an oxymoron to discuss the ecological costs of disasters, given that many disasters are triggered by natural ecological processes like cyclones, forest fires, and floods. It might also be an oxymoron simply because a natural hazard disaster is really looking at an ecological event from an almost exclusively human perspective. It's not a disaster if it doesn't destroy human lives and human infrastructure. But the lunch-time discussion made me wonder if there wasn't something of an ecological bottom line to disasters in there somewhere. Perhaps it is in the difference between an ecological process heavily or lightly impacted by human ecological modification. Is a forest fire in a heavily managed forest different from that in an unmanaged forest? Certainly logging can heighten the impacts of heavy rains by inducing landslides, resulting in a landscape heavily rather than lightly impacted by the rains. Similar processes might also be true in the case of heavily managed floodplains. Flooding is concentrated and increased in areas outside of levee systems. What does that mean for the ecology of these locations? Does a marsh manage just as well in low as in high flooding? My guess would be no.
And of course, there is the big, looming disaster of climate change. This is a human-induced change that may prove quite disastrous to many an ecological system, everything from our pine forests here, to arctic wildlife, and tropical coral reefs.
Perhaps, we disaster researchers, need to also consider a triple bottom line when making arguments for the benefits of disaster risk reduction.
Tuesday, January 13, 2009
This past week the Northwest experienced a severe barrage of back-to-back weather systems. Everyone seemed to be affected. Folks were re-routed on detours, got soaked, slipped on ice, or had to spend money to stay a little warmer. In Whatcom and Skagit Counties, hundreds to thousands of people are currently in the process of recovering and cleaning up after the floods. These people live in rural areas throughout the counties, where fewer people know about their devastation and where vulnerability to flood hazards is greater.
Luckily, there are local agencies and non-profits who are ready at a moment’s call to help anyone in need. The primary organization that came to the aid of the flood victims was the American Red Cross.
The last week I began interning and volunteering with one of these non-profits, the Mt. Baker American Red Cross (ARC) Chapter. While I am still in the process of getting screened and officially trained, I received first-hand experience and saw how important this organization is to the community.
With the flood waters rising throughout the week, people were flooded out of their homes and rescued from the overflowing rivers and creeks. As the needs for help increased, hundreds of ARC volunteers were called to service. Throughout the floods there have been several shelters opened to accommodate the needs of these flood victims. On Saturday I was asked to help staff one of these shelters overnight in Ferndale.
While I talked with parents and children, I became more aware of the stark reality of how these people have to recover from having all their possessions covered in sewage and mud and damaged by flood waters. In the meantime, these flood victims have all their privacy exposed to others in a public shelter, while they work to find stability in the middle of all the traumas of the events. As I sat talking and playing with the children, another thought struck me. Children are young and resilient, but it must be very difficult when they connect with a volunteer and then lose that connection soon after. Sharing a shelter with the folks over the weekend showed a higher degree of reality and humanity to the situation than the news coverage ever could.
I posted this bit about my volunteer experience because it made me realize something about my education and degree track in disaster reduction and emergency planning. We look at ways to create a more sustainable community, and we need to remember that community service is an important part of creating this ideal. Underlying sustainable development is the triple bottom line (social, economic, and environmental). Volunteers and non-profits are a major part of this social line of sustainability. Organizations like the American Red Cross only exist because of volunteers. So embrace President-elect Obama’s call for a culture of civil service this coming week and make a commitment to the organization of your choice with your actions or even your pocketbook. Know that sustainable development cannot exist without social responsibility.
Thursday, January 8, 2009
It's been two days now that schools have been closed in Whatcom County, not for snow, but for rain and flooding. This unusual event coincides with record flooding throughout Western Washington, just a year after record flooding closed I-5 for three days and Lewis County businesses experienced what they then called an unprecedented 500-year flood. I guess not.
There are many strange things about flood risk notation, and the idea of a 500-year flood often trips people up. People often believe a flood of that size will happen only once in 500 years. On a probabilistic level, this is inaccurate. A 500-year flood simply has a 0.2% probability of happening each year. A more useful analogy might be to tell people they are rolling a 500-sided die every year and hoping that it doesn't come up with a 1. Next year they'll be forced to roll again.
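To put numbers on the die-rolling analogy, here is a small sketch (my own illustration, not from the original post) of how the chance of seeing at least one such flood accumulates over, say, a 30-year mortgage. Note that it assumes the annual probability stays fixed, which is exactly the assumption the rest of this post questions.

```python
def prob_at_least_one(annual_prob: float, years: int) -> float:
    """Probability of at least one exceedance in `years` independent years,
    given a fixed annual exceedance probability."""
    return 1.0 - (1.0 - annual_prob) ** years

# A "500-year" flood has a 1/500 = 0.2% chance in any given year;
# a "100-year" flood has a 1/100 = 1% chance.
for label, p in [("500-year", 1 / 500), ("100-year", 1 / 100)]:
    pct = prob_at_least_one(p, 30) * 100
    print(f"{label}: about {pct:.1f}% chance of at least one in 30 years")
# 500-year: about 5.8%   100-year: about 26.0%
```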
But, this focus on misunderstandings of probability often hides an even larger societal misunderstanding. Flood risk changes when we change the environment in which it occurs. If a flood map tells you that you are not in the flood plain, better check the date of the map. Most maps are utterly out of date and many vastly underestimate present flood risk. There are several reasons this happens. Urban development, especially development with a lot of parking lots and buildings that don't let water seep into the ground, will cause rainwater to move quickly into rivers rather than seep into the ground and slowly release. Developers might complain that they are required to create runoff catchment wetlands when they do build. They do, but these requirements may very well be based upon outdated data on flood risk. Thus, each new development never fully compensates for its runoff, a small problem for each site but a mammoth problem when compounded downstream.
Deforesting can have the same effect, with the added potential for house-crushing and river-clogging mudslides. Timber harvesting is certainly an important industry in our neck of the woods. Not only is commercial logging an important source of jobs for many rural and small towns, logging on state Department of Natural Resources land is the major source of funding for K-12 education. Yet, commercial logging, like other industries, suffers from a problem of cost externalization. When massive mudslides occurred during last year's storm, Weyerhaeuser complained that it wasn't its logging practices, but the fact that it was an unprecedented, out-of-the-blue, 500-year storm that caused them. While it is doubtful the slides would have occurred on uncut land, that isn't the only fallacy. When the slides did occur, the costs of repairing roads, treatment plants, and bridges went to the county and often were passed on to the nation's taxpayers through state and federal recovery grants. Thus, what should have been paid by Weyerhaeuser, 500-year probability or not, was paid by someone else.
Finally, there is local government. Various folks within local governments set regulations for zoning, deciding what will be built and where. Here is the real crux of the problem. Local government also gets an increase in revenue in the form of property, sales, and business income taxes. Suppress the updating of flood plain maps, and you get a short-term profit and, often, a steady supply of happy voters. You might think these local governments will have to pay when the next big flood comes, but often that can be avoided. Certainly, they must comply with federal regulations on flood plain management to be part of the National Flood Insurance Program, but that program has significant leeway and little monitoring. Like commercial logging, disaster-stricken local governments can often push the recovery costs off to individual homeowners through the FEMA homeowner's assistance program, and off to state and federal agencies by receiving disaster recovery and community development grants and loans. Certainly, some communities are so regularly devastated, and have so few resources, that disasters simply knock them down before they can stand up again. But others have found loopholes and can profit by continuing to use old flood maps and failing to aggressively control flood plain development.
What is it going to take to really change this system and make it unprofitable to profit from bad land use management?
Here’s a good in-depth article on last year’s landslides in Lewis County. http://seattletimes.nwsource.com/html/localnews/2008048848_logging13m.html
An interesting article on the failure of best management practices in development catchment basins can be found here: Hur, J. et al (2008) Does current management of storm water runoff adequately protect water resources in developing catchments? Journal of Soil and Water Conservation, 63 (2) pp. 77-90.
Monday, December 29, 2008
It’s difficult to imagine a more colorful book, celebrating locally-grown and -marketed foods, than David Westerlund’s Simone Goes to the Market: A Children’s Book of Colors Connecting Face and Food. This book is aimed at families and the foods they eat. Who doesn’t want to know where their food is coming from – the terroir, the kind of microclimate it’s produced in, as well as who’s selling it? Gretchen sells her pole beans (purple), Maria her Serrano peppers (green), Dana and Matt sell their freshly-roasted coffee (black), Katie her carrots (orange), a blue poem from Matthew, brown potatoes from Roslyn, yellow patty pan squash from Jed, red tomatoes (soft and ripe) from Diana, and golden honey from Bill (and his bees). This is a book perfect for children of any age who want to connect to and with the food systems that sustain community. Order from firstname.lastname@example.org.
The Brazilian Supreme Court's recognition of same-sex unions in early May marks the latest victory for gay rights in Latin America. The Court's ruling grants equal legal rights to same-sex civil unions as those enjoyed by married heterosexuals, including retirement benefits, joint tax declarations, inheritance rights, and child adoption.
An Unlikely Victory
As the world's largest Roman Catholic country, Brazil was an unlikely venue for such a promising gay rights victory. The Roman Catholic Church has actively fought proposals for same-sex unions in Brazil, arguing that the Brazilian Constitution defines a "family entity" as "a stable union between a man and a woman."2 The Catholic Church responded to the recent ruling with outrage. As Archbishop Anuar Battisti put it, the Supreme Court's decision marked a "frontal assault" on the sanctity of the family.3
The Catholic Church is losing its power in Brazil, which helped pave the way for the Supreme Court's recent decision in favor of homosexuals. Nevertheless, homophobia retains a tenacious grip on Brazilian society. Despite the fact that the nation boasts the world's largest gay pride parade, the LGBT movement has been unable to achieve fundamental progress and quell discrimination at a societal level. For instance, Marcelo Cerqueira, the head of the Gay Group of Bahia, claims the country is "number one when it comes to assassination, discrimination and violence against homosexuals."4 Additionally, in a disconcerting report, the Gay Group of Bahia found that 260 Brazilian gay people were murdered in 2010, exemplifying the level of hostility towards homosexuals.5 Because of this discriminatory environment, gay rights activists traditionally have had little success in Brazil. Most notably, Congress disregarded proposals for gay rights legislation for nearly ten years.
The Supreme Court’s recent ruling was therefore a major turning point after a history of protracted, unsuccessful struggles. The judicial decision was made in response to two lawsuits, one of which was filed by Rio de Janeiro Governor Sérgio Cabral and the other by the Office of the Attorney General. While Congress repeatedly ignored requests for equal rights for gay Brazilian citizens, the Supreme Court argued that "Those who opt for a homosexual union cannot be treated less than equally as citizens."6 In this way, by appealing to the judicial system, the LGBT movement was able to achieve success despite deep-seated hostility throughout Brazilian society and in other branches of the government.
Latin America's Gay Rights Revolution
Professor Omar Encarnación of Bard College calls the recent string of gay rights legislation in Latin America a "gay rights revolution."7 Brazil's ruling came on the heels of several other noteworthy gay rights victories in Latin America, such as Uruguay’s legalization of same-sex civil unions in 2007. Shortly thereafter, in 2010, Argentina became the first Latin American nation and eighth nation worldwide to legalize gay marriage. Other landmark decisions in the past few years include Uruguay's decision to allow all men and women, regardless of sexual orientation, to serve in the military and Mexico City's legalization of same-sex civil unions.
The recent surge in gay rights victories throughout Latin America is altogether stunning, considering the region has generally been regarded as very homophobic. The Catholic Church has traditionally been a formidable enemy to gay rights movements in the region, but the secularization of much of Latin America has led to the impressive expansion of opportunities for gay rights movements.
Yet this success of gay rights movements throughout Latin America cannot be attributed solely to the declining importance of religion in the region. It is equally important, if not more so, to recognize the vital roles played by gay activist groups and the dynamic strategies these groups employ. For instance, gay rights groups in Brazil were able to reverse legislation banning gays from the workplace by forming partnerships with progressive businesses. In recent years, the use of social media has provided much of the gay movement's momentum by enhancing activist groups' ability to communicate and spread information. For instance, as Javier Corrales notes, by simply posting a video of a hate crime in San Juan or of a gay wedding in Argentina on YouTube, gay rights groups have been able to reach thousands of people and garner support.8 These innovative strategies have brought success despite a notably hostile environment towards homosexuals.
Through a comparison with the United States, we can see how remarkable the success of gay rights in Latin America has been. Latin America is marked by a much more homophobic environment than the US, according to a survey conducted by Mitchell Seligson and Daniel Moreno Morales.9 However, although the US has lower levels of societal discrimination towards gays, it is hard to imagine that the United States would completely legalize same-sex civil unions or gay marriage on a national scale. The fact that this legalization occurred in several Latin American nations, despite the formidable opposition there, makes these recent rulings even more significant.
Furthermore, the recent victories for gay rights exemplify the considerable progress toward the region's consolidation of democracy. The three Latin American countries that have now legalized same-sex unions—Brazil, Argentina, and Uruguay—were each ruled by repressive military regimes just over two decades ago. Even Colombia, which is one of the region's worst human rights violators, granted same-sex unions equal rights regarding social security benefits and inheritance rights in 2007. The fact that gay liberation movements have been successful in these unlikely places is a testament to how far these countries have progressed in recent years.
Marilia Brocchetto and Luciani Gomes. "Same-sex unions recognized by Brazil's high court." 5 May 2011.
Yana Marull. "Brazil top court recognizes same-sex civil unions." American Free Press. 5 May 2011.
Omar Encarnación. "A Gay Rights Revolution in Latin America." Americas Quarterly. 17 May 2011.
Javier Corrales. "Latin American Gays: The Post-Left Leftists." Americas Quarterly. 19 March 2010.
Mitchell A. Seligson and Daniel E. Moreno Morales, "Gay in the Americas," Americas Quarterly, Winter 2010.
Memory loss (amnesia) is unusual forgetfulness. You may not be able to remember new events, recall one or more memories of the past, or both.
Forgetfulness; Amnesia; Impaired memory; Loss of memory; Amnestic syndrome
Normal aging may cause some forgetfulness. It's normal to have some trouble learning new material, or to need more time to remember it.
However, normal aging does NOT lead to dramatic memory loss. Such memory loss is due to other diseases. Sometimes, memory loss may be seen with depression. It can be hard to tell the difference between memory loss and confusion due to depression.
Some types of memory loss may cause you to forget recent or new events, past or remote events, or both. You may forget memories from a single event, or all events.
Memory loss may cause you to have trouble learning new information or forming new memories.
The memory loss may be temporary (transient), or permanent.
Memory loss can be caused by many different things. To determine a cause, your doctor or nurse will ask if the problem came on suddenly or slowly.
Many areas of the brain help you create and retrieve memories. A problem in any of these areas can lead to memory loss.
Causes of memory loss include:
- Alcohol or use of illicit drugs
- Not enough oxygen to the brain (heart stopped, stopped breathing, complications from anesthesia)
- Brain growths (caused by tumors or infection)
- Brain infections such as Lyme disease, syphilis, or HIV/AIDS
- Brain surgery, such as surgery to treat seizure disorders
- Cancer treatments, such as brain radiation, bone marrow transplant, or after chemotherapy
- Certain medications
- Certain types of seizures
- Depression, bipolar disorder, or schizophrenia when symptoms have not been well controlled
- Dissociative disorder (not being able to remember a major, traumatic event; the memory loss may be short-term or long-term)
- Drugs such as barbiturates or benzodiazepines
- Electroconvulsive therapy (especially if it is long-term)
- Encephalitis of any type (infection, autoimmune disease, chemical/drug induced)
- Epilepsy that is not well controlled with medications
- Head trauma or injury
- Heart bypass surgery
- Illness that results in the loss of, or damage to, nerve cells (neurodegenerative illness), such as Parkinson's disease, Huntington's disease, or multiple sclerosis
- Long-term alcohol abuse
- Migraine headache
- Mild head injury or concussion
- Nutritional problems (vitamin deficiencies such as low vitamin B12)
- Permanent damage or injuries to the brain
- Transient global amnesia
- Transient ischemic attack (TIA)
A person with memory loss needs a lot of support. It helps to show them familiar objects, music, or photos.
Write down when the person should take any medication or complete any other important tasks. Having these reminders in writing is important.
If a person needs help with everyday tasks, or safety or nutrition is a concern, you may want to consider extended care facilities, such as a nursing home.
What to Expect at Your Office Visit
The doctor or nurse will perform a physical exam and ask questions about the person's medical history and symptoms. This will almost always include asking questions of family members and friends. They should come to the appointment.
Medical history questions may include:
- Can the person remember recent events (is there impaired short-term memory)?
- Can the person remember events from further in the past (is there impaired long-term memory)?
- Is there a loss of memory about events that occurred before a specific experience (retrograde amnesia)?
- Is there a loss of memory about events that occurred soon after a specific experience (anterograde amnesia)?
- Is there only a minimal loss of memory?
- Does the person make up stories to cover gaps in memory (confabulation)?
- Is the person suffering from low moods that impair concentration?
- Time pattern
- Has the memory loss been getting worse over years?
- Has the memory loss been developing over weeks or months?
- Is the memory loss present all the time or are there distinct episodes of amnesia?
- If there are amnesia episodes, how long do they last?
- Aggravating or triggering factors
- Has there been a head injury in the recent past?
- Has the person experienced an event that was emotionally traumatic?
- Has there been a surgery or procedure requiring general anesthesia?
- Does the person use alcohol? How much?
- Does the person use illegal/illicit drugs? How much? What type?
- Other symptoms
- What other symptoms does the person have?
- Is the person confused or disoriented?
- Can they independently eat, dress, and perform similar self-care activities?
- Have they had seizures?
Tests that may be done include:
Cognitive therapy, usually through a speech/language therapist, may be helpful for mild to moderate memory loss.
See: Dementia - homecare for information about taking care of a loved one with dementia.
Kirshner HS. Approaches to intellectual and memory impairments. In: Gradley WG, Daroff RB, Fenichel GM, Jankovic J, eds. Neurology in Clinical Practice. 5th ed. Philadelphia, Pa: Butterworth-Heinemann; 2008:chap 6.
Luc Jasmin, MD, PhD, Department of Neurosurgery at Cedars-Sinai Medical Center, Los Angeles, and Department of Anatomy at UCSF, San Francisco, CA. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Health Solutions, Ebix, Inc.
The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. © 1997-
A.D.A.M., Inc. Any duplication or distribution of the information contained herein is strictly prohibited. | <urn:uuid:7e67ab91-4ead-4cf8-b832-93fd010f1eb8> | CC-MAIN-2013-20 | http://www.glendaleadventist.com/body.cfm?id=8&action=detail&AEArticleID=003257&AEProductID=Adam2004_105&AEProjectTypeIDURL=APT_1 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.909174 | 1,334 | 3.5625 | 4 |
Gonorrhoea - the drugs don't work
Published: 23rd Dec 2011 08:41:37
The prospect of untreatable gonorrhoea has provoked alarm around the world, and there are no new classes of antibiotics in development.
In this week's Scrubbing Up column, Peter Greenhouse of the British Association for Sexual Health & HIV (BASHH) argues financial incentives will be needed to seek a new cure.
We're all familiar with stories about hospital-acquired superbugs - MRSA and the like - becoming more difficult to treat, and are fearful whenever an elderly relative needs in-patient care.
But now, with a report from Japan of multidrug-resistant gonorrhoea, and the festive season in full swing, the spectre of an untreatable sexually transmitted infection looms over us - and our teenagers - for the first time in a generation.
Since penicillin was first used to treat gonorrhoea in 1943, the organism has gradually developed novel means of evading control by each new antibiotic.
For treatment to be effective and practical, it must be simple to administer by mouth as a single dose, achieving a high enough concentration of the drug in the body to treat over 95% of infections.
If the efficacy drops below this figure, the treatment has to change.
But over-the-counter medication, widely available in Africa and Asia means people self-medicate often taking the wrong dose at the wrong time, perhaps with alcohol which further reduces the concentration of the drug.
Strains of gonorrhoea which need a higher concentration of a drug to kill them become the dominant ones. This keeps happening until the drug no longer works.
If gonorrhoea becomes untreatable in these countries, the effect on increasing HIV rates could be disastrous - because any sexually transmitted infection which causes inflammation and discharge increases the transmission efficiency of HIV.
On average, transmission is five times more likely to occur if gonorrhoea or chlamydia are present
In the UK, the situation is monitored annually by the Health Protection Agency, providing an essential early warning of drugs which are about to fail, allowing a switch of treatment regimes before they become ineffective.
There's a desperate world-wide demand for new antibiotics, yet the drug companies aren't interested”
Ciprofloxacin - a drug introduced in the mid-1980s after the failure of penicillin - lasted in the UK until 2002: This may have survived longer because of the world-wide drop in gonorrhoea rates following the arrival of HIV, when fear of the new virus meant people practised safe-sex and changed partners less.
But it had already failed in the Far East, some four years previously.
Resistance develops faster in homosexual men, not just because of high rates of partner change.
Most people don't realise that oral sex is an important route of transmission for gonorrhoea, which doesn't usually cause a sore throat.
Gonorrhoea mixes with organisms which live naturally in the rectum and throat, picking up new types of antibiotic resistance from these bugs.
The next drug, cefixime, was introduced around 2003, but lasted only six years in the UK before resistance rose suddenly, hitting 25% among homosexual men.
Now, their only treatment option is an injection (Ceftriaxone) which has recently failed in Japan.
But why isn't there a new drug in development?
Since the mid-1980s and the arrival of HIV, almost all drug company research has focused on antiviral medicines, with no new classes of antibiotics being produced since the 1970s, and none on the horizon.
There's a desperate world-wide demand for new antibiotics, yet the drug companies aren't interested, so how could we motivate them?
Financial reality dictates research policy: Why bother to develop a drug which works in one day or one week, when you could make one - such as an antidepressant, statin or antiviral - which must be taken for months, for years, or for life?
So either the new drug(s) would have to be seriously expensive, precluding their use where they would be most needed, or there would have to be a substantial reward offered, perhaps of a magnitude only affordable by a fund such as the Gates Foundation.
Yet even if novel drugs could be produced, the biology and transmission dynamics of gonorrhoea mean that each new regime would probably fail within five-to-ten years of its introduction, unless we use multi-dose, multi-drug regimes, which will be less practical and more expensive to administer.
Faced with this, what can we do to stay sexually healthy? Stay at home, or take your partner to the New Year party: If that's not possible, use condoms - meticulously, and visit your local clinic - frequently.
Harvard CitationBBC News, 2011. Gonorrhoea - the drugs don't work. [Online] (Updated 23 Dec 2011)
Available at: http://www.ukwirednews.com/news.php/212819-Gonorrhoea-the-drugs-dont-work [Accessed 14th May 2013]
At 07:53:07 in BusinessA group of international investors is interested in buying UK water supplier Severn Trent, the company has said....
At 07:49:21 in WalesMajor road safety works have started on a one-mile stretch of coast road at Flintshire which has been the scene of serious accidents....
At 07:48:53 in EnglandStuart Hazell is due to be sentenced later after admitting mid-trial to killing his partner's granddaughter....
At 07:47:03 in WalesAn independent TV company has issued redundancy notices to 10 members of staff due to a cut in work from Welsh language broadcaster S4C....
At 07:43:33 in WalesFurther forensic evidence is expected to be heard later in the trial of the man accused of murdering April Jones....
At 07:41:47 in WalesA Cardiff theatre was asked to explain by the Arts Council of Wales why there was no plan to tackle an £800,000 overspend, it has emerged....
At 07:40:22 in EntertainmentHollywood actress Angelina Jolie has undergone a double mastectomy to reduce her chances of getting breast cancer....
At 07:32:39 in HeadlinesA boat carrying Rohingya Muslims has capsized off western Burma, aid agencies say. ...
At 07:31:47 in BusinessIndia's top drugmaker Ranbaxy Laboratories is to pay a record fine in the US for lying to officials and selling badly made generic drug...
At 07:31:30 in Northern IrelandPolice have issued a description of a man they want to question about the rape of a teenage girl in west Belfast....
News In Other Categories
A group of international investors is interested in buying UK water supplier Severn Trent, the company has said....
Free wi-fi access is to be introduced at 25 of Scotland's busiest railway stations before the end of the year. ...
Major road safety works have started on a one-mile stretch of coast road at Flintshire which has been the scene of serious accidents....
What makes a great foreign minister? Some of those who have held the great office of state, including Lord Carrington, David Owen and David ...
It was a melodious spectacle....
With the doors to its brand new £1million training centre officially open, one of the UK's leading apprentice training providers, Bristol ba... | <urn:uuid:fe523499-838c-400f-bae6-9c2bb3623a4d> | CC-MAIN-2013-20 | http://www.ukwirednews.com/news.php/212819-Gonorrhoea-the-drugs-dont-work | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.953702 | 1,579 | 2.625 | 3 |
SAINT ALEXANDER NEVSKY
Alexander Nevsky (1220-1263) was proclaimed Saint of the Russian Orthodox Church by Metropolite Macarius in 1547
||| Back to the Royal Russia News Archive |||
||| Royal Russia Bulletin - Our Official Blog. Updated Daily With News Clips, Videos & Photographs |||
||| Royal Russia Video & Film Archive ||| Romanov & Imperial Russia Links |||
||| Our Bookshop: Books on the Romanovs & Imperial Russia ||| Gilbert's Books - Publisher of Books on the Romanovs |||
||| What's New @ Royal Russia - Updated Monthly |||
||| Return to Royal Russia - Directory ||| Return to Royal Russia - Main Page |||
Alexander Nevsky (1220-1263)
The life of a saint is always a mystery both for contemporaries and for descendants. Only our Lord and Savoir Jesus Christ can evaluate in full measure all the troubles and accomplishments of his hermit. The life of a saint, although it belongs to the church on our sinful earth, serves as an expression of the will of the Church in Heaven. We recall the saints in the most difficult minutes of our life, when it seemed there is no way out. And that is when God sends us his hermit. This happened to our Russian land more than once.
“In the year 1237,” runs the chronicle, “Batu Khan, cruel and godless, came to the Russian land and there were many Tartars with him. Batu Khan’s army was so big that one Russian had to fight against a thousand Tartars, and two – against a legion. There had never been a battle like that and there were no survivors.
Almost all Russia, except its Northern areas, languished under the Tartar-Mongol yoke for nearly 300 years.
Having conquered the Russian land, the Tartars went down the Volga River where they founded their kingdom – the “Golden Horde”. The town of Sarai was its capital.
Fortunately, the Tartars, though they imposed a heavy tax on Russia, left untouched the Russian Orthodox Church – the guarantee of the future liberation of Russia.
Alexander’s father, Grand Prince Yaroslav, undertook to take care of the devastated Russian land.
The Tartars and Mongols were not the only enemies Russia had to face at the time. Germans and Swedes threatened it from the West. Prince Alexander was 20 years old when he clashed with them for the first time.
Be that time Grand Prince Yaroslav, Alexander’s father, made him Prince of Novgorod. Taught martial arts since early childhood, Alexander was a skilled warrior. He was a man of military bearing and rare beauty. A contemporary wrote about him:
“I traveled about many countries and saw many celebrities, but never did I meet a prince or king equal to Prince Alexander.”
Prince Alexander was a wise and just ruler, and had good manners which made him very popular and highly respected. A contemporary of his wrote:
“He treated priests and monks with love and respect; he was considerate to the poor. And as for metropolitans and bishops, Prince Alexander honored them as he honored Jesus Christ.”
Prince Alexander’s subjects used to say: “Our Prince is sinless.”
In 1240 Prince Birger of Sweden sent his messengers to Prince Alexander with the following address.
“Hey, Prince Alexander! You may resist if you can. But remember that I am already here ready to conquer your land.”
For a long time Prince Alexander prayed in the St. Sofia Cathedral of Novgorod. He recalled the words of Jesus Christ: “No love is greater than the love of a man who gives life for his friends”. When he left the church, he addressed the army in these words:
“God is not force, God is truth.”
Prince Alexander entrusted his hopes to the Holy Trinity and made up his mind to fight.
The two armies met on the banks of the Neva River. One was the army of the proud invader, the other – of the Russian combatants. Our Lord helped his hermit — Prince Alexander. The legend has it that a soldier named Philip had a miraculous vision on the eve of the battle. He was on patrol on the Neva River. At dawn he saw a boat with martyr princes Boris and Gleb, ancestors of Prince Alexander, in full combat gear. Suddenly Philip heard a voice from the boat:
“Gleb, brother of mine! Hurry up, we must help our relative, Grand Prince Alexander.”
And the vision disappeared…
Encouraged by the miraculous vision, Alexander rushed his men to the scene and gave the battle on the banks of the Neva.
The battle was great, says the chronicle, and many people were killed, both Russians and Latins, and Prince Alexander left a scar on the face of their leader with a lance.
At this point I must explain that in medieval Russia all intruders from the West were called Latins.
After the glorious victory in the battle of the Neva Prince Alexander was awarded the honorary title Alexander Nevsky or Alexander of the Neva.
Birger and his warriors were defeated. But another threat to Russia already loomed in the West. The German crusaders (or Teutonic knights) conquered the ancient Russian fortress of Koporye. In 1241 Alexander Nevsky regained it, but in a year the Teutonic knights were back. They also seized the ancient Russian towns Pskov and Izborsk.
By the winter of 1242 Prince Alexander had gathered an army to defend Russia from the German crusaders. And on April 5th Russian warriors and Teutonic knights met in a merciless battle on the ice of Lake Chudskoye, also known as Lake Peipus.
Prince Alexander was praying: God and our Savior Jesus Christ! Help us defend our country, our mothers and fathers, our sons and daughters, as many years ago you helped Moses.
Here’s how one chronicler described the battle on Lake Chudskoye:
“Prince Alexander arranged his men in battle formation and moved towards the enemy. Alexander had many brave men, like King David in ancient times. It was a Saturday. The troops clashed when the sun rose. It was a fierce battle. The crackle of breaking spears and the clanging swords sounded as though the ice began to move. The ice couldn’t be seen for blood…
With God’s help the courageous Russian warriors defeated the Teutonic knights. Many knights drowned, others were taken prisoner, and only few of them escaped.
The contemporaries rejoiced over the victory in what would later be called “The Battle on the Ice”.
Entrance of Alexander Nevsky at Pskov after the Battle on the Ice. Artist: Valentin Serov
Having preserved Russian land in the West, the Grand Prince Alexander clearly realized he ought to maintain peace with the Tartars, the “Golden Horde”. Weak and devastated, Russia was in no position to fight again.
The Tartar yoke was a heavy burden on the Russian people. Before he could become a real ruler of his domain any Russian prince had to go to the “Golden Horde” where, after a long and humiliating procedure he might (or might not) get the so called “Yarlyk” – a license to rule. For many Russian princes, landlords and warriors the way to the “Golden Horde” was the last.
Alexander Nevsky too, went to see Batu Khan, the king of the Tartars. Batu Khan was amazed to see him; he told his nobles: “It’s true what they said, that there’s no one like him.” He paid the prince all due honors and let him go safely.
Alexander Nevsky had to go to Sarai, the capital of the “Golden Horde” on three occasions. And every time he was not sure he would return. But he never lost heart, for he was sure that God would not leave him.
In the “Golden Horde” Prince Alexander always remembered he was not only a prince but also a Christian. He told pagans and Muslims about the Christian faith, about our Lord and Savior Jesus Christ and about the Holy Trinity. That was the beginning of Christianization of oriental peoples. Thousands of them turned their souls to Jesus Christ. Owing to the efforts of Alexander Nevsky the Russian Orthodox episcopate was established in Sarai in 1261.
In 1252 Alexander Nevsky became the absolute ruler of the Russian land. His responsibilities were enormous. He managed to protect southern, eastern and western boundaries of Russia firmly enough. His wise rule breathed a new life into Russia after the Tartar invasion. Churches, monasteries and towns were built all over the country.
Unfortunately, Prince Alexander’s farsighted policy was sometimes disapproved of by his fellow countrymen. In 1261 many Russian towns rose in revolt. The Tartar envoys who had come to collect tribute were killed. People waited for revenge with horror. Prince Alexander had to go to the Tartar capital again in order to ward off devastating raids against Russia. An excellent diplomat, the prince saved Russia and coped with his duty to God and the country.
However years of wars and the affairs of the state undermined Alexander Nevsky’s health. Upon his return from Sarai the Grand Prince fell ill and died in a small monastery of St. Feodor in the town of Gorodets, not far from the ancient city of Vladimir, on November 14, 1263. Just before his death Alexander, in keeping with the ancient Russian tradition, took monastic vows and was named Alexi.
The death of Alexander Nevsky in 1263. Artist: M. V. Nesterov
“Brethren! The sun has set over Russian land! Our Grand Prince Alexander has passes away. No one like him will be found in Russian land.”
And there was so much weeping and groaning as had never been heard before; the land trembled.
Nine days people carried Grand Prince Alexander’s body to the city of Vladimir where the burial service took place in the St. Vladimir Cathedral on November 23. During that service a miracle happened. When a priest about to pull a scroll with the last prayer in the late prince’s hand approached the body, the dead prince stretched out his hand, took the prayer and crossed his hands on his chest again. This episode caused awe and terror in the crowd present…
In 120 years, shortly before a great battle with the Tartars, a monk at the church where the Grand Prince Alexander’s body was buried saw a vision during the night praying.
The candles in front of Prince Alexander’s tomb suddenly lit up and two elders came up to the tomb and said: “Arise, our prince! Hurry to help your relative, Prince Dmitry!” And the saint prince arose and became invisible. After that vision the saint’s tomb was opened and the relics found undecayed. Many sick people who came close to them are said to have been healed.
The Russian Orthodox Church canonized Grand Prince Alexander.
1724 was the time of sweeping and rapid reforms of the Russian emperor Peter the Great. Those were the times of great changes in all spheres of life: spiritual, political, economic. The Emperor and the Holy Synod began to play the leading role in the Russian Church, whereas before the Patriarch was the most important figure. The Emperor and the Holy Synod decided that the relics of Alexander Nevsky be transferred to St. Petersburg, to the monastery built in his honor. On August 30 the capital of the Russian Empire welcomed the boat carrying the holy relics. Peter the Great piloted the boat himself; senior officials assisted him as sailors. A festive religious service for the Grand Prince took place at the monastery of his name. The commemoration of St. Alexander took place shortly after Russia’s victory in the Northern war of 1700-1721 against Sweden.
Almost for 200 years the relics of St. Alexander were kept in the Alexander Nevsky Monastery. After the Bolshevik revolution of 1917 the relics were taken away and put on display at a museum of atheism. As for the Alexander Nevsky Monastery, it was closed down. Alexander Nevsky remained a saint most revered by the Russian people. People named in his honor are too numerous to count.
In 1990 the relics of Saint Alexander Nevsky were returned to the Russian Orthodox Church to take their place in the Alexander Nevsky Lavra in St. Petersburg to the joy of all Russian believers. And we know that owing to St. Alexander’s prayers, our Russian land will be revived and stand firm forever…
Watercolour of the Alexander Nevsky Lavra in St. Petersburg
7 December, 2012 | <urn:uuid:2480913d-15e6-40fb-bd2f-6aef45cf4cc6> | CC-MAIN-2013-20 | http://www.angelfire.com/pa/ImperialRussian/news/512news.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.973254 | 2,727 | 3.078125 | 3 |
Digital Audio Networking Demystified
The OSI model helps bring order to the chaos of various digital audio network options.
Credit: Randall Fung/Corbis
Networking has been a source of frustration and confusion for pro AV professionals for decades. Fortunately, the International Organization of Standardization, more commonly referred to as ISO, created a framework in the early 1980s called the Open Systems Interconnection (OSI) Reference Model, a seven-layer framework that defines network functions, to help simplify matters.
Providing a common understanding of how to communicate to each layer, the OSI model (Fig. 1) is basically the foundation of what makes data networking work. Although it's not important for AV professionals to know the intricate details of each layer, it is vital to at least have a grasp of the purpose of each layer as well as general knowledge of the common protocols in each one. Let's take a look at the some key points.
The Seven Layers
Starting from the bottom up, the seven layers of the OSI Reference Model are Physical, Data Link, Network, Transport, Session, Presentation, and Application. The Physical layer is just that — the hardware's physical connection that describes its electrical characteristics. The Data Link layer is the logic connection, defining the type of network. For example, the Data Link layer defines whether or not it is an Ethernet or Asynchronous Transfer Mode (ATM) network. There is also more than one data network transport protocol. The Data Link layer is divided into two sub-layers: the Media Access Control (MAC) and the Logical Link Control (above the MAC as you move up the OSI Reference Model).
The seven layers of the Open Systems Interconnection (OSI) Reference Model for network functions.
Here is one concrete example of how the OSI model helps us understand networking technologies. Some people assume that any device with a CAT-5 cable connected to it is an Ethernet device. But it is Ethernet's Physical layer that defines an electrical specification and physical connection — CAT-5 terminated with an RJ-45 connector just happens to be one of them. For a technology to fully qualify as an Ethernet standard, it requires full implementation of both the Physical and Data Link layers.
The Network layer — the layer at which network routers operate — “packetizes” the data and provides routing information. The common protocol for this layer is the Internet Protocol (IP).
Layer four is the Transport layer. Keep in mind that this layer has a different meaning in the OSI Reference Model compared to how we use the term “transport” for moving audio around. The Transport layer provides protocols to determine the delivery method. The most popular layer four protocol is Transmission Control Protocol (TCP). Many discuss TCP/IP as one protocol, but actually they are two separate protocols on two different layers. TCP/IP is usually used as the data transport for file transfers or audio control applications.
Comparison of four digital audio technologies using the OSI model as a framework.
TCP provides a scheme where it sends an acknowledge message for each packet received by a sending device. If it senses that it is missing a packet of information, it will send a message back to the sender to resend. This feature is great for applications that are not time-dependent, but is not useful in real-time applications like audio and video.
Streaming media technologies most common on the Web use another method called User Datagram Protocol (UDP), which simply streams the packets. The sender never knows if it actually arrives or not. Professional audio applications have not used UDP because they are typically Physical layer or Data Link layer technologies — not Transport layer. However, a newcomer to professional audio networking, Australia-based Audinate, has recently become the first professional audio networking technology to use UDP/IP technology over Ethernet with its product called Dante.
The Session and Presentation layers are not commonly used in professional audio networks; therefore, they will not be covered in this article. Because these layers can be important to some integration projects, you may want to research the OSI model further to complete your understanding of this useful tool.
The purpose of the Application layer is to provide the interface tools that make networking useful. It is not used to move audio around the network. It controls, manages, and monitors audio devices on a network. Popular protocols are File Transfer Protocol (FTP), Telnet, Hypertext Transfer Protocol (HTTP), Domain Name System (DNS), and Virtual Private Network (VPN), to name just a few.
Now that you have a basic familiarity with the seven layers that make up the OSI model, let's dig a little deeper into the inner workings of a digital audio network.
Breaking Down Audio Networks
Audio networking can be broken into in two main concepts: control and transport. Configuring, monitoring, and actual device control all fall into the control category and use several standard communication protocols. Intuitively, getting digital audio from here to there is the role of transport.
Control applications can be found in standard protocols of the Application layer. Application layer protocols that are found in audio are Telnet, HTTP, and Simple Network Management Protocol (SNMP). Telnet is short for TELetype NETwork and was one of the first Internet protocols. Telnet provides command-line style communication to a machine. One example of Telnet usage in audio is the Peavey MediaMatrix, which uses this technology, known as RATC, as a way to control MediaMatrix devices remotely.
SNMP is a protocol for monitoring devices on a network. There are several professional audio and video manufacturers that support this protocol, which provides a method for managing the status or health of devices on a network. SNMP is a key technology in Network Operation Center (NOC) monitoring. It is an Application layer protocol that communicates to devices on the network through UDP/IP protocols, which can be communicated over a variety of data transport technologies.
Control systems can be manufacturer-specific, such as Harman Pro's HiQnet, QSC Audio's QSControl, or third party such as Crestron's CresNet, where the control software communicates to audio devices through TCP/IP. In many cases, TCP/IP-based control can run on the same network as the audio signal transport, and some technologies (such as CobraNet and Dante) are designed to allow data traffic to coexist with audio traffic.
The organizing and managing of audio bits is the job of the audio Transport. This is usually done by the audio protocol. Aviom, CobraNet, and EtherSound are protocols that organize bits for transport on the network. The transport can be divided into two categories: logical and physical.
Purely physical layer technologies, such as Aviom, use hardware to organize and move digital bits. More often than not, a proprietary chip is used to organize and manage them. Ethernet-based technologies packetize the audio and send it to the Data Link and Physical layers to be transported on Ethernet devices. Ethernet is both a logical and physical technology that packetizes or “frames” the audio in the Data Link layer and sends it to the Physical layer to be moved to another device on the network. Ethernet's Physical layer also has a Physical layer chip, referred to as the PHY chip, which can be purchased from several manufacturers.
Comparing Digital Audio Systems
The more familiar you are with the OSI model, the easier it will be to understand the similarities and differences of the various digital audio systems. For many people, there is a tendency to gloss over the OSI model and just talk about networking-branded protocols. However, understanding the OSI model will bring clarity to your understanding of digital audio networking (Fig. 2).
Due to the integration of pro AV systems, true networking schemes are vitally important. A distinction must be made between audio networking and digital audio transports. Audio networks are defined as those meeting the commonly used standard protocols, where at least the Physical and Data Link layer technologies and standard network appliances (such as hubs and switches) can be used. There are several technologies that meet this requirement using IEEE 1394 (Firewire), Ethernet, and ATM technologies, to name a few. However, because Ethernet is widely deployed in applications ranging from large enterprises to the home, this will be the technology of focus. All other technologies that do not meet this definition will be considered digital audio transport systems, and not a digital audio network.
There are at least 15 schemes for digital audio transport systems and audio networking. Three of the four technologies presented here have been selected because of their wide acceptance in the industry based on the number of manufacturers that support it.
Let's compare four CAT-5/Ethernet technologies: Aviom, EtherSound, CobraNet, and Dante. This is not to be considered a “shoot-out” between technologies but rather a discussion to gain understanding of some of the many digital system options available to the AV professional.
As previously noted, Aviom is a Physical layer–only technology based on the classifications outlined above. It does use an Ethernet PHY chip, but doesn't meet the electrical characteristics of Ethernet. Therefore, it cannot be connected to standard Ethernet hubs or switches. Aviom uses a proprietary chip to organize multiple channels of audio bits to be transported throughout a system, and it falls in the classification of a digital audio transport system.
EtherSound and CobraNet are both 802.3 Ethernet– compliant technologies that can be used on standard data Ethernet switches. There is some debate as to whether EtherSound technology can be considered a true Ethernet technology because it requires a dedicated network. EtherSound uses a proprietary scheme for network control, and CobraNet uses standard data networking methods. The key difference for both the AV and data professional is that EtherSound uses a dedicated network, and CobraNet does not. There are other differences that may be considered before choosing between CobraNet and EtherSound, but both are considered to be layer two (Data Link) technologies.
Dante uses Ethernet, but it is considered a layer four technology (Transport). It uses UDP for audio transport and IP for audio routing on an Ethernet transport, commonly referred to as UDP/IP over Ethernet.
At this point you may be asking yourself why does the audio industry have so many technologies? Why can't there be one standard like there is in the data industry?
The answer to the first question relates to time. Audio requires synchronous delivery of bits. Early Ethernet networks weren't much concerned with time. Ethernet is asynchronous, meaning there isn't a concern when and how data arrives as long as it gets there. Therefore, to put digital audio on a data network requires a way to add a timing mechanism. Time is an issue in another sense, in that your options depend on technology or market knowledge available at the time when you develop your solution. When and how you develop your solution leads to the question of a single industry standard.
Many people don't realize that the data industry does in fact have more than one standard: Ethernet, ATM, FiberChannel, and SONET. Each layer of the OSI model has numerous protocols for different purposes. The key is that developers follow the OSI model as a framework for network functions and rules for communicating between them. If the developer wants to use Ethernet, he or she is required to have this technology follow the rules for communicating to the Data Link layer, as required by the Ethernet standard.
Because one of the key issues for audio involves time, it's important to use it wisely.
There are two types of time that we need to be concerned with in networking: clock time and latency. Clock time in this context is a timing mechanism that is broken down into measurable units, such as milliseconds. In digital audio systems, latency is the time duration between when audio or a bit of audio goes into a system until the bit comes out the other side. Latency has many causes, but arguably the root cause in audio networks is the design of its timing mechanism. In addition, there is a tradeoff between the timing method and bandwidth. A general rule of thumb is that as the resolution of the timing mechanism increases, the more bandwidth that's required from the network.
Ethernet, being an asynchronous technology, requires a timing method to be added to support the synchronous nature of audio. The concepts and methodology of clocking networks for audio are key differences among the various technologies.
CobraNet uses a time mechanism called a beat packet. This packet is sent out in 1.33 millisecond intervals and communicates with CobraNet devices. Therefore, the latency of a CobraNet audio network can't be less than 1.33 milliseconds. CobraNet was introduced in 1995 when large-scale DSP-based digital systems started replacing analog designs in the market. Because the “sound system in a box” was new, there was great scrutiny of these systems. A delay or latency in some time-critical applications was noticed, considered to be a challenge of using digital systems. However, many believe that latency is an overly exaggerated issue in most applications where digital audio systems are deployed. In fact, this topic could be an article unto itself.
A little history of digital systems and networking will provide some insight on the reason why there are several networking technologies available today. In the late '90s, there were two “critical” concerns in the digital audio industry: Year of 2000 compliance (Y2K) and latency. To many audio pros, using audio networks like CobraNet seemed impossible because of the delay —at that time, approximately 5 milliseconds, or in video terms, less time than a frame of video.
Enter EtherSound, introduced in 2001, which addressed the issue of latency by providing an Ethernet networking scheme with low latency and better bit-depth and higher sampling rate than CobraNet. The market timing and concern over latency gave EtherSound an excellent entry point. But since reducing latency down to 124 microseconds limits available bandwidth for data traffic, a dedicated network is required for a 100-MB EtherSound network. Later, to meet the market demands of lower latency requirements, CobraNet introduced variable latency, with 1.33 milliseconds being the minimum. With the Ethernet technologies discussion thus far, there is a relationship between the bit-depth and sample rate to the clocking system.
Audio is not the only industry with a need for real-time clocking schemes. Communications, military, and industrial applications also require multiple devices to be connected together on a network and function in real-time. A group was formed from these markets, and they took on the issue of real-time clocking while leveraging the widely deployed Ethernet technology. The outcome was the IEEE 1588 standard for a real-time clocking system for Ethernet networks in 2002.
As a late entry to the networking party, Audinate's Dante comes to the market with the advantage of using new technologies like IEEE 1588 to solve many of the current challenges in networking audio. Using this clocking technology in Ethernet allows Dante to provide sample accurate timing and synchronization while achieving latency as low as 34 microseconds. Coming to the market later also has the benefit of Gigabit networking being widely supported, which provides the increased bandwidth requirement of ultra-low latency. It should be noted here that EtherSound does have a Gigabit version, and CobraNet does work on Gigabit infrastructure with added benefits but it is currently a Fast Ethernet technology.
Dante provides a flexible solution to many of the current tradeoffs that require one system on another due to design requirements of latency verses bandwidth, because Dante can support different latency, bit depth, and sample rates in the same system. For example, this allows a user to provide a low-latency, higher bandwidth assignment to in-ear monitoring while at the same time use a higher latency assignment in areas where latency is less of a concern (such as front of house), thereby reducing the overall network bandwidth requirement.
The developers of CobraNet and Dante are both working toward advancing software so that AV professionals and end-users can configure, route audio, and manage audio devices on a network. The goal is to make audio networks “plug-and-play” for those that don't want to know anything about networking technologies. One of the advances to note is called “device discovery,” where the software finds all of the audio devices on the network so you don't have to configure them in advance. The software also has advance features for those who want to dive into the details of their audio system.
Advances in digital audio systems and networking technologies will continue to change to meet market applications and their specific requirements. Aviom's initial focus was to create a personal monitoring system, and it developed a digital audio transport to better serve this application. Aviom's low-latency transport provided a solution to the market that made it the perfect transport for many live applications. CobraNet provides the AV professional with a solution to integrate audio, video, and data systems on an enterprise switched network. EtherSound came to the market by providing a low-latency audio transport using standard Ethernet 802.3 technology. Dante comes to the market after significant change and growth and Gigabit networking and new technologies like IEEE 1588 to solve many of challenges of using Ethernet in real-time systems.
Networking audio and video can seem chaotic, but gaining an understanding of the OSI model helps bring order to the chaos. It not only provides an understanding of the various types of technology, but it also provides a common language to communicate for both AV and data professionals. Keeping it simple by using the OSI model as the foundation and breaking audio networking down into two functional parts (control and transport) will help you determine which networking technology will best suit your particular application.
Brent Harshbarger is the founder of m3tools located in Atlanta. He can be reached at email@example.com. | <urn:uuid:471ac2ee-9ae9-4ba4-873f-f68c3ed27f25> | CC-MAIN-2013-20 | http://svconline.com/proav/audiovisual-equipment_digital-audio-networking-demystified/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.936211 | 3,677 | 3.375 | 3 |
General Information / Education / Medical / Cultural / Entertainment
The History ... Cumberland Gap has been used as a crossing point in the Appalachian Mountains. Animals have used it as a path to the green pastures of Kentucky. Native Americans used the Gap as the Warrior's Path that led from the Potomac River down the south side of the Appalachians through the Gap and north to "The Dark and Bloody Ground" known as Kentucky and on to Ohio.
In 1750 Dr. Thomas Walker found the Gap and mapped its location, but the French and Indian Wars closed the new frontiers.
Daniel Boone and many other long-hunters used the Gap to the Kentucky hunting grounds. In 1775, after the Treaty of Sycamore Shoals ended most Indian troubles, Boone and thirty men marked out the Wilderness Trail from what is now Kingsport Tennessee through the Cumberland Gap to Kentucky. Part of the Wilderness Road can be walked in Cumberland Gap, Tennessee by the Iron Furnace.
Before the Revolutionary War over 12,000 people crossed into the new frontier territory. By the time of Kentucky's admission to the Union, over 100,000 people had passed through the Gap. By 1800 the Gap was being used for transportation and commerce, both east and west. In the 1830's, other routes west caused the Gap to decline in importance.
During the Civil War the Gap was called the Keystone of the Confederacy and the Gibraltar of America. Both armies felt the invasion of the North or South would come through the Gap. Both armies held and fortified the Gap against the invasion that never came.
The Gap exchanged hands four times to be finally abandoned in 1866 by the Federal Army.
Today the Cumberland Gap is the main local route North and South, via Cumberland Gap Parkway (Hwy. 25E). By the mid 1990's a four lane tunnel under the Gap will open a new North-South, East-West route and the Cumberland Gap will be restored like the first pioneer saw it.
Claiborne County located on the Tennessee-Kentucky-Virginia borders in East Tennessee, one of the state's three "Grand Divisions." It was formed in 1801 from parts of Hawkins and Grainger Counties. The county seat is Tazewell.
The communities of Tazewell and New Tazewell are in Claiborne County, Tennessee. We are located in the beautiful mountains of the Cumberland Gap area. Cumberland Gap is located where Tennessee, Kentucky, and Virginia meet.
Claiborne County is a rural county with a population of 28,828. The county covers 2400 square miles. Tazewell, the county seat, is located about 40 miles north of Knoxville, Tennessee. Along with our beautiful mountains we have beautiful Norris Lake with 850 miles of shoreline. Norris Lake was the first T.V.A. lake built in the late 1930's. The lake is fed by two large rivers, the Clinch and the Powell. The lake is enjoyed by fisherman and water lovers of all ages.
Some of the larger communities in the county are Tazewell, New Tazewell, Harrogate, Speedwell, Forge Ridge, Midway, Springdale, Cumberland Gap, Cedar Grove, Dogwood Heights, and Lone Mountain.
Population in Claiborne County 28,828
Communi Comm Services
Claiborne County Utility District
United Cities Propane Gas
The Claiborne County area is home to 11 schools. The Claiborne County Board of Education consists of 7 members.
For additional information contact our superintendent of schools is Dr. Roy K. Norris. You can contact the central office at Box 179, Tazewell, Tennessee 37879. The phone number is (423)626-5225.
Welcome to Lincoln Memorial University (LMU). For more than 100 years, LMU has helped serve the higher education needs of our tri-state area and beyond. We are excited by that heritage, and we invite you to share it! The University offers a talented, dedicated faculty and staff, a strong and varied curriculum, a well-rounded student life, a beautiful campus, and excellent facilities.
In keeping with its Lincoln legacy, LMU prides itself in providing well developed and relevant academic programs for today's students destined to compete in tomorrow's competitive workplace. Some of our nation's most competent lawyers, doctors, nurses, artists, veterinarians, business persons, and writers have their academic roots at Lincoln Memorial University
Claiborne County Hospital and Nursing Home 1850 Old Knoxville Road P.O. Box 219 Tazewell, TN 37879 (865) 626-4211
The Abraham Lincoln Library and Museum houses one of the most diverse Lincoln and Civil War collections in the country. Located on the beautiful campus of Lincoln Memorial University in Harrogate, Tennessee.
Exhibited are many rare items - the silver-topped cane Lincoln carried the night of his assassination, a lock of his hair clipped as he lay on his death bed, two life masks made of Lincoln, the tea set he and Mary Todd owned in their home in Springfield, and numerous other belongings. Over 20,000 books, manuscripts, pamphlets, photographs, paintings, and sculptures tell the story of President Lincoln and the Civil War period in America.
The Cumberland Gap National Historical Park in Cumberland Gap is a natural opening in the mountains made famous by Daniel Boone. The Indians used this path long before Boone arrived. Today, you can visit the Cumberland Gap National Historical Park and enjoy the history and beauty of our area.
If your interested in fishing, boating, or any water activity, then Norris Lake offers all that and more. There are several marinas and boat docks throughout the county.
Toll Free 800-747-0713 | <urn:uuid:8f8ec58b-1a8f-4afd-8a3e-5e77c6bbf2ca> | CC-MAIN-2013-20 | http://www.knoxville-tn.com/claiborne.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.941477 | 1,195 | 3.90625 | 4 |
Variation is a term used in genetic science, and concerns the emergence of different varieties, or species. This genetic phenomenon causes individuals or groups within a given species to possess different features from others. For example, all human beings on Earth possess essentially the same genetic information. But thanks to the variation potential permitted by that genetic information, some people have round eyes, or red hair, or a long nose, or are short and stocky in stature.
Darwinists, however, seek to portray variation within a species as evidence for evolution. The fact is, however, that variations constitute no such thing, because variation consists of the emergence of different combinations of genetic information that already exists, and cannot endow individuals with any new genetic information or characteristics.
Variation is always restricted by existing genetic information. These boundaries are known as the gene pool in genetic science. (See The Gene Pool.) Darwin, however, thought that variation had no limits when he proposed his theory267, and he depicted various examples of variation as the most important evidence for evolution in his book The Origin of Species.
All human beings on Earth share basically the same genetic information, but thanks to the variation potential permitted by this genetic information, they often look very different from one another.
According to Darwin, for example, farmers mating different variations of cow in order to obtain breeds with better yields of milk would eventually turn cows into another species altogether. Darwin’s idea of limitless change stemmed from the primitive level of science in his day. As a result of similar experiments on living things in the 20th century, however, science revealed a principle known as genetic homeostasis. This principle revealed that all attempts to change a living species by means of interbreeding (forming different variations) were in vain, and that between species, there were unbreachable walls. In other words, it was absolutely impossible for cattle to evolve into another species as the result of farmers mating different breeds to produce different variations, as Darwin had claimed would happen.
Luther Burbank, one of the world’s foremost authorities on the subject of genetic hybrids, expresses a similar truth: “there are limits to the development possible, and these limits follow a law.” 268 Thousands of years of collective experience have shown that the amount of biological change obtained using cross-breeding is always limited, and that there is a limit to the variations that any one species can undergo.
Indeed, in the introduction to their book Natural Limits to Biological Change Professor of Biology Lane P. Lester and the molecular biologist Raymond G. Bohlin wrote:
That populations of living organisms may change in their anatomy, physiology, genetic structure, etc., over a period of time is beyond question. What remains elusive is the answer to the question, How much change is possible, and by what genetic mechanism will these changes take place? Plant and animal breeders can marshal an impressive array of examples to demonstrate the extent to which living systems can be altered. But when a breeder begins with a dog, he ends up with a dog—a rather strange looking one, perhaps, but a dog nonetheless. A fruit fly remains a fruit fly; a rose, a rose, and so on.269
Variations and their various changes are restricted inside the bounds of a species’ genetic information, and they can never add new genetic information to species. For that reason, no variation can be regarded as an example of evolution.
The Danish scientist W. L. Johannsen summarizes the situation:
The variations upon which Darwin and Wallace placed their emphasis cannot be selectively pushed beyond a certain point, that such variability does not contain the secret of “indefinite departure.” 270
The fact that there are different human races in the world or the differences between parents and children can be explained in terms of variation. Yet there is no question of any new component being added to their gene pool. For example, no matter how much you seek to enrich their species, cats will always remain cats, and will never evolve into any other mammal. It is impossible for the sophisticated sonar system in a marine mammal to emerge through recombination. (See Recombination.) Variation may account for the differences between human races, but it can never provide any basis for the claim that apes developed into human beings.
Vestigial Organs Thesis
One claim that long occupied a place in the literature of evolution but was quietly abandoned once it was realized to be false is the concept of vestigial organs. Some evolutionists, however, still imagine that such organs represent major evidence for evolution and seek to portray them as such.
A century or so ago, the claim was put forward that some living things had organs that were inherited from their ancestors, but which had gradually become smaller and even functionless from lack of use.
The tonsils, which evolutionists long sought to define as vestigial organs, have been found to play an important role in protecting against throat in fections, particularly up until adulthood.
Those organs were in fact ones whose functions had not yet been identified. And so, the long list of organs believed by evolutionists to be vestigial grew ever shorter. The list of originally proposed by the German anatomist R. Wiedersheim in 1895 contain approximately 100 organs, including the human appendix and the coccyx. But the appendix was eventually realized to be a part of the lymph system that combats microbes entering the body, as was stated in one medical reference source in 1997:
Other bodily organs and tissues—-the thymus, liver, spleen, appendix, bone marrow, and small collections of lymphatic tissue such as the tonsils in the throat and Peyer’s patch in the small intestine—are also part of the lymphatic system. They too help the body fight infection. 271
The tonsils, which also appeared on that same list of vestigial organs, were likewise discovered to play an important role against infections, especially up until adulthood. (Like the appendix, tonsils sometimes become infected by the very bacteria they seek to combat, and so must be surgically removed.) The coccyx, the end of the backbone, was seen to provide support for the bones around the pelvic bone and to be a point of fixation for certain small muscles.
In the years that followed, other organs regarded as vestigial were shown to serve specific purposes: The thymus gland activates the body’s defense system by setting the T cells into action. The pineal gland is responsible for the production of important hormones. The thyroid establishes balanced growth in babies and children. The pituitary ensures that various hormone glands are functioning correctly.
Today, many evolutionists accept that the myth of vestigial organs stemmed from sheer ignorance. The evolutionist biologist S.R. Scadding expresses this in an article published in the magazine Evolutionary Theory:
Since it is not possible to unambiguously identify useless structures, and since the structure of the argument used is not scientifically valid, I conclude that ‘vestigial organs’ provide no special evidence for the theory of evolution.272
Evolutionists also make a significant logical error in their claim that vestigial organs in living things are a legacy from their ancestors: Some organs referred to as “vestigial” are not present in the species claimed to be the forerunners of man.
For example, some apes have no appendix. The zoologist Professor Hannington Enoch, an opponent of the vestigial organ thesis, sets out this error of logic:
Apes possess an appendix, whereas their less immediate relatives, the lower apes, do not; but it appears again among the still lower mammals such as the opossum. How can the evolutionists account for this? 273
The scenario of vestigial organs put forward by evolutionists contains its own internal inconsistencies, besides being scientifically erroneous. We humans have no vestigial organs inherited from our supposed ancestors, because humans did not evolve randomly from other living things, but were fully and perfectly created in the form we have today.
It has now been realized that the appendix (below), which evolutionist biologists imagined to be vestigial, plays an important role in the body's immune system. The lowest bone in the spinal column, known as the coccyx, is al so not vestigial, but a point for muscles to at tach to. | <urn:uuid:cbaed394-8c49-43f9-8158-1c78ccd127c2> | CC-MAIN-2013-20 | http://harunyahya.com/en/books/14789/The_Evolution_Impasse_II/chapter/4896 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.957133 | 1,725 | 4.21875 | 4 |
Doing u-substitution twice (second time with w): Example where we do substitution twice to get the integral into a reasonable form
- Let's see if we can take the integral of cosine of 5x over e to the sine of 5x dx.
- And there's a crow squawking outside of my window so I'll try to stay focused.
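Written out, the integral being evaluated in this video is:
\[ \int \frac{\cos(5x)}{e^{\sin(5x)}}\,dx \]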
- So let's think about whether u-substitution might be appropriate. Your first temptation might be to say,
- "Hey, maybe we let u equal sine of 5x, and if u is equal to sine of 5x,
- we have something that is pretty close to du up here." Let's verify that.
- So du could be equal to -- so du/dx (derivative of u with respect to x),
- well we just use the chain rule. Derivative of 5x is 5,
- times the derivative of sine of 5x with respect to 5x, that's just going to be cosine of 5x.
- If we want to write this in differential form, which is useful when we do our u-substitution,
- we could say that du is equal to 5 cosine 5x.
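In symbols, this first substitution and its differential are:
\[ u = \sin(5x), \qquad \frac{du}{dx} = 5\cos(5x), \qquad du = 5\cos(5x)\,dx \]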
- Now when you look over here, we don't have quite du there. We have just cosine of 5x dx--
- sorry, I need cosine of 5x dx, just like that. So when you look over here,
- you have a cosine of 5x dx, but we don't have a 5 cosine of 5x dx,
- but we know how to solve that. We can multiply by 5 and divide by 5.
- 1/5 times 5 is just going to be 1. So we haven't changed the value of the expression.
- But when we do it this way, we see pretty clearly, we have our u and we have our du.
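The multiply-by-5, divide-by-5 step described here can be written as:
\[ \int \frac{\cos(5x)}{e^{\sin(5x)}}\,dx = \frac{1}{5}\int \frac{5\cos(5x)}{e^{\sin(5x)}}\,dx \]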
- Our du is 5 -- let me circle that and let me do that in that blue color --
- is 5 cosine of 5x dx. So we can rewrite this entire expression as --
- I'll do that 1/5 in purple -- this is going to be equal to 1/5 --
- I hope you don't hear that crow outside; he's getting quite obnoxious --
- 1/5 times the integral of, well all this stuff in blue is my du,
- and then that is over e to the u. So how do we take the anti-derivative of this?
- Well, you might be tempted to -- well, what would you do here?
- Well, we're still not quite ready to simply take the anti-derivative here.
- If I were to rewrite this, I could rewrite this as (this is equal to)
- 1/5 times the integral of e to the negative u du.
- And so, what might jump out of you is maybe we do another substitution,
- and we already use the letter u, so maybe we might use w. We'll do some "w-substitution."
- And you might be able to do this in your head, but we'll do w-substitution just to make it a little bit clearer.
- So let's -- this would've been really useful if this was just e to the u,
- because we know the anti-derivative of e to the u. It's just e to the u.
- So let's just try to get it in terms of the form of e to the negative something.
- So let's set -- and I'm running out of colors here -- w equal to negative u.
- And in that case, then dw (derivative of w with respect to u) is negative 1,
- or if we were to write that statement in differential form,
- dw is equal to du times negative 1 is negative du.
- So this right over here would be our w, and do we have a dw here?
- Well we just have du; we don't have a negative du there.
- But we can create a negative du by multiplying this inside by negative 1,
- but then also multiplying the outside by negative 1.
- Negative 1 times negative 1 is positive 1; we haven't changed the value.
- We have to do both of these in order for it to make sense.
- Or I could do it like this. So negative 1 over here, and a negative 1 right over there.
- And if we do it in that form, then this negative 1 times du --
- that's the same thing as negative du -- this is this right over here.
- And so we can rewrite our integral -- it's going to be equal to --
- now it's going to be negative 1/5 -- trying to use the colors as best as I can --
- times the indefinite integral of e to the -- well instead of negative u, we could right w.
- E to the w. And instead of du times negative 1 or negative du, we can write "dw."
- Now this simplifies things a good bit. We know what the anti-derivative of this in terms of w.
- This is going to be equal to negative 1/5 e to the w, and then we might have some constant there,
- so I just do a plus C. And now we just have to all of our un-substituting.
- So we know that w is equal to negative u, so we could write that --
- so this is equal to negative 1/5 -- I want to stay true to my colors -- e to the negative u,
- that's what w is equal to, plus C. But we're still not done un-substituting.
- We know that u is equal to sine of 5x. So we can write this as being equal to
- negative 1/5 times e to the negative u, which is negative u is sine of 5x,
- and then finally, we have our plus C. Now, there was a simpler way that we could've done this
- by just doing one substitution. But then you kind of have to look ahead a little bit
- and realize that it was not trivial to take -- not to bad to take your anti-derivative of e to the negative u.
- The inside that you might of have although you shouldn't really hold yourself
- when you feel too bad when you didn't see that inside.
- We could've rewritten that original integral -- let me rewrite it --
- it's cosine of 5x over e to the sine of 5x dx. We could've written this entire integral as being equal to
- cosine of 5x times e to the negative sine of 5x dx. And in this situation, we could've said
- u to be equal to negative of 5x, and say well, if u is equal to --
- or negative sine of 5x, then du is going to be equal to negative 5 cosine of 5x,
- and we don't have a negative 5 -- oh, dx, we don't have a negative 5 here,
- but we can construct one by putting negative 5 there, then multiplying by negative 1/5,
- and then that would've immediately simplified this integral right over here to be equal to
- negative 1/5 times the integral of -- well, we have our du -- let me do this in a different color --
- that's the negative 5 -- let me do it this way -- negative 5 cosine of 5x dx.
- So that is our du -- I'm just changing the order of multiplication -- times e to the u.
- This whole thing now is u this second time around. So if we did it this way, with just one substitution,
- we could've immediately gotten to the result that we wanted. You take the anti-derivative of this --
- I'll do it in one color now, just 'cause I think you get the idea -- this is equal to
- negative 1/5 e to the u plus C. u is equal to negative sine of 5x,
- so this is equal to negative 1/5 e to the negative sine of 5x plus C. And we're done.
- So this one is faster; it's simpler, and over time, you might even start being able to do this in your head.
- This top one, you still didn't mess up by just setting u equal to sine of 5x;
- we just have to do an extra substitution in order to work it through all the way.
- And I was able to do this video despite the crowing crow outside -- or squawking crow.
Be specific, and indicate a time in the video:
At 5:31, how is the moon large enough to block the sun? Isn't the sun way larger?
Have something that's not a question about this content?
This discussion area is not meant for answering homework questions.
Share a tip
When naming a variable, it is okay to use most letters, but some are reserved, like 'e', which represents the value 2.7831...
Have something that's not a tip or feedback about this content?
This discussion area is not meant for answering homework questions.
Discuss the site
For general discussions about Khan Academy, visit our Reddit discussion page.
Flag inappropriate posts
Here are posts to avoid making. If you do encounter them, flag them for attention from our Guardians.
- disrespectful or offensive
- an advertisement
- low quality
- not about the video topic
- soliciting votes or seeking badges
- a homework question
- a duplicate answer
- repeatedly making the same post
- a tip or feedback in Questions
- a question in Tips & Feedback
- an answer that should be its own question
about the site | <urn:uuid:9d906c38-8304-413e-95f2-fa363a9fe708> | CC-MAIN-2013-20 | http://www.khanacademy.org/math/calculus/integral-calculus/u_substitution/v/doing-u-substitution-twice--second-time-with-w | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.944163 | 2,163 | 2.75 | 3 |
After failing to win reelection to the Congress Morris moved to Philadelphia and resumed his law practice. A series of newspaper articles on finance secured him the post of assistant to Robert Morris (no relative) in handling the finances of the new government (1781-85). In this position he planned the U.S. decimal coinage system. As a member of the U.S. Constitutional Convention of 1787 Morris played an active role, defending a strong centralized government and a powerful executive, opposing concessions on slavery, and putting the Constitution into its final literary form. He remained, however, a champion of aristocracy who distrusted democratic rule.
In 1789 Moris went to France as a private business agent, remained in Europe, and was appointed (1792) U.S. minister to France. During the French Revolution his sympathies lay with the royalists; he even helped plan a scheme to rescue Louis XVI. His recall was requested in 1794, but he traveled for several years before returning to America in 1798. From 1800 to 1803, Morris, a Federalist, was a U.S. senator from New York. He then retired to his estate. He condemned the War of 1812, going so far as to recommend the severance of the federal union. Morris was a strong advocate of the Erie Canal and served as chairman (1810-13) of the canal commission.
See his Diary of the French Revolution (1939), edited by his great-granddaughter, Beatrix Cary Davenport; biographies by T. Roosevelt (1888, repr. 1972), D. Walther (tr. 1934), and R. Brookhiser (2003); M. M. Mintz, Gouverneur Morris and the American Revolution (1970).
(born , Jan. 31, 1752, Morrisania house, Manhattan—died Nov. 6, 1816, Morrisania house) American statesman and financial expert. He was admitted to the bar (1771) and served in the New York Provincial Congress (1775–77) and the Continental Congress (1778–79). He distrusted the democratic tendencies of colonists who wanted to break with England, but his belief in independence led him to join their ranks. As assistant superintendent of finance (1781–85), he proposed the decimal coinage system that became the basis for U.S. currency. A delegate to the Constitutional Convention, he helped write the final draft of the Constitution of the United States. He served as minister to France (1792–94) and as a U.S. Senator (1800–03), and he was the first chairman of the Erie Canal commission (1810–16).
Learn more about Morris, Gouverneur with a free trial on Britannica.com.
Born in what is now part of New York City in 1752, Gouverneur Morris was of Welsh and Huguenot background. Morris graduated from King's College, known since the American Revolution as Columbia University, in 1768. He practiced law in the city starting in 1771.
Morris had a wooden leg as a result of an accident that occurred while he was climbing onto a carriage without anyone tending to the horses, which suddenly took off, catching his left leg in one of the carriage wheels on May 14, 1780. Physicians told Morris that they had no choice but to remove the leg below the knee.
On May 8, 1775, Morris was elected to represent his family estate in the New York Provincial Congress, an extralegal assembly dedicated to achieving independence. His advocacy of independence brought him into conflict with his family, as well as his mentor William Smith, who had abandoned the patriot cause when it moved towards independence.
Despite an automatic exemption from military duty because of his handicap and his service in the legislature, he joined a special "briefs" club for the protection of New York City, a forerunner of the modern New York Guard.
After the Battle of Long Island in August 1776, the British seized New York City and his family's estate. His mother, a Loyalist, gave the estate over to the British for military use. Because his estate was now in the possession of the enemy, he was no longer eligible for election to the New York state legislature and was instead appointed as a delegate to the Continental Congress.
He took his seat in Congress on January 28, 1778 and was immediately selected to a committee in charge of coordinating reforms in the military with General Washington. On a trip to Valley Forge, he was so appalled by the conditions of the troops that he became the spokesman for the Continental Army in Congress and pushed for substantial reforms in the training and methods of the army. He also signed the Articles of Confederation in 1778.
In 1779, he was defeated for re-election to Congress, largely because his advocacy of a strong central government was at odds with the decentralist views in New York. Defeated in his home state, he moved to Philadelphia to work as a lawyer and merchant.
In Philadelphia, he was appointed assistant superintendent of finance (1781-1785), and was a Pennsylvania delegate to the Constitutional Convention in 1787, before returning to live in New York in 1788.
During the convention, he was a friend and ally of George Washington and others who favored a stronger central government. Morris was elected to serve on a committee of five (chaired by William Samuel Johnson) that would draft the final language of the proposed Constitution. Catherine Drinker Bowen, in Miracle at Philadelphia, called Morris the committee's "amanuensis," meaning that it was his pen that was responsible for most of the draft.
"An aristocrat to the core," Morris believed that "there never was, nor ever will be a civilized Society without an Aristocracy". He also thought that common people were incapable of self-government and feared that the poor would sell their votes to rich people, and consequently thought that voting should be restricted to property owners. Morris also opposed admitting new Western states on an equal basis with the existing Eastern states, fearing that the interior wilderness could not furnish "enlightened" statesmen. At the Convention he gave more speeches than any other delegate, totaling 173.
He went to Europe on business in 1789 and served as Minister Plenipotentiary to France from 1792-1794. His diaries written during that time have become an invaluable chronicle of the French Revolution, capturing much of the turbulence and violence of that era. He returned to the United States in 1798 and was elected in 1800 as a Federalist to the United States Senate to fill the vacancy caused by the resignation of James Watson, serving from April 3, 1800, to March 3, 1803. He was an unsuccessful candidate for reelection in 1802. After leaving the Senate, he served as chairman of the Erie Canal Commission, 1810-1813.
At the age of 57, he married Anne Cary ("Nancy") Randolph, who was the sister to Thomas Mann Randolph, husband of Thomas Jefferson's daughter Martha Jefferson Randolph. He died at the family estate of Morrisania and is buried at St. Ann's Episcopal Church in the Bronx borough of New York City.
Morris's half-brother, Lewis Morris (1726-1798), was a signer of the Declaration of Independence. Another half-brother, Staats Long Morris, was a Loyalist and major-general in the British army during the American Revolution. His nephew, Lewis Richard Morris, served in the Vermont legislature and in the United States Congress. His grandnephew was William M. Meredith, United States Secretary of the Treasury under Zachary Taylor. Morris's great-grandson, also named Gouverneur (1876-1953), was an author of pulp novels and short stories during the early twentieth century. Several of his works were adapted into films, including the famous Lon Chaney, Sr. film The Penalty.
Envoy to the Terror: Gouverneur Morris and the French Revolution.(Napoleon's Troublesome Americans: Franco-American Relations, 1804-1815)(Book review)
Jun 22, 2007; Envoy to the Terror: Gouverneur Morris and the French Revolution. By Melanie Randolph Miller. (Dulles, VA: Potomac Books, 2005.... | <urn:uuid:ca50703c-be5e-4ae8-b152-642554eb74ed> | CC-MAIN-2013-20 | http://www.reference.com/browse/Gouverneur+Morris | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.978672 | 1,712 | 2.8125 | 3 |
Title IX built generation of better athletesby Alex Friedrich, Minnesota Public Radio
ST. PAUL, Minn. — For the girls at this year's Minnesota high school track and field championships, Title IX is a lesson in a history book.
The sports it helped open up now dominate the lives of many of the girls.
Take 18-year-old Aitkin, Minn., High School senior Emily Lundgren, who runs the 400 meters.
Lundgren, who has been captain of her track and basketball teams and also plays tennis, said sports have been a big part of her life since the first grade.
"That's what you do in your summer — you practice your sports," she said. "They mean a lot to me, to say the least. They've really made me who I am today."
Lundgren and her teammates are arguably better athletes because of Title IX. It has given them better coaching, training facilities and even sports and nutrition research than girls received 40 years ago.
The change is the result of Title IX, a landmark piece of federal legislation enacted 40 years ago tomorrow.
Designed to prevent gender discrimination in the nation's education system, Title IX has paid dividends. Ten times more young women now play high-school sports than in 1972, according to statistics from National Federation of State High School Associations. There are six times as many female college athletes in NCAA colleges.
But it has become better known for opening up sports to women. Critics, meanwhile, say that progress has come at the cost of men's programs.
Stronger high school athletes grow into better college players -- and a few into professional athletes.
One example of the resources now available is the basketball camp held for young girls this month at the University of St. Thomas in St. Paul. Varsity women's basketball players helped run the girls through drills, teach them special moves and immerse them in the game.
One of the teachers is Alyssa Favilla, a St. Thomas sophomore varsity forward who said sports gave her structure.
"They taught me time management — and how to be a competitor," she said. "And I use that in the classroom, too."
With each generation since Title IX, female athletes have grown bigger, stronger and faster, said Mary Jo Kane, director of the University of Minnesota's center for research on women's sports.
"They are infinitely more gifted than they have ever been," Kane said. "And it is a direct result of the decision of Title IX."
Kane, 61, knows what it's like to have been excluded. She grew up a tomboy in Illinois in the 1950s and '60s and played touch football, basketball and baseball in her neighborhood. But her high school had no organized sports for girls.
The choice was little better for JoAnn Andregg, now associate athletic director at the University of St. Thomas. At her California high school, women's volleyball and basketball held second-class status: No coaches. No budget. No rights to the gym.
"That usually meant we got in there after the boys' team was finished practicing," Andregg said. "Then we got in there. And we had to wear these God-awful uniforms, these one-piece uniforms for practice and games. I remember that so distinctly. "
Title IX changed that. Now, high schools and colleges typically provide sports opportunities proportional to the number of enrolled students of each gender.
In the last four decades, the number of women's college teams per campus has almost quadrupled, according to a report by Brooklyn College professors R. Vivian Acosta and Linda Jean Carpenter. But that has led to criticism that growth has come at the expense of men's programs.
While women now outnumber men on many college campuses, critics note that fewer women play college sports.
The American Sports Council, a group that wants to Congress to overhaul Title IX, has filed two lawsuits contesting the requirement of proportional sports offerings. Council media director Jim McCarthy said cash-strapped colleges have limited or slashed men's sports such as swimming, volleyball and soccer to make room for women's sports such as rowing, ice hockey and bowling.
McCarthy said Title IX requirements have led to a quota system that excludes male athletes who normally would have a shot at sports.
"That's been an absolute outrage, because the law is supposed to protect against discrimination," he said.
The solution, McCarthy said, is to end proportionality -- or at least survey all students about their real sports interests.
"Let the students themselves have a voice in how the law is applied," he said.
Kane and other Title IX supporters don't oppose surveys. But she said the methodology of past attempts has been flawed, and new surveys would have to be constructed and administered correctly.
Title IX supporters also say colleges are
Kane says colleges could cut the fat from large football programs and use the savings to pay for women's sports without dropping men's.
Students still seem to prefer men's sports, judging from what some college athletes say.
"We've earned a lot of respect," Favilla said. "But I can see how they don't see us as equal sometimes. We don't dunk or don't sprint as fast. But usually they treat us well."
Andregg, of St. Thomas, is concerned with one ironic outcome of the legislation.
Thanks to Title IX, women's college sports have become more competitive. So men now want to coach them. According to the Brooklyn College report, more than half of women's college sports teams have male coaches. Before Title IX, only one in 10 did.
"We're going to have daughters coming up who never see a female coach," Andregg said of the trend. "And I just think that's a shame."
If that happens, Andregg says, girls won't have many female athletic role models - and that might risk the gains they've made in the past 40 years. | <urn:uuid:9124bd57-926a-43e8-966a-169164a4176e> | CC-MAIN-2013-20 | http://minnesota.publicradio.org/display/web/2012/06/22/social-issue/title-ix-anniversary?refid=0 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.981286 | 1,236 | 2.734375 | 3 |
Even if, internationally, Austria is not considered to be a special case, there is still widespread agreement on the fact that cooperation and the coordination of interests between the federations is one of this country’s distinctive features. The common definition for this type of cooperation is “social partnership”.
The federations and chambers work in close contact with one or other of the two political parties, the Austrian People’s Party or the Social Democratic Party of Austria. The considerable economic growth and rise in employment and wages during the 1950s and 1960s created a favourable basis for the exchange of economic and socio-political interests. All this contributed to the wide-spread establishment of the Austrian system of social partnership in the 1960s. If the 1970s could be regarded as its heyday, the 1990s, in particular, have witnessed a change in this system’s significance.
Social partnership is neither anchored in the Austrian constitution nor laid down in any specific act. It is rooted in the free will of the players concerned. To a large extent, it is implemented informally and confidentially and is not normally accessible to the general public.
The umbrella federations of the social partners wield great influence as regards political opinion-forming and decision-making. Their co-operation has thus often been criticised as a “secondary government”, although the political omnicompetence often attributed to the social partners has, in fact, never existed as such. The co-operation and coordination of interests among the associations and with the government have only ever applied to specific fields of politics, such as income policies and certain aspects of economic and social policies, (e.g. industrial safety regulations, agrarian market legislation, labour market policies and principles of equal treatment). In these areas, during the past decades the social partners have substantially contributed to Austria’s economic, social and political stability – evidence of which can be found in economic growth, in the rise of employment, in the expansion of the welfare state and also in the often quoted “social peace”.
Several avenues for political decision-making are open to the large national federations. A traditionally used channel is their close relationship with one or the other of the long-standing government parties, i.e. the Social Democratic Party or the Austrian People’s Party. In addition, the federations are incorporated, both formally and informally, into the political opinion-forming process of the relevant ministries, as evidenced by their participation in a number of committees, advisory boards and commissions. Even at the parliamentary level, involvement of experts from the federations and chambers is a normal practice.
Austria’s accession to the European Union has expanded the federations’ scope in that they not only have privileged access to relevant information and documentation. Of even greater importance are their possibilities for influencing the Austrian position in proposing EU legislation. All in all, by comparison with many other countries, this means that the large national federations in Austria have excellent possibilities for shaping the policies relating to their interests. However, social partnership in the true sense of the word goes beyond this: its core task consists of the balancing of opposing interests in the aforementioned political fields through contextual compromises among federations or between the federations and the government.
Since the 1980s, economic, social and political changes have become apparent in Austria, too. Evidence of this lies in reduced economic growth, rising budgetary deficits, increasing competition and unemployment, and an expanding rivalry between the political parties. Against this backdrop, it has not only become more difficult for the federations to align the different interests of their members to a common denominator: reduced turnout in elections to the chambers and the general calling into question of compulsory membership are symptoms of change. In addition, it is not only becoming increasingly difficult, but also rarer, to strike a balance between the federations’ interests. Well-known institutions, such as the Paritätische Kommission für Lohn- und Preisfragen (Parity Commission for Wages and Prices), which – particularly in the comments of foreign observers – has been widely recognised as a central institution of the Austrian social partnership, have lost some of their significance. The changes are mainly manifest in the re-weighting of the influence of the players involved in the political decision-making process; the government has gained formative power and influence. In important budgetary, economic and socio-political questions it decides both the procedure and the core contents. Austria’s accession to the European Union has reinforced this development. At the same time, however, EU membership also entails a loss of terrain for the federations. Decisions on topics such as agricultural, competition and monetary policies are decided at EU level. Here, the influence of the federations is essentially limited to formulating the Austrian position, which is just one out of 15.
All this does not currently mean that the system of social partnership has come to an end. There are also visible signs of continuity. The privileged position of the national federations remains unchanged. In the political decision-making process a balance of interests can still be achieved. However, the influence has lessened. Not the end, but certainly changes and reforms of the social partnership, are currently on the agenda. | <urn:uuid:4cb83bfd-7a4b-43d4-b30c-4cc358633a50> | CC-MAIN-2013-20 | http://www.bmeia.gv.at/en/foreign-ministry/austria/government-and-politics/social-partnership.html?ADMCMD_editIcons='%22() | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961314 | 1,078 | 2.96875 | 3 |
Herbal In Italy
( Originally Published 1912 )
The Italian botanists of the Renaissance devoted them-selves chiefly to interpreting the works of the classical writers on Natural History, and to the identification of the plants to which they referred. This came about quite naturally, from the fact that the Mediterranean flora, which they saw around them, was actually that with which the writers in question had been, in their day, familiar. The botanists of southern Europe were not compelled, as were those whose homes lay north of the Alps, to distort facts before they could make the plants of their native country fit into the procrustean bed of classical descriptions.
One of the chief of the commentators and herbalists of this period was Pierandrea Mattioli [or Matthiolus] (Text-fig. 40), who was born at Siena in 1501, and died of the plague in 1577. We realise something of the frightful extent of this scourge, when we remember that it claimed as victims no less than three of the small company of Renaissance botanists, Gesner, Mattioli and Zaluzian. Leonhard Fuchs was brought into fame by his successful treatment of one of these epidemics. It should also be recalled that, while Gaspard Bauhin, one of the best known of the later herbalists, was practising as a physician at Basle, no less than three of these terrible outbreaks occurred in the town.
Mattioli was the son of a doctor, and his early life was passed in Venice, where his father was in practice. He was destined for the law, but his inherited tastes led him away from jurisprudence to medicine. He practised in several different towns, and became physician, successively, to the Archduke Ferdinand, and to the Emperor Maximilian II.
Mattioli's ` Commentarii in sex libros Pedacii Dioscoridis,' his chef-d'oeuvre, the gradual production and improvement of which occupied his leisure hours throughout his life, was first published in 1544. It was translated into many languages and appeared in countless editions. The success of the work was phenomenal, and it is said that 32,000 copies of the earlier editions were sold. The title does not do the book justice, for it contains, besides an exposition of Dioscorides, a Natural History dealing with all the plants known to Mattioli. The early editions had small illustrations only (Text-figs. 41, 42, 93 and 94), but, later on, editions with large and very beautiful figures were published, such as that which appeared at Venice in 1565.
Mattioli's descriptions of the plants with which he deals are not so good as those of some of his contemporaries.
He found and recorded a certain number of new plants, especially from the Tyrol, but most of the species, which he described for the first time, were not his own discoveries, but were communicated to him by others. Luca Ghini, for instance, had projected a similar work, but handed over all his material to Mattioli, who also placed on record the discoveries made by the physician, Wilhelm Quakelbeen, who had accompanied the celebrated diplomatist, Auger-Gislain Busbecq, on a mission to Turkey.
Busbecq brought from Constantinople a wonderful collection of Greek manuscripts, including Juliana Anicia's copy of the Materia Medica of Dioscorides, now in the Vienna Library (see pp. 8 and 154). He discovered this great manuscript in the hands of a Jew, who required a hundred ducats for it. This price was almost prohibitive, but Busbecq was an enthusiast, and he successfully urged the Emperor, whose representative he was, to redeem so illustrious an author from that servitudes." His purpose in buying the manuscript seems to have been largely in order to communicate it to Mattioli, who would thus be able to make use of it in preparing his Commentaries on Dioscorides.
The personal character of Mattioli does not appear to have been a pleasant one. He engaged in numerous controversies with his fellow botanists, and hurled the most abusive language at those who ventured to criticise him.
Another Italian herbalist, Castor Durante, slightly later in date than Mattioli, should perhaps be mentioned here, not because of the intrinsic value of his work, but because of its widespread popularity. At least two of his books appeared in many editions and translations.
Durante was a physician who issued a series of botanical compilations, bedizened with Latin verse. The best known of his works is the Herbaro Nuovo,' published at Rome in 1585 (Text-figs. 45 and 103). A second book, the original version of which is seldom met with, has survived in the form of a German translation, by Peter Uffenbach. The German version was named ` Hortulus Sanitatis.' As an illustration of Durante's charmingly unscientific manner, we may take the legend of the " Arbor tristis " which occurs in both these works. The figure which accompanies it (Text-fig. 45) shows, beneath the moon and stars, a drawing of a tree whose trunk has a human form. The description, as it occurs in the ` Hortulus Sanitatis,' ay be translated as follows :
"Of this tree the Indians say, there was once a very beautiful maiden, daughter of a mighty lord called Parisa taccho. This maiden loved the Sun, but the Sun forsook her because he loved another. So, being scorned by the Sun, she slew herself, and when her body had been burned, according to the custom of that land, this tree sprang from her ashes. And this is the reason why the flowers of this tree shrink so intensely from the Sun, and never open in his presence. And thus it is a special delight to see this tree in the night time, adorned on all sides with its lovely flowers, since they give forth a delicious perfume, the like of which is not to be met with in any other plant, but no sooner does one touch the plant with one's hand than its sweet scent vanishes away. And however beautiful the tree has appeared, and however sweetly it has bloomed at night, directly the Sun rises in the morning it not only fades but all its branches look as though they were withered and dead."
Much more famous than Durante was Fabio Colonna, or, as he is more generally called, Fabius Columna (Plate IX), who was born at Naples in 1567. His father was a well-known littérateur. Fabio Colonna's profession was that of law, but he was also well acquainted with languages, music, mathematics and optics. He tells us in the preface to his principal work that his interest in plants was aroused by his difficulty in obtaining a remedy for epilepsy, a disease from which he suffered. Having tried all sorts of prescriptions without result, he examined the literature on the subject, and discovered that most of the writers of his time merely served up the results obtained by the ancients, often in a very incorrect form. So he went to the fountain head, Dioscorides, and after much research identified Valerian as being the herb which that writer had recommended against epilepsy, and succeeded in curing himself by its use.
This experience convinced Colonna that the knowledge of the identity of the plants described by the ancients was in a most unsatisfactory condition, and he set himself to produce a work which should remedy this state of things. This book was published in 1592, under the name of ` Phytobasanos,' which embodies a quaint conceit after the fashion of the time. The title is a compound Greek word meaning "plant torture," and was apparently employed by Colonna to explain that he had subjected the plants to ordeal by torture, in order to wrest from them the secret of their identity. But it must be confessed that Colonna himself is by no means free from error, as regards the names which he assigns to them.
The great feature of the `Phytobasanos,' however, is the excellence of the descriptions and figures. The latter are famous as being the first etchings on copper used to illustrate a botanical work (Text-figs. 46 and 105). They were an advance on all previous plant drawings, except the work of Gesner and Camerarius, in giving, in many cases, detailed analyses of the flowers and fruit as well as habit drawings. We owe to Colonna also the technical use of the word "petaI," which he suggested as a descriptive term for the coloured floral leaves'.
By means of his wide scientific correspondence, Colonna kept in touch with many of the naturalists of his time, notably with de l'Écluse and Gaspard Bauhin.
A passing reference may be made here to a book which is rather of the nature of a local flora than a herbal, entitled Prosperi Alpini de plantis .iEgypti,' which was published at Venice in 1592. It contains a number of wood-cuts, which appear to be original. The one reproduced (Text-fig. 47) represents Salicornia, the Glasswort. The author was a doctor who went to Egypt with the Venetian consul, Giorgio Emo, and had opportunities of collecting plants there. He is said to have been the first European writer to mention the Coffee plant, which he saw growing at Cairo. Prospero Alpino eventually became Professor of Botany at Padua, and enriched the botanical garden of that town with Egyptian plants. | <urn:uuid:1b1ebe26-94c4-4b92-bb8f-ba22fa0f2460> | CC-MAIN-2013-20 | http://www.oldandsold.com/articles31n/herbals-14.shtml | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.986529 | 2,017 | 2.5625 | 3 |
new zealand curriculum
The New Zealand Curriculum is
built around the acquisition of essential academic and practical
skills. It identifies 7 academic or essential
These are balanced by 8 practical
or essential skills:
- Language and languages
- Social sciences
- The arts
- Health & physical
- Communication skills
- Numeracy skills
- Information skills
- Problem-solving skills
- Self-management and
- Social and co-operative
- Physical skills
- Work and study skills
Each term, most schools prepare
student Progress Reports and hold parent-teacher evenings.
Subjects Taught At New Zealand Schools
The following is a general list of subjects taught in
New Zealand schools. Not all schools offer all the subjects
listed and others may offer additional disciplines. Some subjects
||Agriculture & Horticulture |
||Business Studies |
||Classical Studies |
||Media Studies |
||Physical Education |
||Social Studies |
||Graphics & Design |
||Clothing & Design
The school year begins in late January or early
February, after a summer holiday of about 6 weeks, and ends in
December. It is divided into 4 terms with breaks of two to three
weeks between them.
Secondary school students have slightly longer holidays
then primary school students.
Check with your local New Zealand school for actual term dates,
however the terms usually run as follows:
Term 1 - End of January to early April
Term 2 - Late April to end of June
Term 3 - Mid July to late September
Term 4 - Mid October to mid December (or early December for
New Zealand’s qualifications system is changing from traditional examination based awards to standards based qualifications. In 2002, level 1 of the National Certificate of Educational Achievement (NCEA) replaced School Certificate. The NCEA will replace Sixth Form Certificate in 2003 and University Bursaries in 2004.
National Certificate of Educational Achievement
NCEA (National Certificate of Educational Achievement) is New Zealand's main national qualification for secondary school students and part of the National Qualifications Framework.
The Qualifications Framework covers industry and education qualifications from year 11 (formerly Form 5) of secondary schooling and entry level to vocations, through to post-graduate level.
All qualifications currently on the Framework are made up of national standards. A standard describes what a learner should aim to achieve in a skill or knowledge area. Standards are set by written criteria along with a national moderation system. Learners who meet all requirements get credit for that standard; those who don't may be reassessed when they are ready.
Each standard is at a level from 1 to 8. Level 1 is similar to School Certificate level; level 2 to Sixth Form Certificate; levels 3 and 4 are similar to University Bursaries. Each standard also has a credit rating.
Schools can also use many standards from beyond the school curriculum. Any number or combination of standards may be assessed within a course, so schools can develop courses to suit their students.
Students accumulate Framework credits towards National Certificates and National Diplomas. As well as being able to work towards a range of National Certificates, eg, National Certificate in Computing, from 2002 school students will work towards a general qualification, the National Certificate of Educational Achievement (NCEA). Students can start on Framework qualifications at school and carry on in the workplace or tertiary studies.
NCEA provides the pathway to tertiary education and workplace training and gives everyone a full picture of what students know and can do.
- Challenges students of all abilities, in all learning areas
- Reports more details about a student's achievement
- Is officially recognised in New Zealand and internationally
- Is recognised by employers, universities and polytechnics and used as the benchmark for selection
- Provides opportunities to begin studying for tertiary and industry qualifications * Enables students to gain credits from traditional school curriculum areas AND alternative school curriculum programmes
- Has exams as well as internal assessment
- Has a national system for checking internal assessments
- Shows credits and grades for separate skills and knowledge in some standards
The National Qualifications Framework contains two types of national standards: achievement standards and unit standards. Credits from all achievement standards and all unit standards count towards NCEA.
Choosing A School
Most New Zealand students attend state-funded schools
and every student has the right to enrol at the state school
nearest to their home. If the school is at risk of overcrowding,
it can set a home zone that is geographically
defined. Students living in this zone have the right to go to
that school. Those living outside the zone can be enrolled only
under special circumstances. These include situations where
students have brothers or sisters attending the school or require
access to special programmes such as special education or Maori
language. If the school is still at risk of over-crowding,
selection is made through a supervised ballot.
ERO reports are available at no charge from New Zealand schools
and ERO offices.
Families also have the right to visit schools and meet with the
principal and staff before deciding to enrol their children as
State schools are fully funded by the Government. At
primary and intermediate level they are co-educational with
classes that include both boys and girls. Both co-educational and
single-sex schooling is available at secondary level.
State schools do not charge fees, however parents are expected to
make donations towards the support of special programmes or
services. These are also charges for stationery and uniforms.
Meals are not provided but snacks can generally be purchased from
the school Tuck Shop, but many parents prefer to
provide a packed lunch.
The term integrated schools generally refers
to schools with a religious focus - usually Roman Catholic
in denomination that used to operate as private institutions. In
recent years, these schools have been integrated into the state
system, hence the name, integrated schools, and receive
government funding. Although they follow the state curriculum
requirements, all have retained their special religious or
philosophical character. A small number of institutions, such as
Montessori or Rudolf Steiner schools, are secular in character.
Private or independent schools receive only limited
government funding and are almost entirely dependent on income
derived from student fees. There are no standard fees as each
school determines its own fee scale. Fees also vary according to
levels, with fees in Years 12 and 13 usually significantly higher
than those charged in Years 9 and 10.
Fees at primary schools also vary according to level, although
these are generally much lower than secondary school fees.
Private schools are governed by their own independent boards but
must meet government standards in order to be registered and they
are also subject to the same ERO audits as state schools.
Boarding schools exist mainly at secondary school level.
Currently 78 state and integrated schools and 24 private schools
have boarding arrangements.
The Correspondence School teaches a full range of school
Home-based schooling must meet the same standards as
registered schools, and approval to exempt the student from
regular schooling must be obtained from the Ministry of
A small annual grant is available for teaching materials.
Home schooling accounts for less than 1% of school enrolments.
Most schools require students to wear a uniform at all
times unless the school has an optional uniform policy. School
uniforms are sold by most major department stores and some
schools also operate their own Uniform Shops and sell both new
and second-hand items.
Teachers are not allowed to physically punish students
in their care. Legal disciplinary methods include removal of
privileges, extra homework or detention. Parents or guardians are
advised in advance if a child is given detention, as this will
require the child to stay at school for a specified time after
the end of the standard school day.
For serious offences, students may be suspended from school for a
period of time and if they are over 16 years of age, they can be
expelled permanently. Expulsion generally occurs when a
students conduct either sets a dangerous example to other
students or threatens their safety. There are formal procedures
for suspending or expelling a student.
Most secondary and primary schools expect students to do
homework. Each school has its own rules on the amount and type of
Parents or guardians are responsible for ensuring that a
child can get to school. Each year about 100,000 children use
school buses. Although school bus services are contracted by the
Ministry of Education, students are expected to meet the costs of
If a child has to travel a long distance to school, and there is
no public transport or school bus service, financial assistance
can be provided. Financial assistance and/or bus and taxi
services are provided for special education students.
If you plan to change schools, the principal of your
childs current school should be informed as soon as
Transfer involving a change in the level of schooling, such as
from primary to intermediate or intermediate to secondary,
require additional documentation. Details of application
procedures are provided by the school the student plans to
transfer to. Most intermediate and secondary schools have open | <urn:uuid:aa6c71c9-c0e4-41ca-ab3d-5d51afacd830> | CC-MAIN-2013-20 | http://www.nz-immigration.co.nz/education/curriculum.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945796 | 1,922 | 3.5 | 4 |
NINE BANDED ARMADILLO
Photo Credit: U. S. Fish and Wildlife Service
SCIENTIFIC NAME: Dasypus novemcinctus
OTHER NAMES: Armadillo, Common Long-Nosed Armadillo
DESCRIPTION: The nine-banded armadillo (Dasypus novemcinctus) cannot easily be confused with any other North American wild mammal. The armadillo’s body is covered with an armored carapace or shell. The carapace is a double layer of horn and bone, segmented into three main divisions: an anterior scapular shield covering the shoulder; a posterior pelvic shield covering the hip region; and a middle section comprised of a series of bands connected by soft, infolded skin between the bands. The head and legs are covered with thick scales, and the tail is encased in a series of bony rings. Coloration of nine-banded armadillos is generally grayish brown, with yellowish-white scales along the side of the carapace. The armadillo has a long, pointed snout, small eyes, and large, cylindrical ears. The armadillo’s pointed snout, short, stout legs, and heavy claws are well suited for digging and burrowing. Armadillos have a limited number of vocalizations: a low, wheezy grunt associated with digging and rooting; a wheezy grunt uttered by recently captured individuals; an audible buzzing noise given when highly alarmed or fleeing, a pig-like squeal given by frightened individuals; and a weak purring given by young attempting to nurse from an unrelated female. Total length ranges from 24 to 31 inches and weights vary from 8 to 15 pounds. There are six subspecies of Dasypus novemcinctus in Central and South America, but only one subspecies, D. n. mexicanus occurs in North America.
DISTRIBUTION: Dasypus novemcinctus mexicanus’ original distribution was from the lower Rio Grande Valley between Mexico and Texas, southward through Mexico and Central America to northwestern Peru on the west side of the Andes, and all of South America to northern Argentina east of the Andes, including the islands of Grenada, Trinidad, Tobago, and Margarita. The range of the nine-banded armadillo has undergone rapid expansion into the southern United States since the late 1800s. The recent rapid expansion of the armadillo’s range was facilitated by a number of factors: reduction in the number of large carnivores; climatic and biotic changes; and accidental and deliberate relocations of animals to unoccupied areas. Armadillos now occur throughout the southern and southeastern U.S., as far north as Missouri, Kansas, Colorado, and Nebraska. These animals are common throughout most of Alabama, but less common in several northeastern counties.
HABITAT: The armadillo is very adaptable and does well in most habitat types found in Alabama. They generally avoid or are scarce in very wet or very dry habitats. Habitat suitability likely depends more on the characteristics of the substrate or soils, rather than vegetation type due to the armadillo’s feeding and burrowing behavior.
FEEDING HABITS: A major portion of the armadillo’s time spent outside its burrow is devoted to feeding. They typically start foraging as they emerge from their burrow and move at a slow pace following an often erratic course. Prey is apparently detected by smell, although sound also may play a role. Typical foraging behavior involves quickly probing with the nose and occasionally pausing to dig for prey. Armadillos are opportunistic feeders and consume a wide variety of food items. Invertebrates, primarily insects, make up roughly 90 percent of their diet. Small vertebrates and plant material make up the remainder of their diet. Researchers also have seen evidence of armadillos feeding on small reptiles and amphibians, the eggs of ground-nesting birds, and carrion.
LIFE HISTORY AND ECOLOGY: Armadillos seem to exhibit a polygynous mating system, with most females paired with a single male and most males paired with more than one female. Den burrows have an enlarged nest chamber and are more complicated than a burrow dug for other purposes. The nest is a bulky mass of dried plant debris crammed into the nest chamber without any obvious structure. Armadillos in areas with poorly drained soils will construct above ground nests of dry plant material. Most breeding among armadillos occurs during the summer (June-August). The normal gestation period is 8 to 9 months, with most young born between February and May. The armadillo exhibits monozygotic polyembryony in which a single fertilized egg normally gives rise to four separate embryos at the blastula stage of development. This results in a litter of four genetically identical haploid clone offspring. Dasypus is the only genus of vertebrates in which this reproductive phenomenon occurs. The offspring are precocial and begin accompanying the female outside of the burrow at about 2 to 3 months of age. By 3 to 4 months, the young are self-sufficient. Most males reach sexual maturity between 6 to 12 months of age, but females do not become sexually mature until they are 1 to 2 years old.
REFERENCES: Author: Chris Cook, Wildlife Biologist, June 2005
Armstrong, J. Controlling Armadillo Ddamage in Alabama. ANR-773. Alabama Cooperative Extension System. 2pp.
Layne, J. N. 2003. Armadillo. Pages 75-97 in G. A. Feldhamer, B. C. Thompson, and J. A. Chapman, eds. Wild Mammals of North America: Biology, Management, and Conservation. Second edition. The Johns Hopkins University Press, Baltimore, MD and London, U.K.
Nowak, R. M. 1999. Walker’s Mammals of the World, sixth edition, volume one. The Johns Hopkins University Press, Baltimore, MD and London, U.K. 903 pp.
Outdoor Alabama Magazine Article, Nine-banded Armadillo
Watchable Wildlife Article | <urn:uuid:2f4dd03f-9452-4d19-bff5-a881b710035a> | CC-MAIN-2013-20 | http://www.outdooralabama.com/watchable-wildlife/what/Mammals/armadillo.cfm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.914825 | 1,311 | 3.828125 | 4 |
Time to think big
Did the designation of 2010 as the first-ever International Year of Biodiversity mean anything at all? Is it just a publicity stunt, with no engagement on the real, practical issues of conservation, asks Simon Stuart, Chair of IUCN’s Species Survival Commission.
Eight years ago 183 of the world’s governments committed themselves “to achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level as a contribution to poverty alleviation and to the benefit of all life on Earth”. This was hardly visionary—the focus was not on stopping extinctions or loss of key habitats, but simply on slowing their rate of loss—but it was, at least, the first time the nations of the world had pledged themselves to any form of concerted attempt to face up to the ongoing degradation of nature.
Now the results of all the analyses of conservation progress since 2002 are coming in, and there is a unanimous finding: the world has spectacularly failed to meet the 2010 Biodiversity Target, as it is called. Instead species extinctions, habitat loss and the degradation of ecosystems are all accelerating. To give a few examples: declines and extinctions of amphibians due to disease and habitat loss are getting worse; bleaching of coral reefs is growing; and large animals in South-East Asia are moving rapidly towards extinction, especially from over-hunting and degradation of habitats.
|This month the world’s governments will convene in Nagoya, Japan, for the Convention on Biological Diversity’s Conference of the Parties. Many of us hope for agreement there on new, much more ambitious biodiversity targets for the future. The first test of whether or not the 2010 International Year of Biodiversity means anything will be whether or not the international community can commit itself to a truly ambitious conservation agenda.|
The early signs are promising. Negotiating sessions around the world have produced 20 new draft targets for 2020. Collectively these are nearly as strong as many of us hoped, and certainly much stronger than the 2010 Biodiversity Target. They include: halving the loss and degradation of forests and other natural habitats; eliminating overfishing and destructive fishing practices; sustainably managing all areas under agriculture, aquaculture and forestry; bringing pollution from excess nutrients and other sources below critical ecosystem loads; controlling pathways introducing and establishing invasive alien species; managing multiple pressures on coral reefs and other vulnerable ecosystems affected by climate change and ocean acidification; effectively protecting at least 15 per cent of land and sea, including the areas of particular importance for biodiversity; and preventing the extinction of known threatened species. We now have to keep up the pressure to prevent these from becoming diluted.
We at IUCN are pushing for urgent action to stop biodiversity loss once and for all. The well-being of the entire planet—and of people—depends on our committing to maintain healthy ecosystems and strong wildlife populations. We are therefore proposing, as a mission for 2020, “to have put in place by 2020 all the necessary policies and actions to prevent further biodiversity loss”. Examples include removing government subsidies which damage biodiversity (as many agricultural ones do), establishing new nature reserves in important areas for threatened species, requiring fisheries authorities to follow the advice of their scientists to ensure the sustainability of catches, and dramatically cutting carbon dioxide emissions worldwide to reduce the impacts of climate change and ocean acidification.
If the world makes a commitment along these lines, then the 2010 International Year of Biodiversity will have been about more than platitudes. But it will still only be a start: the commitment needs to be implemented. We need to look for signs this year of a real change from governments and society over the priority accorded to biodiversity.
|One important sign will be the amount of funding that governments pledge this year for replenishing the Global Environment Facility (GEF), the world’s largest donor for biodiversity conservation in developing countries. Between 1991 and 2006, it provided approximately $2.2 billion in grants to support more than 750 biodiversity projects in 155 countries. If the GEF is replenished at much the same level as over the last decade we shall know that the governments are still in “business as usual” mode. But if it is doubled or tripled in size, then we shall know that they are starting to get serious.|
IUCN estimates that even a tripling of funding would still fall far short of what is needed to halt biodiversity loss. Some conservationists have suggested that developed countries need to contribute 0.2 per cent of gross national income in overseas biodiversity assistance to achieve this. That would work out at roughly $120 billion a year—though of course this would need to come through a number of sources, not just the GEF. It is tempting to think that this figure is unrealistically high, but it is small change compared to the expenditures governments have committed to defence and bank bail outs.
It is time for the conservation movement to think big. We are addressing problems that are hugely important for the future of this planet and its people, and they will not be solved without a huge increase in funds.
Following Oceana's newly released report on the harmful impacts of illegal fishing, one of the questions that I, as Oceana's Northeast representative, was asked most often was, "Where is this happening?" The short answer: Illegal fishing happens everywhere, from the most distant waters near Antarctica to just off the U.S. coast.
This week brought great news for shark populations that are dwindling both in U.S. waters and worldwide. Today, the Delaware House of Representatives introduced a bill prohibiting the possession, trade, sale and distribution of shark fins within the state. If passed, House Bill 41 would make Delaware the first East Coast state to pass a ban on the shark fin trade, following in the footsteps of Oregon, Washington, California, Hawaii and Illinois.
Current federal law prohibits shark finning in U.S. waters, requiring that sharks be brought into port with their fins still attached. However, this law does not prohibit the sale and trade of processed fins that are imported into the country from other regions that could have weak or even nonexistent shark protections in place.
This unsustainable catch is driven by the demand for shark fins, often used as an ingredient in shark fin soup, and kills millions of sharks every year. Delaware’s bill would close the loopholes that fuel the trade and demand for fins, and ensure that the state is not a gateway for shark products to enter into other U.S. state markets.
Not only was there great news coming out of the U.S.; international shark lovers have reason to celebrate as well. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) voted this week to place stricter regulations on the trade of manta rays, three species of hammerheads, oceanic whitetip and porbeagle sharks, acknowledging that these species are in dire need of protection. When countries export these species, they are required to possess special permits that prove these species were harvested sustainably. This decision will greatly curb illegal overfishing and reduce the numbers of endangered sharks killed globally.
History was made today in Bangkok, when Parties to CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora) voted to protect five species of sharks and two species of manta rays. The seven protected species are: oceanic whitetip (Carcharhinus longimanus), porbeagle (Lamna nasus), scalloped hammerhead (Sphyrna lewini), great hammerhead (S. mokarran), smooth hammerhead (S. zygaena), oceanic manta ray (Manta birostris) and reef manta ray (M. alfredi).
All seven species are considered threatened by international trade – the sharks for their fins, and the manta rays for their gills, which are used in Traditional Chinese Medicine. CITES protection is an important complement to fisheries management measures, which, for these species, have failed to safeguard their survival.
The vote was to list the animals for protection under Appendix II, which does not entail a ban on trade but instead means that trade must be regulated. Exporting countries are required to issue export permits, and can only do so if they can ensure that the animals have been legally caught and that their trade is not detrimental to the species' survival.
All of the proposals received the two-thirds majority needed to be accepted – but the listing is not yet final. Decisions can be overturned with another vote during the final plenary session of the meeting, which wraps up on Thursday. This is what happened with porbeagle sharks in the 2010 CITES meeting in Qatar – an Appendix II listing approved by the Committee evaporated with another vote in plenary. As a result, at that meeting, none of the proposed shark species were granted protection. Now, three years later, we’re hopeful that the international community finally sees the importance of regulating the trade that puts these animals at risk.
Keep your fingers crossed!
Happy Friday, everyone.
It's been a rough few weeks for the oceans at CITES, but now it's time to pick up the pieces. If CITES taught us anything, it's that the work of the ocean conservation community is more important than ever.
This week in ocean news,
…Rick at Malaria, Bed bugs, Sea Lice and Sunsets discussed one of the more shady aspects of CITES: the secret ballots, which were invoked for votes on bluefin tuna, sharks, polar bears, and deep water corals.
…The Washington Post reported that Maryland is cracking down on watermen who catch oysters in protected sanctuaries or with banned equipment. Once a principal source of oysters, the Chesapeake now provides less than 5 percent of the annual U.S. harvest.
…For the first time, scientists were able to use videos to observe octopuses' behavioral responses. The result? The octopuses had no consistent reaction to one film -- in other words, they had no "personality." Curiously, other cephalopods display consistent personalities for most of their lives.
…The New York Times wondered if the 700,000 saltwater home aquariums in the United States and the associated trade in reef invertebrates are threatening real reef ecosystems.
This is the ninth in a series of dispatches from the CITES meeting in Doha, Qatar.
As Oceana marine scientist Elizabeth Griffin put it: “This meeting was a flop.”
CITES has been a complete failure for the oceans. The one success -- the listing of the porbeagle shark under Appendix II -- was overturned yesterday in the plenary session.
“It appears that money can buy you anything, just ask Japan,” said Dave Allison, senior campaign director. “Under the crushing weight of the vast sums of money gained by unmanaged trade and exploitation of endangered marine species by Japan, China, other major trading countries and the fishing industry, the very foundation of CITES is threatened with collapse.”
Maybe next time -- if these species are still around to be protected.
The failure of CITES means that Oceana’s work – and your support and activism – is more important than ever. You can start by supporting our campaign work to protect these creatures.
Here's Oceana's Gaia Angelini on the conclusion of CITES:
This is the eighth in a series of dispatches from the CITES conference in Doha, Qatar.
More difficult news out of Doha today.
While seven of the eight proposed shark species (including several species of hammerheads, oceanic whitetip and spiny dogfish) were not included in Appendix II, the one bright spot was for the porbeagle shark, which is threatened by widespread consumption in Europe.
The porbeagle's Appendix II listing is a huge improvement because it requires the use of export permits to ensure that the species is caught by a legal and sustainably managed fishery.
And there is a slight chance that the other shark decisions could be reversed during the plenary session in the final two days.
Here are Oceana scientists Elizabeth Griffin and Rebecca Greenberg reflecting on the shark decisions:
This is the seventh in a series of posts from CITES. Check out the rest of the dispatches from Doha here.
Eight shark species have been proposed for listing to Appendix II of CITES, including the oceanic whitetip, scalloped hammerhead, dusky, sandbar, smooth hammerhead, great hammerhead, porbeagle and spiny dogfish.
Listing these species, which are threatened by shark finning, is necessary to ensure international trade does not drive these shark species to extinction.
Here's Oceana's Ann Schroeer from our Brussels office with an optimistic outlook on the upcoming shark proposals at CITES.
This is the latest in a series of posts from CITES. See the rest of the dispatches here.
Over the weekend, CITES failed to include 31 species of red and pink coral in Appendix II, trade protections that were promised during the last CITES Conference more than two and a half years ago.
These corals are harvested to meet the growing demand for jewelry and souvenirs. The unregulated and virtually unmanaged collection and trade of these species is driving them to extinction.
Many of the corals are long-lived, reaching more than 100 years of age, and grow slowly, usually less than one millimeter in thickness per year. These colonies are fragile and extremely vulnerable to exploitation and destruction, and their biological characteristics severely limit their ability to recover.
Oceana campaign director Dave Allison had this to say about the corals decision (first video), as well as the failure of CITES to protect marine species in general (second video).
Happy Friday, ocean fans. It's almost spring, and a surfing alpaca exists in the world. Things are looking up.
Before we get to the week's best marine tidbits, an important announcement: Oceana board member Ted Danson will be answering questions live on CNN.com on April 1, so send your ocean queries in, stat!
Also, don't forget that today is the last day to take the Ocean IQ quiz for a chance to win prizes, including a trip with SEE Turtles.
This week in ocean news,
…Yes, CITES failed to deliver on bluefin tuna yesterday, but as Monterey Bay Aquarium’s Julie Packard pointed out, at least the conversation is changing. Bluefin is now in the same rhetorical realm as endangered land creatures such as tigers and elephants.
…Deep Sea News wrote a requiem for a robot -- the Autonomous Benthic Explorer (ABE) that was lost at sea last week during a research expedition to the Chilean Subduction Zone. On a recent dive, ABE had detected evidence of hydrothermal vents. At the time of its loss, ABE had just begun a second dive to home in on a vent site and photograph it.
This is the fifth in a series of dispatches from CITES. You can read the other dispatches here.
Although there were repeated calls from delegates from the E.U., U.S. and Monaco to allow time for parties to meet and arrive at a compromise position, a Libyan delegate forced a peremptory vote on the E.U. proposal, which resulted in a 43 to 72 vote, with 14 abstaining.
Campaign director Dave Allison called the defeat "a clear win by short-term economic interest over the long-term health of the ocean and the rebuilding of Atlantic bluefin tuna populations."
The decision could spell the beginning of the end for the tigers of the sea.
Here's Oceana's Maria Jose Cornax on the decision: | <urn:uuid:d2464720-2e87-4061-9f5b-3ccfe0c3f6db> | CC-MAIN-2013-20 | http://oceana.org/es/category/blog-free-tags/cites | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945931 | 2,336 | 2.96875 | 3 |
Opportunities and Challenges in High Pressure Processing of Foods
By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D
Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with gamma irradiation, alternating current, ultrasound, and carbon dioxide or antimicrobial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy, and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing, and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, lack of detailed knowledge about the interaction between high pressure and a number of food constituents, and packaging and statutory issues.
Keywords: high pressure, food processing, non-thermal processing
Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize or blend all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that "high pressure kills microorganisms and preserves food" was discovered as far back as 1899 and has been used with success in the chemical, ceramic, carbon allotropy, steel/alloy, composite materials and plastic industries for decades, it was only in the late 1980s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to rediscover the application of high pressure in food processing. The technology was adopted so quickly that it took only three years for two Japanese companies to launch products processed using it. The ability of high pressure to inactivate microorganisms and spoilage-catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high-pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of high-pressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003).
The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential for such a technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value-added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute for the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994).
Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. lowering of freezing point with increasing pressures (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows:
1. it enables food processing at ambient temperature or even lower temperatures;
2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage;
3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and
4. it can be used to create ingredients with novel functional properties.
The effect of high pressure on microorganisms and proteins/enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. The application of high pressure increases the temperature of the liquid component of the food by approximately 3C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9C/100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by compression to 700 MPa, and brought back to the initial temperature by decompression.
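The compression-heating figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below is a minimal illustration, assuming the roughly linear rates given in the text (about 3C per 100 MPa for water-like foods, 8-9C per 100 MPa for fatty foods) and neglecting heat exchange with the vessel walls; the function name and the example numbers are illustrative, not taken from the cited studies.

```python
def compression_heating(initial_temp_c, pressure_mpa, rate_c_per_100mpa=3.0):
    """Estimate the product temperature at peak pressure from adiabatic
    compression heating, using a constant heating rate per 100 MPa
    (about 3 C/100 MPa for water-like foods, 8-9 C/100 MPa for fats).
    Heat exchange with the vessel walls is neglected."""
    return initial_temp_c + rate_c_per_100mpa * pressure_mpa / 100.0

# A water-like food pre-heated to 90 C and compressed to 700 MPa
print(compression_heating(90.0, 700.0))                          # about 111 C
# A fatty food at 20 C compressed to 600 MPa (illustrative 8.5 C/100 MPa)
print(compression_heating(20.0, 600.0, rate_c_per_100mpa=8.5))   # about 71 C
```

The first example reproduces the behaviour described by Meyer (2000): a product at 90C reaches roughly 111C at 700 MPa and returns towards its initial temperature on decompression.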
As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice-versa (principle of Le Chatelier). Thus, under pressure, reaction equilibria are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the "activation volume" of the reaction (i.e. volume of the activation complex less volume of reactants) is negative or positive. It is likely that pressure also inhibits the availability of the activation energy required for some reactions, by affecting some other energy-releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve a change in volume. High pressure controls certain enzymatic reactions. The effect of high pressure on proteins/enzymes, unlike that of temperature, is reversible in the range 100-400 MPa and is probably due to conformational changes and subunit dissociation and association processes (Morild, 1981).
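The activation-volume argument above can be made concrete with the standard transition-state relation (d ln k/dP)_T = -dV_act/(RT). The sketch below is a minimal illustration of that relation; the activation volume, pressure, and temperature values are purely illustrative and are not taken from any of the cited studies.

```python
import math

R = 8.314  # gas constant, J mol-1 K-1

def rate_ratio(activation_volume_cm3_mol, delta_p_mpa, temp_k=298.15):
    """Ratio k(P)/k(P_ref) from (d ln k / dP)_T = -dV_act / (R*T).
    A negative activation volume means the rate increases with pressure;
    a positive one means it decreases."""
    dv = activation_volume_cm3_mol * 1e-6   # cm3/mol -> m3/mol
    dp = delta_p_mpa * 1e6                  # MPa -> Pa
    return math.exp(-dv * dp / (R * temp_k))

# Illustrative: dV_act = -20 cm3/mol, 400 MPa above atmospheric, 25 C
print(rate_ratio(-20.0, 400.0))   # roughly a 25-fold acceleration
# An equal but positive activation volume is retarded by the same factor
print(rate_ratio(+20.0, 400.0))
```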
For both the pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeast and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane. For instance, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized and the loss of intracellular material was almost complete (Farr, 1990). Changes that are induced in the cell morphology of the microorganisms are reversible at low pressures, but irreversible at higher pressures where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient temperature, and to a lesser extent a decrease below ambient temperature, increases the inactivation rates of microorganisms during high pressure processing. Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat-sensitive products. Low temperature processing can help to retain the nutritional quality and functionality of the raw materials treated, and could allow maintenance of low temperature during the post-harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995).
Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination, or inhibition of germinated bacterial spores and inactivation of pressure-resistant microorganisms, can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperatures in the range 90-121C in conjunction with pressures of 500-800 MPa have been used to inactivate spore-forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6) will most probably rely on a combination of high pressure and other forms of relatively mild treatments.
High-pressure application leads to the effective reduction of the activity of food-quality-related enzymes (oxidases), which ensures high quality and shelf-stable products. Sometimes, food constituents confer pressure resistance on enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high pressure causes negligible impairment of nutritional value, taste, color, flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of large molecules such as proteins, enzymes, polysaccharides, and nucleic acids may be altered (Balci and Wilbey, 1999).
High pressure reduces the rate of the browning (Maillard) reaction. The Maillard reaction consists of two stages: the condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration by high pressure (5-50 MPa at 50C); pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). Gels induced by high pressure are found to be more glossy and transparent because of the rearrangement of water molecules surrounding amino acid residues in a denatured state (Okamoto, Kawamura, and Hayashi, 1990).
The capability and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.
EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS
Enzymes are a special class of proteins in which biological activity arises from an active site, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzyme (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact. The resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, and Indrawati et al. (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects.
Pectin methylesterase (PME) is an enzyme which normally tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids. An instantaneous pressure kill was dependent only on the pressure level, while a secondary inactivation effect was dependent on the holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times. PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in inactivation rates sufficiently fast for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatments than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of the banana PME enzyme during heating at temperatures between 65 and 72.5C followed first-order kinetics, and that the effect of pressure treatment at 600-700 MPa and 10C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, and the highest rate was obtained at 75C. At 75C, the inactivation rates were dramatically reduced as soon as the processing pressure was raised; high inactivation rates were obtained again only at pressures higher than 700 MPa. Riahi and Ramaswamy (2003) studied high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than that from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min.
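For the first-order (log-linear) inactivation described above, the D-value and the rate constant are interchangeable (k = ln 10 / D), and the residual activity after a holding time t is 10^(-t/D). The short sketch below works through this with the two extreme D-values quoted for orange-juice PME; it is an illustration of the model, not a reproduction of the authors' calculations.

```python
import math

def d_to_k(d_value_min):
    """Convert a decimal reduction time D (min) to the first-order
    rate constant k (min^-1): k = ln(10) / D."""
    return math.log(10.0) / d_value_min

def residual_activity(time_min, d_value_min):
    """Fraction of activity remaining after time_min under log-linear
    (first-order) inactivation."""
    return 10.0 ** (-time_min / d_value_min)

# The two extreme D-values quoted above for orange-juice PME
for d in (4.6, 117.5):
    print(f"D = {d:6.1f} min: k = {d_to_k(d):.3f} min^-1, "
          f"activity left after 10 min = {residual_activity(10.0, d) * 100:.1f}%")
```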
Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectinmethylesterase was shifted to higher values at elevated pressure compared to atmospheric pressure, creating possibilities for rheology improvements by the application of high pressure.
Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional-conversion model, while a biphasic model was used to estimate the inactivation rate constants of both fractions under more drastic conditions of temperature/pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed.
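The fractional-conversion model referred to here and elsewhere in this section describes a pressure-labile fraction that decays first-order towards a non-zero plateau set by a pressure-stable fraction. A minimal sketch, with purely illustrative parameter values, is given below.

```python
import math

def fractional_conversion(t_min, a_total, a_stable, k_per_min):
    """Fractional-conversion model: the pressure-labile part of the
    activity decays first-order towards a plateau a_stable contributed
    by the pressure-stable enzyme fraction."""
    return a_stable + (a_total - a_stable) * math.exp(-k_per_min * t_min)

# Illustrative only: 20% pressure-stable fraction, k = 0.3 min^-1
for t in (0, 5, 10, 30):
    print(t, round(fractional_conversion(t, 1.0, 0.20, 0.3), 3))
```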
Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first-order kinetics over a range of pressure and temperature (650-800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.
The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilization of clouds. Generally, the inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme but did not inactivate the heat-stable form of PE in the case of orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in the case of grapefruit juice. Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressure (0.1 to 900 MPa) and temperature (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at temperatures higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca2+ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozcan, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the effect of combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50C), an antagonistic effect of pressure and heat was observed.
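The Eyring and Arrhenius relations mentioned above combine into a simple expression for the inactivation rate constant as a function of temperature and pressure: an Arrhenius factor in 1/T multiplied by an Eyring-type factor in (P - P_ref). The sketch below shows the general form of such a model; k_ref, Ea, and Va are illustrative placeholders, not the fitted values reported by Broeck et al. (2000).

```python
import math

R = 8.314  # J mol-1 K-1

def k_inactivation(temp_k, p_mpa, k_ref, ea_kj_mol, va_cm3_mol,
                   t_ref_k=298.15, p_ref_mpa=0.1):
    """Inactivation rate constant combining an Arrhenius factor for
    temperature with an Eyring-type factor for pressure. k_ref is the
    rate constant at (t_ref_k, p_ref_mpa); ea is the activation energy
    and va the activation volume (both illustrative placeholders)."""
    arrhenius = math.exp(-(ea_kj_mol * 1e3 / R) * (1.0 / temp_k - 1.0 / t_ref_k))
    eyring = math.exp(-(va_cm3_mol * 1e-6) * (p_mpa - p_ref_mpa) * 1e6 / (R * temp_k))
    return k_ref * arrhenius * eyring

# Illustrative: k_ref = 0.01 min^-1 at 25 C / 0.1 MPa, Ea = 100 kJ/mol, Va = -15 cm3/mol
print(k_inactivation(318.15, 600.0, 0.01, 100.0, -15.0))  # rate constant at 45 C, 600 MPa
```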
High pressures induced conformational changes in polygalacturonase (PG), causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchi without compromising quality. PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa/5-50C) of tomato PG was described by a fractional conversion model, which points to 1st-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that for combinations of pressure and temperature (5-55C/100-600 MPa), the inactivation of the heat-labile portion of purified tomato PG followed first-order kinetics. The heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion.
Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure-treated tomato juice or pieces.
Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1 and 500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60C, and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at a constant temperature.
Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. Processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa. PE was very resistant to the pressure treatment.
Polyphenoloxidase and Peroxidase
Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO enzyme available from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was a limited inactivation of grape PPO using pressures between 300 and 600 MPa. At 900 MPa, a low level of PPO activity was apparent. In order to reach complete inactivation, it may be necessary to use high-pressure processing treatments in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apples, avocados, grapes, pears, and plums at pH 6-7. These PPOs differed in pressure stability. Inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics. Plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO enzymes, in order to develop high pressure-processed red grape juice having a stable shelf life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressure at 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, both temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant. On the other hand, in the case of POD, temperature as well as the interaction between temperature and holding time had the greatest effect on activity. Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min up to 800 MPa reduced activity by 28 and 43%, respectively.
Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high-temperature (≥45C)-low-pressure (≤300 MPa) region, where an antagonistic effect was observed.
Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain enzyme activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity remained when the enzyme solution was treated at 800 MPa at 60C for 10 min. In Tris buffer at pH 6.8, after treatment at 800 MPa and 20C, papain activity loss was approximately 24%. The inactivation of the enzyme is because of an induced change at the active site, causing loss of activity without major conformational changes. This loss of activity was due to oxidation of the thiolate ion present at the active site.
Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on activity of 3 different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. The changes in conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively. Bacillus licheniformis amylase was more stable than amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment.
Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a bi-phasic model. The application of high pressure was shown to completely inactivate amylase. The importance of the pressure-pulse and pressure-hold approaches for inactivation of amylase was also demonstrated.
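A bi-phasic model of the kind used here treats the enzyme as two sub-populations, each inactivated first-order but at different rates. The sketch below illustrates the idea with made-up fractions and rate constants; it is not a fit to the apple-juice amylase data.

```python
import math

def biphasic_activity(t_min, labile_fraction, k_labile, k_stable):
    """Bi-phasic inactivation: a labile and a more resistant enzyme
    fraction each decay first-order, but with different rate constants."""
    resistant_fraction = 1.0 - labile_fraction
    return (labile_fraction * math.exp(-k_labile * t_min)
            + resistant_fraction * math.exp(-k_stable * t_min))

# Illustrative only: 70% labile (k = 0.5 min^-1), 30% resistant (k = 0.02 min^-1)
for t in (0, 2, 10, 30):
    print(t, round(biphasic_activity(t, 0.7, 0.5, 0.02), 3))
```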
High pressure denatures proteins depending on the protein type, processing conditions, and the applied pressure. During the process of denaturation, the proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa and irreversible for pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion-pair bonds, and unfolding of molecules. At higher pressure, oligomeric proteins tend to dissociate into subunits, becoming vulnerable to proteolysis. Monomeric proteins do not show any changes in proteolysis with an increase in pressure (Thakur and Nelson, 1998).
High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and inter-molecular bonds within or between the molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is mainly held by hydrophobic interactions that are very sensitive to pressure. Significant changes in the tertiary structure are observed beyond 200 MPa. However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).
Figure 1 General scheme for the pressure-temperature phase diagram of proteins (from Messens, Van Camp, and Huyghebaert, 1997).
When the pressure increases to about 100 MPa, the denaturation temperature of the protein increases, whereas at higher pressures the temperature of denaturation usually decreases. This results in the elliptical pressure-temperature phase diagram of native versus denatured protein shown in Fig. 1. A practical consequence is that, under elevated pressure, proteins can denature at room temperature rather than only at higher temperatures. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone I specifies that at high temperatures, a rise in denaturation temperature is found with increasing pressure. Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone III shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures when lower temperatures are used (Messens, Van Camp, and Huyghebaert, 1997).
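Elliptical pressure-temperature boundaries of the kind sketched in Fig. 1 are commonly rationalized with a second-order (Hawley-type) expansion of the free-energy difference between the denatured and native states. The code below evaluates such an expansion at a few (T, P) points; all coefficients are illustrative values chosen only to reproduce the qualitative shape of the diagram, and are not parameters for any specific protein discussed here.

```python
def delta_g(temp_k, p_mpa,
            dg0=20e3, ds0=200.0, dv0=-50e-6, dcp=8000.0,
            dbeta=-5e-14, dalpha=1.0e-6, t0=298.15, p0_mpa=0.1):
    """Hawley-type second-order expansion of the free-energy difference
    (denatured minus native) around a reference point (t0, p0_mpa). The
    protein is treated as native where the result is positive. All
    coefficients are illustrative, chosen only to give an elliptical
    native region of roughly the shape sketched in Fig. 1."""
    dt = temp_k - t0
    dp = (p_mpa - p0_mpa) * 1e6  # Pa
    return (dg0 - ds0 * dt + dv0 * dp
            - dcp / (2.0 * t0) * dt ** 2
            + 0.5 * dbeta * dp ** 2
            + dalpha * dp * dt)

# Native at ambient conditions; denatured by pressure alone or heat alone;
# still native at a moderate pressure-temperature combination.
for temp_c, p_mpa in ((25, 0.1), (25, 500), (70, 0.1), (45, 200)):
    state = "native" if delta_g(temp_c + 273.15, p_mpa) > 0 else "denatured"
    print(f"{temp_c} C, {p_mpa} MPa -> {state}")
```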
The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk, and the size distribution of spherical casein micelles decreases from 200 to 120 nm; maximum changes have been reported to occur between 150-400 MPa at 20C. The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). The gels produced from high-pressure treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that pressure treatment at 25C considerably reduced the micelle size, while pressurization at higher temperature progressively increased the micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressures >400 MPa, the casein micelles disintegrated. The effect was more rapid at higher temperatures, although the final size was similar in all samples regardless of the pressure or temperature. At 200 MPa and 10C, the casein micelle size decreased slightly, whereas at higher temperatures the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa, and micelle size dropped by 50% at 400 or 600 MPa.
Huppertz, Fox, and Kelly (2004b) demonstrated that the high-pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alpha(s1)- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and disruption of hydrophobic interactions. On storage of pressure-treated milk at 5C, dissociation of casein was largely irreversible, but at 20C, considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins. Pressure treatment increased levels of alpha(s1)- and beta-casein in the soluble phase of milk and produced casein micelles with properties different to those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that the casein micelle size was not influenced by pressures less than 200 MPa, but a pressure of 250 MPa increased the micelle size by 25%, while pressures of 300 MPa or greater irreversibly reduced the size to 50% of that in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa.
Galazka, Ledward, Sumner, and Dickinson (1997) reported loss of surface hydrophobicity due to application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa. Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 50C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Tansidico, Bonomi, Vecchio, Pittia, Rovere, and Dall'Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin when solutions of the protein at neutral pH and at different concentrations were exposed to pressure. Only minor irreversible structural modifications were evident even for treatments as intense as 15 min at 900 MPa. The occurrence of irreversible modifications was time-dependent at 600 MPa but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of beta-lactoglobulin. Thermal and pH sensitivity of pressure-treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions. Walker, Farkas, Anderson, and Goddik (2004) used high-pressure processing (510 MPa for 10 min at 8 or 24C) to induce unfolding of beta-lactoglobulin and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8C appeared to be unchanged, whereas at 24C alpha-helix structure was lost. Tertiary structures changed due to processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gomez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatments (100-300 MPa), beta-lactoglobulin AB was completely hydrolyzed by pronase and alpha-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation kinetics of beta-lactoglobulin followed second-order kinetics, while for alpha-lactalbumin the reaction order was 2.5. Alpha-lactalbumin was more resistant to denaturation than beta-lactoglobulin. The activation volume for denaturation of beta-lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrated the unfolding of the protein molecules.
Drake, Harrison, Asplund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet weight yield of cheese from pressure-treated milk were higher than those of pasteurized or raw milk cheese. The microbial quality was comparable, and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times over control samples. Pressurization at 400 MPa led to coagulation times similar to those of the control, except for milk treated at pH 7.0, with or without readjustment of pH to 6.7, which presented significantly longer coagulation times than their non-pressure-treated counterparts.
Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70C) denaturation of beta-lactoglobulin and alpha-lactalbumin of whey protein followed 3rd- and 2nd-order kinetics, respectively. Isothermal pressure denaturation of beta-lactoglobulin A and B did not differ significantly, and an increase in temperature resulted in an increase in the denaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure resulted in the unfolding of molecules. The kinetic parameters of denaturation were estimated using a single-step non-linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high-pressure-induced denaturation of alpha-lactalbumin and beta-lactoglobulin in dairy systems. The higher level of pressure-induced denaturation of both proteins in milk as compared to whey was due to the absence of casein micelles and colloidal calcium phosphate in the whey.
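Reaction orders other than one, such as the 2nd-, 2.5th-, and 3rd-order kinetics reported for whey protein denaturation, integrate to a simple closed form. The sketch below illustrates how the residual fraction depends on the assumed order; the rate constant and times are illustrative only (and note that for an nth-order law the units of k depend on n and on the concentration units used).

```python
def nth_order_residual(t, k, n):
    """Residual fraction c = C/C0 for an nth-order rate law
    dC/dt = -k * C**n (n != 1), integrated with C0 = 1:
    c(t) = (1 + (n - 1) * k * t) ** (-1 / (n - 1))."""
    return (1.0 + (n - 1.0) * k * t) ** (-1.0 / (n - 1.0))

# Same nominal k for all orders, purely for illustration
for n in (2.0, 2.5, 3.0):
    profile = [round(nth_order_residual(t, 0.05, n), 3) for t in (0, 10, 30, 60)]
    print(f"order {n}: {profile}")
```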
The conformation of BSA was reported to remain fairly stable at 400 MPa due to a high number of disulfide bonds, which are known to stabilize its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and the duration of pressure treatment (30-80C for 2-20 min) had a relatively lesser effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure, leading to the incorporation of alpha- and gamma-gliadins in the glutenin aggregate. A change in secondary structure following high-pressure treatment was also reported.
The pressure treatment of myosin led to head-to-head interaction to form oligomers (clumps), which became more compact and larger in size during storage at constant pressure. Even after pressure treatment at 210 MPa for 5 minutes, monomeric myosin molecules increased, and no gelation was observed for pressure treatment up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high-pressure-induced denaturation of myosin led to formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds.
Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range of 6-10, although the maximum dimerization was not at the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near the isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in the case of minced meat packed in air at pressures less than 300 MPa, while the oxidation increased proportionally at higher pressures. However, on exposure to higher pressure, minced meat in contact with air oxidized rapidly. Pressures > 300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano, and Dickson, 1995). Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L* (lightness) values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, and the a* (redness) value decreased in meat treated at 400-500 MPa to give a grey-brown color. The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of meat increased and the oxymyoglobin decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, or oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa.
The conformation of the main protein component of egg white, ovalbumin, remains fairly stable when pressurized at 400 MPa, possibly due to the four disulfide bonds and non-covalent interactions stabilizing the three-dimensional structure of ovalbumin (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Hayashi, Kawamura, Nakasa, and Okinada (1989) reported irreversible denaturation of egg albumin at 500-900 MPa with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing pressure, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of the functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25C, kept its original color, and was soft and adhesive. The hardness of the pressure-treated gel increased and the adhesiveness decreased with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, while it resulted in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin inhibitory activity. The pressure-induced changes in these properties were shown to be dependent on the pressure-temperature combination and the pH of the solution. Speroni, Puppo, Chapleau, Lamballerie, Castellani, Aon, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20C to low-density lipoproteins did not change their solubility regardless of pH, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions.
The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, than heat-induced gels (Okamoto, Kawamura, and Hayashi, 1990). The treatment of soya milk at 500 MPa for 30 min changed it from a liquid to a solid state, whereas at lower pressures, or at 500 MPa for 10 minutes, the milk remained liquid but showed improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to that of heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Anon, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) to soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction of free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions; a change in the secondary structure, leading to a more disordered structure, was also reported. At pH 3.0, by contrast, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity. Puppo, Speroni, Chapleau, Lamballerie, Anon, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressures higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in depletion flocculation, whereas a similar effect was not observed at pH 3. With high-pressure treatment, bridging flocculation decreased and the percentage of adsorbed proteins increased irrespective of pH; moreover, the ability of the protein to adsorb at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that high-pressure treatment resulted in the formation of more hydrophobic regions in soy protein, which dissociated into subunits that in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively, and the gels formed had the desirable strength and a cross-linked network microstructure.
Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for the elderly, athletes, and other groups. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and of their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, whereas the pepsin hydrolysate showed the opposite trend. High pressure processing at 100 MPa increased the antioxidative activity of soy whey protein, but decreased that of the hydrolysates; it also increased the pH of the protein hydrolysates. Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin, with the highest level of hydrolysis occurring at 100 MPa. After hydrolysis, 5 peptides below 14 kDa were reported with trypsin and chymotrypsin, and 11 peptides with pepsin.
COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS
Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section.
Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. Application of high pressure reduced the radiation dose required to produce chicken meat with an extended shelf life: high-pressure treatment (600 MPa for 20 min at 80C) reduced the irradiation dose required for a one log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora and microorganisms of kefir. Irradiation of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) inactivated the bacteria and yeast in kefir while leaving the proteins and lipids unchanged.
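The dose figures above can be read as decimal-reduction (D10) doses. The short Python sketch below shows how such values translate into the total dose needed for a chosen target reduction; the 5-log target and the function name are illustrative assumptions, not part of the cited study.

```python
# Hypothetical sketch: treating the quoted doses as decimal-reduction doses
# (D10 = dose giving a 1-log kill) and scaling them to a target log reduction,
# assuming log-linear inactivation kinetics.
def total_dose(d10_kgy, target_log_reduction):
    """Dose (kGy) required for the requested log reduction."""
    return d10_kgy * target_log_reduction

for label, d10 in [("irradiation alone", 4.2), ("irradiation after 600 MPa", 2.0)]:
    print(f"{label}: {total_dose(d10, 5):.1f} kGy for a 5-log reduction")
```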
The exposure of microbial cells and spores to an alternating current (50 Hz) results in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment. High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells, and at 400 MPa for 30 min for Bacillus subtilis spores, applied after the alternating current treatment, resulted in reduced surviving fractions of both organisms. The combined treatment was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992).
Pretreatment with ultrasonic waves (100 W/cm2 for 25 min at 25C) followed by high pressure (400 MPa for 25 min at 15C) was shown to result in complete inactivation of Rhodotorula rubra; neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).
Carbon Dioxide and Argon
Heinz and Knorr (1995) reported a 3 log reduction in counts of cultures pretreated with supercritical CO2, and monitored the effect of the pretreatment on the germination of Bacillus subtilis endospores. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed.
Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high hydrostatic pressure as a non-thermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high pressure (300 MPa) resulted in complete destruction of aerobes, and increasing the pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectinmethylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50C for pectinmethylesterase and polyphenoloxidase, respectively, and the activity of polyphenoloxidase was decreased by CO2 at all pressures studied. The interaction between CO2 and pressure was also significant for Lactobacillus plantarum, with a significant decrease in survivors due to the addition of CO2 at all pressures studied, whereas no significant effect of CO2 addition on E. coli survivors was seen. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high pressure processing (600 MPa, 25C) of fresh orange juice increased the rate of PME inactivation in Valencia orange juice: the treatment time required to achieve an equivalent reduction in PME activity fell from 346 s to 111 s, although the overall degree of PME inactivation remained unaltered.
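If PME inactivation is treated as approximately first order, the 346 s and 111 s figures imply roughly a three-fold increase in the inactivation rate constant. A minimal Python sketch follows; the 10% residual-activity target is an assumption used only for illustration (the rate-constant ratio does not depend on the target chosen).

```python
import math

# Hypothetical sketch: first-order inactivation, A/A0 = exp(-k t), so the rate
# constant implied by reaching a residual activity A/A0 in time t is
# k = ln(A0/A) / t.
def first_order_k(residual_fraction, time_s):
    return math.log(1.0 / residual_fraction) / time_s

k_without_co2 = first_order_k(0.10, 346.0)
k_with_co2 = first_order_k(0.10, 111.0)
print(f"k without CO2: {k_without_co2:.4f} 1/s")
print(f"k with CO2:    {k_with_co2:.4f} 1/s")
print(f"rate-constant ratio (with/without CO2): {k_with_co2 / k_without_co2:.2f}")
```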
Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At the pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20C, but had no effect on the inactivation at 40C.
The complex physicochemical environment of milk exerts a strong protective effect on Escherichia coli against high hydrostatic pressure, reducing the inactivation achieved in 15 min at 20C from 7 logs at 400 MPa to only 3 logs at 700 MPa. A substantial improvement in inactivation efficiency at ambient temperature was achieved by applying consecutive, short pressure treatments interrupted by brief decompressions. The combination of high pressure (500 MPa) with the natural antimicrobial peptides lysozyme (400 µg/ml) and nisin (400 µg/ml) resulted in increased lethality towards Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
OPPORTUNITIES FOR HIGH PRESSURE ASSISTED PROCESSING
The inclusion of high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction. Alternatively, processes such as gelation, freezing, and thawing, can be carried out under high pressure. This section reports on the use of high pressures in the context of selected processing operations.
Eshtiaghi and Knorr (1993) employed high pressure at around ambient temperature to develop a blanching process similar to hot water or steam blanching, but without thermal degradation; this also minimized the problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20C) to potato samples not only caused blanching but also resulted in a four log cycle reduction in microbial count whilst retaining 85% of the ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under these conditions when 0.5% citric acid solution was used as the blanching medium, and the addition of 1% CaCl2 solution to the medium also improved the texture and the density. The leaching of potassium from the high-pressure treated sample was comparable with that from a 3 min hot water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in tissue architecture (Farr, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70C) resulted in no significant increase in the drying rate during fluidized bed drying of green beans and carrot; the drying rate did, however, increase significantly in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells as compared to potato. The effects of chemical pre-treatments (NaOH and HCl) on the dehydration rate of paprika were compared with those of pre-treatment by high pressure or high intensity electric field pulses (Fig. 2). High pressure (400 MPa for 10 min at 25C) and high intensity electric field pulses (2.4 kV/cm, pulse width 300 µs, 10 pulses, pulse frequency 1 Hz) were found to give drying rates comparable with those obtained after chemical pre-treatment, while eliminating the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
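Drying-rate comparisons of this kind are commonly made by fitting a thin-layer drying model to the moisture-ratio curve of each pre-treatment. The sketch below uses the simple exponential (Lewis) model; the rate constants are invented placeholders, not values fitted to the paprika data of Ade-Omowaye et al. (2001).

```python
import math

# Hypothetical illustration: Lewis thin-layer drying model, MR(t) = exp(-k t).
# The rate constants below are placeholders chosen only to show how
# pre-treatments can be ranked by the time needed to reach a target moisture ratio.
def drying_time_h(k_per_h, target_mr=0.1):
    """Time (h) to reach the target moisture ratio under the Lewis model."""
    return -math.log(target_mr) / k_per_h

pretreatments = {
    "untreated": 0.25,                            # k in 1/h (assumed)
    "high pressure (400 MPa)": 0.40,
    "high intensity electric field pulses": 0.38,
}
for name, k in pretreatments.items():
    print(f"{name}: {drying_time_h(k):.1f} h to reach MR = 0.1")
```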
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure and high intensity electric field pulse treatment on dehydration characteristics of red paprika (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. The application of high pressure causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Farr, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon was exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal and solid gain (Fig. 3). Measured diffusivity values for water were four-fold greater than in untreated samples, whilst solute (sugar) diffusivity values were two-fold greater. The compression and decompression occurring during the high-pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to cell wall rupture (Rastogi and Niranjan, 1998). Differential interference contrast microscopy showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods: application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization, which hindered diffusion. The values of the diffusion coefficient depended on cell permeabilization and starch gelatinization, and the maximum value observed represented an eight-fold increase over the value at ambient pressure.
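Effective diffusivities of this kind are usually obtained by fitting Fick's second law to the moisture- or solute-ratio data. A minimal Python sketch of Crank's series solution for an infinite slab is given below; the diffusivity, half-thickness, and times are illustrative assumptions, not values from the pineapple study.

```python
import math

# Hypothetical sketch: Crank's series solution of Fick's second law for an
# infinite slab, the form commonly fitted to osmotic-dehydration data to
# estimate effective diffusivities.  D and L are placeholder values.
def moisture_ratio_slab(D, t, L, terms=50):
    """Unaccomplished moisture ratio (M - Me)/(Mo - Me) for an infinite slab."""
    total = 0.0
    for n in range(terms):
        c = (2 * n + 1) ** 2
        total += (8.0 / (c * math.pi ** 2)) * math.exp(-c * math.pi ** 2 * D * t / (4.0 * L ** 2))
    return total

D = 1.0e-9   # effective diffusivity, m^2/s (assumed)
L = 5.0e-3   # slab half-thickness, m (assumed)
for t_min in (30, 60, 120, 240):
    print(f"t = {t_min:4d} min: MR = {moisture_ratio_slab(D, t_min * 60, L):.3f}")
```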
The synergistic effect of cell permeabilization due to high pressure and of osmotic stress as dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced, and the solid content increased, in samples treated at 400 MPa. The distributions of relative moisture (M/Mo) and solid (S/So) content, as well as of the cell permeabilization index (Zp), shown in Fig. 5, indicate that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003).
Most dehydrated foods are rehydrated before consumption, and loss of solids during rehydration is a major problem associated with their use. Rastogi, Angersbach, Niranjan, and Knorr (2000c) studied the transient variation of moisture and solid content during rehydration of dried pineapple that had been subjected to high-pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25C (Fig. 6). The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in the water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, as was the release of cellular components, which form a gel network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure treated pineapple: (a) control; (b) 300 MPa; (c) 700 MPa (1 cm = 41.83 µm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water uptake during the soaking of glutinous rice. The results showed that the length and diameter of the rice grains were positively correlated with soaking time, pressure, and temperature. The water uptake kinetics were shown to follow the well-known Fickian model, and the overall rate of water uptake and the equilibrium moisture content were found to increase with pressure and temperature.
Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20C) on the water uptake of soybeans and the resulting changes in their microstructure. NMR analysis indicated that water mobility in high-pressure soaked soybeans was more restricted, and its distribution much more uniform, than in controls. SEM analysis revealed that high pressure changed the microstructure of the seed coat and hilum, which improved water absorption, and disrupted the individual spherical protein body structures. Additionally, DSC and SDS-PAGE analyses revealed that proteins were partially denatured during high-pressure soaking. Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for the water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25C for up to 23 h, and cooking was achieved by immersion in boiling water until the chickpeas became tender. As the soaking time increased, the cooking time decreased, and high-pressure treatment for 5 min led to reductions in cooking time equivalent to those achieved by soaking for 60-90 min.
Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high pressure (33, 400, and 700 MPa for 3 min at 24 and 55C) and irradiation (2 and 5 kGy) pre-treatments on the hydration behavior of navy beans soaked in water at 24 and 55C. Treating beans under moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was highest in irradiated beans (5 kGy), followed by high-pressure treated beans (33 MPa, 3 min at 55C). Within the experimental range of the study, Peleg's model was found to describe the rate of water absorption of navy beans satisfactorily.
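Peleg's model referred to above expresses moisture content as M(t) = Mo + t/(k1 + k2·t), where 1/k1 sets the initial absorption rate and Mo + 1/k2 gives the equilibrium moisture content. The Python sketch below uses illustrative parameter values only, not the constants fitted for navy beans.

```python
# Hypothetical sketch of Peleg's two-parameter water-absorption model,
#   M(t) = Mo + t / (k1 + k2 * t).
# The parameter values are placeholders, not those reported by
# Ramaswamy et al. (2005).
def peleg_moisture(t_h, m0, k1, k2):
    return m0 + t_h / (k1 + k2 * t_h)

m0, k1, k2 = 0.12, 1.2, 0.9   # kg/kg dry mass, h/(kg/kg), 1/(kg/kg)  (assumed)
for t in (0.5, 1.0, 2.0, 3.0):
    print(f"t = {t:.1f} h: M = {peleg_moisture(t, m0, k1, k2):.2f} kg/kg dry mass")
print(f"equilibrium moisture content ≈ {m0 + 1.0 / k2:.2f} kg/kg dry mass")
```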
A 40% reduction in oil uptake during frying was observed when thermally blanched frozen potatoes were replaced by high-pressure blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as to the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
Solid Liquid Extraction
The application of high pressure leads to a rearrangement of tissue architecture, which results in increased extractability even at ambient temperature. The extraction of caffeine from coffee using water could be increased by the application of high pressure as well as by an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared with extraction at 100C and atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure, and the combination of very high pressures with lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disintegration index.
Research and Tracking
Accurately tracking birth defects is the first step in preventing them and reducing their effect. Birth defects tracking systems are vital to help us find out where and when birth defects occur and who they affect. This gives us important clues about preventing birth defects and allows us to evaluate our efforts.
We base our research on what we learn from tracking. By analyzing the data collected, we can identify factors that increase or decrease the risk of birth defects and identify community or environmental concerns that need more study. In addition, research helps the Centers for Disease Control and Prevention (CDC) answer critical questions about the causes of many of these birth defects.
What We’ve Learned
We know what causes some birth defects, such as Down syndrome and fetal alcohol syndrome. However, for about two-thirds of birth defects, the causes are unknown.1 Also, we don’t understand well how certain factors might work together to cause birth defects. While there is still more work to do, we have learned a lot about birth defects through past research. For example:
- Taking supplements containing folic acid, a B vitamin, at least 1 month before getting pregnant and during pregnancy lowers the risk of having a baby with serious birth defects of the brain and spine (neural tube defects). For this reason, all women who can become pregnant should take supplements containing 400 micrograms of folic acid every day.
- Drinking alcohol during pregnancy can cause the baby to be born with fetal alcohol spectrum disorders (FASDs). Pregnant women should not drink alcohol any time during pregnancy. Women also should not drink alcohol if they are planning to become pregnant or are sexually active and do not use effective birth control.
- Smoking in the month before getting pregnant and throughout pregnancy increases the chance of premature birth, certain birth defects (such as cleft lip, cleft palate, or both), and infant death. Quitting smoking before getting pregnant is best. However, for women who are already pregnant, quitting as early as possible can still help protect against some health problems.
- Women who are obese when they get pregnant have a higher risk of having a baby with serious birth defects of the brain and spine (neural tube defects) and some heart defects. Helping women to reach a healthy weight before they get pregnant could prevent birth defects.
- Poor control of diabetes in pregnant women increases the chance for birth defects, and might cause serious complications for the mother, too. If a woman with diabetes keeps her blood sugar well-controlled before and during pregnancy, she can reduce the chance of having a baby with birth defects.
- Taking certain medications during pregnancy can cause serious birth defects, but the safety of many medications taken by pregnant women has been difficult to determine. If you are pregnant or planning a pregnancy, you should not stop taking medications you need or begin taking new medications without first talking with your doctor. This includes prescription and over-the-counter medications and dietary or herbal products.
Birth Defects Tracking and Research
The Metropolitan Atlanta Congenital Defects Program (MACDP)
MACDP is a population-based tracking system for birth defects among children born to residents of metropolitan Atlanta. Population-based means that the researchers look at all babies with birth defects who live in a defined study area, which is important to get a complete picture of what is happening within this known population. Established in 1967, MACDP was the nation's first population-based system for active collection of information about birth defects. Active data collection means that committed staff members seek out information about birth defects and continually review medical records at multiple health care facilities in a given geographic area. Information obtained from MACDP is used to understand the characteristics of affected children, learn about other health outcomes associated with birth defects, and provide data for education and health policy decisions leading to prevention of birth defects. The system also serves as a model to help other programs develop and implement new tracking methods.
Learn more about the Metropolitan Atlanta Congenital Defects Program (MACDP) »
State-Based Tracking Systems
CDC funds population-based birth defects tracking systems in 14 US states and territories. The tracking systems use the data to help prevent birth defects and to refer infants and children with birth defects to needed services. Identifying birth defects at a state level also strengthens public health officials' ability to estimate prevalence and evaluate risk factors that are the most important in their community. State-based birth defects tracking programs provide important insights into our continued efforts to prevent birth defects and support families affected by them.
National Birth Defects Prevention Network (NBDPN)
CDC supports and collaborates with the NBDPN. The NBDPN is a group of over 225 individuals working at the national, state, and local levels who are involved in tracking, researching, and preventing birth defects. The NBDPN serves as a forum for exchanging ideas about tracking, researching, and preventing birth defects, and for providing technical support to state and local programs. Established in 1997, the NBDPN assesses the effect of birth defects on children, families, and the health care system. It also identifies risk factors for birth defects. This information can be used to develop strategies to prevent birth defects and to assist families and their providers in preventing other disabilities in children with birth defects.
International Clearinghouse for Birth Defects Surveillance and Research (ICBDSR)
The ICBDSR brings together birth defects programs from around the world with the aim of conducting worldwide tracking and research to prevent birth defects and to improve the lives of people born with these conditions. CDC supports and collaborates with the ICBDSR as a way to gain knowledge and expertise on birth defects worldwide and to further our domestic goals and those of the international community.
Environmental Public Health Tracking (EPHT)
Environmental public health tracking is the ongoing collection, integration, analysis, interpretation, and dissemination of data on environmental hazards, exposures to those hazards, and health effects that may be related to the exposures.
CDC has worked with representatives from 23 state and local health departments that have received EPHT grants to develop a monitoring system for 12 birth defects. These defects were selected because they are serious birth defects that are relatively easily identified at or around the time of birth, have some potential for environmental risk factors, and could be adequately ascertained by the different types of birth defects tracking systems. The EPHT Network tracks the prevalence of these defects and publishes annual data tables and maps in the national portal. Currently, the national portal has birth defects data for Colorado, Connecticut, Florida, Maine, Massachusetts, Missouri, New Hampshire, New Jersey, New Mexico, New York, Utah, and Wisconsin.
National Birth Defects Prevention Study (NBDPS)
Established in 1997, NBDPS is the largest population-based U.S. study looking at risk factors and potential causes of over 30 major birth defects. CDC funds the study and collects data with researchers from other study sites, collectively called the Centers for Birth Defects Research and Prevention (CBDRP). Participating sites have included Arkansas, California, Georgia (CDC), Iowa, Massachusetts, New Jersey, New York, North Carolina, Texas, and Utah.
Understanding the potential causes of birth defects can help us learn how to prevent them. The NBDPS has made key contributions in understanding the risk of specific medications when used just before and during pregnancy. Data from the NBDPS has also clearly demonstrated that maternal obesity is a strong risk factor for a number of major birth defects and has confirmed the association between maternal smoking and orofacial clefts. The NBDPS is one key step toward bringing us to a day when fewer babies are born with birth defects.
National Health and Nutrition Examination Survey (NHANES)
NHANES is a nationally-representative survey designed to look at the health and nutritional status of adults and children in the United States. The survey is unique in that it combines interviews and physical examinations, including the collection of blood samples.
CDC uses information from these studies to look at the amount of folic acid taken in from food and dietary supplements. Green vegetables, fruits, and juices have natural folate, and other foods, such as cereal and bread, have folic acid added to them. CDC is looking at NHANES data to see how people get folic acid and to see if they are getting the recommended amount. This information will help determine if adding different levels of folic acid to these foods or different types of foods would affect peoples’ intake. CDC is also using these data to look at folic acid intake and blood folate concentrations among women who are pregnant, or who may become pregnant, as well as specifically among women who are obese or who have diabetes. CDC also uses NHANES data to look at patterns of prescription medication use among pregnant women. This information will help determine the most commonly used medications during pregnancy, which CDC will use to identify medications that need future research to characterize their safety or risk during pregnancy.
- Nelson K, Holmes LB. 1989. Malformations due to presumed spontaneous mutations in newborn infants. New England Journal of Medicine 320:19-23.
The issue of Nubian rights is often neglected and poorly understood by public opinion. This is no big surprise, since Egyptians receive no education about that part of their country and it hardly ever surfaces as part of the political discussion. Despite the active participation of Nubians in and before the revolution, and their efforts to highlight their cause and their history of discrimination, little attention is given to them. Nubians have more recently become a part of the political discussion, most evidently in the presidential race. However, as usual, Nubians were excluded from participating in shaping their country’s future, as none of them was selected to be on the constitution drafting committee.
Nubians are the inhabitants of a historical region in the south of Egypt and northern Sudan. Their suffering started long before the building of the High Dam in 1964; it began in 1902, when the Aswan Dam was built. The end result was about 44 villages drowned, along with a historic area that witnessed one of humanity’s earliest civilizations. Some of the villages even drowned without prior notice; village inhabitants would wake up one day to find their property, their land, and their cattle under water. They were moved away to the desert land of Kom Ombo despite their heavy reliance on the Nile for agriculture all their lives. They never received their rightful compensation for their displacement despite many promises made by successive regimes, and they postponed the call for their rights several times out of consideration for wars and national crises. In addition, Nubians suffered from political, economic, and cultural marginalization. School curricula exclude their cultural heritage, and their language is not taught in schools (even in areas where they live); it may become extinct if efforts are not made to preserve it.
Despite the marginalization, Nubians have always asserted their “Egyptianness”. They have taken numerous patriotic stances and sacrificed several times for the sake of their country. They deserve respect not only for their struggle, but because they’re Egyptian and deserve full citizenship. They are fighting for their rights and their place in the new scene after the revolution. Their struggle, however, exemplifies many of the issues that Egypt suffers from:
Firstly, the “Nubian issue” reflects our crisis with growing racism and intolerance. Clearly, discrimination against different groups in Egypt is not uncommon, whether it is based on gender, religious beliefs, class, or anything else. Activist Fatma Emam recently wrote an article about her experience as a black Egyptian and the racism she encounters on a daily basis. Her article served as a wake-up call for the many who were unaware of what “black Egyptians” go through. Moreover, a common issue often cited by Nubians is that most Egyptians assume they’re Sudanese or African, as if Egypt didn’t have that southern part where darker-skinned people live. The problem highlighted here is not only that we are being racist, but that we are also in deep denial about it.
However, it’s not hard to find reasons why the situation has deteriorated so far. We lived under centuries of authoritarianism and colonialism. Both systems usually play the cards of racism and divide-and-conquer very well, and they deprive societies of the chance to progress towards pluralism. I am not justifying the racism or discrimination. However, I believe there is a major lesson to be learnt here: the more marginalized people are, the more likely they are to rebel. We can take South Sudan as an example. When people do not enjoy full participation and self-determination, they no longer want to be part of a country that denies them those rights.
Secondly, the “Nubian issue” also reflects a crisis with our “elite”, and by elite here I mean our opinion leaders, intellectuals, and media people. We may also add the emerging younger elite that started to gain more visibility after the revolution. Only a few of the elite come out to speak up about discrimination, racism, and Nubian rights. This can be seen as part of a bigger elite crisis, reflected in their detachment from the public and their failure to truly engage it. Even when the revolution began, it could hardly be credited to the efforts of the elite. It is sad that the people who should lead the change get trapped by infighting and, in many cases, follow their own personal interests.
One explanation that could be given as to why “the elite” shy away from the Nubian cause is populism. It seems that the Nubian issue is not “sexy” or “doesn’t sell” for intellectuals, so more favorable topics are preferred. For example, there is no lack of “elite” figures who spoke out against the Palestinian displacement, but only a handful who spoke against the Nubian displacements.
We know that activism is more effective when there is more solidarity from different groups. That’s why women’s issues would be further promoted if more men stood by them, and Christians’ rights would be easier to attain if more Muslims spoke up against their violation. It is true that more people are now calling for inclusion and representation than ever before, but a strong stance against racism, sectarianism, and discrimination is still much needed.
Thirdly, history tells us that many peoples were exploited under many guises. Arab nationalism was one of those ideologies under which many abuses and violations against minorities in different Arab countries were justified; the attack on other languages and cultures within Arab nations carried the banner of Arab nationalism. I would personally be happy with an Arab union and with breaking the geographical and economic barriers between us one day. However, all I can see for now are big shiny words about Arab brotherhood and solidarity, while none of it is materializing.
Until now, Nubians have been accused of separatism when they speak up for their rights, and more often than not they’re told it’s not the right time to bring it up. It’s time for us to realize that the values of democracy and diversity must be respected and should never be taken away under any ideological guise or notion. Discrimination cannot be condoned or downplayed anymore; we can’t even afford it anymore.
Thanks to Fatma Emam for the advice given in producing this piece.
You have been insulted, your ego is bruised, your pride is hurt, you have been shown powerless and diminished in some way, and now you are hurt and mad as hell! You have just been humiliated; it is unfair, and you don't like feeling foolish. Humiliation often results in violent retaliation and revenge. Remember, at the end of the day, the only opinion of yourself that matters is your own.
- Feeling disrespected.
- A loss of stature or image.
- An image change reflecting a decrease in what others believe about your stature.
- Induced shame
- To reduce the pride or fail to recognize the dignity of another
- An event perceived to cause loss of honor and induce shame
- Feeling powerless.
- Being unjustly forced into a degrading position.
- Ridicule, scorn, contempt or other treatment at the hands of others.
Root: from Latin humilis, low, lowly, from humus, ground.
Literally, “reducing to dirt”.
Synonyms include losing face, being made to feel like a fool, feeling foolish, hurt, disgraced, indignity, put-down, debased, dejected, denigrated, dishonored, disrespected, dis'ed, defamed, humbled, scorned, slighted, slurred, shamed, mortified, rejected, being laughed at. While humility is considered a strength, humiliation is hurtful; the distinction pivots on autonomy. Appreciation is the opposite of humiliation.
Humiliation involves an event that demonstrates unequal power in a relationship where you are in the inferior position and unjustly diminished. Often the painful experience is vividly remembered for a long time. Your vindictive passions are aroused and a humiliated fury may result. There are three involved parties: 1) the perpetrator exercising power, 2) the victim who is shown powerless and therefore humiliated, and 3) the witness or observers to the event. Because of the powerlessness and lack of control that it exposes, humiliation may lead to anxiety.
Humility is recognizing and accepting our own limitations based on an accurate and modest estimate of our importance and significance. The humble person recognizes he is one among the six billion interdependent people on this earth, earth is one planet circling the sun, and our sun is one of a billion stars in the presently known universe. Because of this broad and sound perspective on her significance, the truly humble person cannot be humiliated. Humility reduces our need for self-justification and allows us to admit to and learn from our mistakes. Our ego
Humiliation and Shame
Shame is private, humiliation is public.
The essential distinction between humiliation and shame is this: you agree with shame and you disagree with humiliation. Humiliation is suffering an insult. If you judge the insult to be credible, then you feel shame. Others can insult and humiliate you, but you will only feel shame if your self-image is reduced; and that requires your own assessment and decision. A person who is insecure about their genuine stature is more prone to feeling shame as a result of an insult. This is because they give more credibility to what others think of them than to what they think of themselves. This can result in
People believe they deserve their shame; they do not believe they deserve their humiliation. Humiliation is seen as unjust.
Forms of Humiliation
Humans have many ways to slight others and humiliate them. For example:
- Overlooking someone, taking them for granted, ignoring them, giving them the silent treatment, treating them as invisible, or making them wait unnecessarily for you,
- Rejecting someone, holding them distant, abandoned, or isolated,
- Withholding acknowledgement, denying recognition, manipulating
- Denying someone basic social amenities, needs, or human
- Manipulating people or treating them like objects (it) or animals, rather than as a person (thou).
- Treating people unfairly,
- Domination, control, manipulation, abandonment,
- Threats or abuse including: verbal (e.g. name calling), physical, psychological, or sexual,
- Assault, attack, or injury
- Reduction in rank, responsibility, role, title, positional power, or
- Betrayal, or being cheated, lied to, defrauded, suckered, or duped,
- Being laughed at, mocked, teased, ridiculed, given a dirty look, spit on, or made to look stupid or foolish.
- Being the victim of a practical joke, prank, or confidence scheme.
- False accusation or insinuation,
- Public shame, disrespect, or being dis'ed, downgraded, defeated, or
- Forced nakedness,
- Rape or incest,
- Seeing your love interest flirt with another, induced jealousy, violating your love interest, cuckolding,
- Seeing your wife, girlfriend, sister, or daughter sexually violated,
- Poverty, unemployment, bad investments, debt, bankruptcy, foreclosure, imprisonment, homelessness, punishment,
- Denigration of a person's values, beliefs, heritage, race, gender, appearance, characteristics, or affiliations,
- Dependency, especially on weaker people,
- Losing a dominance contest. Being forced to submit.
- Trespass such as violating privacy or other boundaries,
- Violating, denying, or suppressing
- Losing basic personal freedoms such as mobility, access, or autonomy; being controlled, dominated, intruded on, exploited, or manipulated,
- Diminished competency resulting from being disabled, immobilized, tricked, weakened, trapped, misled, having goals thwarted, being opposed or sabotaged, or being let down.
- Diminished resources resulting from being defrauded, robbed, cheated, evicted, or being deprived of privileges or rights,
- Having safety or security reduced by intimidation or threat,
- Dismissing, discounting, or silencing your story,
- Being treated as an equal by a lower stature person.
The Paradox of Humiliation
An insult usually hurts, but it is important to resolve in your own mind, based on evidence, why the insult hurts. What loss does it represent to you? Decide if the insult:
- is an unjustified attack that does not decrease your stature, diminish your self-image, nor tarnish your public image or reputation, or
- is justified and has diminished your public image or reputation, or
- is justified and has diminished or revised your self-image.
Begin the analysis by deciding if the insult is based on information that accurately represents you. Then reflect and consider if your image accurately represents your stature. If you decide the insult is unjustified, then you can simply ignore it (“don't take the bait”) or you can describe why it is unfair and ask your offender for an apology. If your public image exceeds your stature, then the insult may be a justifiable retaliation for your arrogance, and it may contain an important message you can learn from. If the insult is justified, it may cause you to feel shame and then lead you to revise your self-image to better align it with your stature. The insult is never justified if it is an attempt to reduce your stature below the threshold of human dignity.
Public Image, Self-Image, Stature, and Revenge
For an insult to diminish your public image, the public has to believe it is true. For an insult to diminish your self-image or self-esteem, you have to believe it is true. An insult cannot diminish your stature because your self-image is not your self. An insult may cause you to reassess your self-image or self-esteem.
Revenge is often sought as a remedy for humiliation; perhaps using the phrase “protecting honor” as justification. But revenge cannot be an effective remedy for humiliation, because it does nothing to increase your stature.
Humiliation is more demeaning and hurtful than “taking offense” at something. “Taking offense” is cognitive; you have questioned, disagreed with, or attacked my beliefs and perhaps my values. We disagree, and I think you are wrong. Offense is intellectual; it is about what I think. “Humiliation” is visceral; you have attacked me, my being, my self, and made me feel foolish about who I am. The attack is personal and credible enough that you have caused me to doubt my own worth, and thereby induced my shame. Humiliation is existential; it is about who I am.
Humiliation has been linked to academic failure, low self-esteem, social isolation, conflict, delinquency, abuse, discrimination, depression, learned helplessness, social disruption, torture, and even death. People in power use humiliation as a form of social control; it is a common tool of oppression. The fear of humiliation is also a powerful motivating force.
Although shame and humiliation are human universals, the particular circumstances and events that cause humiliation can vary greatly from one culture to the next. An event that is benign in one culture may cause great offense, shame, and humiliation in another. For example:
- Under Islamic law a woman who spends time alone with an unrelated man brings great shame to her family.
Victims of humiliation may be able to achieve resolution through either of two paths. The first is to reappraise the humiliating experience in some way that acknowledges the victim's strength and ability to cope with a difficult situation. This approach increases self-confidence and diminishes the fear of humiliation. The second path is to leave the degrading environment and find a more appreciative one.
- “The most dangerous men on earth are those who are afraid they are wimps.” ~ James Gilligan
- “No one can make you feel inferior without your consent.” ~ Eleanor Roosevelt
- “The truly humble person cannot be humiliated.” ~ Donald Klein
- “The fear of humiliation appears to be one of the most powerful motivators in individual and collective human behavior.” ~ Donald Klein
- “Persistent humiliation robs you of the vantage of rebellion.” ~ M.
- “Ridicule is man's most potent weapon.” ~ Saul Alinsky
- “The difference between how a person treats the powerless versus the powerful is as good a measure of human character as I know.” ~ Robert I. Sutton
- “When you dismiss my story you dismiss who I am; you diminish me.” ~ Leland R. Beaumont
On Apology, by Aaron Lazare
Somebodies and Nobodies: Overcoming the Abuse of Rank, by Robert W. Fuller
Violence, by James Gilligan
The No Asshole Rule, by Robert I. Sutton
Threatened Egotism to Violence and Aggression: The Dark Side of High Self-Esteem, Psychological Review, 1996, Vol. 103, No. 1, 5-33, by Roy F. Baumeister, Laura Smart, Joseph M. Boden
Humiliation and Assistance: Telling the Truth About Power, Telling a New Story, by Linda M. Hartling, Wellesley College
The Humiliation Dynamic, Donald C. Klein, Ph.D., The Union Institute
Humiliation: Assessing the Specter of Derision, Degradation, and Debasement, Linda M. Hartling (1995) Doctoral dissertation. Cincinnati, OH: Union Institute Graduate School.
China has worked actively and seriously to tackle global climate change and build capacity to respond to it. We believe that every country has a stake in dealing with climate change and every country has a responsibility for the safety of our planet. China is at a critical stage of building a moderately prosperous society on all fronts, and a key stage of accelerated industrialization and urbanization. Yet, despite the huge task of developing the economy and improving people’s lives, we have joined global actions to tackle climate change with the utmost resolve and a most active attitude, and have acted in line with the principle of common but differentiated responsibilities established by the United Nations. China voluntarily stepped up efforts to eliminate backward capacity in 2007, and has since closed a large number of heavily polluting small coal-fired power plants, small coal mines and enterprises in the steel, cement, paper-making, chemical and printing and dyeing sectors. Moreover, in 2009, China played a positive role in the success of the Copenhagen conference on climate change and the ultimate conclusion of the Copenhagen Accord. In keeping with the requirements of the Copenhagen Accord, we have provided the Secretariat of the United Nations Framework Convention on Climate Change with information on China’s voluntary actions on emissions reduction and joined the list of countries supporting the Copenhagen Accord.
The targets released by China last year for greenhouse gas emissions control require that by 2020, CO2 emissions per unit of GDP should go down by 40%-45% from the 2005 level, non-fossil energy should make up about 15% of primary energy consumption, and forest coverage should increase by 40 million hectares and forest stock volume by 1.3 billion cubic meters, both from the 2005 level. The measure to lower energy consumption alone will help save 620 million tons of standard coal over the next five years, which will be equivalent to a reduction of 1.5 billion tons of CO2 emissions. This is what China has done to step up the shift in economic development mode and economic restructuring. It contributes positively to Asia’s and the global effort to tackle climate change.
Ladies and Gentlemen,
Green and sustainable development represents the trend of our times. To achieve green and sustainable development in Asia and beyond and ensure the sustainable development of resources and the environment such as the air, fresh water, ocean, land and forest, which are all vital to human survival, we countries in Asia should strive to balance economic growth, social development and environmental protection. To that end, we wish to work with other Asian countries and make further efforts in the following six areas.
First, shift development mode and strive for green development. To accelerate the shift in economic development mode and economic restructuring provides an important precondition for our efforts to actively respond to climate change, achieve green development and secure the sustainable development of the population, resources and the environment. It is the shared responsibility of governments and enterprises of all countries in Asia and around the world. We should actively promote a conservation culture and raise awareness for environmental protection. We need to make sure that the concept of green development, green consumption and a green lifestyle and the commitment to taking good care of Planet Earth, our common home are embedded in the life of every citizen in society.
Second, value the importance of science and technology as the backing of innovation and development. We Asian countries have a long way to go before we reach the advanced level in high-tech-powered energy consumption reduction and improvement of energy and resource efficiency. Yet, this means we have a huge potential to catch up. It is imperative for us to quicken the pace of low-carbon technology development, promote energy efficient technologies and raise the proportion of new and renewable energies in our energy mix so as to provide a strong scientific and technological backing for green and sustainable development of Asian countries. As for developed countries, they should facilitate technology transfer and share technologies with developing countries on the basis of proper protection of intellectual property rights.
Third, open wider to the outside world and realize harmonious development. In such an open world as ours, development of Asian countries and development of the world are simply inseparable. It is important that we open our markets even wider, firmly oppose and resist protectionism in all forms and uphold a fair, free and open global trade and investment system. At the same time, we should give full play to the role of regional and sub-regional dialogue and cooperation mechanisms in Asia to promote harmonious and sustainable development of Asia and the world.
Fourth, strengthen cooperation and sustain common development. Pragmatic, mutually beneficial and win-win cooperation is a sure choice of all Asian countries if we are to realize sustainable development. No country can stand apart from, or cope on its own with, severe challenges like the international financial crisis, climate change and energy and resources security. We should continue to strengthen macro-economic policy coordination and vigorously promote international cooperation in emerging industries, especially in energy conservation, emissions reduction, environmental protection and development of new energy sources to jointly promote sustainable development of the Asian economy and the world economy as a whole.
Fifth, work vigorously to eradicate poverty and gradually achieve balanced development. A major root cause for the loss of balance in the world economy is the seriously uneven development between the North and the South. Today, 900 million people in Asia, or roughly one fourth of the entire Asian population, are living below the 1.25 dollars a day poverty line. We call for greater efforts to improve the international mechanisms designed to promote balanced development, and to scale up assistance from developed countries to developing countries, strengthen South-South cooperation, North-South cooperation and facilitate attainment of the UN Millennium Development Goals. This will ensure that sustainable development brings real benefits to poor regions, poor countries and poor peoples.
Sixth, bring forth more talents to promote comprehensive development. The ultimate goal of green and sustainable development is to improve people’s living environment, better their lives and promote their comprehensive development. Success in this regard depends, to a large extent, on the emergence of talents with an innovative spirit. We need to build institutions, mechanisms and a social environment to help people bring out the best of their talents, and to intensify education and training of professionals of various kinds. This will ensure that as Asia achieves green and sustainable development, our people will enjoy comprehensive development.
Ladies and Gentlemen,
We demonstrated solidarity as we rose up together to the international financial crisis in 2009. Let us carry forward this great spirit, build up consensus, strengthen unity and cooperation and explore a path of green and sustainable development. This benefits Asia. It benefits the world, too.
In conclusion, I wish this annual conference of the Boao Forum for Asia a complete success.
Discovered in 1988, the Roman Hippodrome in Beirut is situated in Wadi Abou Jmil, next to the newly renovated Jewish Synagogue in Downtown Beirut. This monument, dating back thousands of years, now risks being destroyed.
The hippodrome is considered, along with the Roman Road and Baths, as one of the most important remaining relics of the Byzantine and Roman era. It spreads over a total area of 3500 m2.
Requests for construction projects on the hippodrome’s site have been ongoing since the monument’s discovery, but they were constantly refused by former ministers of culture, among them Tarek Metri, Tamam Salam and Salim Warde. In fact, Tamam Salam had even issued a decree banning any work on the hippodrome’s site, effectively protecting it by law, and Salim Warde did not contest the decree. The current minister of culture, Gabriel Layoun, has now authorized construction to commence.
When it comes to ancient sites in cities that have lots of them, such as Beirut, the currently adopted approach is called a “mitigation approach”, which requires that the incorporation of the monuments into modern plans does not affect those monuments in any way whatsoever. The current approval by minister Layoun does not demand that such an approach be adopted. The monument will have one of its main walls dismantled and taken out of its location. Why? To build a fancy new high-rise instead. Minister Layoun sees nothing wrong with this. In fact, displacing ruins is never done except under extreme circumstances. I’m sure whatever Solidere has in store for the land counts as an “extreme circumstance.”
The Roman Hippodrome in downtown Beirut is considered one of the best preserved, not only in Lebanon but in the world. It is also the fifth to be discovered in the Middle East. In fact, a report (in Arabic) by the Director General of Antiquities in Lebanon, Frederick Al Husseini, spoke about the importance of the monument as one that has been discussed in various ancient books. It has also been correlated with Beirut’s famed ancient Law School. He describes the various structures that are still preserved and need only some restoration to be fully exposed. He called the monument a highly important site for Lebanon and the world, and one of Beirut’s main facilities from the Byzantine and Roman eras, suggesting work to preserve the site and make it one of Beirut’s important cultural and touristic locations. His report dates back to 2008.
MP Michel Aoun, the head of the party to which Gabriel Layoun belongs, defended his minister’s position by saying: “there are a lot of discrepancies between Solidere and us. Therefore, a minister from our party cannot be subjected to Solidere. Minister Layoun found a way, which is adopted internationally, to incorporate ancient sites with newer ones… So I hope that media outlets do not discuss this issue in a way that would raise suspicion.”
With all due respect to Mr. Aoun and his minister, endangering Beirut’s heritage and stripping away even more of the identity that makes it Beirut is not a matter for him or for Solidere to decide. What’s happening is a cultural crime against the entirety of the Lebanese population, one in which the interests of meaningless politicians become irrelevant. Besides, for a party that has been anti-Solidere for years, I find it highly hypocritical that they are allowing Solidere to dismantle the Roman Hippodrome.
The conclusion is this: never has a hippodrome been dismantled and displaced anywhere in the world. Beirut’s hippodrome will effectively become part of the parking garage of the high-rise to be built in its place. No mitigation approach will be adopted here. It is only a diversion until people forget and the plans proceed quietly. But we cannot continue to stay silent about this blatant assault on our history.
If there is anything we can do, it is to spread word of the issue as widely as we can. There should not be a Lebanese person in the 10,452 km² who remains unaware of any endangered monument. Sadly, this goes beyond the hippodrome. We have become so accustomed to this reality that we have grown submissive: the ancient Phoenician port is already behind us, there is construction around the ancient Phoenician port of Tyre, and the city itself risks being removed from UNESCO’s list of World Heritage Sites.
The land on which ancient monuments stand does not belong to Solidere, to the Ministry of Culture or to any other contractor – no matter how much they have paid to buy it. It belongs to the Lebanese people in their entirety. When you realize that of the 200 sites uncovered in Solidere’s downtown project, those that have remained intact can be counted on the fingers of one hand, the reality becomes haunting. It is about time we rose to defend our rights. Beirut’s hippodrome will not be destroyed.
Forty years ago, the Navajo Nation and Southern Ute tribes languished side by side, mired in high unemployment and poverty.
Today, worth billions, the Southern Ute Indian Tribe is one of the richest in the United States, while the Navajo still suffer as one of the most impoverished communities in the country.
The difference? Energy.
While the Southern Utes have natural gas, the Navajo Nation has coal, which has for decades been extracted to feed various large-scale power plants, including in the Four Corners. But that revenue has come under threat by tightening air-quality regulations. At the same time, natural gas, a cleaner fuel source for generating electricity, has grown cheaper and more abundant because of hydraulic fracturing.
The shift has left Navajo leaders in a quandary: either continue investing in coal in the hopes it will remain a significant fuel source, or cut their losses and choose another investment.
The tribe is staking its future on coal.
The Navajo Nation is in negotiations to buy the Navajo Mine from BHP Billiton in 2016 when BHP's lease expires.
Opponents say all the deal will do is saddle the tribe with a huge environmental cleanup. Tribal leaders say coal is a viable fuel source, and buying the Navajo Mine will save jobs and provide a foundation for the tribe to build a formidable energy portfolio similar to the Southern Utes.
Tribal leaders' plans will have far-reaching effects, not only for members – some of whom still live without electricity or plumbing – but also for residents throughout the region, who have seen haze cloud their views and threaten their health.
“We see how Southern Ute is using natural resources to advantage the tribe. We think they're doing well as far as adapting with the times,” said Navajo member Dailan Long, whose family lives near the Navajo Mine that is close to Fruitland on Navajo land.
“I definitely see a much older mind-set managing our economy, which has gotten us to the point where we are now. I think they (the Navajo leaders) could take a much more rational and logical approach to energy alternatives,” said Long, who formerly worked for Diné Citizens Against Ruining Our Environment, a Navajo environmental-activist organization.
The fossil fuel of the future
Dust billows at the Navajo Mine as a dragline excavator scoops dirt into its bucket and transfers it into a pile nearby, exposing coal for a bulldozer to grab.
Workers pour an ammonium-nitrate fuel-oil mixture into holes dug into the side of the mine to later blow up sections to reveal more coal in its depths.
The coal mine accounted for $53.6 million in direct revenue in 2012, about $110 million when taking into account lease and royalty revenue.
The 33,000-acre surface mine supplies the Four Corners Power Plant, about nine miles north, with about 8 million tons of coal a year.
BHP and Arizona Public Service approached Navajo Nation this summer about buying the Navajo Mine after the two entities failed to reach an agreement over coal prices, said Erny Zah, spokesman for the Navajo Nation president.
Under the preliminary negotiations, the tribe would give any federal and state tax revenue to BHP until 2016, when BHP's lease ends, to pay for the mine, Zah said.
It's unclear who would take on the legal liabilities associated with eventual mandatory cleanup of the mine. The Navajo Nation emphatically says the tribe will not be responsible, and BHP says the issue is still part of the negotiations.
Going against the national energy tide
While global demand for coal remains high, coal production in the United States dropped about 7 percent in 2012, according to the U.S. Department of Energy. Coal still provided 42 percent of the nation's electricity in 2011, but just four years ago, it was providing nearly half.
“If we're talking about coal development and revenue as a major source of funding for the tribe, why didn't they negotiate this (sale) decades ago?” Long said.
He believes BHP wants to walk away from the cleanup responsibilities and leave the tribe with the bill.
“I think that it's definitely an exit strategy on behalf of BHP,” he said.
In contrast, the Southern Utes have focused on energy sources that will likely grow.
In 2008, the Southern Utes formed a separate company to manage its new-energy investments that include wind farms and biofuel projects.
The U.S. Energy Information Administration expects coal power to decline to 35 percent of total American electricity generation by 2040. Natural gas is expected to generate 30 percent of the nation's electricity within three decades.
Navajo Nation tribal officials acknowledge the new energy realities and have invested in alternatives, especially natural gas. Oil and gas contributed $48 million in revenue to the tribe in 2012.
The tribe also has invested in an 850-megawatt wind farm.
But coal will nonetheless remain the centerpiece of Navajo Nation's energy portfolio moving forward.
“We want to make sure coal is part of the picture for at least another couple of decades, if not longer,” Zah said.
But tightening environmental regulations that shutter coal-fired power plants could further strangle coal production.
Mike Eisenfeld and other environmental activists don't anticipate the Four Corners Power Plant will remain open for more than 10 years, as federal scrutiny of its environmental impact increases.
“I don't think they're sustainable now, and the sooner we figure out a transition plan to wean ourselves from big coal plants the better,” said Eisenfeld, New Mexico energy coordinator for the San Juan Citizen's Alliance.
APS is decommissioning its three oldest generating units at the Four Corners Power Plant rather than install new equipment to meet EPA regulations. This will result in a 30 percent reduction in production at Navajo Mine and layoffs at both the mine and power plant.
For a tribe with a roughly 50 percent unemployment rate and more than 50 percent of its members living below the poverty level, the closure will be a painful blow.
“One of the reasons we're exploring the possibility of taking over Navajo Mine is we're looking at saving jobs at the mine and also at the power plant because one cannot exist without the other,” Zah said.
If the power plant closes, Navajo Mine may have to shut its doors. This was the fate of the Black Mesa Mine in 2006 after the Mohave Generating Station, near Laughlin, Nev., which it supplied, shut down.
Navajo Nation's other plans to secure a future for its coal haven't been met with much success.
In 2003, the Navajo Nation proposed a new coal-fired power plant called the Desert Rock Energy Project, which would be supplied by an expanded Navajo Mine.
Under the project, international energy developer Sithe Global Power would have built a new 1,500-megawatt power plant on Navajo land about 25 miles outside Farmington. The plant would have been among the largest in the country, and it would have provided electricity for 1.5 million customers.
The tribe heralded the power plant as an economic victory, saying it would provide 400 jobs at the Desert Rock power plant, 200 additional jobs at Navajo Mine and $50 million in annual revenue – almost half of Navajo Nation's current total coal revenue.
The EPA approved Desert Rock in 2008 and touted it as “state-of-the-art,” but the agency reconsidered the project's air permit after New Mexico joined environmental groups in appealing the EPA's initial approval. In 2009, the EPA reversed its decision and revoked the air permit.
For the Navajos, losing the project was a crushing blow.
“(Navajo Nation) viewed Desert Rock as a huge economic opportunity,” said Tom Shipps, an Indian law lawyer who works for Colorado's Southern Ute tribe. “The technology employed at Desert Rock could have been great improvements. Concerns about the environment drove a lot of the criticism, especially around here.”
The project remains in the back of tribal leaders' minds, Zah said. One idea is to convert it to a natural-gas plant.
Still, the question of what to do with the Navajo Mine would remain if a new power plant were to run on natural gas.
In the meantime, coal is the focus of the tribe's energy strategy.
BHP gives a qualified endorsement of that strategy.
“Coal will continue to be an important part of meeting the energy needs of the U.S.,” Norman Benally, spokesperson for BHP, said. “Granted there are mandates coming from the EPA that are limiting the use of coal and in some areas shutting down power plants that generate electricity. But we believe coal will be part of the energy portfolio, though limited, until such time that renewable energies have or will become viable.”
A surprising amount of what we think we know about the brain comes from neuropsychology; famous case studies such as HM have informed theories of memory so that they include short and long term storage, which are separable, and so on. These case studies can have a profound effect on research; my favourite story, though, was about a memory researcher who had a skiing accident and temporarily developed retrograde amnesia - he couldn't remember anything except that there was this guy in Connecticut (HM) who couldn't remember things either!
I always enjoyed classes in neuropsychology; the case studies are always fascinating. But they are deeply limited in what they can actually tell us about the brain. First, they are typically single-patient case studies, which restricts how general the conclusions are. Second, they are data from damaged brains; the fairly linear assumption that some localised function has been subtracted out is simply not true, and the damage will have had complex effects on distributed functional networks. Third, the damage is never straightforward, because these cases almost all come from accidents or strokes (HM's surgery being a rare example of more detail being known). This has not stopped the field being very excited by these cases, though, and from basing a lot of theory on these patterns of deficits.
In movement research, the most famous neuropsychology case study is Patient DF. She suffered bilateral damage along the ventral stream of visual processing (James et al., 2003). The effect was visual form agnosia: she is able to control her actions with respect to objects, but cannot describe or recognise these objects verbally. Crucially, her accident did not damage her parietal lobe; specifically, the dorsal stream of visual processing was left intact. These two streams are well-defined anatomical pathways leading out of primary visual cortex, and were first described by Ungerleider & Mishkin (1982). DF's pattern of deficits led Mel Goodale and David Milner (Goodale & Milner, 1992) to suggest functional roles for these streams. The ventral stream, they suggested, was for perception - things like object and scene recognition. The dorsal stream, in contrast, was for perception-for-action, and used visual information for the online control of action. This perception-action hypothesis has been hugely dominant in the field, and the theory rests heavily on DF's shoulders.
Recently, Thomas Schenk (2012a) published some data which claims to show that DF's visually guided reaching is not normal if she doesn't have access to haptic feedback about the object. His data suggests that the only reason she succeeds at reaching while failing judgment tasks is that haptic information is only normally available in the former case. If correct, this is actually quite a shot across the bow of the perception vs perception-for-action work; naturally Goodale and Milner don't buy it, and have published a reply to which Schenk has then replied.
I like seeing these arguments happen in the literature; but to be honest, the time scale is too slow. Schenk publishes, then Milner et al get to reply and Schenk gets right of reply to that. They may or may not iterate again and it's always left as 'we agree to disagree'. But these critiques have answers, and I think a blog comment feed might be the right place to work through the various cycles of suggestions and rebuttals until the obviously wrong things have been weeded out. It would also provide a place for other interested parties to weigh in. So if Schenk, Milner and Goodale (and anyone else!) feel like using the comments for this post or another made to purpose to bang around ideas until an obvious experiment or analysis pops out, please feel free!
Schenk (2012a): Patient DF and haptic feedback
The classic result from DF that inspired the Goodale and Milner account has two parts (a dissociation). When DF is asked to reach for an object, she can do so well and she shows appropriate pre-shaping of her hand which scales correctly with the size of the object. This suggests she has intact visual perception-for-action. However, if you ask her to judge the size of the object without reaching for it, she cannot do it; she has selectively impaired visual perception. Schenk ran the following experiments on DF to demonstrate that her unimpaired reaching is not because of intact visual perception-for-action, but rather because of haptic feedback from real objects.
'Perception': these are the tasks DF typically fails due to her visual form agnosia, and she fails them here too.
- Size discrimination: Choose the largest of two objects
- Manual estimation: Judge the size of an object by shaping your hand correctly
- Standard grasping: Reach to grasp an object in the mirror and there is actually an object there (vision + haptic information match)
- Grasping without haptic feedback: Reach for the reflection but there is no actual object there.
- Grasping with intermittent haptic feedback: there was an object present on half the trials; these were scattered randomly throughout the session and a light cued participants when the object would be present.
- Grasping with dissociated positions: participants saw an object in the middle location, were asked to reach for an object at the far position, and there was a real object there.
Figure 1. Grip performance for controls (open circles) and DF (red diamonds).
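As an aside for readers unfamiliar with this kind of data: 'grip scaling' is usually quantified as the slope of maximum grip aperture regressed on object size, so a clearly positive slope means the hand opens wider for larger objects. The sketch below uses invented numbers and is not Schenk's analysis code; it simply shows the calculation.

```python
# Illustrative only: grip scaling as the least-squares slope of maximum grip
# aperture (MGA) on object size. The trial data below are invented.

def fit_slope(sizes_mm, mga_mm):
    """Ordinary least-squares slope and intercept of MGA on object size."""
    n = len(sizes_mm)
    mean_x = sum(sizes_mm) / n
    mean_y = sum(mga_mm) / n
    sxx = sum((x - mean_x) ** 2 for x in sizes_mm)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes_mm, mga_mm))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

object_size_mm = [30, 40, 50, 60, 30, 40, 50, 60]   # hypothetical trials
max_grip_mm    = [55, 63, 70, 79, 57, 61, 72, 77]

slope, intercept = fit_slope(object_size_mm, max_grip_mm)
print(f"scaling slope = {slope:.2f} mm of aperture per mm of object size")
# A slope near zero would mean the grip is not being adjusted to object size.
```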
Milner, Ganel & Goodale (2012)
Unsurprisingly, Milner et al (2012) do not agree that these data cast doubt on the perception-action hypothesis about the function of the dorsal and ventral streams. They make the following criticisms:
- They suggest that "so-called 'haptic feedback'" (to quote the paper) from trial n could only inform a reach on trial n+1 if the objects were the same size in both trials; object size was randomised across trials, however.
- They then claim that Schenk's interpretation means he thinks DF's reaches are prepared on the basis of previous haptic, rather than current visual, information. Therefore, they suggest, Schenk must make 'the inescapable prediction' that a reach on trial n+1 should be appropriate for what happened on trial n, regardless of what is presented on trial n+1. They allow that there may be some 'minor intrusion' of haptic information from previous trials.
Again unsurprisingly, Schenk (2012b) does not agree with Milner et al's evaluation.
- He claims that the Milner et al critique assumes that prehension requires the visual computation of an object's size. He then cites recent work by Smeets & Brenner (1999) who claimed to show that prehension involves the independent targeting of the thumb and forefinger, and thus you don't need object size.
- He then suggests that DF is generally able to reach successfully because she has access to the necessary egocentric information (in hand-centred coordinates) about the location of object edges. This information requires regular calibration (Bingham et al, 2007) to remain accurate.
- He therefore predicts that if DF has egocentric information about the object, and this information has been calibrated recently, she can reach successfully, otherwise she fails. His 2012a data then support this pattern.
- Regarding the pantomime problem: Schenk tested this in Task 5 with the trials with no objects. DF knew there would be no object on these trials (the light cue) but still produced normal reaches because of the calibration on other trials.
There is a lot that is weird about the replies. Milner et al make some odd claims, and Schenk goes to strange places in his defence. Let's address those first.
1. "So-called 'haptic feedback'"
Milner et al want to keep claiming that DF reaches on the basis of current visual information, and not on the basis of previous haptic information. But there's a problem for them - this is the claim Schenk's data actually refutes! So they make an odd move, and simply claim that earlier haptic information does not affect reaches, and that even if it could, it won't here because the size changes from trial to trial.
However, Coats, Bingham & Mon-Williams (2008) have demonstrated (using a mirror rig similar to Schenk's) that if you systematically change the size of the grasped object while leaving the visual object the same size, people happily recalibrate their reach actions and change their grip apertures. Bingham et al (2007) have also shown that even occasional calibration allows stable reach behaviour to persist; calibration lasts some time. So even when the visual size remains unchanged, people's grip behaviour reflects the haptic calibration of the visual perception of size and if this calibration changes, so does grip.
Milner et al's second point - that haptic feedback can't help because the object size changes randomly - is actually addressed by Schenk's control data, which shows that neurologically intact people can happily scale their grips appropriately under these conditions (albeit slightly more noisily).
2. Reaching and the need for visual size
Schenk centres his reply on the idea that Milner et al assume you need to compute (or perceive) the size of objects in order to scale your grasp. He then cites Smeets & Brenner (1999) who claim that instead, you simply control your thumb and forefinger independently and bring them into alignment with the edges of the object.
The problem here is that Smeets & Brenner's work is highly controversial, and in fact more recent work from Mon-Williams & Bingham (2011) tested the predictions of this account in great detail and found no support for this claim. Instead, they showed that the unit of control is an opposition axis (Iberall, Bingham & Arbib, 1986). This is the space between the thumb and forefinger, and Mon-Williams & Bingham (2011) demonstrated that prehension is about aligning this space with the object. You do still therefore need to perceive object size, specifically the maximum object extent. I'll blog this paper in more detail sometime; it is a master class in affordance research.
I think Schenk had it basically right in the first paper; the explanation for his data is that in tasks 3 and 5, DF has sufficient access to haptic information about the object's size to allow her to bypass her visual perceptual deficit. She can therefore successfully reach to grasp. In all other tasks, she can't go round the problem and she fails. This suggests that her visual deficit is not simply restricted to 'perception'; the visual system involves both anatomical streams working in concert and these are not functionally independent of each other. What Schenk needs to do is treat haptic information as perceptual information for size in its own right, not simply feedback or an 'egocentric cue'. DF has unimpaired access to this information and when it's available, she can reach-to-grasp.
Bingham, G. P., Coats, R., & Mon-Williams, M. (2007). Natural prehension in trials without haptic feedback but only when calibration is allowed. Neuropsychologia, 45, 288–294.
Coats, R., Bingham, G. P., & Mon-Williams, M. (2008). Calibrating grasp size and reach distance: Interactions reveal integral organization in reaching-to-grasp movements. Experimental Brain Research, 189, 211–220.
Goodale, M. A., Jakobson, L. S., & Keillor, J. M. (1994). Differences in the visual control of pantomimed and natural grasping movements. Neuropsychologia, 32, 1159–1178.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
Iberall, T., Bingham, G. P., & Arbib, M. A. (1986). Opposition space as a structuring concept for the analysis of skilled hand movements. In H. Heuer & C. Fromm (Eds.), Experimental Brain Research Series 15 (pp. 158–173). Berlin: Springer.
James, T. W., Culham, J., Humphrey, G. K., Milner, A. D., & Goodale, M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: an fMRI study. Brain, 126, 2463–2475.
Milner, A. D., Ganel, T., & Goodale, M. A. (2012). Does grasping in patient D.F. depend on vision? Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2012.03.004
Mon-Williams, M., & Bingham, G. P. (2011). Discovering affordances that determine the spatial structure of reach-to-grasp movements. Experimental Brain Research, 211, 145–160.
Schenk, T. (2012a). No dissociation between perception and action in patient DF when haptic feedback is withdrawn. Journal of Neuroscience, 32, 2013–2017. DOI: 10.1523/JNEUROSCI.3413-11.2012
Schenk, T. (2012b). Response to Milner et al.: Grasping uses vision and haptic feedback. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2012.03.006
Smeets, J. B. J., & Brenner, E. (1999). A new view on grasping. Motor Control, 3, 237–271.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In Analysis of Visual Behavior (Ingle DJ, Goodale MA, Mansfield RJ, eds). Cambridge, MA: MIT.
by Gerry Everding
St. Louis MO (SPX) Feb 12, 2013
Nominated early this year for recognition on the UNESCO World Heritage List, which includes such famous cultural sites as the Taj Mahal, Machu Picchu and Stonehenge, the earthen works at Poverty Point, La., have been described as one of the world's greatest feats of construction by an archaic civilization of hunters and gatherers.
Now, new research in the current issue of the journal Geoarchaeology, offers compelling evidence that one of the massive earthen mounds at Poverty Point was constructed in less than 90 days, and perhaps as quickly as 30 days - an incredible accomplishment for what was thought to be a loosely organized society consisting of small, widely scattered bands of foragers.
"What's extraordinary about these findings is that it provides some of the first evidence that early American hunter-gatherers were not as simplistic as we've tended to imagine," says study co-author T.R. Kidder, PhD, professor and chair of anthropology in Arts and Sciences at Washington University in St. Louis.
"Our findings go against what has long been considered the academic consensus on hunter-gather societies - that they lack the political organization necessary to bring together so many people to complete a labor-intensive project in such a short period."
Co-authored by Anthony Ortmann, PhD, assistant professor of geosciences at Murray State University in Kentucky, the study offers a detailed analysis of how the massive mound was constructed some 3,200 years ago along a Mississippi River bayou in northeastern Louisiana.
Based on more than a decade of excavations, core samplings and sophisticated sedimentary analysis, the study's key assertion is that Mound A at Poverty Point had to have been built in a very short period because an exhaustive examination reveals no signs of rainfall or erosion during its construction.
"We're talking about an area of northern Louisiana that now tends to receive a great deal of rainfall," Kidder says. "Even in a very dry year, it would seem very unlikely that this location could go more than 90 days without experiencing some significant level of rainfall. Yet, the soil in these mounds shows no sign of erosion taking place during the construction period. There is no evidence from the region of an epic drought at this time, either."
Part of a much larger complex of earthen works at Poverty Point, Mound A is believed to be the final and crowning addition to the sprawling 700-acre site, which includes five smaller mounds and a series of six concentric C-shaped embankments that rise in parallel formation surrounding a small flat plaza along the river. At the time of construction, Poverty Point was the largest earthworks in North America.
Built on the western edge of the complex, Mound A covers about 538,000 square feet [roughly 50,000 square meters] at its base and rises 72 feet above the river. Its construction required an estimated 238,500 cubic meters - about eight million bushel baskets - of soil to be brought in from various locations near the site. Kidder figures it would take a modern, 10-wheel dump truck about 31,217 loads to move that much dirt today.
"The Poverty Point mounds were built by people who had no access to domesticated draft animals, no wheelbarrows, no sophisticated tools for moving earth," Kidder explains. "It's likely that these mounds were built using a simple 'bucket brigade' system, with thousands of people passing soil along from one to another using some form of crude container, such as a woven basket, a hide sack or a wooden platter."
To complete such a task within 90 days, the study estimates it would require the full attention of some 3,000 laborers. Assuming that each worker may have been accompanied by at least two other family members, say a wife and a child, the community gathered for the build must have included as many as 9,000 people, the study suggests.
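For readers who want to check the arithmetic, the estimates above can be reproduced to a rough order of magnitude in a few lines of code. The truck capacity, basket volume and workforce split below are illustrative assumptions, not figures taken from the study; only the 238,500 cubic meters comes from the article.

```python
# Rough re-derivation of the construction estimates quoted above.

mound_volume_m3 = 238_500        # fill volume reported for Mound A
truck_capacity_m3 = 7.6          # assumed capacity of a 10-wheel dump truck
basket_volume_m3 = 0.03          # assumed woven-basket load (~30 liters)

truck_loads = mound_volume_m3 / truck_capacity_m3    # on the order of the ~31,000 quoted
basket_loads = mound_volume_m3 / basket_volume_m3    # on the order of the ~8 million quoted

workers, days = 3_000, 90
baskets_per_worker_per_day = basket_loads / (workers * days)

print(f"dump-truck loads:    {truck_loads:,.0f}")
print(f"basket loads:        {basket_loads:,.0f}")
print(f"baskets/worker/day:  {baskets_per_worker_per_day:.0f}")
```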
"Given that a band of 25-30 people is considered quite large for most hunter-gatherer communities, it's truly amazing that this ancient society could bring together a group of nearly 10,000 people, find some way to feed them and get this mound built in a matter of months," Kidder says.
Soil testing indicates that the mound is located on top of land that was once low-lying swamp or marsh land - evidence of ancient tree roots and swamp life still exists in undisturbed soils at the base of the mound. Tests confirm that the site was first cleared for construction by burning and quickly covered with a layer of fine silt soil. A mix of other heavier soils then were brought in and dumped in small adjacent piles, gradually building the mound layer upon layer.
As Kidder notes, previous theories about the construction of most of the world's ancient earthen mounds have suggested that they were laid down slowly over a period of hundreds of years, involving small contributions of material from many different people spanning generations of a society. While this may be the case for other earthen structures at Poverty Point, the evidence from Mound A offers a sharp departure from this accretional theory.
Kidder's home base in St. Louis is just across the Mississippi River from one of America's best-known ancient earthen structures, Monks Mound at Cahokia, Ill. He notes that Monks Mound was built many centuries later than the mounds at Poverty Point by a civilization that was much more reliant on agriculture, a far cry from the hunter-gatherer group that built Poverty Point. Even so, Mound A at Poverty Point is much larger than almost any other mound found in North America; only Monks Mound at Cahokia is larger.
"We've come to realize that the social fabric of these socieites must have been much stronger and more complex that we might previously have given them credit. These results contradict the popular notion that pre-agricultural people were socially, politically, and economically simple and unable to organize themselves into large groups that could build elaborate architecture or engage in so-called complex social behavior," Kidder says.
"The prevailing model of hunter-gatherers living a life 'nasty, brutish and short' is contradicted and our work indicates these people were practicing a sophisticated ritual/religious life that involved building these monumental mounds."
Washington University in St. Louis
Long Term Player Development
What is LTPD?
Long Term Player Development (LTPD) is a systemic approach being developed and adopted by Golf Canada in partnership with the Canadian Professional Golfers’ Association to maximize a participant's potential and involvement in our sport. The LTPD framework aims to define optimal training, competition and recovery throughout an athlete's career to enable him / her to reach his / her full potential in golf and as an athlete. Tailoring a child's sports development program to suit basic principles of growth and maturation, especially during the 'critical' early years of their development, enables him / her to:
- Reach full potential
- Increase lifelong participation in golf and other physical activities
The LTPD model is split into stages in which a player will move from simple to more complex skills and from general to golf related skills. For example, a beginner may start by learning basic swinging actions and then once this has been mastered he / she will progress onto more advanced skills.
This framework will set out recommended training sequences and skills developments for the participant from the Active Start stage (6 and under) to the Active for Live Stage (adult recreational). It will address the physical, mental, emotional and technical needs of the athlete as they pass through each stage of development.
Where has it come from?
A combination of recent research and the knowledge of coaches from around the world is being used to write the LTPD model. The program will be sport-science supported and based on the best data and research available. Our work will be based on the work of Canadian sport scientists, such as Istvan Balyi, and focuses on key, common principles of individual development, which many sports organizations consider good practice in long-term planning for athletes.
Many leading sports stars have also attributed part of their success to participating in different sports and activities at a young age by giving them a wider base of sports skills. Our goal will be to develop our players to their maximum potential by training and enhancing all the athletic skills that contribute to their success.
What will this mean for your child?
During your child's first few years of golf, the emphasis will be on physical literacy. Time should be spent learning the ABC's of athleticism (Agility, Balance, Coordination and Speed) to teach them how to control his / her own bodies. For this reason, your child may take part in exercises that do not look relevant to golf but are supporting their development. Games and other sports will teach your child to throw the ball (basic hitting actions), catch it (hand-eye coordination), and run properly. At each stage the child will be trained in the optimal systems and programs to maximize his / her potential as a golfer and as a long-term participant in sport.
What has this got to do with golf?
It is thought that taking part in golf-specific training too early can lead to early dropout, create muscle imbalances and also neglect teaching the fundamental skills needed for most sports. In fact, research shows that early specialization in most late-maturing sports results in these outcomes.
Research has also shown that it is during childhood that people are best at learning physical skills. For this reason we are advising coaches and parents to teach transferable skills first that will allow your child to become proficient in a number of different sports and therefore increase their chances of being physically active throughout their lifetime.
Who else is using LTPD?
The Council of Federal, Provincial and Territorial Ministers responsible for Sport has endorsed and established the goal of implementing a Long Term Athlete Development program throughout the sport community in Canada. Sport Canada has been working with National Sport Organizations to develop sport-specific programs according to an overall framework established by an expert group of sport scientists.
To date, over 57 sports in Canada have started the process of designing and putting into place LTAD programs. There has been a sharing of best practices among resource personnel and National bodies and the overall program is gaining momentum.
Golf Canada is in the second wave of sports to start the LTPD process and is following closely the work of such groups as Rowing Canada, Athletics Canada, Speedskating Canada and Soccer to create the best opportunities for all children.
Various national sporting groups in the UK and Ireland are approximately 18 months ahead of their Canadian counterparts in the development of LTPD programs and we are using their experiences and best practices in process development to ensure we have the most comprehensive and effective system possible.
Where will Golf's LTPD model come from?
We will be consulting with a wide range of coaches, sports scientists and experienced volunteers from across Canada to represent the views of the whole golf community. Their knowledge and expertise will be used as input to form the LTPD framework for golf in Canada. We will be assisted in this process by the LTPD Resource paper and research of the expert group, in particular Stephen Norris (see www.ltad.ca for resource paper).
In developing this model and framework, Golf Canada is currently in the process of reviewing our programs in line with LTPD principles. Our competition program, coach education system, elite play structure and development initiatives will all evolve to be consistent with the principles established within this underpinning model.
One of the principles to be adopted will be a continuous improvement regime where the system will be benchmarked against the most current developmental principles and upgraded regularly. It will be a living document that provides a planning framework to enable us to always deliver the most appropriate training.
Golf Canada LTPD resources
Golf Canada will continually update this section to provide access to the most current materials and programs as they are developed.
We will add a range of LTPD resources designed to help all Coaches, Teachers, Players and Parents understand the stages that each player goes through and also the training principles and activities at each stage.
The complete LTPD Guide (2.13 MB) is available for download.
Upland Bird Regional Forecast
When considering upland game population levels during the fall hunting season, two important factors impact population change. First is the number of adult birds that survived the previous fall and winter and are considered viable breeders in the spring. The second is the reproductive success of this breeding population. Reproductive success consists of nest success (the number of nests that successfully hatched) and chick survival (the number of chicks recruited into the fall population). For pheasant and quail, annual population turnover is relatively high; therefore, the fall population is more dependent on reproductive success than breeding population levels. For grouse (prairie chickens), annual population turnover is not as rapid although reproductive success is still the major population regulator and important for good hunting. In the following forecast, breeding population and reproductive success of pheasants, quail, and prairie chickens will be discussed. Breeding population data were gathered during spring breeding surveys for pheasants (crow counts), quail (whistle counts), and prairie chickens (lek counts). Data for reproductive success were collected during late summer roadside surveys for pheasants and quail. Reproductive success of prairie chickens cannot be easily assessed using the same methods because they generally do not associate with roads like the other game birds.
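To illustrate why reproductive success dominates the fall numbers for short-lived birds such as pheasant and quail, consider a simple toy calculation; the values below are invented for illustration and are not survey figures.

```python
# Toy fall-population model: young recruited plus surviving adults.
# All parameter values are invented to illustrate the point in the text,
# not actual survey data.

def fall_index(hens, nest_success, chicks_per_brood, chick_survival,
               adult_summer_survival=0.8):
    young = hens * nest_success * chicks_per_brood * chick_survival
    adults = hens * adult_summer_survival
    return young + adults

# Same number of spring hens, different reproductive success:
good_year = fall_index(hens=100, nest_success=0.45, chicks_per_brood=10,
                       chick_survival=0.5)     # -> 305
drought_year = fall_index(hens=100, nest_success=0.25, chicks_per_brood=10,
                          chick_survival=0.2)  # -> 130

print(good_year, drought_year)
```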
Kansas experienced extreme drought this past year. Winter weather was mild, but winter precipitation is important for spring vegetation, which can impact reproductive success, and most of Kansas did not get enough winter precipitation. Pheasant breeding populations showed significant reductions in 2012, especially in primary pheasant range in western Kansas. Spring came early and hot this year, but also included fair spring moisture until early May, when the precipitation stopped, and Kansas experienced record heat and drought through the rest of the reproductive season. Early nesting conditions were generally good for prairie chickens and pheasants. However, the primary nesting habitat for pheasants in western Kansas is winter wheat, and in 2012, Kansas had one of the earliest wheat harvests on record. Wheat harvest can destroy nests and very young broods. The early harvest likely lowered pheasant nest and early brood success. The intense heat and lack of rain in June and July resulted in a decrease in brooding cover and insect populations, causing lower chick survival for all upland game birds.
Because of drought, all counties in Kansas were opened to Conservation Reserve Program (CRP) emergency haying or grazing. CRP emergency haying requires fields that are hayed to leave at least 50 percent of the field in standing grass cover. CRP emergency grazing requires 25 percent of the field (or contiguous fields) to be left ungrazed or grazing at 75-percent normal stocking rates across the entire field. Many CRP fields, including Walk In Hunting Areas (WIHA), may be affected across the state. WIHA property is privately-owned land open to the public for hunting access. Kansas has more than one million acres of WIHA. Often, older stands of CRP grass are in need of disturbance, and haying and grazing can improve habitat for the upcoming breeding season, and may ultimately be beneficial if weather is favorable.
Due to continued drought, Kansas will likely experience a below-average upland game season this fall. For those willing to hunt hard, there will still be pockets of decent bird numbers, especially in the northern Flint Hills and northcentral and northwestern parts of the state. Kansas has approximately 1.5 million acres open to public hunting (wildlife areas and WIHA combined). The regular opening date for the pheasant and quail seasons will be Nov. 10 for the entire state. The previous weekend will be designated for the special youth pheasant and quail season. Youth participating in the special season must be 16 years old or younger and accompanied by a non-hunting adult who is 18 or older. All public wildlife areas and WIHA tracts will be open for public access during the special youth season. Please consider taking a young person hunting this fall, so they might have the opportunity to develop a passion for the outdoors that we all enjoy.
PHEASANT – Drought in 2011 and 2012 has taken its toll on pheasant populations in Kansas. Pheasant breeding populations dropped by nearly 50 percent or more across pheasant range from 2011 to 2012 resulting in fewer adult hens in the population to start the 2012 nesting season. The lack of precipitation has resulted in less cover and insects needed for good pheasant reproduction. Additionally, winter wheat serves as a major nesting habitat for pheasants in western Kansas, and a record early wheat harvest this summer likely destroyed many nests and young broods. Then the hot, dry weather set in from May to August, the primary brood-rearing period for pheasants. Pheasant chicks need good grass and weed cover and robust insect populations to survive. Insufficient precipitation and lack of habitat and insects throughout the state’s primary pheasant range resulted in limited production. This will reduce hunting prospects compared to recent years. However, some good opportunities still exist to harvest roosters in the sunflower state, especially for those willing to work for their birds. Though the drought has taken its toll, Kansas still contains a pheasant population that will produce a harvest in the top three or four major pheasant states this year.
The best areas this year will likely be pockets of northwest and northcentral Kansas. Populations in southwest Kansas were hit hardest by the 2011-2012 drought (72 percent decline in breeding population), and a very limited amount of production occurred this season due to continued drought and limited breeding populations.
QUAIL – The bobwhite breeding population in 2012 was generally stable or improved compared to 2011. Areas in the northern Flint Hills and parts of northeast Kansas showed much improved productivity this year. Much of eastern Kansas has seen consistent declines in quail populations in recent decades. After many years of depressed populations, this year’s rebound in quail reproduction in eastern Kansas is welcomed, but overall populations are still below historic averages. The best quail hunting will be found throughout the northern Flint Hills and parts of central Kansas. Prolonged drought undoubtedly impacted production in central and western Kansas.
PRAIRIE CHICKEN – Kansas is home to greater and lesser prairie chickens. Both species require a landscape of predominately native grass. Lesser prairie chickens are found in westcentral and southwestern Kansas in native prairie and nearby stands of native grass within the conservation reserve program (CRP). Greater prairie chickens are found primarily in the tallgrass and mixed-grass prairies in the eastern one-third and northern one-half of the state.
The spring prairie chicken lek survey indicated that most populations remained stable or declined from last year. Declines were likely due to extreme drought throughout 2011. Areas of northcentral and northwest Kansas fared the best, while areas in southcentral and southwest Kansas experienced the sharpest declines where drought was most severe. Many areas in the Flint Hills were not burned this spring due to drought. This resulted in far more residual grass cover for much improved nesting conditions compared to recent years. There have been some reports of prairie chickens broods in these areas, and hunting will likely be somewhat improved compared to recent years.
Because of recent increases in prairie chicken (both species) populations in northwest Kansas, regulations have been revised this year. The early prairie chicken season (Sept. 15-Oct. 15) and two-bird bag limit has been extended into northwest Kansas. The northwest unit boundary has also been revised to include areas north of U.S. Highway 96 and west of U.S. Highway 281. Additionally, all prairie chicken hunters are now required to purchase a $2.50 prairie chicken permit. This permit will allow KDWPT to better track hunters and harvest, which will improve management activities. Both species of prairie chicken are of conservation concern and the lesser prairie chicken is a candidate species for federal listing under the Endangered Species Act.
This region has 11,809 acres of public land and 339,729 acres of WIHA open to hunters this fall.
Pheasant – Spring breeding populations declined almost 50 percent from 2011 to 2012, reducing fall population potential. Early nesting conditions were decent due to good winter wheat growth, but early wheat harvest and severe heat and drought through the summer reduced populations. While this resulted in a significant drop in pheasant numbers, the area will still have the highest densities of pheasants this fall compared to other areas in the state. Some counties — such as Graham, Rawlins, Decatur, and Sherman — showed the highest relative densities of pheasants during summer brood surveys. Much of the cover will be reduced compared to previous years due to drought and resulting emergency haying and grazing in CRP fields. Good hunting opportunities will also be reduced compared to recent years, and harvest will likely be below average.
Quail – Populations in this region have been increasing in recent years although the breeding population had a slight decline. This area is at the extreme northwestern edge of bobwhite range in Kansas, and densities are relatively low compared to central Kansas. Some counties — such as Graham, Rawlins, and Decatur — will provide hunting opportunities for quail.
Prairie Chicken – Prairie chicken populations have expanded in both numbers and range within the region over the past 20 years. The better hunting opportunities will be found in the central and southeastern portions of the region in native prairies and nearby CRP grasslands. Spring lek counts in that portion of the region were slightly depressed from last year and nesting conditions were only fair this year. Extreme drought likely impaired chick survival.
This region has 75,576 acres of public land and 311,182 acres of WIHA open to hunters this fall.
Pheasant – The Smoky Hills breeding population dropped about 40 percent from 2011 to 2012, reducing overall fall population potential. While nesting conditions were fair due to good winter wheat growth, the drought and early wheat harvest impacted the number of young recruited into the fall population. Certain areas had decent brood production, including portions of Mitchell, Rush, Rice, and Cloud counties. Across the region, hunting opportunities will likely be below average and definitely reduced from recent years. CRP was opened to emergency haying and grazing, reducing available cover.
Quail – Breeding populations increased nearly 60 percent from 2011 to 2012, increasing fall population potential. However, drought conditions were severe, likely impairing nesting and brood success. There are reports of fair quail numbers in certain areas throughout the region. Quail populations in northcentral Kansas are naturally spotty due to habitat characteristics. Some areas, such as Cloud County, showed good potential while other areas in the more western edges of the region did not fare as well.
Prairie Chicken – Greater prairie chickens occur throughout the Smoky Hills in large areas of native rangeland and some CRP. This region includes some of the highest densities and greatest hunting opportunities in the state for greater prairie chickens. Spring counts indicated that numbers were stable or slightly reduced from last year. Much of the rangeland cover is significantly reduced due to drought, which likely impaired production, resulting in reduced fall hunting opportunities.
This region has 60,559 acres of public land and 54,170 of WIHA open to hunters this fall.
Pheasant – Spring crow counts this year showed a significant increase in breeding populations of pheasants. While this increase is welcome, this region was nearing all-time lows in 2011. Pheasant densities across the region are still low, especially compared to other areas in western Kansas. Good hunting opportunities will exist in only a few pockets of good habitat.
Quail – Breeding populations stayed relatively the same as last year, and some quail were detected during the summer brood survey. The long-term trend for this region has been declining, largely due to unfavorable weather and degrading habitat. This year saw an increase in populations. Hunting opportunities for quail will be improved this fall compared to recent years in this region. The best areas will likely be in Marshall and Jefferson counties.
Prairie Chickens – Very little prairie chicken range occurs in this region, and opportunities are limited. The best areas are in the western edges of the region, in large areas of native rangeland.
This region has 80,759 acres of public land and 28,047 acres of WIHA open to hunters this fall.
Pheasant – This region is outside the primary pheasant range and has very limited hunting. A few birds can be found in the northwestern portion of the region.
Quail – Breeding populations were relatively stable from 2011 to 2012 for this region although long term trends have been declining. In the last couple years, the quail populations throughout much of the region have been on the increase. Specific counties that showed relatively higher numbers are Coffey, Osage, and Wilson. However, populations remain far below historic levels across the bulk of the region due to extreme habitat degradation.
Prairie Chicken – Greater prairie chickens occur in the central and northwest parts of this region in large areas of native rangeland. Breeding population densities were up nearly 40 percent from last year, and opportunities may increase accordingly. However, populations have been in consistent decline over the long term. Infrequent burning has resulted in woody encroachment of native grasslands in the area, gradually reducing the amount of suitable habitat.
This region has 128,371 acres of public land and 63,069 acres of WIHA open to hunters this fall.
Pheasant – This region is on the eastern edge of pheasant range in Kansas and well outside the primary range. Pheasant densities have always been relatively low throughout the Flint Hills. Spring breeding populations were down nearly 50 percent, and reproduction was limited this summer. The best pheasant hunting will be in the northwestern edge of this region in Marion and Dickinson counties.
Quail – This region contains some of the highest densities of bobwhite in Kansas. The breeding population in this region increased 25 percent compared to 2011, and the long-term trend (since 1998) has been stable due to steadily increasing populations over the last four or five years. High reproductive success was reported in the northern half of this region, and some of the best opportunities for quail hunting will be found in the northern Flint Hills this year. In the south, Cowley County showed good numbers of quail this summer.
Prairie Chickens – The Flint Hills is the largest intact tallgrass prairie left in North America. It has served as a core habitat for greater prairie chickens for many years. Since the early 1980s, inadequate range burning frequencies have consistently reduced nest success in the area, and prairie chicken numbers have been declining as a result. Because of the drought this spring, many areas that are normally burned annually were left unburned this year. This left more residual grass cover for nesting and brood rearing. There are some good reports of prairie chicken broods, and hunting opportunities will likely increase throughout the region this year.
This region has 19,534 acres of public land and 73,341 acres of WIHA open to hunters this fall.
Pheasant – The breeding population declined about 40 percent from 2011 to 2012. Prolonged drought for two years now and very poor vegetation conditions resulted in poor reproductive success this year. All summer indices showed a depressed pheasant population in this region, especially compared to other regions. Some of the relatively better counties in this area will be Reno, Pawnee, and Pratt, although these counties have not been immune to recent declines. There will likely be few good hunting opportunities this fall.
Quail – The breeding population dropped over 30 percent this year from 2011 although long term trends (since 1998) have been stable in this region. This region generally has some of the highest quail densities in Kansas, but prolonged drought and reduced vegetation have caused significant declines in recent years. Counties such as Reno, Pratt, and Stafford will likely have the best opportunities in the region. While populations may be down compared to recent years, this region will continue to provide fair hunting opportunities for quail.
Prairie Chicken – This region is almost entirely occupied by lesser prairie chickens. The breeding population declined nearly 50 percent from 2011 to 2012. Reproductive conditions were not good for the region due to extreme drought and heat for the last two years, and production was limited. The best hunting opportunities will likely be in the sand prairies south of the Arkansas River.
This region has 2,904 acres of public land and 186,943 acres of WIHA open to hunters this fall.
Pheasant – The breeding population plummeted more than 70 percent in this region from 2011 to 2012. Last year was one of the worst on record for pheasant reproduction. However, last fall there was some carry-over rooster (second-year) from a record high season in 2010. Those carry-over birds are mostly gone now, which will hurt hunting opportunities this fall. Although reproduction was slightly improved from 2011, chick recruitment was still fair to below average this summer due to continued extreme drought conditions. Moreover, there were not enough adult hens in the population yet to make a significant rebound. Generally, hunting opportunity will remain well below average in this region. Haskell and Seward counties showed some improved reproductive success, especially compared to other counties in the region.
Quail – The breeding population in this region tends to be highly variable depending on available moisture and resulting vegetation. The region experienced an increase in breeding populations from 2011 to 2012 although 2011 was a record low for the region. While drought likely held back production, the weather was better than last year, and some reproduction occurred. Indices are still well below average for the region. There will be some quail hunting opportunities in the region although good areas will be sparse.
Prairie Chicken – While breeding populations in the eastern parts of this region were generally stable or increasing, areas of extreme western and southwest portions (Cimarron National Grasslands) saw nearly 30-percent declines last year and 65 percent declines this year. Drought remained extreme in this region, and reproductive success was likely very low. Hunting opportunities in this region will be extremely limited this fall. | <urn:uuid:a611d07f-9067-4341-92f3-f62b82e34e98> | CC-MAIN-2013-20 | http://www.kdwpt.state.ks.us/index.php/news/Hunting/Upland-Birds/Upland-Bird-Regional-Forecast | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.956535 | 3,769 | 3.484375 | 3 |
Oil & Natural Gas Projects
Transmission, Distribution, & Refining
Multispectral and Hyperspectral Remote Sensing Techniques for Natural Gas Transmission Infrastructure Systems
The goal is to help maintain the nation's natural gas transmission infrastructure through the timely and effective detection of natural gas leaks through evaluation of geobotanical stress signatures.
The remote sensing techniques being developed employ advanced spectrometer systems that produce visible and near infrared reflected light images with spatial resolution of 1 to 3 meters in 128 wavelength bands. This allows for the discrimination of individual species of plants as well as geological and man-made objects, and permits the detection of biological impacts of methane leaks or seepages in large complicated areas. The techniques employed do not require before-and-after imagery because they use the spatial patterns of plant species and health variations present in a single image to distinguish leaks. Also, these techniques should allow discrimination between the effects of small leaks and the damage caused by human incursion or natural factors such as storm run off, landslides and earthquakes. Because plants in an area can accumulate doses of leaked materials, species spatial patterns can record time-integrated effects of leaked methane. This can be important in finding leaks that would otherwise be hard to detect by direct observation of methane concentrations in the air.
This project is developing remote sensing methods of detecting, discriminating, and mapping the effects of natural gas leaks from underground pipelines. The current focus is on the effects that the increased methane soil concentrations, created by the leaks, will have on plants. These effects will be associated with extreme soil CH4 concentrations, plant sickness, and even death. Similar circumstances have been observed and studied in the effects of excessive CO2soil concentrations at Mammoth Mountain near Mammoth Lakes California, USA. At the Mammoth Mountain site, the large CO2 soil concentrations are due to the volcanic rumblings of the magma still active below Mammoth Mountain. At more subtle levels this research has been able to map, using hyperspectral air borne imagery, the tree plant stress over all of the Mammoth Mountain. These plant stress maps match, and greatly extend into surrounding regions, the on-ground CO2 emission mapping done by the USGS in Menlo Park, California.
In addition, vegetation health mapping along with altered mineralization mapping at Mammoth Mountain does reveal subtle hidden faults. These hidden faults are pathways for potential CO2 leaks, at least near the surface, over the entire region. The methods being developed use airborne hyperspectral and multi-spectral high-resolution imagery and very high resolution (0.6 meter) satellite imagery. The team has identified and worked with commercial providers of both airborne hyperspectral imagery acquisitions and high resolution satellite imagery acquisitions. Both offer competent image data post processing, so that eventually, the ongoing surveillance of pipeline corridors can be contracted for commercially. Current work under this project is focused on detecting and quantifying natural gas pipeline leaks using hyperspectral imagery from airborne or satellite based platforms through evaluation of plant stress.
Lawrence Livermore National Laboratory (LLNL) – project management and research products
NASA – Ames – Development of UAV platform used to carry hyperspectral payload
HyVista Corporation– Development and operation of the HyMap hyperspectral sensor
Livermore, CA 94511
The use of geobotanical plant stress signatures from hyperspectral imagery potential offers a unique means of detecting and quantifying the existence of natural gas leaks from the U.S. pipeline infrastructure. The method holds the potential to cover large expanses of pipeline with minimal man effort thus reducing the potential likelihood that a leak would go undetected. By increasing the effectiveness and efficiency of leak detection, the amount of gas leaked from a site can be reduced resulting in decreased environmental impact from fugitive emissions of gas, increased safety and reliability of gas delivery and increase in overall available gas; as less product is lost from the lines.
The method chosen for testing these techniques was to image the area surrounding known gas pipeline leaks. After receiving notice and location information for a newly discovered leak from research collaborator Pacific Gas and Electric (PG&E), researchers determined the area above the buried pipeline to be scanned, including some surrounding areas thought to be outside the influence of any methane that might percolate to within root depth of the surface. Flight lines were designed for the airborne acquisition program and researchers used a geographic positioning system (GPS) and digital cameras to visually record the soils, plants, minerals, waters, and manmade objects in the area while the airborne imagery was acquired. After the airborne imagery set for all flight lines was received (including raw data, data corrected to reflectance including atmospheric absorptions, and georectification control files), the data was analyzed using commercial computer software (ENVI) by a team of researchers at University of California, Santa Cruz (UCSC), Lawrence Livermore National Laboratory (LLNL), and one of the acquisition contractors.
- Created an advanced Geographic Information System (GIS) that will be able provide dynamic integration of airborne imagery, satellite imagery, and other GIS information to monitor pipelines for geobotanical leak signatures.
- Used the software to integrate hyperspectral imagery, high resolution satellite imagery, and digital elevation models of the area around a known gas leak to determine if evidence of the leak could be resolved.
- Helped develop hyperspectral imagery payload for use on an unmanned aerial vehicle developed by NASA-Ames.
- Participated in DOE-NETL sponsored natural gas pipeline leak detection demonstration in Casper, Wyoming on September 13-17, 2004. Using both the UAV hyperspectral payload (~1000 ft), and Hyvista hyperspectral platform (~5000 ft) to survey for plant stress.
Researchers used several different routines available within the ENVI program suite to produce “maps” of plant species types, plant health within species types, soil types, soil conditions, water bodies, water contents such as algae or sediments, mineralogy of exposed formations, and manmade objects. These maps were then studied for relative plant health patterns, altered mineral distributions, and other categories. The researchers then returned to the field to verify and further understand the mappings, fine-tune the results, and produce more accurate maps. Since the maps are georectified and the pixel size is 3 meters, individual objects can all be located using the maps and a handheld GPS.
These detailed maps show areas of existing anomalous conditions such as plant kills and linear species modifications caused by subtle hidden faults, modifications of the terrain due to pipeline work or encroachment. They are also the “baseline” that can be used to chart any future changes by re-imaging the area routinely to monitor and document any effects caused by significant methane leakage.
The sensors used for image acquisition are hyperspectral scanners, one of which provides 126 bands across the reflective solar wavelength region of 0.45 – 2.5 nm with contiguous spectral coverage (except in the atmospheric water vapor bands) and bandwidths between 15 – 20 nm. This sensor operates on a 3-axis gyro-stabilized platform to minimize image distortion due to aircraft motion and provides a signal to noise ratio >500:1. Geo-location and image geo-coding is achieved with an-on board Differential GPS (DGPS) and an integrated IMU (inertial monitoring unit).
During a DOE – NETL sponsored natural gas leak detection demonstration at the National Petroleum Reserve 3 (NPR3) site of the Rocky Mountain Oilfield Testing Center (RMOTC) outside of Casper, Wyoming, the project utilized hyperspectral imaging of vegetation to sense plant stress related to the presence of natural gas on a simulated pipeline using actual natural gas releases. The spectral signature of sunlight reflected from vegetation was used to determine vegetation health. Two different platforms were used for imaging the virtual pipeline path: a Twin Otter aircraft flying at an altitude of about 5,000 feet above ground level that imaged the entire site in strips, and an unmanned autonomous vehicle (UAV) flying at an altitude of approximately 1,000 feet above ground level that imaged an area surrounding the virtual pipeline.
The manned hyperspectral imaging took place on two days. Wednesday, September 9 and Wednesday, September 15. The underground leaks were started on August 30. This was done to allow time for the methane from the leaks to saturate the soils and produce plant stress by excluding oxygen from the plant root systems. On both days, the entire NPR3-RMOTC site was successfully imaged.
At that time of year, the vegetation at NPR3-RMOTC was largely in hibernation. The exception was in the gullies where there was some moisture. Therefore, the survey looked for unusually stressed plant “patches” in the gullies as possible leak points. Several spots were found in the hyperspectral imagery that had the spectral signature typical of sick vegetation that were several pixels in diameter in locations in the gullies or ravines along the virtual pipeline route. Due to the limited vegetation along the test route the successful detection of natural gas leaks through imaging of plant stress was limited in success. The technique did demonstrate an ability to show plant stress in areas near leak sites but was less successful in determining general leak severity based on those results. In areas with much denser vegetation coverage and less dormant plant life the method still shows promise.
| Airborne hyperspectral imagery unit - close-up
|| Airborne hyperspectral imagery unit - on plane
Overall results from the DOE-NETL sponsored natural gas leak detection demonstration can be found in the demonstration final report [PDF-7370KB] .
Current Status and Remaining Tasks:
All work under this project has been completed.
Project Start: August 13, 2001
Project End: December 31, 2005
DOE Contribution: $966,900
Performer Contribution: $0
NETL – Richard Baker (email@example.com or 304-285-4714)
LLNL – Dr. William L. Pickles (firstname.lastname@example.org or 925-422-7812)
DOE Leak Detection Technology Demonstration Final Report [PDF-7370KB]
DOE Fossil Energy Techline: National Labs to Strengthen Natural Gas Pipelines' Integrity, Reliability
Status Assessment [PDF-26KB] | <urn:uuid:14f91c40-6ff9-4e80-8412-73a8f1b2b57e> | CC-MAIN-2013-20 | http://www.netl.doe.gov/technologies/oil-gas/NaturalGas/Projects_n/TDS/TD/T%26D_A_FEW0104-0085Multispectral.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.92439 | 2,141 | 2.828125 | 3 |
Notes on the Bible, by Albert Barnes, at sacred-texts.com
Now the word of the Lord - , literally, "And, ..." This is the way in which the several inspired writers of the Old Testament mark that what it was given them to write was united onto those sacred books which God had given to others to write, and it formed with them one continuous whole. The word, "And," implies this. It would do so in any language, and it does so in Hebrew as much as in any other. As neither we, nor any other people, would, without any meaning, use the word, And, so neither did the Hebrews. It joins the four first books of Moses together; it carries on the history through Joshua, Judges, the Books of Samuel and of the Kings. After the captivity, Ezra and Nehemiah begin again where the histories before left off; the break of the captivity is bridged over; and Ezra, going back in mind to the history of God's people before the captivity, resumes the history, as if it had been of yesterday, "And in the first year of Cyrus." It joins in the story of the Book of Ruth before the captivity, and that of Esther afterward. At times, even prophets employ it, in using the narrative form of themselves, as Ezekiel, "and it was in the thirtieth year, in the fourth month, in the fifth day of the month, and I was in the captivity by the river of Chebar, the heavens opened and I saw." If a prophet or historian wishes to detach his prophecy or his history, he does so; as Ezra probably began the Book of Chronicles anew from Adam, or as Daniel makes his prophecy a whole by itself. But then it is the more obvious that a Hebrew prophet or historian, when he does begin with the word, "And," has an object in so beginning; he uses an universal word of all languages in its uniform meaning in all language, to join things together.
And yet more precisely; this form, "and the word of the Lord came to - saying," occurs over and over again, stringing together the pearls of great price of God's revelations, and uniting this new revelation to all those which had preceded it. The word, "And," then joins on histories with histories, revelations with revelations, uniting in one the histories of God's works and words, and blending the books of Holy Scripture into one divine book.
But the form of words must have suggested to the Jews another thought, which is part of our thankfulness and of our being, Act 11:18, "then to the Gentiles also hath God given repentance unto life." The words are the self-same familiar words with which some fresh revelation of God's will to His people had so often been announced. Now they are prefixed to God's message to the pagan, and so as to join on that message to all the other messages to Israel. Would then God deal thenceforth with the pagan as with the Jews? Would they have their prophets? Would they be included in the one family of God? The mission of Jonah in itself was an earnest that they would, for God, Who does nothing fitfully or capriciously, in that He had begun, gave an earnest that He would carry on what He had begun. And so thereafter, the great prophets, Isaiah, Jeremiah, Ezekiel, were prophets to the nations also; Daniel was a prophet among them, to them as well as to their captives.
But the mission of Jonah might, so far, have been something exceptional. The enrolling his book, as an integral part of the Scriptures, joining on that prophecy to the other prophecies to Israel, was an earnest that they were to be parts of one system. But then it would be significant also, that the records of God's prophecies to the Jews, all embodied the accounts of their impenitence. Here is inserted among them an account of God's revelation to the pagan, and their repentance. "So many prophets had been sent, so many miracles performed, so often had captivity been foreannounced to them for the multitude of their sins. and they never repented. Not for the reign of one king did they cease from the worship of the calves; not one of the kings of the ten tribes departed from the sins of Jeroboam? Elijah, sent in the Word and Spirit of the Lord, had done many miracles, yet obtained no abandonment of the calves. His miracles effected this only, that the people knew that Baal was no god, and cried out, "the Lord He is the God." Elisha his disciple followed him, who asked for a double portion of the Spirit of Elijah, that he might work more miracles, to bring back the people.
He died, and, after his death as before it, the worship of the calves continued in Israel. The Lord marveled and was weary of Israel, knowing that if He sent to the pagan they would hear, as he saith to Ezekiel. To make trial of this, Jonah was chosen, of whom it is recorded in the Book of Kings that he prophesied the restoration of the border of Israel. When then he begins by saying, "And the word of the Lord came to Jonah," prefixing the word "And," he refers us back to those former things, in this meaning. The children have not hearkened to what the Lord commanded, sending to them by His servants the prophets, but have hardened their necks and given themselves up to do evil before the Lord and provoke Him to anger; "and" therefore "the word of the Lord came to Jonah, saying, Arise and go to Nineveh that great city, and preach unto her," that so Israel may be shewn, in comparison with the pagan, to be the more guilty, when the Ninevites should repent, the children of Israel persevered in unrepentance."
Jonah the son of Amittai - Both names occur here only in the Old Testament. Jonah signifies "Dove," Amittai, "the truth of God." Some of the names of the Hebrew prophets so suit in with their times, that they must either have been given them prophetically, or assumed by themselves, as a sort of watchword, analogous to the prophetic names, given to the sons of Hosea and Isaiah. Such were the names of Elijah and Elisha, "The Lord is my God," "my God is salvation." Such too seems to be that of Jonah. The "dove" is everywhere the symbol of "mourning love." The side of his character which Jonah records is that of his defect, his want of trust in God, and so his unloving zeal against those, who were to be the instruments of God against his people. His name perhaps preserves that character by which he willed to be known among his people, one who moaned or mourned over them.
Arise, go to Nineveh, that great city - The Assyrian history, as far as it has yet been discovered, is very bare of events in regard to this period. We have as yet the names of three kings only for 150 years. But Assyria, as far as we know its history, was in its meridian. Just before the time of Jonah, perhaps ending in it, were the victorious reigns of Shalmanubar and Shamasiva; after him was that of Ivalush or Pul, the first aggressor upon Israel. It is clear that this was a time of Assyrian greatness: since God calls it "that great city," not in relation to its extent only, but its power. A large weak city would not have been called a "great city unto God" Jon 3:3.
And cry against it - The substance of that cry is recorded afterward, but God told to Jonah now, what message he was to cry aloud to it. For Jonah relates afterward, how he expostulated now with God, and that his expostulation was founded on this, that God was so merciful that He would not fulfill the judgment which He threatened. Faith was strong in Jonah, while, like the Apostles "the sons of thunder," before the Day of Pentecost, he knew not "what spirit he was of." Zeal for the people and, as he doubtless thought, for the glory of God, narrowed love in him. He did not, like Moses, pray Exo 32:32, "or else blot me also out of Thy book," or like Paul, desire even to be "an anathema from Christ" Rom 9:3 for his people's sake, so that there might be more to love his Lord. His zeal was directed, like that of the rebuked Apostles, against others, and so it too was rebuked. But his faith was strong. He shrank back from the office, as believing, not as doubting, the might of God. He thought nothing of preaching, amid that multitude of wild warriors, the stern message of God. He was willing, alone, to confront the violence of a city of 600,000, whose characteristic was violence. He was ready, at God's bidding, to enter what Nahum speaks of as a den of lions Nah 2:11-12; "The dwelling of the lions and the feeding-place of the young lions, where the lion did tear in pieces enough for his whelps, and strangled for his lionesses." He feared not the fierceness of their lion-nature, but God's tenderness, and lest that tenderness should be the destruction of his own people.
Their wickedness is come up before Me - So God said to Cain, Gen 4:10. "The voice of thy brother's blood crieth unto Me from the ground:" and of Sodom Gen 18:20-21, "The cry of Sodom and Gomorrah is great, because their sin is very grievous; the cry of it is come up unto Me." The "wickedness" is not the mere mass of human sin, of which it is said Jo1 5:19, "the whole world lieth in wickedness," but evil-doing toward others. This was the cause of the final sentence on Nineveh, with which Nahum closes his prophecy, "upon whom hath not thy wickedness passed continually?" It had been assigned as the ground of the judgment on Israel through Nineveh Hos 10:14-15. "So shall Bethel do unto you, on account of the wickedness of your wickedness." It was the ground of the destruction by the flood Gen 6:5. "God saw that the wickedness of man was great upon the earth." God represents Himself, the Great Judge, as sitting on His Throne in heaven, Unseen but All-seeing, to whom the wickedness and oppressiveness of man against man "goes up," appealing for His sentence against the oppressor. The cause seems ofttimes long in pleading. God is long-suffering with the oppressor too, that if so be, he may repent. So would a greater good come to the oppressed also, if the wolf became a lamb. But meanwhile, "every iniquity has its own voice at the hidden judgment seat of God." Mercy itself calls for vengeance on the unmerciful.
But (And) Jonah rose up to flee ... from the presence of the Lord - literally "from being before the Lord." Jonah knew well, that man could not escape from the presence of God, whom he knew as the Self-existing One, He who alone is, the Maker of heaven, earth and sea. He did not "flee" then "from His presence," knowing well what David said Psa 139:7, Psa 139:9-10, "whither shall I go from Thy Spirit, or whither shall I flee from Thy presence? If I take the wings of the morning, and dwell in the uttermost parts of the sea, even there shall Thy hand lead me and Thy right hand shall hold me." Jonah fled, not from God's presence, but from standing before him, as His servant and minister. He refused God's service, because, as he himself tells God afterward Jon 4:2, he knew what it would end in, and he misliked it.
So he acted, as people often do, who dislike God's commands. He set about removing himself as far as possible from being under the influence of God, and from the place where he "could" fulfill them. God commanded him to go to Nineveh, which lay northeast from his home; and he instantly set himself to flee to the then furthermost west. Holy Scripture sets the rebellion before us in its full nakedness. "The word of the Lord came unto Jonah, go to Nineveh, and Jonah rose up;" he did something instantly, as the consequence of God's command. He "rose up," not as other prophets, to obey, but to disobey; and that, not slowly nor irresolutely, but "to flee, from" standing "before the Lord." He renounced his office. So when our Lord came in the flesh, those who found what He said to be "hard sayings," went away from Him, "and walked no more with Him" Joh 6:66. So the rich "young man went away sorrowful Mat 19:22, for he had great possessions."
They were perhaps afraid of trusting themselves in His presence; or they were ashamed of staying there, and not doing what He said. So men, when God secretly calls them to prayer, go and immerse themselves in business; when, in solitude, He says to their souls something which they do not like, they escape His Voice in a throng. If He calls them to make sacrifices for His poor, they order themselves a new dress or some fresh sumptuousness or self-indulgence; if to celibacy, they engage themselves to marry immediately; or, contrariwise, if He calls them not to do a thing, they do it at once, to make an end of their struggle and their obedience; to put obedience out of their power; to enter themselves on a course of disobedience. Jonah, then, in this part of his history, is the image of those who, when God calls them, disobey His call, and how He deals with them, when he does not abandon them. He lets them have their way for a time, encompasses them with difficulties, so that they shall "flee back from God displeased to God appeased."
"The whole wisdom, the whole bliss, the whole of man lies in this, to learn what God wills him to do, in what state of life, calling, duties, profession, employment, He wills him to serve Him." God sent each one of us into the world, to fulfill his own definite duties, and, through His grace, to attain to our own perfection in and through fulfilling them. He did not create us at random, to pass through the world, doing whatever self-will or our own pleasure leads us to, but to fulfill His will. This will of His, if we obey His earlier calls, and seek Him by prayer, in obedience, self-subdual, humility, thoughtfulness, He makes known to each by His own secret drawings, and, in absence of these, at times by His Providence or human means. And then , "to follow Him is a token of predestination." It is to place ourselves in that order of things, that pathway to our eternal mansion, for which God created us, and which God created for us.
So Jesus says Joh 10:27-28, "My sheep hear My voice and I know them, and they follow Me, and I give unto them eternal life, and they shall never perish, neither shall any man pluck them out of My Hand." In these ways, God has foreordained for us all the graces which we need; in these, we shall be free from all temptations which might be too hard for us, in which our own special weakness would be most exposed. Those ways, which people choose out of mere natural taste or fancy, are mostly those which expose them to the greatest peril of sin and damnation. For they choose them, just because such pursuits flatter most their own inclinations, and give scope to their natural strength and their moral weakness. So Jonah, disliking a duty, which God gave him to fulfill, separated himself from His service, forfeited his past calling, lost, as far as in him lay, his place among "the goodly fellowship of the prophets," and, but for God's overtaking grace, would have ended his days among the disobedient. As in Holy Scripture, David stands alone of saints, who had been after their calling, bloodstained; as the penitent robber stands alone converted in death; as Peter stands singly, recalled after denying his Lord; so Jonah stands, the one prophet, who, having obeyed and then rebelled, was constrained by the overpowering providence and love of God, to return and serve Him.
"Being a prophet, Jonah could not be ignorant of the mind of God, that, according to His great Wisdom and His unsearchable judgments and His untraceable and incomprehensible ways, He, through the threat, was providing for the Ninevites that they should not suffer the things threatened. To think that Jonah hoped to hide himself in the sea and elude by flight the great Eye of God, were altogether absurd and ignorant, which should not be believed, I say not of a prophet, but of no other sensible person who had any moderate knowledge of God and His supreme power. Jonah knew all this better than anyone, that, planning his flight, he changed his place, but did not flee God. For this could no man do, either by hiding himself in the bosom of the earth or depths of the sea or ascending (if possible) with wings into the air, or entering the lowest hell, or encircled with thick clouds, or taking any other counsel to secure his flight.
This, above all things and alone, can neither be escaped nor resisted, God. When He willeth to hold and grasp in His Hand, He overtaketh the swift, baffleth the intelligent, overthroweth the strong, boweth the lofty, tameth rashness, subdueth might. He who threatened to others the mighty Hand of God, was not himself ignorant of nor thought to flee, God. Let us not believe this. But since he saw the fall of Israel and perceived that the prophetic grace would pass over to the Gentiles, he withdrew himself from the office of preaching, and put off the command." "The prophet knoweth, the Holy Spirit teaching him, that the repentance of the Gentiles is the ruin of the Jews. A lover then of his country, he does not so much envy the deliverance of Nineveh, as will that his own country should not perish. - Seeing too that his fellow-prophets are sent to the lost sheep of the house of Israel, to excite the people to repentance, and that Balaam the soothsayer too prophesied of the salvation of Israel, he grieveth that he alone is chosen to be sent to the Assyrians, the enemies of Israel, and to that greatest city of the enemies where was idolatry and ignorance of God. Yet more he feared lest they, on occasion of his preaching, being converted to repentance, Israel should be wholly forsaken. For he knew by the same Spirit whereby the preaching to the Gentiles was entrusted to him, that the house of Israel would then perish; and he feared that what was at one time to be, should take place in his own time." "The flight of the prophet may also be referred to that of man in general who, despising the commands of God, departed from Him and gave himself to the world, where subsequently, through the storms of ill and the wreck of the whole world raging against him, he was compelled to feel the presence of God, and to return to Him whom he had fled. Whence we understand, that those things also which men think for their good, when against the will of God, are turned to destruction; and help not only does not benefit those to whom it is given, but those too who give it, are alike crushed. As we read that Egypt was conquered by the Assyrians, because it helped Israel against the will of God. The ship is emperiled which had received the emperiled; a tempest arises in a calm; nothing is secure, when God is against us."
Tarshish - , named after one of the sons of Javan, Gen 10:4, was an ancient merchant city of Spain, once proverbial for its wealth (Psa 72:10; Strabo iii. 2. 14), which supplied Judaea with silver Jer 10:9, Tyre with "all manner of riches," with iron also, tin, lead. Eze 27:12, Eze 27:25. It was known to the Greeks and Romans, as (with a harder pronunciation) Tartessus; but in our first century, it had either ceased to be, or was known under some other name. Ships destined for a voyage, at that time, so long, and built for carrying merchandise, were naturally among the largest then constructed. "Ships of Tarshish" corresponded to the "East-Indiamen" which some of us remember. The breaking of "ships of Tarshish by the East Wind" Psa 48:7 is, on account of their size and general safety, instanced as a special token of the interposition of God.
And went down to Joppa - Joppa, now Jaffa, was the one well-known port of Israel on the Mediterranean. There the cedars were brought from Lebanon for both the first and second temple Ch2 3:16; Ezr 2:7. Simon the Maccabee (1 Macc. 14:5) "took it again for a haven, and made an entrance to the isles of the sea." It was subsequently destroyed by the Romans, as a pirate-haven. (Josephus, B. J. iii. 9. 3, and Strabo xvi. 2. 28.) At a later time, all describe it as an unsafe haven. Perhaps the shore changed, since the rings, to which Andromeda was fabled to have been fastened, and which probably were once used to moor vessels, were high above the sea. Perhaps, like the Channel Islands, the navigation was safe to those who knew the coast, unsafe to others. To this port Jonah "went down" from his native country, the mountain district of Zabulon. Perhaps it was not at this time in the hands of Israel. At least, the sailors were pagan. He "went down," as the man who fell among the thieves, is said to "have gone down from Jerusalem to Jericho." Luk 10:30. He "went down" from the place which God honored by His presence and protection.
And he paid the fare thereof - Jonah describes circumstantially, how he took every step to his end. He went down, found a strongly built ship going where he wished, paid his fare, embarked. He seemed now to have done all. He had severed himself from the country where his office lay. He had no further step to take. Winds and waves would do the rest. He had but to be still. He went, only to be brought back again.
"Sin brings our soul into much senselessness. For as those overtaken by heaviness of head and drunkenness, are borne on simply and at random, and, be there pit or precipice or whatever else below them, they fall into it unawares; so too, they who fall into sin, intoxicated by their desire of the object, know not what they do, see nothing before them, present or future. Tell me, Fleest thou the Lord? Wait then a little, and thou shalt learn from the event, that thou canst not escape the hands of His servant, the sea. For as soon as he embarked, it too roused its waves and raised them up on high; and as a faithful servant, finding her fellow-slave stealing some of his master's property, ceases not from giving endless trouble to those who take him in, until she recover him, so too the sea, finding and recognizing her fellow-servant, harasses the sailors unceasingly, raging, roaring, not dragging them to a tribunal but threatening to sink the vessel with all its unless they restore to her, her fellow-servant."
"The sinner "arises," because, will he, nill he, toil he must. If he shrinks from the way of God, because it is hard, he may not yet be idle. There is the way of ambition, of covetousness, of pleasure, to be trodden, which certainly are far harder. 'We wearied ourselves (Wisdom 5:7),' say the wicked, 'in the way of wickedness and destruction, yea, we have gone through deserts where there lay no way; but the way of the Lord we have not known.' Jonah would not arise, to go to Nineveh at God's command; yet he must needs arise, to flee to Tarshish from before the presence of God. What good can he have who fleeth the Good? what light, who willingly forsaketh the Light? "He goes down to Joppa." Wherever thou turnest, if thou depart from the will of God, thou goest down. Whatever glory, riches, power, honors, thou gainest, thou risest not a whit; the more thou advancest, while turned from God, the deeper and deeper thou goest down. Yet all these things are not had, without paying the price. At a price and with toil, he obtains what he desires; he receives nothing gratis, but, at great price purchases to himself storms, griefs, peril. There arises a great tempest in the sea, when various contradictory passions arise in the heart of the sinner, which take from him all tranquility and joy. There is a tempest in the sea, when God sends strong and dangerous disease, whereby the frame is in peril of being broken. There is a tempest in the sea, when, thro' rivals or competitors for the same pleasures, or the injured, or the civil magistrate, his guilt is discovered, he is laden with infamy and odium, punished, withheld from his wonted pleasures. Psa 107:23-27. "They who go down to the sea of this world, and do business in mighty waters - their soul melteth away because of trouble; they reel to and fro and stagger like a drunken man, and all their wisdom is swallowed up."
But (And) the Lord sent out - (literally 'cast along'). Jonah had done his all. Now God's part began. This He expresses by the word, "And." Jonah took "his" measures, "and" now God takes "His." He had let him have his way, as He often deals with those who rebel against Him. He lets them have their way up to a certain point. He waits, in the tranquility of His Almightiness, until they have completed their preparations; and then, when man has ended, He begins, that man may see the more that it is His doing . "He takes those who flee from Him in their flight, the wise in their counsels, sinners in their conceits and sins, and draws them back to Himself and compels them to return. Jonah thought to find rest in the sea, and lo! a tempest." Probably, God summoned back Jonah, as soon as he had completed all on his part, and sent the tempest, soon after he left the shore.
At least, such tempests often swept along that shore, and were known by their own special name, like the Euroclydon off Crete. Jonah too alone had gone down below deck to sleep, and, when the storm came, the mariners thought it possible to put back. Josephus says of that shore, "Joppa having by nature no haven, for it ends in a rough shore, mostly abrupt, but for a short space having projections, i. e., deep rocks and cliffs advancing into the sea, inclining on either side toward each other (where the traces of the chains of Andromeda yet shown accredit the antiquity of the fable,) and the north wind beating right on the shore, and dashing the high waves against the rocks which receive them, makes the station there a harborless sea. As those from Joppa were tossing here, a strong wind (called by those who sail here, the black north wind) falls upon them at daybreak, dashing straightway some of the ships against each other, some against the rocks, and some, forcing their way against the waves to the open sea, (for they fear the rocky shore ...) the breakers towering above them, sank."
The ship was like - (literally 'thought') to be broken - Perhaps Jonah means by this very vivid image to exhibit the more his own dullness. He ascribes, as it were, to the ship a sense of its own danger, as she heaved and rolled and creaked and quivered under the weight of the storm which lay on her, and her masts groaned, and her yard-arms shivered. To the awakened conscience everything seems to have been alive to God's displeasure, except itself.
And cried, every man unto his God - They did what they could. "Not knowing the truth, they yet know of a Providence, and, amid religious error, know that there is an Object of reverence." In ignorance they had received one who offended God. And now God, "whom they ignorantly worshiped" Act 17:23, while they cried to the gods, who, they thought, disposed of them, heard them. They escaped with the loss of their wares, but God saved their lives and revealed Himself to them. God hears ignorant prayer, when ignorance is not willful and sin.
To lighten it of them - , literally "to lighten from against them, to lighten" what was so much "against them," what so oppressed them. "They thought that the ship was weighed down by its wonted lading, and they knew not that the whole weight was that of the fugitive prophet." "The sailors cast forth their wares," but the ship was not lightened. For the whole weight still remained, the body of the prophet, that heavy burden, not from the nature of the body, but from the burden of sin. For nothing is so onerous and heavy as sin and disobedience. Whence also Zechariah Zac 5:7 represented it under the image of lead. And David, describing its nature, said Psa 38:4, "my wickednesses are gone over my head; as a heavy burden they are too heavy for me." And Christ cried aloud to those who lived in many sins, Mat 11:28. "Come unto Me, all ye that labor and are heavy-laden, and I will refresh you."
Jonah was gone down - , probably before the beginning of the storm, not simply before the lightening of the vessel. He could hardly have fallen asleep "then." A pagan ship was a strange place for a prophet of God, not as a prophet, but as a fugitive; and so, probably, ashamed of what he had done, he had withdrawn from sight and notice. He did not embolden himself in his sin, but shrank into himself. The conscience most commonly awakes, when the sin is done. It stands aghast at itself; but Satan, if he can, cuts off its retreat. Jonah had no retreat now, unless God had made one.
And was fast asleep - The journey to Joppa had been long and hurried; he had "fled." Sorrow and remorse completed what fatigue began. Perhaps he had given himself up to sleep, to dull his conscience. For it is said, "he lay down and was fast asleep." Grief produces sleep; from where it is said of the apostles in the night before the Lord's Passion, when Jesus "rose up from prayer and was come to His disciples, He found them sleeping for sorrow" Luk 22:45 . "Jonah slept heavily. Deep was the sleep, but it was not of pleasure but of grief; not of heartlessness, but of heavy-heartedness. For well-disposed servants soon feel their sins, as did he. For when the sin has been done, then he knows its frightfulness. For such is sin. When born, it awakens pangs in the soul which bare it, contrary to the law of our nature. For so soon as we are born, we end the travail-pangs; but sin, so soon as born, rends with pangs the thoughts which conceived it." Jonah was in a deep sleep, a sleep by which he was fast held and bound; a sleep as deep as that from which Sisera never woke. Had God allowed the ship to sink, the memory of Jonah would have been that of the fugitive prophet. As it is, his deep sleep stands as an image of the lethargy of sin . "This most deep sleep of Jonah signifies a man torpid and slumbering in error, to whom it sufficed not to flee from the face of God, but his mind, drowned in a stupor and not knowing the displeasure of God, lies asleep, steeped in security."
What meanest thou? - or rather, "what aileth thee?" (literally "what is to thee?") The shipmaster speaks of it (as it was) as a sort of disease, that he should be thus asleep in the common peril. "The shipmaster," charged, as he by office was, with the common weal of those on board, would, in the common peril, have one common prayer. It was the prophet's office to call the pagan to prayers and to calling upon God. God reproved the Scribes and Pharisees by the mouth of the children who "cried Hosanna" Mat 21:15; Jonah by the shipmaster; David by Abigail; Sa1 25:32-34; Naaman by his servants. Now too he reproves worldly priests by the devotion of laymen, sceptic intellect by the simplicity of faith.
If so be that God will think upon us - , (literally "for us") i. e., for good; as David says, Psa 40:17. "I am poor and needy, the Lord thinketh upon" (literally "for") "me." Their calling upon their own gods had failed them. Perhaps the shipmaster had seen something special about Jonah, his manner, or his prophet's garb. He does not only call Jonah's God, "thy" God, as Darius says to Daniel "thy God" Dan 6:20, but also "the God," acknowledging the God whom Jonah worshiped, to be "the God." It is not any pagan prayer which he asks Jonah to offer. It is the prayer of the creature in its need to God who can help; but knowing its own ill-desert, and the separation between itself and God, it knows not whether He will help it. So David says Psa 25:7, "Remember not the sins of my youth nor my transgressions; according to Thy mercy remember Thou me for Thy goodness' sake, O Lord."
"The shipmaster knew from experience, that it was no common storm, that the surges were an infliction borne down from God, and above human skill, and that there was no good in the master's skill. For the state of things needed another Master who ordereth the heavens, and craved the guidance from on high. So then they too left oars, sails, cables, gave their hands rest from rowing, and stretched them to heaven and called on God."
Come, and let us cast lots - Jonah too had probably prayed, and his prayers too were not heard. Probably, too, the storm had some unusual character about it, the suddenness with which it burst upon them, its violence, the quarter from where it came, its whirlwind force . "They knew the nature of the sea, and, as experienced sailors, were acquainted with the character of wind and storm, and had these waves been such as they had known before, they would never have sought by lot for the author of the threatened wreck, or, by a thing uncertain, sought to escape certain peril." God, who sent the storm to arrest Jonah and to cause him to be cast into the sea, provided that its character should set the mariners on divining, why it came. Even when working great miracles, God brings about, through man, all the forerunning events, all but the last act, in which He puts forth His might. As, in His people, he directed the lot to fall upon Achan or upon Jonathan, so here He overruled the lots of the pagan sailors to accomplish His end. " We must not, on this precedent, immediately trust in lots, or unite with this testimony that from the Acts of the Apostles, when Matthias was by lot elected to the apostolate, since the privileges of individuals cannot form a common law." "Lots," according to the ends for which they were cast, were for:
i) The lot for dividing is not wrong if not used,
1) "without any necessity, for this would be to tempt God:"
2) "if in case of necessity, not without reverence of God, as if Holy Scripture were used for an earthly end," as in determining any secular matter by opening the Bible:
3) for objects which ought to be decided otherwise, (as, an office ought to be given to the fittest:)
4) in dependence upon any other than God Pro 16:33. "The lot is cast into the lap, but the whole disposing of it is the Lord's." So then they are lawful "in secular things which cannot otherwise be conveniently distributed," or when there is no apparent reason why, in any advantage or disadvantage, one should be preferred to another." Augustine even allows that, in a time of plague or persecution, the lot might be cast to decide who should remain to administer the sacraments to the people, lest, on the one side, all should be taken away, or, on the other, the Church be deserted.
ii.) The lot for consulting, i. e., to decide what one should do, is wrong, unless in a matter of mere indifference, or under inspiration of God, or in some extreme necessity where all human means fail.
iii.) The lot for divining, i. e., to learn truth, whether of things present or future, of which we can have no human knowledge, is wrong, except by direct inspiration of God. For it is either to tempt God who has not promised so to reveal things, or, against God, to seek superhuman knowledge by ways unsanctioned by Him. Satan may readily mix himself unknown in such inquiries, as in mesmerism. Forbidden ground is his own province.
God overruled the lot in the case of Jonah, as He did the sign which the Philistines sought. "He made the heifers take the way to Bethshemesh, that the Philistines might know that the plague came to them, not by chance, but from Himself." "The fugitive (Jonah) was taken by lot, not by any virtue of the lots, especially the lots of pagan, but by the will of Him who guided the uncertain lots." "The lot betrayed the culprit. Yet not even thus did they cast him over; but, even while such a tumult and storm lay on them, they held, as it were, a court in the vessel, as though in entire peace, and allowed him a hearing and defense, and sifted everything accurately, as men who were to give account of their judgment. Hear them sifting all as in a court - The roaring sea accused him; the lot convicted and witnessed against him, yet not even thus did they pronounce against him - until the accused should be the accuser of his own sin. The sailors, uneducated, untaught, imitated the good order of courts. When the sea scarcely allowed them to breathe, whence such forethought about the prophet? By the disposal of God. For God by all this instructed the prophet to be humane and mild, all but saying aloud to him; 'Imitate these uninstructed sailors. They think not lightly of one soul, nor are unsparing as to one body, thine own. But thou, for thy part, gavest up a whole city with so many myriads. They, discovering thee to be the cause of the evils which befell them, did not even thus hurry to condemn thee. Thou, having nothing whereof to accuse the Ninevites, didst sink and destroy them. Thou, when I bade thee go and by thy preaching call them to repentance, obeyedst not; these, untaught, do all, compass all, in order to recover thee, already condemned, from punishment.'"
Tell us, for whose cause - Literally "for what to whom." It may be that they thought that Jonah had been guilty toward some other. The lot had pointed him out. The mariners, still fearing to do wrong, ask him thronged questions, to know why the anger of God followed him; "what" hast thou done "to whom?" "what thine occupation?" i. e., either his ordinary occupation, whether it was displeasing to God? or this particular business in which he was engaged, and for which he had come on board. Questions so thronged have been admired in human poetry, Jerome says. For it is true to nature. They think that some one of them will draw forth the answer which they wish. It may be that they thought that his country, or people, or parents, were under the displeasure of God. But perhaps, more naturally, they wished to "know all about him," as people say. These questions must have gone home to Jonah's conscience. "What is thy business?" The office of prophet which he had left. "Whence comest thou?" From standing before God, as His minister. "What thy country? of what people art thou?" The people of God, whom he had quitted for pagan; not to win them to God, as He commanded; but, not knowing what they did, to abet him in his flight.
What is thine occupation? - They should ask themselves, who have Jonah's office to speak in the name of God, and preach repentance . "What should be thy business, who hast consecrated thyself wholly to God, whom God has loaded with daily benefits? who approachest to Him as to a Friend? "What is thy business?" To live for God, to despise the things of earth, to behold the things of heaven," to lead others heavenward.
Jonah answers simply the central point to which all these questions tended:
I am an Hebrew - This was the name by which Israel was known to foreigners. It is used in the Old Testament, only when they are spoken of by foreigners, or speak of themselves to foreigners, or when the sacred writers mention them in contrast with foreigners . So Joseph spoke of his land Gen 40:15, and the Hebrew midwives Exo 1:19, and Moses' sister Exo 2:7, and God in His commission to Moses Exo 3:18; Exo 7:16; Exo 9:1 as to Pharaoh, and Moses in fulfilling it Exo 5:3. They had the name, as having passed the River Euphrates, "emigrants." The title might serve to remind themselves, that they were "strangers" and "pilgrims," Heb 11:13. whose fathers had left their home at God's command and for God , "passers by, through this world to death, and through death to immortality."
And I fear the Lord - , i. e., I am a worshiper of Him, most commonly, one who habitually stands in awe of Him, and so one who stands in awe of sin too. For none really fear God, none fear Him as sons, who do not fear Him in act. To be afraid of God is not to fear Him. To be afraid of God keeps men away from God; to fear God draws them to Him. Here, however, Jonah probably meant to tell them, that the Object of his fear and worship was the One Self-existing God, He who alone is, who made all things, in whose hands are all things. He had told them before, that he had fled "from being before Yahweh." They had not thought anything of this, for they thought of Yahweh, only as the God of the Jews. Now he adds, that He, Whose service he had thus forsaken, was "the God of heaven, Who made the sea and dry land," that sea, whose raging terrified them and threatened their lives. The title, "the God of heaven," asserts the doctrine of the creation of the heavens by God, and His supremacy.
Hence, Abraham uses it to his servant Gen 24:7, and Jonah to the pagan mariners, and Daniel to Nebuchadnezzar Dan 2:37, Dan 2:44; and Cyrus in acknowledging God in his proclamation Ch2 36:23; Ezr 1:2. After his example, it is used in the decrees of Darius Ezr 6:9-10 and Artaxerxes Ezr 7:12, Ezr 7:21, Ezr 7:23, and the returned exiles use it in giving account of their building the temple to the Governor Ezr 5:11-12. Perhaps, from the habit of contact with the pagan, it is used once by Daniel Dan 2:18 and by Nehemiah Neh 1:4-5; Neh 2:4, Neh 2:20. Melchizedek, not perhaps being acquainted with the special name, Yahweh, blessed Abraham in the name of "God, the Possessor" or "Creator of heaven and earth" Gen 14:19, i. e., of all that is. Jonah, by using it, at once taught the sailors that there is One Lord of all, and why this evil had fallen on them, because they had himself with them, the renegade servant of God. "When Jonah said this, he indeed feared God and repented of his sin. If he lost filial fear by fleeing and disobeying, he recovered it by repentance."
Then were the men exceedingly afraid - Before, they had feared the tempest and the loss of their lives. Now they feared God. They feared, not the creature but the Creator. They knew that what they had feared was the doing of His Almightiness. They felt how awesome a thing it was to be in His Hands. Such fear is the beginning of conversion, when people turn from dwelling on the distresses which surround them, to God who sent them.
Why hast thou done this? - They are words of amazement and wonder. Why hast thou not obeyed so great a God, and how thoughtest thou to escape the hand of the Creator ? "What is the mystery of thy flight? Why did one, who feared God and had revelations from God, flee, sooner than go to fulfill them? Why did the worshiper of the One true God depart from his God?" "A servant flee from his Lord, a son from his father, man from his God!" The inconsistency of believers is the marvel of the young Christian, the repulsion of those without, the hardening of the unbeliever. If people really believed in eternity, how could they be thus immersed in things of time? If they believed in hell, how could they so hurry there? If they believed that God died for them, how could they so requite Him? Faith without love, knowledge without obedience, conscious dependence and rebellion, to be favored by God yet to despise His favor, are the strangest marvels of this mysterious world.
All nature seems to cry out to and against the unfaithful Christian, "why hast thou done this?" And what a why it is! A scoffer has recently said so truthfully : "Avowed scepticism cannot do a tenth part of the injury to practical faith, that the constant spectacle of the huge mass of worldly unreal belief does." It is nothing strange, that the world or unsanctified intellect should reject the Gospel. It is a thing of course, unless it be converted. But, to know, to believe, and to DISOBEY! To disobey God, in the name of God. To propose to halve the living Gospel, as the woman who had killed her child Kg1 3:26, and to think that the poor quivering remnants would be the living Gospel anymore! As though the will of God might, like those lower forms of His animal creation, be divided endlessly, and, keep what fragments we will, it would still be a living whole, a vessel of His Spirit! Such unrealities and inconsistencies would be a sore trial of faith, had not Jesus, who (cf. Joh 2:25), "knew what is in man," forewarned us that it should be so. The scandals against the Gospel, so contrary to all human opinion, are only all the more a testimony to the divine knowledge of the Redeemer.
What shall we do unto thee? - They knew him to be a prophet; they ask him the mind of his God. The lots had marked out Jonah as the cause of the storm; Jonah had himself admitted it, and that the storm was for "his" cause, and came from "his" God . "Great was he who fled, greater He who required him. They dare not give him up; they cannot conceal him. They blame the fault; they confess their fear; they ask "him" the remedy, who was the author of the sin. If it was faulty to receive thee, what can we do, that God should not be angered? It is thine to direct; ours, to obey."
The sea wrought and was tempestuous - , literally "was going and whirling." It was not only increasingly tempestuous, but, like a thing alive and obeying its Master's will, it was holding on its course, its wild waves tossing themselves, and marching on like battalions, marshalled, arrayed for the end for which they were sent, pursuing and demanding the runaway slave of God . "It was going, as it was bidden; it was going to avenge its Lord; it was going, pursuing the fugitive prophet. It was swelling every moment, and, as though the sailors were too tardy, was rising in yet greater surges, shewing that the vengeance of the Creator admitted not of delay."
Take me up, and cast me into the sea - Neither might Jonah have said this, nor might the sailors have obeyed it, without the command of God. Jonah might will alone to perish, who had alone offended; but, without the command of God, the Giver of life, neither Jonah nor the sailors might dispose of the life of Jonah. But God willed that Jonah should be cast into the sea - where he had gone for refuge - that (Wisdom 11:16) wherewithal he had "sinned, by the same also he might be punished" as a man; and, as a prophet, that he might, in his three days' burial, prefigure Him who, after His Resurrection, should convert, not Nineveh, but the world, the cry of whose wickedness went up to God.
For I know that for my sake - "In that he says, "I know," he marks that he had a revelation; in that he says, "this great storm," he marks the need which lay on those who cast him into the sea."
The men rowed hard - literally "dug." The word, like our "plowed the main," describes the great efforts which they made. Amid the violence of the storm, they had furled their sails. These were worse than useless. The wind was off shore, since by rowing they hoped to get back to it. They put their oars well and firmly in the sea, and turned up the water, as men turn up earth by digging. But in vain! God willed it not. The sea went on its way, as before. In the description of the deluge, it is repeated Gen 7:17-18, "the waters increased and bare up the ark, and it was lifted up above the earth; the waters increased greatly upon the earth; and the ark went upon the face of the waters." The waters raged and swelled, drowned the whole world, yet only bore up the ark, as a steed bears its rider: man was still, the waters obeyed. In this tempest, on the contrary, man strove, but, instead of the peace of the ark, the burden is, the violence of the tempest; "the sea wrought and was tempestuous against them." "The prophet had pronounced sentence against himself, but they would not lay hands upon him, striving hard to get back to land, and escape the risk of bloodshed, willing to lose life rather than cause its loss. O what a change was there. The people who had served God, said, Crucify Him, Crucify Him! These are bidden to put to death; the sea rageth; the tempest commandeth; and they are careless as to their own safety, while anxious about another's."
Wherefore (And) they cried unto the Lord - "They cried" no more "each man to his god," but to the one God, whom Jonah had made known to them; and to Him they cried with an earnest submissive, cry, repeating the words of beseeching, as men, do in great earnestness; "we beseech Thee, O Lord, let us not, we beseech Thee, perish for the life of this man" (i. e., as a penalty for taking it, as it is said, Sa2 14:7. "we will slay him for the life of his brother," and, Deu 19:21. "life for life.") They seem to have known what is said, Gen 9:5-6. "your blood of your lives will I require; at the hand of every beast will I require it and at the hand of man; at the hand of every man's brother will I require the life of man. Whoso sheddeth man's blood, by man shall his blood be shed, for in the image of God made He man" , "Do not these words of the sailors seem to us to be the confession of Pilate, who washed his hands, and said, 'I am clean from the blood of this Man?' The Gentiles would not that Christ should perish; they protest that His Blood is innocent."
And lay not upon us innocent blood - innocent as to them, although, as to this thing, guilty before God, and yet, as to God also, more innocent, they would think, than they. For, strange as this was, one disobedience, their whole life, they now knew, was disobedience to God; His life was but one act in a life of obedience. If God so punishes one sin of the holy Pe1 4:18, "where shall the ungodly and sinner appear?" Terrible to the awakened conscience are God's chastenings on some (as it seems) single offence of those whom He loves.
For Thou, Lord, (Who knowest the hearts of all men,) hast done, as it pleased Thee - Wonderful, concise, confession of faith in these new converts! Psalmists said it, Psa 135:6; Psa 115:3. "Whatsoever God willeth, that doeth He in heaven and in earth, in the sea and in all deep places." But these had but just known God, and they resolve the whole mystery of man's agency and God's Providence into the three simple words , as (Thou) "willedst" (Thou) "didst." "That we took him aboard, that the storm ariseth, that the winds rage, that the billows lift themselves, that the fugitive is betrayed by the lot, that he points out what is to be done, it is of Thy will, O Lord" . "The tempest itself speaketh, that 'Thou, Lord, hast done as Thou willedst.' Thy will is fulfilled by our hands." "Observe the counsel of God, that, of his own will, not by violence or by necessity, should he be cast into the sea. For the casting of Jonah into the sea signified the entrance of Christ into the bitterness of the Passion, which He took upon Himself of His own will, not of necessity. Isa 53:7. "He was offered up, and He willingly submitted Himself." And as those who sailed with Jonah were delivered, so the faithful in the Passion of Christ. Joh 18:8-9. "If ye seek Me, let these go their way, that the saying might be fulfilled which" Jesus spake, 'Of them which Thou gavest Me, I have lost none. '"
They took up Jonah - "He does not say, 'laid hold on him', nor 'came upon him' but 'lifted' him; as it were, bearing him with respect and honor, they cast him into the sea, not resisting, but yielding himself to their will."
The sea ceased (literally "stood") from his raging - Ordinarily, the waves still swell, when the wind has ceased. The sea, when it had received Jonah, was hushed at once, to show that God alone raised and quelled it. It "stood" still, like a servant, when it had accomplished its mission. God, who at all times saith to it Job 38:11, "Hitherto shalt thou come and no further, and here shall thy proud waves be stayed," now unseen, as afterward in the flesh Mat 8:26, "rebuked the winds and the sea, and there was a great calm." "If we consider the errors of the world before the Passion of Christ, and the conflicting blasts of diverse doctrines, and the vessel, and the whole race of man, i. e., the creature of the Lord, imperiled, and, after His Passion, the tranquility of faith and the peace of the world and the security of all things and the conversion to God, we shall see how, after Jonah was cast in, the sea stood from its raging." "Jonah, in the sea, a fugitive, shipwrecked, dead, saveth the tempest-tossed vessel; he saveth the pagan, aforetime tossed to and fro by the error of the world into divers opinions. And Hosea, Amos, Isaiah, Joel, who prophesied at the same time, could not amend the people in Judaea; whence it appeared that the breakers could not be calmed, save by the death of (Him typified by) the fugitive."
And the men feared the Lord with a great fear - because, from the tranquility of the sea and the ceasing of the tempest, they saw that the prophet's words were true. This great miracle completed the conversion of the mariners. God had removed all human cause of fear; and yet, in the same words as before, he says, "they feared a great fear;" but he adds, "the Lord." It was the great fear, with which even the disciples of Jesus feared, when they saw the miracles which He did, which made even Peter say, Luk 5:8. "Depart from me, for I am a sinful man, O Lord." Events full of wonder had thronged upon them; things beyond nature, and contrary to nature; tidings which betokened His presence, Who had all things in His hands. They had seen "wind and storm fulfilling His word" Psa 148:8, and, forerunners of the fishermen of Galilee, knowing full well from their own experience that this was above nature, they felt a great awe of God. So He commanded His people, "Thou shalt fear the Lord thy God Deu 6:13, for thy good always" Deu 6:24.
And offered a sacrifice - Doubtless, as it was a large decked vessel and bound on a long voyage, they had live creatures on board, which they could offer in sacrifice. But this was not enough for their thankfulness; "they vowed vows." They promised that they would do thereafter what they could not do then ; "that they would never depart from Him whom they had begun to worship." This was true love, not to be content with aught which they could do, but to stretch forward in thought to an abiding and enlarged obedience, as God should enable them. And so they were doubtless enrolled among the people of God, firstfruits from among the pagan, won to God Who overrules all things, through the disobedience and repentance of His prophet. Perhaps, they were the first preachers among the pagan, and their account of their own wonderful deliverance prepared the way for Jonah's mission to Nineveh.
Now the Lord had (literally "And the Lord") prepared - Jonah (as appears from his thanksgiving) was not swallowed at once, but sank to the bottom of the sea, God preserving him in life there by miracle, as he did in the fish's belly. Then, when the seaweed was twined around his head, and he seemed to be already buried until the sea should give up her dead, "God prepared the fish to swallow Jonah" . "God could as easily have kept Jonah alive in the sea as in the fish's belly, but, in order to prefigure the burial of the Lord, He willed him to be within the fish whose belly was as a grave." Jonah, does not say what fish it was; and our Lord too used a name, signifying only one of the very largest fish. Yet it was no greater miracle to create a fish which should swallow Jonah, than to preserve him alive when swallowed . "The infant is buried, as it were, in the womb of its mother; it cannot breathe, and yet, thus too, it liveth and is preserved, wonderfully nurtured by the will of God." He who preserves the embryo in its living grave can maintain the life of man as easily without the outward air as with it.
The same Divine Will preserves in being the whole creation, or creates it. The same will of God keeps us in life by breathing this outward air, which preserved Jonah without it. How long will men think of God, as if He were man, of the Creator as if He were a creature, as though creation were but one intricate piece of machinery, which is to go on, ringing its regular changes until it shall be worn out, and God were shut up, as a sort of mainspring within it, who might be allowed to be a primal Force, to set it in motion, but must not be allowed to vary what He has once made? "We must admit of the agency of God," say these men when they would not in name be atheists, "once in the beginning of things, but must allow of His interference as sparingly as may be." Most wise arrangement of the creature, if it were indeed the god of its God! Most considerate provision for the non-interference of its Maker, if it could but secure that He would not interfere with it for ever! Acute physical philosophy, which, by its omnipotent word, would undo the acts of God! Heartless, senseless, sightless world, which exists in God, is upheld by God, whose every breath is an effluence of God's love, and which yet sees Him not, thanks Him not, thinks it a greater thing to hold its own frail existence from some imagined law, than to be the object of the tender personal care of the Infinite God who is Love! Poor hoodwinked souls, which would extinguish for themselves the Light of the world, in order that it may not eclipse the rushlight of their own theory!
And Jonah was in the belly of the fish - The time that Jonah was in the fish's belly was a hidden prophecy. Jonah does not explain nor point it. He tells the fact, as Scripture is accustomed to do so. Then he singles out one, the turning point in it. Doubtless in those three days and nights of darkness, Jonah (like him who after his conversion became Paul), meditated much, repented much, sorrowed much, for the love of God, that he had ever offended God, purposed future obedience, adored God with wondering awe for His judgment and mercy. It was a narrow home, in which Jonah, by miracle, was not consumed; by miracle, breathed; by miracle, retained his senses in that fetid place. Jonah doubtless, repented, marveled, adored, loved God. But, of all, God has singled out this one point, how, out of such a place, Jonah thanked God. As He delivered Paul and Silas from the prison, when they prayed with a loud voice to Him, so when Jonah, by inspiration of His Spirit, thanked Him, He delivered him.
To thank God, only in order to obtain fresh gifts from Him, would be but a refined, hypocritical form of selfishness. Such a formal act would not be thanks at all. We thank God, because we love Him, because He is so infinitely good, and so good to us, unworthy. Thanklessness shuts the door to His personal mercies to us, because it makes them the occasion of fresh sins of our's. Thankfulness sets God's essential goodness free (so to speak) to be good to us. He can do what He delights in doing, be good to us, without our making His Goodness a source of harm to us. Thanking Him through His grace, we become fit vessels for larger graces. "Blessed he who, at every gift of grace, returns to Him in whom is all fullness of graces; to whom when we show ourselves not ungrateful for gifts received, we make room in ourselves for grace, and become meet for receiving yet more." But Jonah's was that special character of thankfulness, which thanks God in the midst of calamities from which there was no human exit; and God set His seal on this sort of thankfulness, by annexing this deliverance, which has consecrated Jonah as an image of our Lord, to his wonderful act of thanksgiving.
Hydrocephalus (Water on the Brain)
Hydrocephalus develops when cerebrospinal fluid (CSF) builds up in the brain, either because:
- An excess of CSF is produced (rare)
- A blockage keeps CSF from draining properly (more common)
Conditions that can lead to hydrocephalus include:
- Brain tumors
- Cancer in the cerebrospinal fluid (CSF)
- Inflammation involving the CSF (such as sarcoidosis)
- Cysts in the brain
- Malformations of the brain
- Brain injuries
- Infections of the brain or the meninges, which can be caused by a number of agents, including bacteria, mycobacteria, fungi, viruses, and parasites
- Problems with the blood vessels in the brain
- Bleeding into the brain or CSF space
Symptoms of hydrocephalus may include:
- Headache (often worse when lying down, upon first awakening in the morning, or with straining)
- Nausea / Vomiting
- Problems with balance
- Difficulty walking
- Poor coordination
- Personality changes
- Memory problems
- Dementia in the elderly
- Coma and death
In infants and young children, signs may also include:
- Slow development
- Loss of developmental milestones—no longer able to do activities they once could do
- Bulging fontanelle (soft spot on the head)
- Large head circumference
Treatment options include:
- Shunt placement (ventriculoperitoneal shunt)—a shunt (a tube placed into the brain) allows excess CSF to drain into another area, usually the abdomen. Sometimes a temporary external ventricular drain (EVD) is placed.
- Third ventriculostomy—a hole is created in an area of the brain. It allows the CSF to flow out of the area where it is building up.
- Removal of the obstruction of CSF flow. For example: removal of tumor or cyst
- Lumbar puncture (spinal tap)—a needle is inserted between the bones of the lower back (vertebrae) to remove excess CSF.
- Medicines—In some cases, medicines, such as acetazolamide (Diamox) and furosemide (Lasix), may decrease the production of CSF.
- Other medicines such as steroids or mannitol may decrease swelling around lesions that are causing obstruction of CSF flow.
To help reduce the risk of hydrocephalus:
- Get regular prenatal care.
- Keep your child’s vaccines up to date.
- Protect yourself or your child from head injuries.
- Toxoplasmosis—a foodborne illness that may be prevented by the following steps:
- Carefully cook meat and vegetables.
- Correctly clean contaminated knives and cutting surfaces.
- Avoid handling cat litter, or wear gloves when cleaning the litter box.
- Cytomegalovirus (CMV)—talk to your doctor about identifying CMV in pregnancy
- Lymphocytic choriomeningitis virus (LCV) from pet rodents (mice, rats, hamsters)—avoid rodent contact during pregnancy
- Viruses that cause chickenpox or mumps—can be prevented with vaccinations
American Neurological Association http://www.aneuroa.org/
Hydrocephalus Foundation, Inc. http://www.hydrocephalus.org/
National Institute of Neurological Disorders and Stroke http://www.ninds.nih.gov/
Health Canada http://www.hc-sc.gc.ca/
Spina Bifida and Hydrocephalus Association of Canada http://www.sbhac.ca/
Goetz CG. Textbook of Clinical Neurology. 3rd ed. Philadelphia, PA: WB Saunders Company; 2007.
Hydrocephalus in adults. EBSCO DynaMed website. Available at: http://www.ebscohost.com/dynamed/what.php. Updated May 25, 2012. Accessed September 20, 2012.
Hydrocephalus in children. EBSCO DynaMed website. Available at: http://www.ebscohost.com/dynamed/what.php. Updated May 21, 2012. Accessed September 20, 2012.
Hydrocephalus fact sheet. National Institute of Neurological Disorders and Stroke website. Available at: http://www.ninds.nih.gov/disorders/hydrocephalus/detail%5Fhydrocephalus.htm. Updated December 16, 2011. Accessed September 20, 2012.
Kliegman R, Behrman RE, Jenson HB, Stanton BF. Nelson Textbook of Pediatrics. 18th ed. Philadelphia, PA: WB Saunders Company; 2007.
- Reviewer: Rimas Lukas, MD
- Review Date: 09/2012
- Update Date: 2012
This content is reviewed regularly and is updated when new and relevant evidence is made available. This information is neither intended nor implied to be a substitute for professional medical advice. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with questions regarding a medical condition.
Copyright © EBSCO Publishing
All rights reserved.
In our Torah portion this week, it is written that Jacob "came to a certain place and stayed there that night" (Gen. 28:11). The Hebrew text, however, indicates that Jacob did not just happen upon a random place, but rather that "he came to the place" -- vayifga bamakom (וַיִּפְגַּע בַּמָּקוֹם). The sages therefore wondered why the Torah states bamakom, "the place," rather than b'makom, "a place"? Moreover, the verb translated "he came" is yifga (from paga': פָּגַע), which means to encounter or to meet, suggesting that Jacob's stop was a divine appointment.
The Hebrew word makom ("place") comes from the verb kum (קוּם), meaning "to arise," and in Jewish tradition, ha-makom became a Name for God. The early sages therefore interpreted the verse to mean that Jacob actually had his dream while in Jerusalem rather than in Bethel... Indeed, the Talmud identifies "the place" Jacob encountered as Mount Moriah - the location of the Akedah - based on the language used in Genesis 22:4: "On the third day, Abraham raised his eyes and saw the place (הַמָּקוֹם) in the distance" (Sanhedrin 95b, Chulin 91b). If that is the case (i.e., if Jacob had been miraculously transported south from the mountains of Bet El to what would later be called Jerusalem), then Jacob's dream of the ladder would have functioned as a revelation of the coming glory of the resurrected Messiah - the Promised Seed whom Isaac foreshadowed and through whom all the families of the earth would be blessed. It was Yeshua, the Angel of the LORD, who came to "descend" (as the Son of Man) and to "rise" (as the resurrected LORD) to be our mediator before God (see John 1:47-51). Perhaps the Talmud makes the claim that Jacob's vision occurred in Jerusalem because Bethel later became the site for one of two idolatrous shrines (i.e., the golden calves at Bethel and Dan) established by King Jeroboam of the Northern Kingdom which he set up to discourage worship at Solomon's Temple in the City of Jerusalem (see 1 Kings 12:28-29).
At any rate, the Hebrew word for "intercessor" (i.e., mafgia: מַפְגִּיעַ) comes from the same verb (paga') mentioned in our verse. Yeshua is our Intercessor who makes "contact" with God on our behalf. Through His sacrifice for our redemption upon the cross (i.e., his greater Akedah), Yeshua created a meeting place (paga') between God and man. Therefore we see the later use of paga' in Isaiah 53:6, "...the Lord laid on him (i.e., hifgia bo: הִפְגִּיעַ בּוֹ) the iniquity of us all," indicating that our sins "fell" on Yeshua as He made intercession (i.e., yafgia: יַפְגִּיעַ) for us (Isa. 53:12). Because of Yeshua, God touches us and we are able to touch God... And today, our resurrected LORD "ever lives to make intercession (paga') for us" (Heb. 7:25). He is still touched by our need and sinful condition (Heb. 4:15).
כֻּלָּנוּ כַּצּאן תָּעִינוּ אִישׁ לְדַרְכּוֹ פָּנִינוּ
וַיהוָה הִפְגִּיעַ בּוֹ אֵת עֲוֹן כֻּלָּנוּ
kul·la·nu katz·on ta·i·nu, ish le·dar·ko pa·ni·nu
vadonai hif·gi·a bo, et a·von kul·la·nu
"All we like sheep have gone astray; we have turned each to his own way;
but the LORD has laid on him the iniquity of us all."
Paga' is also a term for warfare or violent meetings, and this alludes to the collision between the powers of hell and the powers of heaven in the outworking of God's plan of redemption: "... he (i.e., the Savior/Messiah) will crush your head (ראשׁ), and you (i.e., the serpent/Satan) will crush his heel (עָקֵב)." This was the original prophecy of redemption, an encounter with evil that would provide atonement and retribution (see the "Gospel in the Garden"). Rabbi Yechezkel Levenstein, the mashgiach of Ponevezh, points out that the entire future of the Jewish people hinged on the vision given to Jacob - and on Jacob's response to it. Had he been prevented from returning (i.e., through Laban's schemes to keep him in Charan), the Jewish people would have become enslaved and assimilated into the people of Aram, and ultimately the Messiah Himself would not have been born. Laban, then, embodied the desire of Satan to thwart the coming of the Promised Seed, and therefore he may be compared to Pharaoh, who likewise tried to enslave Israel in Egypt...
As I mentioned in my additional commentary on parashat Balak, Laban's worship of the serpent (nachash) led him to become one of the first enemies of the Jewish people (see "The Curses of Laban"). He tried to make Jacob a slave from the beginning, later claiming that all his descendants and possessions belonged to him (Gen 31:43). After Jacob escaped from his clutches, Laban had a son named Beor (בְּעוֹר) who became the father of the wicked prophet Balaam (בִּלְעָם). In other words, the "cursing prophet" Balaam was none other than the grandson of diabolical Laban.
In Jewish tradition, Laban (the patriarch of Balaam) is regarded as even more wicked than the Pharaoh who enslaved the Jews in Egypt. This enmity is enshrined during the Passover Seder when we recall Laban's treachery as the one who "sought to destroy our father, Jacob." Spiritually understood, Laban's hatred of Jacob (i.e., Israel) was intended to eradicate the Jewish nation at the very beginning. Had Laban succeeded, Israel would have been assimilated and disappeared from history, and more radically, God's plan for the redemption of humanity through the Promised Seed would have been overturned....
Thankfully, Jacob was enabled by God's grace to overcome Laban and to return to the Promised Land, and even more thankfully, the Messiah was able to crush the rule of Satan through His atoning sacrifice and resurrection at Moriah. Yeshua, our ascended LORD, is ha-makom - the place where we encounter the Living God....
The authority and reign of Satan has been gloriously vanquished by Yeshua our Savior, blessed be He, though there is coming a time of judgment for all who dwell upon the earth. The time immediately preceding the appearance of the Messiah will be a time of testing in which the world will undergo various forms of tribulation, called chevlei Mashiach (חֶבְלֵי הַמָּשִׁיחַ) - the "birth pangs of the Messiah" (Sanhedrin 98a; Ketubot, Bereshit Rabbah 42:4, Matt. 24:8). Some say the birth pangs are to last for 70 years, with the last 7 years being the most intense period of tribulation -- called the "Time of Jacob's Trouble" / עֵת־צָרָה הִיא לְיַעֲקב (Jer. 30:7). The climax of the "Great Tribulation" (צָרָה גְדוֹלָה) is called the great "Day of the LORD" (יוֹם־יהוה הַגָּדוֹל) which represents God's wrath poured out upon a rebellious world system. On this fateful day, the LORD will terribly shake the entire earth (Isa. 2:19) and worldwide catastrophes will occur. "For the great day of their wrath has come, and who can stand?" (Rev. 6:17). The prophet Malachi likewise says: "'Surely the day is coming; it will burn like a furnace. All the arrogant and every evildoer will be stubble, and that day that is coming will set them on fire,' says the LORD Almighty. 'Not a root or a branch will be left to them'" (Mal. 4:1). Only after the nations of the world have been judged will the Messianic kingdom (מַלְכוּת הָאֱלהִים) be established upon the earth. Yeshua will return to Jerusalem to establish His glorious kingdom (as foretold by the prophets) and then "all Israel will be saved." The Jewish people will finally understand that Mashiach ben Yosef (the Suffering Servant) and Mashiach ben David (the anointed King of Israel) are one and the same... The 1,000 year reign of King Messiah will then commence (Rev. 20:4).
Presently our responsibility is to come to "the place" (ha-makom) where God's work of redemption was completed - that is, to the Cross of Yeshua. There we turn to God in repentance (teshuvah) and consign our sins to the judgment borne for us through Yeshua's sacrifice as our kapporah (atonement). By faith we understand that the resurrected Savior is forever ha-makom, "the place" where God meets with us, and we learn to abide in His gracious Presence by means of the Holy Spirit. We cease striving to justify ourselves (i.e., by virtue of works), but instead receive God's love and Spirit into our hearts. This means that we will study the Scriptures (truth), obey the Torah of Yeshua and His emissaries, and share the good message of God's redemption with a lost and dying world...
We are fast approaching, however, the prophesied "End of Days" (acharit hayamim), when the LORD will return to earth to "settle accounts" with its inhabitants (including those who profess to obey Him). We do not have much more time, chaverim. We must encourage people to call upon the LORD for salvation before it is too late...
כִּי־כֵן אהֵב אֱלהִים אֶת־הָעוֹלָם
עַד־אֲשֶׁר נָתַן בַּעֲדוֹ אֶת־בְּנוֹ אֶת־יְחִידוֹ
וְכָל־הַמַּאֲמִין בּוֹ לא־יאבַד
כִּי בוֹ יִמְצָא חַיֵּי עוֹלָם׃
ki-khen o·hev E·lo·him et-ha·o·lam,
ad-a·sher na·tan ba·a·do et-be·no et-ye·chi·do,
ve·khol-ha·ma·a·min bo, lo-yo·vad
ki vo yim·tza cha·yei o·lam
"For God so loved the world that he gave his only and unique Son,
so that whoever trusts in Him should not be destroyed, but have eternal life"
Imprint training pioneer Dr. Robert Miller debunks some of the common misconceptions about this useful foal handling technique.
The technique of imprinting has been used to help train foals for more than 50 years, but common misconceptions about what it means to imprint young horses still exist. While recent years have seen more and more owners and trainers fully commit to properly using this valuable foal handling procedure, a few basic misunderstandings still remain that prevent those who attempt to apply imprint training techniques from achieving the results that they desire.
Dr. Robert Miller, one of the pioneering practitioners of equine imprint training, believes that proper implementation of imprint training principles makes a horse more trainable and easier to handle, but only if it is done correctly and completely—and only if the person performing the technique is realistic about exactly what the process is meant to achieve.
“You’re not going to change [a foal’s] personality,” Miller says. “Energetic will be energetic. Lazy will be lazy. Highly reactive will be highly reactive. Indifferent will be indifferent. Intelligent will be intelligent. But what [imprinting] does is, if you do it right after they are born, before any other learning goes in—you build a foundation.”
Understanding what that foundation is meant to support is the key to getting the long-term results that proper imprint training can provide.
“You’re not imprint training the body,” Miller says. “You’re imprint training the mind.”
In the wild, horses have always been prey animals and, therefore, a foal must be able to get to its feet within an hour after birth and run with its herd as a means of self-defense. This history means that even domesticated horses are born with a healthy paranoia of just about everything. Fear is the natural first response a horse feels to new stimuli. They have to learn not to fear people, objects and even other animals.
It is almost impossible to teach a horse anything if his focus is on self preservation.
“It applies to us, too,” Miller says. “If we are preparing to take a test, and we’re really scared, our learning ability isn’t nearly as great as if we are laid back and relaxed and can think things out.”
Fear and anxiety interfere with the learning procedure because, in any animal, it increases the primary defensive behavior.
“In some species, [the defensive behavior] is to fight, so fear will precipitate an attack,” Miller explains. “But in the horse, fear usually precipitates flight or the desire for flight.
“In the horse—and this is only true in the horse and not dogs, cattle or people—control of the feet controls the mind. When you control the animal’s primary defense, you control the mind. In wolves, it’s control of the muzzle. In the human being, it’s control of the hands. In horses, it’s control of the feet. Every school of horsemanship has relied upon [this concept] even if they didn’t understand why it worked.”
The main goal of all imprint training is to minimize the fear responses that can keep a horse from learning.
“If we don’t have the fear, the horse isn’t thinking about running away,” Miller says. “Instead, the horse is paying attention and subject to quick learning.”
It is important to Miller that people understand that imprinting and training are two different things—a fact that he believes sometimes gets overlooked.
“Imprinting is a visual memorization of what the foal sees moving around it, and it triggers an instinct to trust and to follow,” Miller says. “Training is learning by reinforcement. The only reason I called the process Imprint Training is because it’s training during the imprint period.”
In nature, a foal imprints on its mother and the other members of its herd, serving to protect the foal against predators in the wild. However, foals can also imprint on a variety of other creatures, too.
“In domestication, it can be a human, or a dog or a piece of machinery,” Miller says. “Anything that moves, the foal will be imprinted upon it and tend to want to go to it and be near it and trust it.”
It is this inherent trust that Miller uses to teach the newborn foal that the world isn’t nearly as scary as it would otherwise believe it to be.
“In an hour to an hour and a half, I can get so much done with a foal,” he says. “It’s a great time saver.”
Doing It Right
Miller often regrets calling the process Imprint Training because people confuse the meaning of the term.
“I often have people come up to me and say, ‘I imprinted my horse when he was 6 months of age,’ or it was an 18 month old,” he says. “They don’t understand that imprinting only occurs in the horse [immediately after birth].
“It’s not that way with all species. Dogs, for example, imprint between 6 and 7 weeks of age. But that is a different species. They are not precocial. They are helpless as babies, just like humans. Whereas in the precocial species like horses, the ones who must be able to run from danger by the time they are one hour of age, the imprinting begins in the hour after birth.”
To Miller, the advantages of imprint training are that it happens very quickly, you don’t have to rely on or undo any previous negative learning, and since the foal is already down and has yet to stand, you can prepare a foal for most of what it needs to know for the rest of its life in about an hour.
During that time, Miller touches literally every part of the foal—from nose to tail and from ears to hooves—with the goal of habituating [making it comfortable with and unafraid of] common loud or frightening sounds or movements that can spook a horse, being touched in the areas that will be handled during farriery and during veterinary examinations of every body opening, as well as pressure in the saddle area. Teaching foals not to fear pressure in the girth area is usually saved for the second session when they are already on their feet.
“You remove the anxiety, the fear,” Miller says, “and they are quicker to learn other things.”
The mistake most people make when imprint training is that they don’t do enough at the right times, either during the first session when the foal has yet to stand, or during subsequent training sessions when the foal is on its feet.
“I poll every audience that I speak to,” Miller says, “and what I’ve learned from doing this over the last four decades is the two most common mistakes are rushing the training and failing to follow up after the first session.”
Miller says it is almost always men who make the mistake of rushing the first session of imprint training.
“They are usually working for a ranch or farm,” he says, “and they show them my video and tell them they want them to do this with their foals, and they rush it.”
For example, when Miller is habituating a foal to having his feet touched, he taps the bottom of each hoof with his hand 100 times to ensure that he’s tapping long enough for the foal’s initial fear response to the touch to wear off and he learns that having his foot touched isn’t something to be afraid of.
“Men will often tap the hoof 5-10 times and quit,” Miller says, “and what they’ve taught the foal to do is to fear that touch. They have failed to habituate by stopping during the fear and flight period in the foal’s mind.”
By doing this, they have actually sensitized the foal to fear of having its feet touched, and the next day, the person can’t get anywhere near them to continue the second phase of the training.
“Whereas the foals I do, you can touch them anywhere on their body the next day,” Miller says. “Women just love this part of the training because of the intimacy of it.”
Conversely, Miller states it is almost always women who fail to complete imprint training after the first session.
“They do the birth session, but they don’t do sessions two through seven when the foal is on its feet, which is where you get their respect,” he says of the sessions where the foal is habituated to a variety of body pressure and movement cues, as well as leading and tying.
“I ask them ‘Why didn’t you do it?’ and the answer is always the same. ‘He didn’t like it, so I didn’t do it.’ They often admit that they got bad results, but claim to know what they did wrong.”
The Lessons Transfer
While one of the major complaints about imprint training is that the foal will only respond to the person he imprinted upon, Miller doesn’t believe this to be the case. The foal may be slightly more responsive to that individual, but if that person imprint trains the foal properly, anyone who handles that horse in the future will have an easier time than if the horse had never received imprint training.
Miller admits that the trust an imprinted animal gains in its imprinter doesn’t transfer directly to all other humans. However, it does prepare them for what other people may do with them and makes the horse’s transition to the new handler’s style easier and faster.
“Say you have an individual sacking out a colt who has never been ridden with a blanket, and he stands perfectly happy while everything happens,” Miller says. “If a stranger comes along with a blanket, he’s going to react with fear to the stranger, but once he’s past that and the blanket begins, the attitude is, ‘Oh, I remember,’ and the horse relaxes.
“As far as their future talent on the racetrack, as a jumping horse, as a roping horse or a barrel horse, imprint training doesn’t interfere. It actually enhances it because you remove most of the fear factors [that can interfere with learning]. If a foal has learned to be unafraid of a human presence and being touched on any part of its body, and now you want to make a rope horse out of it, it’s not going to get all boogered when the rope drags on the ground beside it or you throw it out in front of him. I wouldn’t say it improves performance, but it enhances learning.”
The same basic principle applies to imprint training a young horse to be relaxed in a stall area around other horses, in your training pen and especially when approaching what some horses consider to be a scary object—a barrel. Once a horse is hauled to a show, the environment may be different and cause anxiety temporarily, but the similarity of the situation to what they’ve experienced before will help the horse relax and perform to the best of its ability faster.
Although imprint training doesn't affect a horse's personality, it will help more reactive horses become easier to handle. A highly reactive individual will still remain flighty, but if you've de-sensitized him to the everyday things he might spook at—which would otherwise interfere with what you're trying to teach him—it helps him learn what you're teaching faster.
Respect, Not Fear
Imprint training has gained wide acceptance over the last 25 years, but there are still some trainers who object to the technique. Many object for one primary reason—they believe that since imprinting lessens a horse's fears, it makes the horse much harder for anyone who didn't imprint that horse to control.
“The objective in training horses should be 100 percent respect and zero percent fear,” Miller states. “Some people think you’ve got to show them who’s boss. You’ve got to show a horse who’s the leader, but you don’t have to show them who’s boss, which infers fear of the boss. It’s not necessary [to instill fear] to be able to control the horse.”
The idea that fear of humans is necessary to maintain control of a horse and to get it to do what you want it to do ignores a fact that practitioners of natural horsemanship have known for decades—if you really want your horse to want to work for you, fear can’t be part of the equation.
For Miller, the goal of imprint training is to get the horse to see the human as a herd leader and be submissive to his or her requests as a result. You don’t want a horse to fear you, but depend on you for leadership and guidance. When done correctly, imprint training enhances a horse’s relationship with all humans. All it takes is putting in the effort to achieve the results you want instead of looking for a quick, fear-based fix.
According to Miller, in wild mustang herds, the leader is most often the oldest mare, which indicates that physical strength has nothing to do with gaining the respect of the horse or achieving leadership. The stallion, who many would perceive as the natural choice to be the leader, runs in back and chases the stragglers to get them to keep up with the leading mare, serving as more of a guardian for the herd than a leader of it.
“There are still some trainers that object to [imprint training], and there are still trainers who do it improperly and criticize the technique,” Miller says. “Any training technique ever devised, if you don’t do them correctly, you can’t expect good results.
“I hear people say, ‘I did my colt Parelli style,’ or, ‘I did my colt Kurt Pate style, and it didn’t do any good.’ Well, they didn’t do it correctly because they are very good training techniques. They blame the technique rather than themselves.”
But as with any training program, in the end, you and your horse will get out of it what you put into it. After 50 years of teaching and practicing the techniques of imprint training, Miller believes that if it is done properly, it will absolutely make a difference in the teachability and overall confidence and happiness of the horse.
“Simply put, imprint training works,” Miller says emphatically. “It works consistently and very, very effectively.
“The foal respects the mare, but is also bonded to her and trusts her. We can get exactly the same thing, trust combined with respect.”
Dr. Robert Miller graduated from Colorado State University and settled in Thousand Oaks, Calif., where he founded the Conejo Valley Veterinary Clinic. He retired in 1987 after 31 years as a renowned veterinarian and expert in ethology (the study of animal behavior) in order to devote his full time to the teaching of equine behavior and to support the revolution in horsemanship that began in the Western United States in the late 20th Century and is now a worldwide phenomenon.
He is best known for his scientifically based system of training newborn foals, called imprint training, which is now in use all over the world. He has authored several books, including Imprint Training of the Newborn Foal, and has been on the editorial staff of several veterinary practice and horse industry magazines.
Here we present a chronological tour of dialysis from the beginning.
All photos by Jim Curtis; descriptions courtesy of Baxter.
The first practical artificial kidney was developed during World War II by the Dutch physician Willem Kolff. The Kolff kidney used a 20-meter long tube of cellophane sausage casing as a dialyzing membrane. The tube was wrapped around a slatted wooden drum. Powered by an electric motor, the drum revolved in a tank filled with dialyzing solution. The patient’s blood was drawn through the cellophane tubing by gravity as the drum revolved. Toxic molecules in the blood diffused through the tubing into the dialyzing solution. Complete dialysis took about six hours. The Kolff kidney effectively removed toxins from the blood, but because it operated at low pressure, it was unable to remove excess fluid from the patient’s blood. Modern dialysis machines are designed to filter out excess fluid while cleansing the blood of wastes.
Blood was drained from the patient into a sterile container. Anticlotting drugs were added, and the filled container was hung on a post above the artificial kidney and connected to the cellulose acetate tubing that was wound around the wooden drum. A motor turned the drum, pulling the blood through the tubing by gravity.
The tank underneath the drum was filled with dialyzing fluid. As the blood-filled tubing passed through this fluid, waste products from the blood diffused through the tubing into the dialyzing fluid. The cleansed blood collected in a second sterile container at the other end of the machine. When all of the blood had passed through the machine, this second container was raised to drain the blood back into the patient.
George Thorn, MD, of the Peter Bent Brigham Hospital in Boston, MA, invited Willem Kolff, MD, to meet with Carl Walter, MD, and John Merrill, MD, to redesign and modify the original Kolff Rotating Drum Kidney. The artificial kidney was to be used to support the first proposed transplant program in the United States. This device was built by Edward Olson, an engineer, who would produce over forty of these devices, which were shipped all over the world.
Cellulose acetate tubular membrane, the same type of membrane that is used as sausage casing, was wrapped around the drum and connected to latex tubing that would be attached to the patient’s bloodstream. The drum would be rotating in the dialyzing fluid bath that is located under the drum.
The patient’s blood was propelled through the device by the “Archimedes screw principle” and a pulsatile pump. A split coupling was developed to connect the tubing to the membrane, a component necessary to prevent the tubing and membrane from twisting. This connection is at the inlet and outlet of the rotating drum.
The membrane surface area could be adjusted by increasing or decreasing the number of wraps of tubing. The Plexiglas™ hood was designed to control the temperature of the blood. The cost of this device was $5,600 in 1950.
Murphy WP Jr., Swan RC Jr., Walter CW, Weller JM, Merrill JP. Use of an artificial kidney. III. Current procedures in clinical hemodialysis. J Lab Clin Med. 1952 Sep; 40(3): 436-44.
Leonard Skeggs, PhD, and Jack Leonards, MD, developed the first parallel flow artificial kidney at Case Western Reserve in Cleveland, OH. The artificial kidney was designed to have a low resistance to blood flow and to have an adjustable surface area.
Two sheets of membrane are sandwiched between two rubber pads in order to reduce the blood volume and to ensure uniform distribution of blood across the membrane to maximize efficiency. Multiple layers were utilized. The device required a great deal of time to construct and it often leaked. This was corrected by the use of bone wax to stop the leak.
The device had a very low resistance to blood flow and it could be used without a blood pump. If more than one of these units were used at a time, a blood pump was required. Skeggs was able to remove water from the blood in the artificial kidney by creating a siphon on the effluent of the dialyzing fluid. This appears to be the first reference to negative pressure dialysis.
This technology was later adapted by Leonard Skeggs to do blood chemistries. It was called the SMA 12-60 Autoanalyzer.
This artificial kidney was developed to reduce the amount of blood outside of the body and to eliminate the need for pumping the blood through the device.
Guarino used cellulose acetate tubing. The dialyzing fluid was directed inside the tubing and the blood, entering the device from the top, cascading down the membrane. The metal tubing inside the membrane gave support to the membrane.
The artificial kidney had a very low blood volume, but it had limited use because there was concern regarding the possibility of the dialyzing fluid leaking into the blood.
Von Garrelts had constructed a dialyzer in 1948 by wrapping a cellulose acetate membrane around a core. The layers of membrane were separated by rods. It was very bulky and weighed over 100 pounds.
William Inouye, MD, took this concept and miniaturized it by wrapping the cellulose acetate tubing around a beaker and separating the layers with fiberglass screening. He placed this “coil” in a Presto Pressure Cooker in order to enclose it and control the temperature. In addition, he made openings in the pot for the dialyzing fluid. With the use of a vacuum on the dialysate leaving the pot, he was able to draw the excess water out of the patient’s blood. A blood pump was required to overcome resistance within the device.
This device was used clinically and when it was used in a closed circuit, the exact amount of fluid removed could be determined.
Inouye WY, Engelberg J. A simplified artificial dialyzer and ultrafilter. Surg Forum. Proceedings of the Forum Sessions, Thirty-ninth Clinical Congress of the American College of Surgeons, Chicago, Illinois, October, 1953; 4: 438-42.
History As A Cure
December 1956 | Volume 8, Issue 1
Not long ago two teen-age boys in New York City got into trouble with the law. The police laid hands on them as juvenile delinquents, and in due course the boys appeared in court. Judge J. Randall Creel, of Magistrates’ Court, faced the tough problem that confronts jurists in such cases: should he send the boys off to jail forthwith, or should he see whether they might not be able to straighten themselves out? He decided on the latter course, and in looking for a means of rehabilitation he selected an unusual instrument: American history.
He gave these high school youngsters a historical research job to do.
They should forthwith (he told them) make a study of the Battle of Long Island—an important struggle in the Revolutionary War, in which General Washington’s army narrowly escaped destruction when the British commander, General Howe, landed an overpowering force on Long Island late in August of 1776. They would find, he suggested, that utter disaster for the American cause was averted because of a valiant rear-guard action fought by a regiment of the Maryland line. Let them, therefore, go to the sources and find out all they could about this action; having done this, each boy must submit a paper, describing what had happened, citing his sources, and bringing out what the gallant stand of the embattled Marylanders meant to future generations of Americans. By the jobs they did he would determine whether or not they had reinstated themselves as reliable junior members of their community.
The boys went to work, and eventually they presented their papers. They had done a good deal of hard work. They had traced the movements of the different troops engaged. They had run down the historic markers which (largely ignored by present-day citizens) show where the actions took place. They had found out all they could about the gallant old Maryland battalion, and they had gone to the trouble of listing the names and ranks of all the Marylanders who were killed or captured in the fight. And it seems that this excursion into American history had been a powerful medicine for good. These boys learned something—about their own home city (for they live in Brooklyn, and the battle they investigated had been fought along what are now the familiar streets and squares and parks of their own neighborhood), about the price a former generation had to pay for American freedom and happiness, and about the way in which boys of their own age, long ago, met a profound challenge.
Specifically: in their research into the history of the Battle of Long Island, these boys learned that a spirited rear-guard action by a regiment of Maryland soldiers—the 5th Maryland Infantry, under Colonel William Smallwood—had kept the American defeat from becoming a complete, irremediable disaster. The Marylanders fought a delaying action which enabled the bulk of Washington’s army to get away. Four or five hundred of them charged a British strong point, losing more than half of their numbers and going down at last to bloody defeat, but gaining the important fragments of time that enabled the beaten Continental army to get away clean and live to fight another day.
They found out that the Maryland soldiers who saved the day were simply boys like themselves. They dug out the old muster rolls, which gave each man’s name and age. Down the long columns, in faded archaic script, were the scanty records of boys who dared everything and gave everything more than a century and a half ago—and most of them turned out to be lads of eighteen or nineteen. This probably meant that they were even younger than that; boys usually add a year or two to their ages when they enlist in wartime. The Marylanders, in other words, were early teen-agers, who faced up to something big before they had got out of adolescence simply because youth wants nothing in all the world so much as the chance to respond to a real challenge. Life was good to them; it gave them the challenge, they met it—and today, as a direct result, we have an American nation.
One of the two boys, writing about the battle, expressed himself like this:
“If the youth of today is not conscious of the historical background of Long Island, the battle I will now unfold will make him so. I will, therefore, endeavor to show you what today’s youth could and should be capable of as compared to those who fought to preserve their rights as individuals. … It was merely lads of seventeen and eighteen who fought and died making this supreme sacrifice in defense of the preservation of the American Army and their fight for independence.”
The other boy, carried away by the story of the fight, found himself writing in the best vein of the military historian:
”…Thrice again these brave young Marylanders charged upon the house, once driving the gunners from their pieces within its shadow; but numbers overwhelmed them, and for twenty minutes the fight was terrible. Washington, Putnam and other General Officers who witnessed it … saw the overwhelming force with which their brave compatriots were contending, and held their breath in suspense and fear.… Washington wrung his hands, in the intensity of his emotion, and exclaimed: ‘Good God, what brave fellows I must this day lose!’”
2. Physical Activity
Twenty-four RCT articles were reviewed for the effect of physical activity on weight loss, abdominal fat (measured by waist circumference), and changes in cardiorespiratory fitness (VO2 max). Thirteen articles were deemed acceptable (346, 363, 365, 369, 375, 401, 404, 406, 432, 434, 445-447). Only one of these RCTs compared different intensities and format with a control group, although the goal was to increase physical activity and not specifically to produce weight loss (401). Results from this trial were subsequently reported after 2 years, but these no longer included the control group (447). One additional study did not have a no-treatment control group but compared three active treatment groups with each other: diet only, exercise only, and combination exercise plus diet (448).
Most RCTs described the type of physical activity as cardiovascular endurance activities in the form of aerobic exercise such as aerobic dancing, brisk walking, jogging, running, riding a stationary bicycle, swimming, and skiing, preceded and followed by a short session of warmup and cool-down exercises. Some physical activity programs also included unspecified dynamic calisthenic exercises (363, 369, 406, 446).
The intensity of the physical activity was adapted to each individual and varied from 60 to 85 percent of the individual's estimated maximum heart rate, or was adjusted to correspond to approximately 70 percent of maximum aerobic capacity (VO2 max). The measure of physical fitness included VO2 max. The frequency of physical activity varied from three to seven sessions a week and the length of the physical activity session ranged from 30 to 60 minutes. Some physical activity programs were supervised, and some were home-based. Adherence to the prescribed physical activity program was recorded and reported in some studies and not mentioned in others. Most studies did not estimate the caloric expenditure from the physical activity or report calorie intake. The duration of the intervention varied from 16 weeks to 1 year; six articles reported on trials that lasted at least 1 year (346, 363, 375, 401, 406, 432).
Rationale: Twelve RCT articles examined the effects of physical activity, consisting primarily of aerobic exercise, on weight loss compared to controls (346, 363, 365, 369, 375, 401, 404, 406, 432, 434, 445, 446). Ten of the 12 RCT articles reported a mean weight loss of 2.4 kg (5.3 lb) (or 2.4 percent of weight) (363, 369, 375, 406, 419, 432, 434) or a mean reduction in BMI of 0.7 kg/m2 (2.7 percent reduction) (346, 365, 401) in the exercise group compared to the control group. In three of these ten studies, the weight loss was < 2 percent of body weight (< 2 kg) (4.4 lb) (369, 375, 434). In contrast, two RCTs showed no benefit on weight from exercise, reporting weight gain in the exercise group compared to the control group (445, 446). In one of these studies, the control group received only diet advice but nevertheless lost 9 kg (19.8 lb), whereas the exercise group lost only 7 kg (15.4 lb) (445). In the second study, there was a total of only 10 participants, all having noninsulin-dependent diabetes mellitus, and the control group lost 3 kg (6.6 lb) whereas the exercise group lost only 2 kg (4.4 lb) (446). A meta-analysis of 28 publications of the effect on weight loss of exercise compared to diet or control groups showed that aerobic exercise alone produces a modest weight loss of 3 kg (6.6 lb) in men and 1.4 kg (3.1 lb) in women compared to controls (449).
Ten articles reported on RCTs that had a diet-only group in addition to an exercise-only group (346, 363, 365, 369, 375, 406, 432, 434, 445, 448). In every case except one (365),the exercise-only group did not experience as much weight loss as the diet-only group. The diet-only group produced approximately 3 percent, or 3 kg (6.6 lb), greater weight loss than the exercise-only group.
No single study examined the length of the intervention in relation to the weight loss outcome. Only one study compared the effect on maximum oxygen uptake of different intensities and formats of physical activity over a 1-year follow-up (401) and 2-year follow-up period (447). Better adherence over 1 year was found if the exercise was performed at home rather than in a group setting, regardless of the intensity level. Subsequently, the different exercise groups were compared with each other over the longer term (2 years), and better long-term adherence was found in the higher intensity home-based exercise group compared to the lower intensity home-based or higher intensity group-based exercise groups (447).
The question of whether physical activity enhances long-term maintenance of weight loss has not been formally examined in RCTs. Examination of long-term weight loss maintenance produced by physical activity interventions compared with diet-only interventions cannot easily be compared between RCTs because of numerous differences between studies with respect to design, sample size, intervention content and delivery, and characteristics of the study population samples. However, a number of analyses of observational and post hoc analyses of intervention studies have examined whether physical activity has a beneficial effect on weight. Cross-sectional studies have generally shown that physical activity is inversely related to body weight (450-454) and rate of weight gain with age (455). Longitudinal studies with 2 to 10 years of follow-up results have observed that physical activity is related to less weight gain over time (456-459), less weight gain after smoking cessation in women (460), and weight loss over 2 years (461). In addition, post hoc analyses of several weight loss intervention studies reported that physical activity was a predictor of successful weight loss (454, 462, 463). The results of these RCTs showed that physical activity produces only modest weight loss and observational analyses from other studies suggest that physical activity may play a role in long-term weight control and/or maintenance of weight loss.
Rationale: Only three RCTs testing the effect of physical activity on weight loss also had measures of abdominal fat as assessed by waist circumference (365, 369, 375). One study demonstrated that physical activity reduced waist circumference compared with the control group (365), and another study showed a small effect on waist circumference (0.9 cm) in men but not women (375). One study in men showed a small increase in waist circumference (369). Weight loss was modest in all of these studies. These studies were not designed to test the effects of physical activity on abdominal fat independent of weight loss.
However, large studies in Europe (464), Canada (453), and the United States (465-468) reported that physical activity has a favorable effect on body fat distribution. These studies showed an inverse association between energy expenditure through physical activity and several indicators of abdominal obesity, such as waist circumference and waist-to-hip and waist-to-thigh circumference ratios.
Rationale: Eleven RCT articles testing the effect of physical activity alone on weight loss in men and women also included measures of cardiorespiratory fitness, as measured by maximal oxygen uptake (VO2 max) (346, 363, 369, 375, 401, 404, 406, 432, 434, 445, 446). All 11 showed that physical activity increased maximum oxygen uptake in men and women in the exercise groups by an average of 14 percent (ml/kg body weight) to 18 percent (L/min). Even in studies with modest weight loss (< 2 percent), physical activity increased VO2 max by an average of 12 percent (L/min) to 16 percent (ml/kg) (369, 375, 434).
One study that compared different formats and intensities of physical activity on VO2 max reported that improvement in VO2 max was related to adherence to the physical activity regime. In that study, the lower intensity program was equally effective on VO2 max as a higher intensity program, largely as a result of different levels of adherence (401).
The results of the RCTs strongly demonstrate that physical activity increases cardiorespiratory fitness in overweight and obese individuals. | <urn:uuid:512f125a-1c2b-4f1d-a1c5-4ccc50826a80> | CC-MAIN-2013-20 | http://www.nhlbi.nih.gov/guidelines/obesity/e_txtbk/methtri/3222.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.94747 | 1,745 | 3.0625 | 3 |
Source: The Pennsylvania Center for the Book
Date: Summer 2010
Byline: Jim Byrne
The Philadelphia Lazaretto: A Most Unloved Institution
In mid-July 1800, the U.S.S. Ganges, on the U.S. Navy’s earliest warships, encountered two American schooners, the Phebe and the Prudent, while navigating the Straits of Florida. These schooners held neither sugar cane nor corn but rather human cargo, 134 chained captives from West Africa to be exact, doomed for delivery to a notorious slave trader in Havana, Cuba. Commissioned with enforcing the newly-passed Federal Slave Trade Act that prohibited the transport of slaves by U.S. ships, Ganges Captain John Mullowny seized the Phebe and the Prudent, assigned prize crews to man both ships, and commanded them to sail north for Philadelphia. After a 1,500 mile, month long trip up the eastern coast of the United States, the crews arrived at the confluence of the Schuylkill and Delaware rivers with their recently confiscated cargo. Emaciated, exposed, and, surely disoriented, the scores of African survivors disembarked on Province Island, home to the Marine Hospital, a.k.a. the Philadelphia Lazaretto.
“The Ganges Africans” — the name bestowed upon them after many were given the surname Ganges — were part of one of the first slave trade violation cases confronted by the newly-established Federal judicial system. Their story exemplifies the ideological clashes that were occurring in the young nation and that would eventually boil over into the Civil War. Interestingly enough, at least two of the Ganges Africans would have extended stays in the Philadelphia area. One of them would become an indentured servant to Thomas Egger, the last quarantine master of the “old” Lazaretto on Province Island and the first one of the “new” Lazaretto in Tinicum Township, Delaware County. A second of the slaves was indentured to farmer Thomas Smith who had sold the property for the new facility to the Philadelphia Board of Health.
Moreover, their story reflects in miniature the history of the Philadelphia Lazaretto itself: integral to our nation’s demographic and political landscape and yet often overlooked. Stories like those of the Ganges Africans and the thousands of others who remain little-known or whose tales are untold are the reasons University of Pennsylvania historian David Barnes calls the Philadelphia Lazaretto a place “even rarer and more precious than New York’s [Ellis Island].”
Lazaretto seems an unusual name for a hospital in the American colonies. As it turns out, however, the word would become a common English term for isolation facilities and quarantine hospitals, ceasing to be a proper noun at all. As one might think, the term derives from the Italian, but its exact provenance is not completely clear. One theory holds that in 16th century Venice, the name of a local hospital, the Nazaretto, was combined with the name of Lazarus the Beggar from the New Testament. Professor Barnes notes that this theory is less plausible than the simpler, and generally accepted, thought isolation wards and quarantines came to be named for Lazarus the Beggar, patron saint of lepers. Regardless of its origin, it started to be applied to the “old” facility on Province Island built in 1743, though it was still referred to the Marine Hospital. The “new” Lazaretto, however, would never be called anything else.
At the end of the 18th century, politicians and doctors throughout the United States were passionately debating public health and quarantine issues, much as members of those same two professions debate health care policy today.
Thousands were falling victim to the effects of infectious diseases such as tuberculosis and yellow fever. The issue came to a head in 1793 after futile attempts to quarantine all ships arriving at the port of Philadelphia failed. A yellow fever epidemic of calamitous proportions broke out throughout the city. Within two months, the nation’s largest city and political and economic capital, lost more than a quarter of its population. Approximately 10% perished from yellow fever and another 17% fled the city.
After the 1793 epidemic, the city created a Board of Health to oversee conditions in the city. It found, among other things, that the old quarantine facility had failed terribly in preventing the propagation of yellow fever and other infectious diseases. Many authorities blamed this failure on its location only a few miles from the most densely populated parts of the city and thu, facilitating the interaction between infected patients and residents. Determined to prevent another calamity, The Board of Health constructed a new lazaretto ten miles south of Philadelphia in the isolated, uninhabited marshlands of Tinicum Township, Delaware County.
Construction of the Philadelphia Lazaretto finished in 1801, almost a century before the building of New York’s Ellis Island and San Francisco’s Angel Island. Upon completion, the Lazaretto was the first of its kind. It was a complex — both stately and state of the art — that, as The Philadelphia Public Ledger said in 1879, “might be mistaken for [the grounds of a] wealthy country gentleman....were it not for the yellow Quarantine flag.”
The central figure of the ten acre campus was the main hospital, an august Georgian, double-brick building that could house upwards of 200 patients at a given time. The campus also included the following: the Dutch hospital which was added in 1805 and was primarily used to house German immigrants afflicted with smallpox; the dead-house which was used for incinerating infected patients’ belongings; the barge houses which served as the waiting and sleeping quarters of the bargemen on duty during quarantine season; the residences of the chief physician and the quarantine master; and the government warehouse which stored the goods and merchandise of the quarantined vessels. These facilities and the sheer size of the campus made the Philadelphia Lazaretto a state-of-the-art hospital in comparison to its contemporaries. Remarkably, more than two centuries after their construction, the main hospital building and several other buildings remain intact and are rather well preserved.
During its years of operation from 1801 to 1893, the Philadelphia Lazaretto was certainly neither revered nor loved. As a matter of fact, sailors, merchants, and newly arriving immigrants dreaded and abhorred the sight of its flag — a yellow banner. It was, as University of Pennsylvania professor and the Philadelphia Lazaretto’s de facto historian David Barnes calls it, a most unloved institution. During the hospital’s near century of operation, the Philadelphia Lazaretto’s staff inspected and quarantined hundreds of thousands of merchant ships, much to the chagrin of businessmen and sailors. The hours, days, or even months at the Lazaretto meant the spoiling or confiscation of cargo and, in turn, the loss of significant capital. In addition to the loss of the merchants’ cargo, the Philadelphia Lazaretto was also the site at which sailors and immigrants perished. Many immigrants, much like the “Ganges Africans,” took their first steps on American soil. According to various estimates, a thirst of all Americans have an ancestor whose first contact with the New World was at the Lazaretto.
After closing its doors as a quarantine station in 1893, the site and the buildings went through several transformations. The Philadelphia Athletic Club purchased the site and renamed it the Orchard Club. The Orchard Club was a summer get-away for wealthy Philadelphians who used its prime waterfront location for leisure and recreation. There were several athletic clubs whos sponsored baseball teams during this era, and some even played their games on the grounds of the Lazaretto. These were not, however, the forerunners of the American League’s Philadelphia Athletics. After the closing of the Orchard Club, Colonel Robert Edward Glendinning — a substantial contributor to wartime aviation in the First World War — and George C. Thomas opened the Essington School of Flying in 1913. It was the state’s first water flying school and one of the first in the world.
Sadly, today the Philadelphia Lazaretto lies, mostly forgotten, in the same marshlands of Tinicum Township, Delaware County. Were it not for the tireless lobbying of a group of hard-working activists, the site may well have been destroyed. Though a township firestation was recently build on the northern half of the Lazaretto site, there is still much worthy of preservation. Similar projects, like New York’s restored Ellis Island and San Francisco’s Angel Island (in process), have given voice to the thousands and hundreds of thousands who came to these shores. The Friends of the Lazaretto and others are working hard to ensure that the rest of the site will stand as testament to the immigrants who landed in Philadelphia. | <urn:uuid:1487d9b6-9cee-445a-a4b7-b7784f0282ac> | CC-MAIN-2013-20 | http://www.ushistory.org/laz/news/pcbsummer10.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.964192 | 1,854 | 3.46875 | 3 |
The Edupunks' Guide: How to Do Research Online
It’s the best of times and the worst of times to be a learner. College tuition has doubled in the past decade, while the options for learning online and independently keep expanding. Anya Kamenetz's new free ebook The Edupunks’ Guide is all about the many paths that learners are taking in this new world, and we're running excerpts from the book all week. We're also asking GOOD readers to doodle your learning journey and submit the result by Sunday, September 11.
There’s been a revolution in the way people spread knowledge. Sharing information openly over the Internet is way cheaper than purchasing it commercially in dead-tree format, and often the learning that happens this way is faster, more up-to-date, and more relevant to our immediate needs. A simple example is learning to make pizza. A few years ago, you may have had to take a class or at least buy a cookbook. Today you can put “how to make a pizza” into YouTube and within minutes, you’re watching a video that shows you how to fling the dough!
More and more people around the world are building on this knowledge revolution to explore new modes of learning and to transform what we mean by “education.”
For many, the first step in an online learning journey is a simple Google search.
- Start with Google, the most-used search engine on the web. Put your phrase in quotes to return pages with the exact words, like this: “African-American history”
- Search on Wikipedia to get an overview of the topic. Follow the links to an article’s sources at the bottom of the page.
- Google Scholar will give you scholarly journal articles and other verified sources of information.
- An online archive like The Internet Archive may offer original source material.
- YouTube is good for videos—a quick entry into a topic. Or just Google your phrase and the word “video.”
- For news stories, try Google News
- For links on news, trends, and up-to-the-minute happenings, you can search Twitter with a hashtag, like this: #americanidol.
- Try posting your question to a site like OpenStudy or WikiAnswers.
- Put in your search terms plus the word “forum” or “blog” to see what ideas other people have discussed on message boards or on blogs.
A successful online research session will leave you with 20 open tabs or windows at the top of your screen. Follow your curiosity, but keep track of the links you’re following in an email draft, Word document, or an application like Evernote or Diigo so you can consult them later.
Top Free Learning Resources Online
Europeana: A digital library with 4.6 million items from libraries, archives, museums and other institutions across Europe. Read Charles Darwin’s letters or listen to Pavarotti singing Verdi.
The Internet Archive: A vast nonprofit digital library of Internet sites and other cultural artifacts—video, audio, texts, and live music.
Khan Academy: The Khan Academy has over 2000 videos covering basic math through calculus and trigonometry, physics, biology, chemistry, banking, finance, and statistics. The videos are short—5 to 15 minutes long—simple, and entertaining. They’re all made by Sal Khan, a 33-year-old former hedge fund analyst who started making them to help tutor his young cousins.
LearnFree: 750 free lessons on basic computer skills, reading and math.
Library.nu: Half a million free books. May not be exactly legal. Browse at your own risk.
MIT Open Courseware: The oldest open courseware site, with 1,900 courses on everything from history to physics. A favorite for science and math.
OpenCulture: A well-edited blog and site chronicling “the best” cultural and educational media on the web. They have lists of free online courses from top universities and free language lessons.
Open Learning Initiative: The Open Learning Initiative at Carnegie Mellon has 13 free complete courses in topics ranging from physics to logic to French. The courses are highly interactive, using video, animations, and lots of embedded quizzes and assessments so you know how you’re doing. The site requires a signup and sometimes you may have to download some software.
Open Textbooks: A catalog of open textbooks that are free to read online.
Quia: On Quia, you can create your own games and quizzes to test yourself, or take thousands of quizzes—flashcards, matching games, word searches—that other students and teachers have created for the ultimate study guide.
Saylor Foundation: Saylor lists 241 original courses on the site, for which the material comes from around the web.
Scribd: Scribd is a place to find free books and presentations on almost any topic, uploaded and shared by the authors.
Slideshare: Slideshare is a collection of free PowerPoint presentations, sometimes with audio. It’s a good place to learn about up-to-date topics like design, technology, and music.
TED: TED (for Technology, Entertainment, Design) has an excellent collection of 300-plus short video lectures by scientists, authors, artists, political figures, and more. Browsing the site is sure to be enlightening and can give you clues about fields you might want to study, like behavioral economics or biophysics.
Textbook Revolution: A student-run site with links and reviews to textbooks and other educational resources. Many are available free as PDFs, viewable online as ebooks, or websites containing course materials. You can also use the site to find descriptions of books that aren’t free, and find where they may be cheaper.
Wikiversity: Wikiversity has a wide variety of multimedia course materials. Courses are run through the site, meaning students at universities create and publish course modules for other students’ use. Like Wikipedia, you can participate in the community by editing course material (a great way to test and expand your own knowledge) or by joining discussions in the “Colloquium” section.
YouTube and YouTube EDU: Don’t forget to search YouTube for lectures and presentations on any topic you find interesting. YouTube EDU contains content that’s been tagged “education,” which may include quirky things like Tina Fey’s 2011 book talk at Google. | <urn:uuid:b4df6a76-7b1e-45c5-9b39-963295c53dd6> | CC-MAIN-2013-20 | http://www.good.is/posts/edupunks-guide-excerpt-how-to-do-research-online | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.907526 | 1,370 | 3.21875 | 3 |
Some unknown terrible person shot a defenseless pilot whale last month, leaving it to swim the Atlantic in agony for weeks before it finally beached itself on the New Jersey shore and died. Authorities are still looking for the shooter. The bullet wound caused a fulminant infection in the whale’s jaw that prevented it from eating, so it basically starved to death. This was determined during a necropsy, an autopsy for animals.
Along with sympathy for the poor creature, this debacle aroused an interesting question: How does one autopsy a whale? With four-ton meat hooks, whaling knives and bone saws, actually. Michael Moore, a veterinarian and whale biologist at the Woods Hole Oceanographic Institution, does it all the time.
Moore spends much of his time studying North Atlantic right whales, an endangered species whose name derives from whalers’ adage that these were the “right whales” to hunt, because they’re easy to spot and float when they die. They’re no longer hunted for their oil, but they are entangled in fishing lines and injured in ship collisions, often suffering for a great while and also succumbing to starvation. “It’s the most egregious animal welfare issue globally at this time,” in Moore’s words. But protecting them requires understanding how they died, and to do this Moore must take them apart, studying their broken bones and lobster net-tangled flukes to determine their exact causes of death.In partnership with the National Oceanic and Atmospheric Administration, Moore deploys on-call, toting a case full of knives to examine right whales that have beached or are floating in the open ocean. Right whales are baleen whales and at least two orders of magnitude bigger than the toothed pilot whale that was shot, so in most cases, they must be examined right where they‘re found — that means on the beach. They either beach themselves and die there, or they’re towed to shore once they have been located at sea.
Moore and other rubber-suited biologists work amid 120,000 pounds of slick black-and-red whale flesh, clambering over and through the carcass to find out what went wrong. Time is of the essence, because the longer they wait, the more the animal’s internal organs break down, making it difficult to determine how it died.
Moore uses a Japanese whaling hook, which is useful for pulling back sheets of blubber to get at the animal’s internal organs. He carries a bone saw — formerly his mother’s — to get through jaws and vertebrae to find the location of a fatal injury. He’s even visited indigenous Alaskan tribes to study their ancestral whale processing techniques.
The pilot whale that died was small, so it was trucked to a necropsy facility at the the Marine Mammal Stranding Center in Brigantine, N.J., down the shore north of Atlantic City. It weighed about 740 pounds when it beached, quite gaunt for an animal that should normally weigh more than a ton. Researchers knew something was seriously wrong, but they had to perform a necropsy to determine what it was.
The creatures are brought in on trucks and hoisted into the facility on chains rigged to the ceiling, attached to four-ton-rated meat hooks. They lay on negative-pressure steel tables, the same types used in human autopsy procedures, which suck out odors and pathogens as the biologists get to work. The lab also contains deep freezers for stringing up deceased animals; it harbors an overwhelming odor of chemical and organic substances. (It’s somewhat legendary at WHOI that Moore lost his sense of smell while in veterinary school, which he says enables him to get literally inside a rotting animal carcass without losing his lunch or his cool.)
The 11-foot-long pilot whale died shortly after authorities reached its side on the beach on Sept. 24. But it wasn’t until a necropsy a couple weeks later that they knew what happened. The bullet entered near its blow hole, but the wound had closed and faded a bit, suggesting it had been shot about a month prior. The .30-caliber round lodged in its jaw, causing the infection.
“This poor animal literally starved to death,” said Bob Schoelkopf, co-director of the Marine Mammal Stranding Center, in an interview with the AP. “It was wandering around and slowly starving to death because of the infection. Who would do that to an innocent animal?”
That question is now in the hands of the authorities. For biologists like Moore and Schoelkopf, necropsies can at least answer the question of how. Why, of course, is something else entirely.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:b6502306-e434-43de-977a-97e28d1218d6> | CC-MAIN-2013-20 | http://www.popsci.com/science/article/2011-10/how-do-you-autopsy-whale | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.967078 | 1,058 | 2.828125 | 3 |
Aswan High Dam on the Nile River
Aswan-- On 15 January 1971, Egypt celebrated the completion of the High Dam, whose funding was the center of a Cold War dispute that led to the 1956 Suez War. Located on the Upper Nile, around 1000km downstream from Cairo, the gigantic mountain of concrete and steel is among the 20th century's most elaborate engineering work. Aswan High Dam on the Nile River is located at the north end of Lake Nasser.
The construction of the High Dam began in 1960, and it was officially inaugurated ten years later, at a cost of US$1 billion, much of which was provided by the former Soviet Union. The dam stores 160 billion cubic meters of water, and the reservoir behind it, named Lake Nasser, stretches some 350km into Egypt and 150km into Sudan. It is a symbol for Egyptians patience and challenge.
The Suez Crisis began on 26 July 1956 when Egyptian President Gamal Abdel Nasser nationalised the Suez Canal. The move was in response to a decision by the United States and Britain to withdraw finance for the Aswan High Dam - a massive project to bring water to the Nile valley and electricity to develop Egypt's industry - because of Egypt's political and military ties to the Soviet Union.
The world-famous High Dam was an engineering miracle when it was built in the 1960s. Today it provides irrigation and electricity for the whole of Egypt and, together with the old Aswan Dam, 6km downriver, wonderful views for visitors. From the top of the two mile-long High Dam you can gaze across Lake Nasser, the huge reservoir created when it was built to Kalabsha temple in the south and the huge power station to the north.
The story of the High Dam was a tale of a nation, hikayit sha‘b, as Abdel Halim Hafiz chanted in an iconic song from the Nasserist period.
This nation lived under the yoke of British colonialism for over 70 years. After gaining independence, Egypt's revolutionary president, Colonel Gamal Abdel Nasser, approached the World Bank to finance the construction of a dam on the Nile, a vital step towards economic development. The World Bank refused. In an audacious challenge to old and new imperialism, Nasser nationalized the Suez Canal in 1956 to acquire funding for the project. The struggling nation heroically endured subsequent military assaults and a trade embargo. The dam was eventually built.
The story of the High Dam at Aswan is indeed the tale of this nation. The stages of its history chronicle critical transformations in Egyptian history at large. During the last half century, the dam moved from being a celebrated monument to Egyptian independence to a forgotten barrage deep in the country's south. It was a state-engineered tool of anti-imperialist propaganda, whose splendor faded away with the downfall and fundamental reversion of the anti-imperial project.
In other words Egypt had monopoly of the waters. On behalf of its colonial possessions - Sudan, Kenya, Tanzania and Uganda - Britain, which was primarily concerned with the Suez Canal and the passage to India, signed away their most precious resource.
Egypt had the right to veto any project along the Nile and full rights of inspection.
In 1959, this deal was overtaken by a new agreement between Egypt and Sudan splitting the waters 75 per cent to 25 per cent and guaranteeing Cairo "full control of the river".
The results of this control are nowhere more clearly seen than at Lake Nasser, a man-made reservoir 550km-long, created when Egypt completed the Aswan high dam. The country's largest engineering project took six years to build and another five years to fill.
Some 55.5 billion cu m of water gush from the Aswan dam into Egypt annually. It has enabled Cairo to regulate the life-giving annual flood, to irrigate its otherwise parched landscape, and at the point it was finished supplied half the country's electricity needs.
Nasser was in need of a success, a success to rival that of the Suez Canal, which is a 190 km-long man-made waterway linking the Mediterranean with the Red Sea. The French dug the Suez Canal 97 years before Nasser took the decision to nationalize it. Nasser took the decision to nationalize the Suez Canal just two years before it was supposed to revert back to Egyptian control. This was used as justification by three countries [Britain, France and Israel] to launch a tripartite strike against Egypt. The President's decision to cancel the contract just two years before its expiry date made it seem like Nasser was seeking a popular confrontation.
The difference is that the Aswan High Dam is a success story, whereas the Suez Canal was a story of conflict. The High Dam is a witness of the history of Egypt's relationship with the Soviet Union, which continued throughout Nasser's rule. The Aswan High Dam is considered to be the most important construction of the Nasser regime.
Abu Simbel, Egypt — In the 1960s, rising waters from a new dam threatened to submerge the temples and monuments of Nubia, the ancient home of black pharaohs in Egypt's far south. To preserve them, the antiquities were dismantled, moved and reconstructed. Today, most of the surviving monuments can be seen only from the lake created by the waters that nearly destroyed them.
A short flight from Aswan is Abu Simbel and the Great Temple of Ramses II, and the gods of creation and light, Ptah, Amen and Heru-khuti is the most iconic Ancient Egyptian sight after the Pyramids. The temple has four massive statues of Ramses II and the gods Ra-Horakhty, Amun and Ptah at its entrance. Beyond lie two pillared halls and the sacred sanctuary. Every wall and ceiling is a storybook art gallery of this extraordinary pharaoh's life. (He lived to 90 and is said to have fathered more than 200 children).
Know more details about Visiting Temples of Ramses II at Abu Simbel, Aswan
Construction of the high dam was an enormous national project that every Egyptian contributed to. Nubians in particular sacrificed their possessions, including 45 Nubian villages lying on 300 square kilometers of land and one million palm trees. They were moved from a paradise on the banks of the Nile. These treasures are all submerged under water now. Along with the loss of the land, there was also the loss of heritage, values, memories and lifestyle, and, above all proximity to the water, the primary source of life.
For More Information Visit: Aswan Tourism and Tourist Information
Cruise ships on Egypt's Lake Nasser visit the ancient temples of Nubia's black pharaohs:
The Aswan High Dam: A Political Witness: http://aawsat.com/english/news.asp?section=2&id=19640
Memories of a high dam: http://www.almasryalyoum.com/en/news/memories-dam-high-dam%E2%80%A6
Suez Crisis: Key players: http://news.bbc.co.uk/2/hi/5195582.stm
Nile deal brings countries to boiling point over water: http://www.nzherald.co.nz/world/news/article.cfm?c_id=2&objectid=10648772 | <urn:uuid:618ed3d6-f1e8-4ba9-8a5a-d895deaf38f0> | CC-MAIN-2013-20 | http://www.somalipress.com/guides/city-guides/aswan-high-dam-nile-river.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.951938 | 1,546 | 3.515625 | 4 |
Met proto-oncogene (hepatocyte growth factor receptor)
Crystallographic structure of MET. PDB rendering based on 1r0p.
c-Met (MET or MNNG HOS Transforming gene) is a proto-oncogene that encodes a protein known as hepatocyte growth factor receptor (HGFR). The hepatocyte growth factor receptor protein possesses tyrosine-kinase activity. The primary single chain precursor protein is post-translationally cleaved to produce the alpha and beta subunits, which are disulfide linked to form the mature receptor.
MET is a membrane receptor that is essential for embryonic development and wound healing. Hepatocyte growth factor (HGF) is the only known ligand of the MET receptor. MET is normally expressed by cells of epithelial origin, while expression of HGF is restricted to cells of mesenchymal origin. Upon HGF stimulation, MET induces several biological responses that collectively give rise to a program known as invasive growth.
Abnormal MET activation in cancer correlates with poor prognosis, where aberrantly active MET triggers tumor growth, formation of new blood vessels (angiogenesis) that supply the tumor with nutrients, and cancer spread to other organs (metastasis). MET is deregulated in many types of human malignancies, including cancers of kidney, liver, stomach, breast, and brain. Normally, only stem cells and progenitor cells express MET, which allows these cells to grow invasively in order to generate new tissues in an embryo or regenerate damaged tissues in an adult. However, cancer stem cells are thought to hijack the ability of normal stem cells to express MET, and thus become the cause of cancer persistence and spread to other sites in the body.
MET proto-oncogene (GeneID: 4233) has a total length of 125,982 bp, and it is located in the 7q31 locus of chromosome 7. MET is transcribed into a 6,641 bp mature mRNA, which is then translated into a 1,390 amino-acid MET protein.
MET is a receptor tyrosine kinase (RTK) that is produced as a single-chain precursor. The precursor is proteolytically cleaved at a furin site to yield a highly glycosylated extracellular α-subunit and a transmembrane β-subunit, which are linked together by a disulfide bridge.
The extracellular region contains:
- Region of homology to semaphorins (Sema domain), which includes the full α-chain and the N-terminal part of the β-chain
- Cysteine-rich MET-related sequence (MRS domain)
- Glycine-proline-rich repeats (G-P repeats)
- Four immunoglobulin-like structures (Ig domains), a typical protein-protein interaction region.
A juxtamembrane segment that contains:
- a serine residue (Ser 985), which inhibits the receptor kinase activity upon phosphorylation
- a tyrosine (Tyr 1003), which is responsible for MET polyubiquitination, endocytosis, and degradation upon interaction with the ubiquitin ligase CBL
- Tyrosine kinase domain, which mediates MET biological activity. Following MET activation, transphosphorylation occurs on Tyr 1234 and Tyr 1235.
- C-terminal region, which contains two crucial tyrosines (Tyr 1349 and Tyr 1356) inserted into the multisubstrate docking site, capable of recruiting downstream adapter proteins with Src homology-2 (SH2) domains. The two tyrosines of the docking site have been reported to be necessary and sufficient for signal transduction both in vitro and in vivo.
MET signaling pathway
MET activation by its ligand HGF induces MET kinase catalytic activity, which triggers transphosphorylation of the tyrosines Tyr 1234 and Tyr 1235. These two tyrosines engage various signal transducers, thus initiating a whole spectrum of biological activities driven by MET, collectively known as the invasive growth program. The transducers interact with the intracellular multisubstrate docking site of MET either directly, such as GRB2, SHC, SRC, and the p85 regulatory subunit of phosphatidylinositol-3 kinase (PI3K), or indirectly through the scaffolding protein GAB1.
Tyr 1349 and Tyr 1356 of the multisubstrate docking site are both involved in the interaction with GAB1, SRC, and SHC, while only Tyr 1356 is involved in the recruitment of GRB2, phospholipase C γ (PLC-γ), p85, and SHP2.
GAB1 is a key coordinator of the cellular responses to MET and binds the MET intracellular region with high avidity, but low affinity. Upon interaction with MET, GAB1 becomes phosphorylated on several tyrosine residues which, in turn, recruit a number of signalling effectors, including PI3K, SHP2, and PLC-γ. GAB1 phosphorylation by MET results in a sustained signal that mediates most of the downstream signaling pathways.
Activation of signal transduction
MET engagement activates multiple signal transduction pathways:
- The RAS pathway mediates HGF-induced scattering and proliferation signals, which lead to branching morphogenesis. Of note, HGF, unlike most mitogens, induces sustained RAS activation, and thus prolonged MAPK activity.
- The PI3K pathway is activated in two ways: PI3K can be either downstream of RAS, or it can be recruited directly through the multifunctional docking site. Activation of the PI3K pathway is currently associated with cell motility through remodeling of adhesion to the extracellular matrix as well as localized recruitment of transducers involved in cytoskeletal reorganization, such as RAC1 and PAK. PI3K activation also triggers a survival signal due to activation of the AKT pathway.
- The STAT pathway, together with the sustained MAPK activation, is necessary for the HGF-induced branching morphogenesis. MET activates the STAT3 transcription factor directly, through an SH2 domain.
- Beta-catenin, a key component of the Wnt signaling pathway, translocates into the nucleus following MET activation and participates in transcriptional regulation of numerous genes.
Role in development
During embryonic development, transformation of the flat, two-layer germinal disc into a three-dimensional body depends on transition of some cells from an epithelial phenotype to spindle-shaped cells with motile behaviour, a mesenchymal phenotype. This process is referred to as epithelial-mesenchymal transition (EMT). Later in embryonic development, MET is crucial for gastrulation, angiogenesis, myoblast migration, bone remodeling, and nerve sprouting among others. MET is essential for embryogenesis, because MET -/- mice die in utero due to severe defects in placental development. Furthermore, MET is required for such critical processes as liver regeneration and wound healing during adulthood.
Tissue distribution
MET is normally expressed by epithelial cells. However, MET is also found on endothelial cells, neurons, hepatocytes, hematopoietic cells, and melanocytes. HGF expression is restricted to cells of mesenchymal origin.
Transcriptional control
MET transcription is activated by HGF and several growth factors. MET promoter has four putative binding sites for Ets, a family of transcription factors that control several invasive growth genes. ETS1 activates MET transcription in vitro. MET transcription is activated by hypoxia-inducible factor 1 (HIF1), which is activated by low concentration of intracellular oxygen. HIF1 can bind to one of the several hypoxia response elements (HREs) in the MET promoter. Hypoxia also activates transcription factor AP-1, which is involved in MET transcription.
Role in cancer
MET pathway plays an important role in the development of cancer through:
- angiogenesis (sprouting of new blood vessels from pre-existing ones to supply a tumor with nutrients);
- scatter (cell dissociation due to metalloprotease production), which often leads to metastasis.
Coordinated down-regulation of both MET and its downstream effector extracellular signal-regulated kinase 2 (ERK2) by miR-199a* may be effective in inhibiting not only cell proliferation but also motility and invasive capabilities of tumor cells.
Interaction with tumor suppressor genes
PTEN (phosphatase and tensin homolog) is a tumor suppressor gene encoding a protein PTEN, which possesses lipid and protein phosphatase-dependent as well as phosphatase-independent activities. PTEN protein phosphatase is able to interfere with MET signaling by dephosphorylating either PIP3 generated by PI3K, or the p52 isoform of SHC. SHC dephosphorylation inhibits recruitment of the GRB2 adapter to activated MET.
Cancer therapies targeting HGF/MET
Since tumor invasion and metastasis are the main cause of death in cancer patients, interfering with MET signaling appears to be a promising therapeutic approach. A number of HGF- and MET-targeted experimental therapeutics for oncology are in human clinical trials.
MET kinase inhibitors
Kinase inhibitors are low molecular weight molecules that prevent ATP binding to MET, thus inhibiting receptor transphosphorylation and recruitment of the downstream effectors. The limitations of kinase inhibitors include the fact that they inhibit only kinase-dependent MET activation and that none of them is fully specific for MET.
- K252a (Fermentek Biotechnology) is a staurosporine analogue isolated from the soil microorganism Nocardiopsis sp., and it is a potent inhibitor of all receptor tyrosine kinases (RTKs). At nanomolar concentrations, K252a inhibits both the wild type and the mutant (M1268T) MET function.
- SU11274 (SUGEN) specifically inhibits MET kinase activity and its subsequent signaling. SU11274 is also an effective inhibitor of the M1268T and H1112Y MET mutants, but not the L1213V and Y1248H mutants. SU11274 has been demonstrated to inhibit HGF-induced motility and invasion of epithelial and carcinoma cells.
- PHA-665752 (Pfizer) specifically inhibits MET kinase activity, and it has been demonstrated to repress both HGF-dependent and constitutive MET phosphorylation. Furthermore, some tumors harboring MET amplifications are highly sensitive to treatment with PHA-665752.
- ARQ197 (ArQule) is a promising selective inhibitor of MET, which entered a phase 2 clinical trial in 2008.
- Foretinib (XL880, Exelixis) targets multiple receptor tyrosine kinases (RTKs) with growth-promoting and angiogenic properties. The primary targets of foretinib are MET, VEGFR2, and KDR. Foretinib has completed phase 2 clinical trials with indications for papillary renal cell carcinoma, gastric cancer, and head and neck cancer.
- SGX523 (SGX Pharmaceuticals) specifically inhibits MET at low nanomolar concentrations.
- MP470 (SuperGen) is a novel inhibitor of c-KIT, MET, PDGFR, Flt3, and AXL. A phase I clinical trial of MP470 was announced in 2007.
HGF inhibitors
Since HGF is the only known ligand of MET, formation of a HGF:MET complex blocks MET biological activity. For this purpose, truncated HGF, anti-HGF neutralizing antibodies, and an uncleavable form of HGF have been utilized so far. The major limitation of HGF inhibitors is that they block only HGF-dependent MET activation.
- NK4 competes with HGF as it binds MET without inducing receptor activation, thus behaving as a full antagonist. NK4 is a molecule bearing the N-terminal hairpin and the four kringle domains of HGF. Moreover, NK4 is structurally similar to angiostatins, which is why it possesses anti-angiogenic activity.
- Neutralizing anti-HGF antibodies were initially tested in combination, and it was shown that at least three antibodies, acting on different HGF epitopes, are necessary to prevent MET tyrosine kinase activation. More recently, it has been demonstrated that fully human monoclonal antibodies can individually bind and neutralize human HGF, leading to regression of tumors in mouse models. Two anti-HGF antibodies are currently available: the humanized AV299 (AVEO), and the fully human AMG102 (Amgen).
- Uncleavable HGF is an engineered form of pro-HGF carrying a single amino-acid substitution, which prevents the maturation of the molecule. Uncleavable HGF is capable of blocking MET-induced biological responses by binding MET with high affinity and displacing mature HGF. Moreover, uncleavable HGF competes with the wild-type endogenous pro-HGF for the catalytic domain of proteases that cleave HGF precursors. Local and systemic expression of uncleavable HGF inhibits tumor growth and, more importantly, prevents metastasis.
Decoy MET
Decoy MET refers to a soluble truncated MET receptor. Decoys are able to inhibit MET activation mediated by both HGF-dependent and HGF-independent mechanisms, as they prevent both ligand binding and MET receptor homodimerization. CGEN241 (Compugen) is a decoy MET that is highly efficient in inhibiting tumor growth and preventing metastasis in animal models.
Immunotherapy targeting MET
Drugs used for immunotherapy can act either passively by enhancing the immunologic response to MET-expressing tumor cells, or actively by stimulating immune cells and altering differentiation/growth of tumor cells.
Passive immunotherapy
Administering monoclonal antibodies (mAbs) is a form of passive immunotherapy. MAbs facilitate destruction of tumor cells by complement-dependent cytotoxicity (CDC) and antibody-dependent cell-mediated cytotoxicity (ADCC). In CDC, mAbs bind to a specific antigen, leading to activation of the complement cascade, which in turn leads to formation of pores in tumor cells. In ADCC, the Fab domain of a mAb binds to a tumor antigen, and the Fc domain binds to Fc receptors present on effector cells (phagocytes and NK cells), thus forming a bridge between an effector and a target cell. This induces effector cell activation, leading to phagocytosis of the tumor cell by neutrophils and macrophages. Furthermore, NK cells release cytotoxic molecules, which lyse tumor cells.
- DN30 is a monoclonal anti-MET antibody that recognizes the extracellular portion of MET. DN30 induces both shedding of the MET ectodomain and cleavage of the intracellular domain, which is successively degraded by the proteasome machinery. As a consequence, on one side MET is inactivated, and on the other side the shed portion of extracellular MET hampers activation of other MET receptors, acting as a decoy. DN30 inhibits tumor growth and prevents metastasis in animal models.
- OA-5D5 is a one-armed monoclonal anti-MET antibody that was demonstrated to inhibit orthotopic pancreatic and glioblastoma tumor growth and to improve survival in tumor xenograft models. OA-5D5 is produced as a recombinant protein in Escherichia coli. It is composed of murine variable domains for the heavy and light chains with human IgG1 constant domains. The antibody blocks HGF binding to MET in a competitive fashion.
Active immunotherapy
Active immunotherapy to MET-expressing tumors can be achieved by administering cytokines, such as interferons (IFNs) and interleukin-2 (IL-2), which trigger non-specific stimulation of numerous immune cells. IFNs have been tested as therapies for many types of cancers and have demonstrated therapeutic benefits. IL-2 has been approved by the U.S. Food and Drug Administration (FDA) for the treatment of renal cell carcinoma and metastatic melanoma, which often have deregulated MET activity.
References
- Bottaro DP, Rubin JS, Faletto DL, Chan AM, Kmiecik TE, Vande Woude GF, Aaronson SA (February 1991). "Identification of the hepatocyte growth factor receptor as the met proto-oncogene product". Science 251 (4995): 802–4. doi:10.1126/science.1846706. PMID 1846706.
- Galland F, Stefanova M, Lafage M, Birnbaum D (1992). "Localization of the 5' end of the MCF2 oncogene to human chromosome 15q15→q23". Cytogenet. Cell Genet. 60 (2): 114–6. doi:10.1159/000133316. PMID 1611909.
- Cooper CS (January 1992). "The met oncogene: from detection by transfection to transmembrane receptor for hepatocyte growth factor". Oncogene 7 (1): 3–7. PMID 1531516.
- "Entrez Gene: MET met proto-oncogene (hepatocyte growth factor receptor)".
- Gentile A, Trusolino L, Comoglio PM (March 2008). "The Met tyrosine kinase receptor in development and cancer". Cancer Metastasis Rev. 27 (1): 85–94. doi:10.1007/s10555-007-9107-6. PMID 18175071.
- Birchmeier C, Birchmeier W, Gherardi E, Vande Woude GF (December 2003). "Met, metastasis, motility and more". Nat. Rev. Mol. Cell Biol. 4 (12): 915–25. doi:10.1038/nrm1261. PMID 14685170.
- Gandino L, Longati P, Medico E, Prat M, Comoglio PM (January 1994). "Phosphorylation of serine 985 negatively regulates the hepatocyte growth factor receptor kinase". J. Biol. Chem. 269 (3): 1815–20. PMID 8294430.
- Peschard P, Fournier TM, Lamorte L, Naujokas MA, Band H, Langdon WY, Park M (November 2001). "Mutation of the c-Cbl TKB domain binding site on the Met receptor tyrosine kinase converts it into a transforming protein". Mol. Cell 8 (5): 995–1004. doi:10.1016/S1097-2765(01)00378-1. PMID 11741535.
- Ponzetto C, Bardelli A, Zhen Z, Maina F, dalla Zonca P, Giordano S, Graziani A, Panayotou G, Comoglio PM (April 1994). "A multifunctional docking site mediates signaling and transformation by the hepatocyte growth factor/scatter factor receptor family". Cell 77 (2): 261–71. doi:10.1016/0092-8674(94)90318-2. PMID 7513258.
- Maina F, Casagranda F, Audero E, Simeone A, Comoglio PM, Klein R, Ponzetto C (November 1996). "Uncoupling of Grb2 from the Met receptor in vivo reveals complex roles in muscle development". Cell 87 (3): 531–42. doi:10.1016/S0092-8674(00)81372-0. PMID 8898205.
- Abounader R, Reznik T, Colantuoni C, Martinez-Murillo F, Rosen EM, Laterra J (December 2004). "Regulation of c-Met-dependent gene expression by PTEN". Oncogene 23 (57): 9173–82. doi:10.1038/sj.onc.1208146. PMID 15516982.
- Pelicci G, Giordano S, Zhen Z, Salcini AE, Lanfrancone L, Bardelli A, Panayotou G, Waterfield MD, Ponzetto C, Pelicci PG (April 1995). "The motogenic and mitogenic responses to HGF are amplified by the Shc adaptor protein". Oncogene 10 (8): 1631–8. PMID 7731718.
- Weidner KM, Di Cesare S, Sachs M, Brinkmann V, Behrens J, Birchmeier W (November 1996). "Interaction between Gab1 and the c-Met receptor tyrosine kinase is responsible for epithelial morphogenesis". Nature 384 (6605): 173–6. doi:10.1038/384173a0. PMID 8906793.
- Furge KA, Zhang YW, Vande Woude GF (November 2000). "Met receptor tyrosine kinase: enhanced signaling through adapter proteins". Oncogene 19 (49): 5582–9. doi:10.1038/sj.onc.1203859. PMID 11114738.
- Gual P, Giordano S, Anguissola S, Parker PJ, Comoglio PM (January 2001). "Gab1 phosphorylation: a novel mechanism for negative regulation of HGF receptor signaling". Oncogene 20 (2): 156–66. doi:10.1038/sj.onc.1204047. PMID 11313945.
- Gual P, Giordano S, Williams TA, Rocchi S, Van Obberghen E, Comoglio PM (March 2000). "Sustained recruitment of phospholipase C-gamma to Gab1 is required for HGF-induced branching tubulogenesis". Oncogene 19 (12): 1509–18. doi:10.1038/sj.onc.1203514. PMID 10734310.
- O'Brien LE, Tang K, Kats ES, Schutz-Geschwender A, Lipschutz JH, Mostov KE (July 2004). "ERK and MMPs sequentially regulate distinct stages of epithelial tubule development". Dev. Cell 7 (1): 21–32. doi:10.1016/j.devcel.2004.06.001. PMID 15239951.
- Marshall CJ (January 1995). "Specificity of receptor tyrosine kinase signaling: transient versus sustained extracellular signal-regulated kinase activation". Cell 80 (2): 179–85. doi:10.1016/0092-8674(95)90401-8. PMID 7834738.
- Graziani A, Gramaglia D, Cantley LC, Comoglio PM (November 1991). "The tyrosine-phosphorylated hepatocyte growth factor/scatter factor receptor associates with phosphatidylinositol 3-kinase". J. Biol. Chem. 266 (33): 22087–90. PMID 1718989.
- Boccaccio C, Andò M, Tamagnone L, Bardelli A, Michieli P, Battistini C, Comoglio PM (January 1998). "Induction of epithelial tubules by growth factor HGF depends on the STAT pathway". Nature 391 (6664): 285–8. doi:10.1038/34657. PMID 9440692.
- Monga SP, Mars WM, Pediaditakis P, Bell A, Mulé K, Bowen WC, Wang X, Zarnegar R, Michalopoulos GK (April 2002). "Hepatocyte growth factor induces Wnt-independent nuclear translocation of beta-catenin after Met-beta-catenin dissociation in hepatocytes". Cancer Res. 62 (7): 2064–71. ISSN 0008-5472. PMID 11929826.
- Gude NA, Emmanuel G, Wu W, Cottage CT, Fischer K, Quijada P, Muraski JA, Alvarez R, Rubio M, Schaefer E, Sussman MA (May 2008). "Activation of Notch-mediated protective signaling in the myocardium". Circ. Res. 102 (9): 1025–35. doi:10.1161/CIRCRESAHA.107.164749. PMID 18369158.
- "he fields of HGF/c-Met involvement". HealthValue. Retrieved 2009-06-13.
- Boccaccio C, Comoglio PM (August 2006). "Invasive growth: a MET-driven genetic programme for cancer and stem cells". Nat. Rev. Cancer 6 (8): 637–45. doi:10.1038/nrc1912. PMID 16862193.
- Birchmeier C, Gherardi E (October 1998). "Developmental roles of HGF/SF and its receptor, the c-Met tyrosine kinase". Trends Cell Biol. 8 (10): 404–10. doi:10.1016/S0962-8924(98)01359-2. PMID 9789329.
- Uehara Y, Minowa O, Mori C, Shiota K, Kuno J, Noda T, Kitamura N (February 1995). "Placental defect and embryonic lethality in mice lacking hepatocyte growth factor/scatter factor". Nature 373 (6516): 702–5. doi:10.1038/373702a0. PMID 7854453.
- Shirasaki F, Makhluf HA, LeRoy C, Watson DK, Trojanowska M (December 1999). "Ets transcription factors cooperate with Sp1 to activate the human tenascin-C promoter". Oncogene 18 (54): 7755–64. doi:10.1038/sj.onc.1203360. PMID 10618716.
- Gambarotta G, Boccaccio C, Giordano S, Andŏ M, Stella MC, Comoglio PM (November 1996). "Ets up-regulates MET transcription". Oncogene 13 (9): 1911–7. PMID 8934537.
- Pennacchietti S, Michieli P, Galluzzo M, Mazzone M, Giordano S, Comoglio PM (April 2003). "Hypoxia promotes invasive growth by transcriptional activation of the met protooncogene". Cancer Cell 3 (4): 347–61. doi:10.1016/S1535-6108(03)00085-0. PMID 12726861.
- "HGF/c-Met and cancer". HealthValue. Retrieved 2009-06-13.
- Kim S, Lee UJ, Kim MN, et al. (June 2008). "MicroRNA miR-199a* regulates the MET proto-oncogene and the downstream extracellular signal-regulated kinase 2 (ERK2)". J. Biol. Chem. 283 (26): 18158–66. doi:10.1074/jbc.M800186200. PMID 18456660.
- Maehama T, Dixon JE (May 1998). "The tumor suppressor, PTEN/MMAC1, dephosphorylates the lipid second messenger, phosphatidylinositol 3,4,5-trisphosphate". J. Biol. Chem. 273 (22): 13375–8. doi:10.1074/jbc.273.22.13375. PMID 9593664.
- Morris MR, Gentle D, Abdulrahman M, Maina EN, Gupta K, Banks RE, Wiesener MS, Kishida T, Yao M, Teh B, Latif F, Maher ER (June 2005). "Tumor suppressor activity and epigenetic inactivation of hepatocyte growth factor activator inhibitor type 2/SPINT2 in papillary and clear cell renal cell carcinoma". Cancer Res. 65 (11): 4598–606. doi:10.1158/0008-5472.CAN-04-3371. PMID 15930277.
- Morotti A, Mila S, Accornero P, Tagliabue E, Ponzetto C (July 2002). "K252a inhibits the oncogenic properties of Met, the HGF receptor". Oncogene 21 (32): 4885–93. doi:10.1038/sj.onc.1205622. PMID 12118367.
- Berthou S, Aebersold DM, Schmidt LS, Stroka D, Heigl C, Streit B, Stalder D, Gruber G, Liang C, Howlett AR, Candinas D, Greiner RH, Lipson KE, Zimmer Y (July 2004). "The Met kinase inhibitor SU11274 exhibits a selective inhibition pattern toward different receptor mutated variants". Oncogene 23 (31): 5387–93. doi:10.1038/sj.onc.1207691. PMID 15064724.
- Wang X, Le P, Liang C, Chan J, Kiewlich D, Miller T, Harris D, Sun L, Rice A, Vasile S, Blake RA, Howlett AR, Patel N, McMahon G, Lipson KE (November 2003). "Potent and selective inhibitors of the Met [hepatocyte growth factor/scatter factor (HGF/SF) receptor] tyrosine kinase block HGF/SF-induced tumor cell growth and invasion". Mol. Cancer Ther. 2 (11): 1085–92. PMID 14617781.
- Christensen JG, Schreck R, Burrows J, Kuruganti P, Chan E, Le P, Chen J, Wang X, Ruslim L, Blake R, Lipson KE, Ramphal J, Do S, Cui JJ, Cherrington JM, Mendel DB (November 2003). "A selective small molecule inhibitor of c-Met kinase inhibits c-Met-dependent phenotypes in vitro and exhibits cytoreductive antitumor activity in vivo". Cancer Res. 63 (21): 7345–55. PMID 14612533.
- Smolen GA, Sordella R, Muir B, Mohapatra G, Barmettler A, Archibald H, Kim WJ, Okimoto RA, Bell DW, Sgroi DC, Christensen JG, Settleman J, Haber DA (February 2006). "Amplification of MET may identify a subset of cancers with extreme sensitivity to the selective tyrosine kinase inhibitor PHA-665752". Proc. Natl. Acad. Sci. U.S.A. 103 (7): 2316–21. doi:10.1073/pnas.0508776103. PMC 1413705. PMID 16461907.
- Matsumoto K, Nakamura T (April 2003). "NK4 (HGF-antagonist/angiogenesis inhibitor) in cancer biology and therapeutics". Cancer Sci. 94 (4): 321–7. doi:10.1111/j.1349-7006.2003.tb01440.x. PMID 12824898.
- Cao B, Su Y, Oskarsson M, Zhao P, Kort EJ, Fisher RJ, Wang LM, Vande Woude GF (June 2001). "Neutralizing monoclonal antibodies to hepatocyte growth factor/scatter factor (HGF/SF) display antitumor activity in animal models". Proc. Natl. Acad. Sci. U.S.A. 98 (13): 7443–8. doi:10.1073/pnas.131200498. PMC 34688. PMID 11416216.
- Burgess T, Coxon A, Meyer S, Sun J, Rex K, Tsuruda T, Chen Q, Ho SY, Li L, Kaufman S, McDorman K, Cattley RC, Sun J, Elliott G, Zhang K, Feng X, Jia XC, Green L, Radinsky R, Kendall R (February 2006). "Fully human monoclonal antibodies to hepatocyte growth factor with therapeutic potential against hepatocyte growth factor/c-Met-dependent human tumors". Cancer Res. 66 (3): 1721–9. doi:10.1158/0008-5472.CAN-05-3329. PMID 16452232.
- Mazzone M, Basilico C, Cavassa S, Pennacchietti S, Risio M, Naldini L, Comoglio PM, Michieli P (November 2004). "An uncleavable form of pro–scatter factor suppresses tumor growth and dissemination in mice". J. Clin. Invest. 114 (10): 1418–32. doi:10.1172/JCI22235. PMC 525743. PMID 15545993.
- Michieli P, Mazzone M, Basilico C, Cavassa S, Sottile A, Naldini L, Comoglio PM (July 2004). "Targeting the tumor and its microenvironment by a dual-function decoy Met receptor". Cancer Cell 6 (1): 61–73. doi:10.1016/j.ccr.2004.05.032. PMID 15261142.
- Reang P, Gupta M, Kohli K (2006). "Biological Response Modifiers in Cancer". MedGenMed 8 (4): 33. PMC 1868326. PMID 17415315.
- Petrelli A, Circosta P, Granziero L, Mazzone M, Pisacane A, Fenoglio S, Comoglio PM, Giordano S (March 2006). "Ab-induced ectodomain shedding mediates hepatocyte growth factor receptor down-regulation and hampers biological activity". Proc. Natl. Acad. Sci. U.S.A. 103 (13): 5090–5. doi:10.1073/pnas.0508156103. PMC 1458799. PMID 16547140.
- Jin H, Yang R, Zheng Z, Romero M, Ross J, Bou-Reslan H, Carano RA, Kasman I, Mai E, Young J, Zha J, Zhang Z, Ross S, Schwall R, Colbern G, Merchant M (June 2008). "MetMAb, the one-armed 5D5 anti-c-Met antibody, inhibits orthotopic pancreatic tumor growth and improves survival". Cancer Res. 68 (11): 4360–8. doi:10.1158/0008-5472.CAN-07-5960. PMID 18519697.
- Martens T, Schmidt NO, Eckerich C, Fillbrandt R, Merchant M, Schwall R, Westphal M, Lamszus K (October 2006). "A novel one-armed anti-c-Met antibody inhibits glioblastoma growth in vivo". Clin. Cancer Res. 12 (20 Pt 1): 6144–52. doi:10.1158/1078-0432.CCR-05-1418. PMID 17062691.
- Comoglio, P M (1993). "Structure, biosynthesis and biochemical properties of the HGF receptor in normal and malignant cells". EXS (SWITZERLAND) 65: 131–65. ISSN 1023-294X. PMID 8380735.
- Naldini, L; Weidner K M, Vigna E, Gaudino G, Bardelli A, Ponzetto C, Narsimhan R P, Hartmann G, Zarnegar R, Michalopoulos G K (Oct. 1991). "Scatter factor and hepatocyte growth factor are indistinguishable ligands for the MET receptor". EMBO J. (ENGLAND) 10 (10): 2867–78. ISSN 0261-4189. PMC 452997. PMID 1655405.
- Petrelli, Annalisa; Gilestro Giorgio F, Lanzardo Stefania, Comoglio Paolo M, Migone Nicola, Giordano Silvia (Mar. 2002). "The endophilin-CIN85-Cbl complex mediates ligand-dependent downregulation of c-Met". Nature (England) 416 (6877): 187–90. doi:10.1038/416187a. ISSN 0028-0836. PMID 11894096.
- Ng, Cherlyn; Jackson Rebecca A, Buschdorf Jan P, Sun Qingxiang, Guy Graeme R, Sivaraman J (Mar. 2008). "Structural basis for a novel intrapeptidyl H-bond and reverse binding of c-Cbl-TKB domain substrates". EMBO J. (England) 27 (5): 804–16. doi:10.1038/emboj.2008.18. PMC 2265755. PMID 18273061.
- Grisendi, S; Chambraud B, Gout I, Comoglio P M, Crepaldi T (Dec. 2001). "Ligand-regulated binding of FAP68 to the hepatocyte growth factor receptor". J. Biol. Chem. (United States) 276 (49): 46632–8. doi:10.1074/jbc.M104323200. ISSN 0021-9258. PMID 11571281.
- Davies, G; Jiang W G, Mason M D (Apr. 2001). "HGF/SF modifies the interaction between its receptor c-Met, and the E-cadherin/catenin complex in prostate cancer cells". Int. J. Mol. Med. (Greece) 7 (4): 385–8. ISSN 1107-3756. PMID 11254878.
- Ponzetto, C; Zhen Z, Audero E, Maina F, Bardelli A, Basile M L, Giordano S, Narsimhan R, Comoglio P (Jun. 1996). "Specific uncoupling of GRB2 from the Met receptor. Differential effects on transformation and motility". J. Biol. Chem. (UNITED STATES) 271 (24): 14119–23. doi:10.1074/jbc.271.24.14119. ISSN 0021-9258. PMID 8662889.
- Liang, Q; Mohan R R, Chen L, Wilson S E (Jul. 1998). "Signaling by HGF and KGF in corneal epithelial cells: Ras/MAP kinase and Jak-STAT pathways". Invest. Ophthalmol. Vis. Sci. (UNITED STATES) 39 (8): 1329–38. ISSN 0146-0404. PMID 9660480.
- Wang, Dakun; Li Zaibo, Messing Edward M, Wu Guan (Sep. 2002). "Activation of Ras/Erk pathway by a novel MET-interacting protein RanBPM". J. Biol. Chem. (United States) 277 (39): 36216–22. doi:10.1074/jbc.M205111200. ISSN 0021-9258. PMID 12147692.
- Hiscox S, Jiang WG (1999). "Association of the HGF/SF receptor, c-met, with the cell-surface adhesion molecule, E-cadherin, and catenins in human tumor cells.". Biochem Biophys Res Commun 261 (2): 406–11. doi:10.1006/bbrc.1999.1002. PMID 10425198.
Further reading
- Peruzzi B, Bottaro DP (2006). "Targeting the c-Met signaling pathway in cancer". Clin. Cancer Res. 12 (12): 3657–60. doi:10.1158/1078-0432.CCR-06-0818. PMID 16778093.
- Birchmeier C, Birchmeier W, Gherardi E, Vande Woude GF (December 2003). "Met, metastasis, motility and more". Nat. Rev. Mol. Cell Biol. 4 (12): 915–25. doi:10.1038/nrm1261. PMID 14685170.
- Zhang YW, Vande Woude GF (February 2003). "HGF/SF-met signaling in the control of branching morphogenesis and invasion". J. Cell. Biochem. 88 (2): 408–17. doi:10.1002/jcb.10358. PMID 12520544.
- Paumelle R, Tulasne D, Kherrouche Z, Plaza S, Leroy C, Reveneau S, Vandenbunder B, Fafeur V (April 2002). "Hepatocyte growth factor/scatter factor activates the ETS1 transcription factor by a RAS-RAF-MEK-ERK signaling pathway". Oncogene 21 (15): 2309–19. doi:10.1038/sj.onc.1205297. PMID 11948414.
- Comoglio PM (1993). "Structure, biosynthesis and biochemical properties of the HGF receptor in normal and malignant cells". EXS 65: 131–65. PMID 8380735.
- Maulik G, Shrikhande A, Kijima T, et al. (2002). "Role of the hepatocyte growth factor receptor, c-Met, in oncogenesis and potential for therapeutic inhibition". Cytokine Growth Factor Rev. 13 (1): 41–59. doi:10.1016/S1359-6101(01)00029-6. PMID 11750879.
- Ma PC, Maulik G, Christensen J, Salgia R (2004). "c-Met: structure, functions and potential for therapeutic inhibition". Cancer Metastasis Rev. 22 (4): 309–25. doi:10.1023/A:1023768811842. PMID 12884908.
- Knudsen BS, Edlund M (2004). "Prostate cancer and the met hepatocyte growth factor receptor". Adv. Cancer Res. 91: 31–67. doi:10.1016/S0065-230X(04)91002-0. ISBN 978-0-12-006691-9. PMID 15327888.
- Dharmawardana PG, Giubellino A, Bottaro DP (2005). "Hereditary papillary renal carcinoma type I". Curr. Mol. Med. 4 (8): 855–68. doi:10.2174/1566524043359674. PMID 15579033.
- Kemp LE, Mulloy B, Gherardi E (2006). "Signalling by HGF/SF and Met: the role of heparan sulphate co-receptors". Biochem. Soc. Trans. 34 (Pt 3): 414–7. doi:10.1042/BST0340414. PMID 16709175.
- Proto-Oncogene Proteins c-met at the US National Library of Medicine Medical Subject Headings (MeSH)
- UniProtKB/Swiss-Prot entry P08581: MET_HUMAN, ExPASy (Expert Protein Analysis System) proteomics server of the Swiss Institute of Bioinformatics (SIB)
- A table with references to significant roles of MET in cancer
In the summer of 2007, shoppers at some food co-ops in the upper Midwest encountered a new label on their produce: “Local Fair Trade.” Seasonal staples such as cucumbers, squash, and broccoli were the first to don the label, a large, hard-to-miss sticker symbolizing the union of two approaches to sustainable food: eating food grown locally, and purchasing food traded fairly.
We’ve gotten used to a variety of labels on our food. There’s “organic,” which used to connote ideas like “pure” and “natural” but these days technically means food certified as organic by the USDA (if domestically produced) or by the food’s country of origin. “Local” usually means food grown or produced within a few hundred miles of its selling location. And “fair trade” is seen most commonly on popular imports such as coffee and chocolate; the label means that the food’s growers or producers were paid a decent wage.
So what does “local fair trade” mean? According to Erik Esse, the director of the Minneapolis-based Local Fair Trade Network, the label is an attempt to answer a question: “How can the principles of fair trade, which have effectively moved many farmers and workers in the developing world out of poverty and towards self-sufficiency, work here in the U.S., where our farmworkers are having some of the same problems?”
At the heart of the local or “domestic fair trade” label is the idea of fair and equitable relationships. The label can be applied to food grown in the U.S. under a set of guidelines, including a living wage and an emphasis on fair and healthy living conditions. The product of nearly a decade of careful planning, the domestic fair-trade label is an effort to incorporate social-justice awareness into our burgeoning efforts to eat foods that have been cleanly and sustainably produced.
As the fair-trade movement (both international and domestic) wants everyone to understand, the local people behind the food we eat deserve sustainability, too. The USDA’s national organic standards guarantee that organically certified food is not genetically modified and is grown without petroleum-based fertilizers or synthetic chemicals. But the standards have nothing to say about the people who produce the food. This fact appalls those who work closely with farmworkers, including Richard Mandelbaum of El Comité de Apoyo a los Trabajadores Agrícolas (CATA), a migrant-farmworker organization based in New Jersey.
“Organic standards include all sorts of rules about how livestock needs to be treated, but absolutely none for the human beings that are on the farm,” says Mandelbaum.
The Local Fair Trade Network’s Esse isn’t sure that enough consumers are paying attention to those human beings, either. “The way stores like to put up pictures of happy farmers these days — that’s in some ways great, in that it’s identifying that there’s a person growing their food,” he says. “But in some ways, those smiles mask the fact of how little money they get paid and how hard their lives are.”
Although some organic farms in the U.S. opt to pay their workers a living wage, as well as provide vacation days and access to health care, many do none of those things. Small-scale organic farmers, who often live hand-to-mouth themselves, rarely have the budget to do so. And most large, industrial-sized organic farms rely on hundreds if not thousands of underpaid migrant workers, in much the same way that conventional farms and food processors do.
According to a 2005 survey report from the University of California, Davis, the majority of the 188 California organic farms surveyed did not pay a living wage or provide medical or retirement plans. And despite the nationwide boom in organic food — the industry was worth more than $17 billion by the end of 2006 — the wealth has not trickled down. While the absence of synthetic pesticides (and the health impacts that accompany them) can be a draw to some workers, most employees on organic farms earn no more than those on conventional ones.
Across the country, three to five million people labor every year on farms and in factories, planting, cultivating, harvesting, and processing fresh produce and other agricultural products. Their lives are anything but easy. According to the National Agricultural Workers Survey, 61 percent of farmworkers live in poverty. In recent years, their median income has not kept up with inflation: for individual farmworkers, the median annual income is now $7,500, while for farmworker households, the median annual income is less than $10,000. (The overall U.S. median household income, according to the U.S. Census Bureau, is more than $48,000.) It is also estimated that between 72 and 78 percent of farmworker households have no health insurance.
Today’s domestic fair-trade movement dates back as far as 1999, when CATA, along with Rural Advancement Foundation International-USA (RAFI-USA) and several other partners, argued for an inclusion of labor issues in the federal organic standards.
“When it became clear that the issues of social justice and fairness would not be incorporated into the federal [organic standards],” says RAFI’s Michael Sligh, “that really triggered our work to look at opportunities to make that additional claim to the marketplace.”
The result was something called the Agricultural Justice Project (AJP), a group that set to work devising a separate set of standards that would cover both equity for small-scale farmers and fair working conditions for farmworkers. But not everyone was convinced.
“When we first started out, it wasn’t uncommon to get a shrug from those in the organic/sustainable community, with responses like, ‘I’m not sure why you’re focused on this,’” says CATA’s Mandelbaum. “But in the last two years, we’ve seen that consumers are increasingly dissatisfied with anonymous products, and really want to know how their food is made — environmentally of course, but also increasingly socially.”
In 2005, the Agricultural Justice Project, along with the international fair-trade organization Equal Exchange and several domestic farmer cooperatives, held the first meeting of the Domestic Fair Trade Working Group (now renamed the Domestic Fair Trade Association). By 2006, it became clear that the best place to pilot a domestic fair-trade label was the Minnesota/Wisconsin area.
According to Erik Esse, whose Local Fair Trade Network is the Minneapolis-based arm of the movement, there are around 40 food co-ops in Minnesota and more than 25 in Wisconsin. “The fact that consumer cooperatives are so key to the area,” he says, “as opposed to the corporate natural-food-store model, means we have a background that lends to embracing fair trade. [The customers] already believe in democracy and in consumer activism.”
By early 2007, four small farms in the upper Midwest had been chosen to participate in the Domestic Fair Trade Working Group’s pilot project, along with two food co-ops in the Minneapolis area. All the farms involved were closely audited, including their business practices and employee policies. And they pledged to, among other things, “1) Respect workers’ freedom of association and right to collective bargaining, 2) Provide adequate health and safety protections, including access to adequate medical care, information on potential hazards, and using the least toxic methods available, and 3) Pay a living wage.”
Esse points out that although small-scale farmers and farmworkers are often in similar financial situations, there is still some tension between the two groups. “Farmers don’t always want to stir the pot,” he says. “We often hear, ‘Things are going fine; why would I want to bring this up?’”
Rufus Hauke, of Keewaydin Farms in Viola, Wisconsin, does want to stir the pot. He’s a produce farmer participating in the pilot project who, along with his brother, decided to remake the family farm according to a vision for what he calls “the next evolution of food.” Although he has only a few employees, Hauke was excited to offer them the paperwork and training necessary to help get the first Keewaydin Fair Trade growing season off the ground.
Minding Your Mind
When Striving for Perfection Is a Problem
Last Reviewed by Faculty of Harvard Medical School on May 4, 2012
By Howard LeWine, M.D.
Harvard Medical School
Some people can't live with the slightest imperfection. Their need to appear or be perfect, known as perfectionism, is so intense that it's exhausting, if not painful.
But striving for perfection, while accepting that perfection can rarely be achieved, can lead to growth and development and a feeling of satisfaction. It can be a powerful motivator as long as it is based on reasonable standards and expectations. For example, the desire to have a perfect golf swing or tennis stroke can enhance the pleasure you take in these pursuits, whether you are an amateur or a professional.
Perfectionism is unproductive, however, when it is linked to excessively high standards and is driven by a fear of failure.
Back to top
Types of Perfectionism
Perfectionism comes in many forms:
- Obsessive concern over mistakes
- Setting excessively high personal standards
- Perceiving parents as overly critical
- Unreasonable doubts about ability to perform tasks
- An over-emphasis on organization
- Trying to live up to high expectations you're convinced other people, such as parents, have of you
- Having high expectations of other people
But whatever form it takes, perfectionism can rob you of life's pleasures.
Back to top
The Roots of Perfectionism
It may not be so easy to figure out where perfectionism comes from. For some, it is a part of their inborn temperament, like perfect skin and teeth. Researchers have linked perfectionism to anxiety, depression and eating disorders, and the trait is common among people with obsessive-compulsive disorder. Or it could be a response to having parents who expected too much from you. Maybe they never let you off the hook, even if you got 98 out of 100 on an exam.
Back to top
The Role of Indirect Aggression
What goes on outside the home is also a big factor in the development of perfectionism. For some women, perfectionism is a way to cope with indirect aggression, a term for the socially manipulative behaviors of the stereotypical "mean girls" that they may have experienced.
A recent study published in the journal Aggressive Behavior supports the idea that perfectionism may develop in a social group and suggests that indirect aggression triggers it. The "aggressor" talks behind a person's back, gives someone the "silent treatment," tells secrets, or is nice in private but rejecting in public to hide her hostility toward another.
Girls and women tend to resort to this kind of social bullying because they are not encouraged or taught how to express aggressive or competitive feelings directly. They become aggressive in ways that can be easily concealed or denied.
For the study, researchers at McMaster University asked two groups of college-age women to fill out surveys about what types of verbal abuse, physical abuse and indirect aggression they had experienced in grades 3 through 12. They also asked the women to answer questions to gauge whether they were perfectionists.
The women who recalled experiencing indirect aggression in childhood were more likely to become perfectionists by the time they reached college. Verbal and physical abuse apparently were not linked to perfectionism.
The authors acknowledge that the study asked subjects to report on old experiences and that women who are perfectionistic might be more likely to recall past events in a negative way, no matter how they were treated in reality.
Even so, the authors say that a victim of indirect aggression may, without knowing it, come to feel that being "perfect" is the only way to assert herself in social situations or maintain control. Thus, perfectionism becomes a way to cope with a threatening environment.
Back to top
Making Perfectionism Work for You
There is a fine line between the positive aspects of striving to be perfect and perfectionism that can be detrimental. Striving to be perfect can be very positive as long as it:
- Is realistic
- Moves you forward
- Helps you feel stronger
- Gives you the satisfaction you deserve after all that hard work.
Perfectionism becomes a problem when it makes you feel worse instead of better, or when your inability to be satisfied unless you are perfect, even while knowing that perfection will always be out of reach, causes suffering.
You can make perfectionism work for you. Here's how:
- Look at and change unrealistic and self-destructive thought patterns with cognitive behavior therapy.
- Understand how you became perfectionistic and ease up on unwarranted self-criticism. Psychodynamic therapy can help you do this.
- If you do have one of the underlying disorders linked to perfectionism (obsessive-compulsive disorder, anxiety or depression), then a medication or psychotherapy may help by targeting the pressure coming from that source.
Consult a mental health professional as a first step. The goal is to let go of the excessively high standards and find ways to cope with fears of failure. At the same time, you want to hold on to the positive force of striving for perfection.
Back to top
Howard LeWine, M.D. is chief editor of Internet Publishing, Harvard Health Publications. He is a clinical instructor of medicine at Harvard Medical School and Brigham and Women's Hospital. Dr. LeWine has been a primary care internist and teacher of internal medicine since 1978.
The CEI is currently involved in many international space missions and projects.
Gaia: Gaia was adopted within the scientific programme of the European Space Agency (ESA) in October 2000. The mission aims to measure the positions of ~1 billion stars both in our Galaxy and in other members of the Local Group with an accuracy down to 20 µas, perform spectral and photometric measurements of all objects, derive space velocities of the Galaxy's constituent stars from the stellar distances and motions, and create a three-dimensional structural map of the Galaxy. The large datasets gathered will provide astronomers with a wealth of information covering a wide range of research fields, from solar system studies and galactic astronomy to cosmology and general relativity. The CEI is involved in a number of different projects, including modelling the CCDs designed for the Gaia mission to simulate the charge-trapping effects of radiation damage, analysis of the BP/RP and RVS radiation campaign datasets, and development of the data processing pipeline.
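The charge-trapping effect mentioned above (often quoted as charge transfer inefficiency, or CTI) can be illustrated with a toy readout model. The sketch below is not the CEI's actual Gaia CCD model: it collapses all trap species into a single fractional loss per transfer, releases the captured charge into trailing pixels with an exponential profile, and uses invented parameter values throughout.

```python
import numpy as np

def add_cti_trail(column, cti=1e-3, release_tau=2.0):
    """Toy charge-trapping model for one CCD column during readout.

    column      : 1-D array of electrons; index 0 is read out first
    cti         : fractional charge lost per pixel-to-pixel transfer
    release_tau : decay length, in pixels, of the release trail
    """
    out = np.asarray(column, dtype=float).copy()
    n = len(out)
    for i in range(n):
        n_transfers = i + 1  # transfers needed to reach the output node
        lost = out[i] * (1.0 - (1.0 - cti) ** n_transfers)
        out[i] -= lost
        remaining = n - (i + 1)
        if remaining > 0:
            # Released charge reappears in the trailing (later-read) pixels
            weights = np.exp(-np.arange(1, remaining + 1) / release_tau)
            out[i + 1:] += lost * weights / weights.sum()
    return out

# A single bright pixel develops the characteristic CTI trail
column = np.zeros(12)
column[3] = 1000.0
print(np.round(add_cti_trail(column, cti=5e-3, release_tau=2.0), 1))
```

Real trap models track individual trap species with measured densities, capture cross-sections and release time constants, and are fitted to irradiation campaign data; the sketch only reproduces the qualitative behaviour of charge being removed from each packet and reappearing as a trail behind it.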
IXO: The International X-ray Observatory (IXO) is a new X-ray telescope with joint participation from NASA, the European Space Agency (ESA), and Japan's Aerospace Exploration Agency (JAXA). This project supersedes both NASA's Constellation-X and ESA's XEUS mission concepts. IXO is a next-generation facility designed to examine three main areas: black holes and matter under extreme conditions, formation and evolution of galaxies, clusters and large scale structure, and the life cycles of matter and energy. The IXO optics will have 20 times more collecting area at 1 keV than any previous X-ray telescope. The focal plane instruments will deliver up to 100-fold increase in effective area for high resolution spectroscopy from 0.3-10 keV, deep spectral imaging from 0.2-40 keV over a wide field of view, unprecedented polarimetric sensitivity, and microsecond spectroscopic timing with high count rate capability. The CEI is currently developing instrumentation for the X-ray Grating Spectrometer (XGS) readout.
Euclid: Euclid is a medium-class mission candidate for launch in 2017 as part of the Cosmic Vision 2015-2025 programme and will spend five years in orbit at L2. The mission combines two earlier concepts: the Dark UNiverse Explorer (DUNE) and the SPectroscopic All Sky Cosmic Explorer (SPACE). The primary goal is to study the dark universe by means of two main cosmological probes: the Weak Lensing (WL) technique, which maps the distribution of dark matter and measures the properties of dark energy in the universe, and the Baryonic Acoustic Oscillations (BAO) technique, which uses characteristic scales in the spatial and angular power spectra as "standard rulers" to measure the equation of state of dark energy and its rate of change.
The CEI is working with a number of institutes, including ESA and the Mullard Space Science Laboratory (MSSL), on the characterisation of radiation effects in the Euclid CCDs. This work involves pre- and post-irradiation characterisation of the e2v CCD204s provided by ESA, which are based on the same architecture as the CCD203 proposed for use onboard Euclid. A model of the CCD204 pixel structure is being created to explore the electron density available for charge storage as a function of signal size, and is being used in CTE simulations under a variety of signal conditions to predict CTE effects at the mid-point and end of the mission. The aim is to provide recommendations on CCD design modifications for improved radiation tolerance, device operation and shielding.
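One simple way to picture the signal-size dependence described above is a volume-driven argument: a small charge packet fills only a small fraction of the pixel volume, so it is exposed to few traps in absolute terms, but to many traps relative to its own size. The sketch below is an illustrative stand-in rather than the CCD204 model itself; the full-well capacity, the volume exponent and the trap density are placeholder values.

```python
import numpy as np

def fractional_volume(signal_e, full_well=2.0e5, beta=0.6):
    """Placeholder power law for the fraction of the pixel volume a packet
    occupies: V / V_max = (N / full_well) ** beta."""
    return np.clip(signal_e / full_well, 0.0, 1.0) ** beta

def cti_per_transfer(signal_e, traps_per_pixel=0.1, full_well=2.0e5, beta=0.6):
    """Fractional charge loss per transfer if a packet can only fill the
    traps lying inside the volume it occupies."""
    exposed_traps = traps_per_pixel * fractional_volume(signal_e, full_well, beta)
    return exposed_traps / np.maximum(signal_e, 1.0)

for signal in (100, 1_000, 10_000, 100_000):
    print(f"{signal:>7d} e-  ->  CTI per transfer ~ {cti_per_transfer(signal):.1e}")
```

The qualitative outcome, that fractional losses are largest for faint signals, is why end-of-life CTE predictions matter so much for the faint galaxy images used in weak lensing.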
Chandrayaan-1 & 2: The Indian Space Research Organisation's Chandrayaan-1 spacecraft was launched on the 22nd of October 2008. It spent nine months in a 100 km circular orbit around the Moon before communication was lost. During this time it surveyed around 15 percent of the lunar surface, providing a map of chemical characteristics and 3-dimensional topography. The spacecraft carried a number of instruments, including a terrain mapping camera, infrared spectrometers, and the Chandrayaan-1 X-ray Spectrometer (C1XS). The C1XS instrument consisted of 24 e2v technologies CCD54 swept-charge device silicon X-ray detectors arranged in 6 modules, which carried out high-quality X-ray spectroscopic mapping of the Moon using the technique of X-ray fluorescence in the energy range 0.5-10 keV.
The CEI was involved in performing the proton radiation damage assessment for the CCD54 devices, recommending instrument shielding, operating temperature and operating potentials. The pre-flight characterisation of the 14 modules available for flight selection was also conducted, with ten modules recommended as suitable for use in the instrument. The ESA space environment information system (SPENVIS) software was used to estimate the worst-case end-of-life 10 MeV equivalent proton fluence, and a number of CCD54 devices were irradiated to this fluence to investigate their post-irradiation performance. The CEI continues to be involved with the instrument in an advisory role on the observed radiation effects in the CCD54 devices.
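For context, a "10 MeV equivalent" proton fluence of the kind produced in SPENVIS-based analyses is normally a NIEL-weighted sum: each part of the mission proton spectrum is scaled by its displacement-damage effectiveness relative to 10 MeV protons, so that a single ground irradiation can stand in for the full spectrum. The energies, fluences and damage factors below are invented placeholders rather than Chandrayaan-1 values; only the bookkeeping is the point.

```python
import numpy as np

# Hypothetical mission proton fluence per energy bin (protons/cm^2) and
# relative NIEL damage factors normalised to 10 MeV (placeholder values only).
energy_mev     = np.array([1.0,  3.0,  10.0, 30.0, 100.0])
fluence_bins   = np.array([5e10, 2e10, 8e9,  3e9,  1e9])
niel_rel_10mev = np.array([4.0,  2.0,  1.0,  0.6,  0.4])

# Each bin contributes its fluence weighted by its damage relative to 10 MeV.
for e, f, k in zip(energy_mev, fluence_bins, niel_rel_10mev):
    print(f"{e:6.1f} MeV bin: fluence {f:.1e} x damage factor {k:.1f}")

equivalent_fluence = np.sum(fluence_bins * niel_rel_10mev)
print(f"10 MeV equivalent proton fluence ~ {equivalent_fluence:.2e} p/cm^2")
```

A device irradiated with 10 MeV protons to that single equivalent fluence is then expected to suffer roughly the same displacement damage as one flown through the full mission spectrum, which is what makes a laboratory irradiation campaign practical.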
Chandrayaan-2 is the second Indian lunar mission, to be launched in 2014 into a 200 km polar orbit, where it will use and test various new technologies. The spacecraft will include a number of instruments, one being the Chandrayaan-2 Large Area Soft X-ray Spectrometer (CLASS), a continuation of the successful C1XS instrument. CLASS will map the abundance of major rock-forming elements on the lunar surface with a nominal spatial resolution of 25 km. The instrument uses the second-generation swept charge device, the CCD236, and has a geometrical area three times that of C1XS, which will allow data collection at low levels of solar activity.
The CEI will provide assistance with the characterisation of the SCDs and an analysis of the impact of radiation damage on their performance. Initial studies have demonstrated a factor-of-two improvement in radiation hardness; further optimisation and a more detailed investigation into device performance are currently underway.
UKube-1: The UK Space Agency is planning to launch its first cubesat later this year. The cubesat platform allows fast mission turnaround for small payloads, enabling more groups to be involved in missions. After launch in late 2011, the satellite will spend 1 year in a low Earth orbit (~400 km), with a view to operating for a further 2 years if successful.
The CEI has successfully bid for the design and production of a single payload, working in tandem with Clydespace to develop an instrument responsible for imaging the Earth with narrow- and wide-field imagers. In addition, the group plans to include an imager to monitor radiation damage effects on the 0.18 µm CMOS sensors, which have previously been characterised on the ground. This is the first such instrument developed entirely within the group, and it will provide valuable training and broaden the group's experience of mission development. More information is available on the mission page.
Volume 15, Number 2—February 2009
Reemergence of Human and Animal Brucellosis, Bulgaria
Bulgaria had been free from brucellosis since 1958, but during 2005–2007, a reemergence of human and animal disease was recorded. The reemergence of this zoonosis in the country highlights the importance of maintaining an active surveillance system for infectious diseases that will require full cooperation between public health and veterinary authorities.
According to the World Health Organization (1), brucellosis is one of the most common zoonoses worldwide and is considered a reemerging infectious disease in many areas of the world. An estimated 500,000 new human cases occur annually (2). In Europe, 1,033 human brucellosis cases were reported in 2006 (3); data from a passive surveillance system were based on clinical findings, supported by epidemiologic criteria, and confirmed by serologic tests. Here we report the results of a survey performed during 2005–2007 in Bulgaria, a country that had been considered free from Brucella melitensis and B. abortus disease since 1958 (4).
In Bulgaria, until 1998 serologic screening was mandatory for all cattle, sheep, and goats >12 months of age. Afterward, based on risk assessment, animal surveillance activities covered 100% of heads reared in municipalities along the borders with countries endemic for brucellosis such as Turkey, Greece, and the former Yugoslav Republic of Macedonia; 50% of the animals reared in other municipalities of the regions bordering the aforementioned countries; and 25% of animals reared in the inner Bulgarian regions. Currently, an active surveillance system is in place for dairy factory employees and persons considered at risk after outbreaks in ruminants.
During 2005–2007 (Figure 1), a total of 105 human cases of brucellosis were diagnosed among 2,054 persons who were tested on the basis of clinical suspicion or risky exposure. A human case of brucellosis was considered confirmed if results of serologic tests, such as ELISA or complement fixation test, were positive, in accordance with the World Health Organization case definition (5). Bacteria isolation and characterization had not been performed routinely.
The alert started in 2005 (Figure 1, panel A), when a case of brucellosis occurred in a Bulgarian migrant animal keeper working in Greece. Active surveillance of persons at risk was implemented, enabling detection of a total of 34 human cases of brucellosis. All cases were classified as imported cases; therefore, no supplemental active surveillance on animals was implemented. Additionally, during routine screening for at-risk workers, 3 other persons employed in a dairy factory were found to be seropositive. Due to the lack of traceability of the raw material used in the factory, it was not possible to trace the origin of the infection. At that time, there was no evidence of animal cases of brucellosis.
During 2006 (Figure 1, panel B), a total of 10 cases of human brucellosis were reported from different regions of the country. According to anamnestic information, these case-patients had different sources of infection: 3 of the 10 were considered imported (1 case-patient was diagnosed during hospitalization in Sicily, Italy, after reportedly eating ricotta cheese, and 2 occurred in Bulgarian migrant animal keepers working in Greece). Concerning the origin of infection, epidemiologic data suggest that 5 of the 10 cases were related to occupational risk and the remainder to consumption of raw milk and milk derivatives. Surveillance activities enabled detection of 10 animals (7 small ruminants and 3 cows) with positive serologic results; these animals were then killed and destroyed. During 2007 (Figure 1, panel C), a total of 58 human cases were identified. Of the 58 cases, 54 were classified as autochthonous (i.e., acquired from imported animals found to be infected during regular veterinary surveillance). These cases were identified in a Bulgarian region bordering Greece and Turkey (Haskovo region).
Two other cases, which were also classified as autochthonous, were diagnosed in patients who stated they had consumed a risky product (i.e., raw milk handled without adherence to hygienic standards). The remaining 2 cases were classified as imported because they involved Bulgarian migrant animal keepers working in Greece. The active surveillance in place for animals found a total of 625 heads (618 small ruminants, 7 cows) with positive serologic results; all were killed and destroyed. Analogous to what we observed in humans, most of the infected animals were found in the Haskovo region. All animals found to be infected during surveillance activity had been bred on family farms, and their milk and dairy products were prepared and eaten without adherence to proper hygienic standards.
Our data show that brucellosis is reemerging in Bulgaria (Figure 2). On the basis of information provided in this report, we can make several hypotheses regarding the causes of the resurgence of a previously controlled infection in a transitional, rapidly changing country.
Overall, 105 cases of human brucellosis were identified over a 3-year period. Of them, 84 cases (80%) were identified in persons at occupational risk. This finding suggests that when brucellosis is introduced into naive territories (i.e., territories that were considered officially free of brucellosis), the primary source of infection for humans is direct contact with infected animals (i.e., exposure to abortion/delivery products) or domestic consumption of products produced on family farms (milk, raw cheese). However, environmental exposure can also occur, especially in infants and children, who are considered at lower risk for direct contact with potentially infected animals, as recently observed (6). This hypothesis appears to be consistent with the context of a naive setting, where preventive measures are not routinely implemented. Continuous health education and other strategies may contribute to reducing the circulation of human brucellosis in endemic areas (7).
The reemergence of brucellosis is not limited to Bulgaria but involves several countries in the Balkan region and even in the Caucasian region (P. Pasquali, unpub. data). This trend of reemergence has several explanations. First, due to socioeconomic changes, many countries in these regions are experiencing a dramatic increase in animal trade, animal movement, and occupational migration, which in turn may increase the risk for introduction and spread of infectious diseases, such as brucellosis, from other disease-endemic countries like Greece or Turkey (2). Second, the process that has characterized the change of social and administrative organization since the collapse of the Soviet Union is far from complete; the public health systems are still flawed in many countries. Finally, part of the increase may simply reflect the fact that brucellosis is a complex disease with different cycles of expansion and regression.
Before drawing conclusions, we should mention 2 possible limitations of the study. First, samples from patients with positive serologic results were used for bacterial culture for brucellosis only if sample collection was properly timed; no culture positive case is available. Second, we cannot exclude the possibility that part of the increase in cases of brucellosis could be due to improved surveillance; in particular, temporal trends and geographic comparison might be, to some extent, affected by the intensity of screening activities. However, this increased surveillance is unlikely to bias the observed shift from imported to locally acquired cases.
In conclusion, this report shows how a disease such as brucellosis may increase its public health impact, particularly in transitional countries such as Bulgaria. Our findings emphasize the importance of the combination of health education and active surveillance systems for controlling infectious diseases and highlight the need for cooperation between public health officials and veterinary officers. Creating and improving capacity building are necessary to properly address issues that pose public health hazards.
Dr Russo is a physician at Sapienza University in Rome. His research interests are infectious diseases in resource-limited countries in the context of natural disasters.
The authors thank Massimo Amicosante and Luca Avellis for logistic support.
This work was conducted as part of a cooperative effort between Italy and Bulgaria, supported by the European Union.
- Food and Agriculture Organization of the United Nations, World Organisation for Animal Health, and World Health Organization. Brucellosis in human and animals. Geneva: World Health Organization; 2006. WHO/CDS/EPR/2006.7 [cited 2009 Jan 7]. Available from http://www.who.int/entity/csr/resources/publications/Brucellosis.pdf
- Pappas G, Papadimitriou P, Akritidis N, Christou L, Tsianos EV. The new global map of human brucellosis. Lancet Infect Dis. 2006;6:91–9.
- European Food Safety Authority–European Centre for Disease Prevention and Control. The community summary report on trends and sources of zoonoses, zoonotic agents, antimicrobial resistance and foodborne outbreaks in the European Union in 2006. EFSA Journal. 2007;2007:130 [cited 2009 Jan 5]. Available from http://www.efsa.europa.eu/cs/BlobServer/DocumentSet/Zoon_report_2006_en,0.pdf?ssbinary=true
- Corbel MJ. Brucellosis: an overview. 1st international conference on emerging zoonoses (Jerusalem, Israel). Emerg Infect Dis. 1997;3:213–21.
- Robinson A. Guidelines for coordinated human and animal brucellosis surveillance. Rome: Food and Agriculture Organization of the United Nations; 2003. FAO Animal Production and Health Paper 156.
- Makis AC, Pappas G, Galanakis E, Haliasos N, Siamopoulou A. Brucellosis in infant after familial outbreak. Emerg Infect Dis. 2008;14:1319–20.
- Jelastopulu E, Bikas C, Petropoulos C, Leotsinidis M. Incidence of human brucellosis in a rural area in western Greece after the implementation of a vaccination programme against animal brucellosis. BMC Public Health. 2008;8:241.
Suggested citation for this article: Russo G, Pasquali P, Nenova R, Alexandrov T, Ralchev S, Vullo V, et al. Reemergence of human and animal brucellosis, Bulgaria. Emerg Infect Dis [serial on the Internet]. 2009 Feb [date cited]. Available from http://wwwnc.cdc.gov/eid/article/15/2/08-1025.htm
Charles Heywood kept his cannon firing even as his ship sank.
Born in Waterville on Oct. 3, 1839, Heywood was commissioned a second lieutenant in the United States Marine Corps at New York City on April 5, 1858. Thirty-five months later he reported aboard the USS Cumberland, a 24-gun sloop of war then assigned to the Gosport Navy Yard in Portsmouth, Va.
Fearing the shipyard’s capture by Confederate militia, senior Navy officers ordered Gosport abandoned on April 20, 1861. After mining key installations with barrels of gunpowder, Heywood and his Marines lit the fuses and hastened aboard the USS Cumberland, already slipping seaward on the ebbing Elizabeth River.
The resulting explosions and fires caused significant damage, but Confederate engineers quickly repaired crucial machinery. They also raised the burned and scuttled USS Merrimac. Upon its hull arose an ironclad, the CSS Virginia. Measuring 263 feet in length and weighing 3,200 tons, the steam-engine ironclad mounted 10 massive cannons. The skipper was Capt. Franklin Buchanan.
He intended to sail into Hampton Roads and sink the Yankee warships stationed there. By the time Buchanan ordered his ship to sail on March 8, 1862, Heywood was a captain commanding the USS Cumberland’s Marine contingent.
Brilliant late-winter sunshine and warm air engulfed Hampton Roads as the CSS Virginia stood downriver that Saturday. Buchanan had already selected his first target. “I am going to ram the Cumberland,” which was equipped with “the new rifled guns, the only ones in their whole fleet we have cause to fear,” he told Chief Engineer H. Ashton Ramsay. “The moment we are out in the Roads, I’m going to make right for her and ram her.”
That morning, the USS Cumberland lay moored some 300 yards off Newport News on the north shore of Hampton Roads. Aboard the Cumberland, Cmdr. William Radford had gone ashore, leaving Lt. George Morris, the executive officer, in command.
At noon, sailors on watch aboard the Cumberland “discovered three vessels under steam, standing down the Elizabeth River toward Sewell’s Point,” Morris wrote on March 9. One ship belched black smoke; Morris, Heywood, and other USS Cumberland officers could see the smoke without using their spyglasses.
Buchanan’s underpowered ironclad steamed ponderously into Hampton Roads. After the CSS Virginia cleared Sewell’s Point about 1:30 p.m., Buchanan steered west to attack the USS Cumberland.
Aboard that ship, crewmen had already “double breeched the guns on the main deck, and cleared [the] ship for action,” Morris reported. He ordered the sloop pivoted on her anchor so that her starboard broadside would face the ironclad.
Assigned to his ship’s after gun division, Heywood stood with his gun crews and watched as the Virginia approached. At 2 p.m. aboard the ironclad, Lt. Charles Simms fired the bow pivot gun, a 7-inch Brooke rifle.
According to William C. Davis writing in “Duel Between the First Ironclads,” Simms’ cannon ball “screamed across the Roads and hit” the USS Cumberland “squarely, passing through the starboard-quarter rail” and hurling wood splinters into nearby Marines. After a Cumberland gun crew fired their ship’s forward 10-inch pivot gun and missed, Simms’ second shot exploded amidst that gun crew and killed all but two men.
“Our firing became at once very rapid from the few guns we could bring to bear” as the ironclad “approached slowly,” recalled Master Moses Stuyvesant, who commanded the sloop’s after gun division. Simms’ pivot gun pounded the Cumberland; as shell fragments and wood splinters struck down sailors and Marines alike, Heywood’s crews fired their cannons through the dense smoke.
At approximately 2:30 p.m., Cumberland pilot A.B. Smith thought the approaching ironclad resembled “a huge half-submerged crocodile” with its “iron ram projecting, straight forward.”
Steaming at 6 knots, the Virginia “stood on and struck us under the starboard fore channels” near the Cumberland’s bow, Morris reported. “She delivered her fire at the same time; the destruction was great. We returned the fire with solid shot with alacrity.”
The ironclad’s ram opened a hole about 7 feet across. Flooding with seawater, the USS Cumberland listed to starboard — and the ship’s weight bore the Virginia downward.
The ironclad was already reversing its engines as the tide pivoted the trapped Virginia alongside the sinking Cumberland. Then the ram broke off, freeing the ironclad from a potential watery grave. Buchanan backed his ship until it lay parallel to the Cumberland and only 20 feet away.
Cannons flashed and boomed for some 30 minutes as Heywood and his comrades heroically fought the Virginia amidst “a scene of carnage and destruction never to be recalled without horror,” Stuyvesant remembered. Blood and gore splattered across the main deck as Heywood shouted at sailors and Marines to drag wounded comrades to the ship’s port side.
Now the sea lapped at the Cumberland’s main deck as the bowsprit disappeared into Hampton Roads. Gun crews kept firing until their cannons submerged. Stationed near the stern, Heywood worked his guns even as another cannon broke loose and lurched across the deck to crush a sailor.
“At 3:35 [p.m.] the water had risen to the main hatchway, and the ship canted to port, and we delivered a parting fire, each man trying to save himself by jumping overboard,” Morris recalled. Acknowledging that severely wounded men taken below decks could only be left to drown, he credited specific officers for their coolness under fire.
“I can only say in conclusion that all did their duty and we sunk with the American flag at the [mast] peak,” Morris closed his report.
Even as the Cumberland’s deck sharply canted, Heywood remained with the aft guns. “The water began to swash over the upper deck, and still every unencumbered gun was hurling defiance at the foe,” wrote John S.C. Abbott in “The History of the Civil War in America, Vol. 1,” published in 1863.
“The ship careened upon one side. The last gunner [Heywood], knee-deep in water, pulled the trigger of the last gun,” Abbott wrote. Then “the majestic frigate, with all her dead and all her wounded, sank like lead.”
And Charles Heywood went overboard into Marine Corps lore.
His heroism went noticed. “I omitted to mention to you the gallant conduct of Lieutenant Charles Heywood … whose bravery upon the occasion of the fight with the Merrimack won my highest applause,” Morris wrote Navy Secretary Gideon Welles on April 12.
Heywood received a brevet promotion to major after the battle. Of the 46 Marines aboard the USS Cumberland, 14 died. Another 107 sailors were killed aboard the sloop on March 8.
The next day, the Monitor fought the Virginia in Hampton Roads in history’s first ironclad-to-ironclad sea battle. Most history books still call the Virginia the “Merrimac.”
Heywood would fight during another famous Civil War battle; he was commanding two gun crews aboard the USS Hartford at Mobile Bay, Ala. as Adm. David Farragut roared, “Damn the torpedoes! Full speed ahead!” on Aug. 5, 1864. Within an hour, Heywood and Farragut traded shot and shell with another Confederate ironclad, the CSS Tennessee — commanded by Adm. Franklin Buchanan.
This time Buchanan lost and Heywood won.
Named the ninth Marine Corps commandant on Jan. 30, 1891, Charles Heywood instituted important changes to the corps’ mission and strength. Promoted to major general a year before his 1903 retirement, he died at Washington, D.C., on Feb. 26, 1915. His wife, Caroline Bacon, outlived him by 12 years.
Heywood lies buried in Section 2, Lot 1115 at Arlington National Cemetery.
Brian Swartz is the BDN special sections editor. An avid Civil War buff, he has extensively explored and photographed Civil War battlefields throughout the South. Swartz may be reached at email@example.com or visit his blog at http://maineatwar.bangordailynews.com.
Matisse: Radical Invention, 1913-1917
July 18–October 11, 2010
Matisse conceived this "souvenir of Morocco" in 1912, stretched a canvas for it in 1913, and returned to the composition late in 1915, only to start again on a new canvas in early 1916. Black is the principal agent, at once simplifying, dividing, and joining the three zones of the canvas: the still life of melons and leaves on a gridded pavement, bottom left; the architecture with domed marabout, top left; and the figures, at right. Next to a seated Moroccan shown from behind, the large curving ocher shape and circular form derive from a reclining figure in the sketches. Above the shadowed archway, figures in profile may be discerned in the two windows: at right, the lower part of a seated man; at left, the upper part of a man with raised arms. Matisse built up the surface with thin layers of pigment, the color of the underlying layers modifying those on top. Painter Gino Severini reported that "Matisse said . . . that everything that did not contribute to the balance and rhythm of [this] work, had to be eliminated . . . as you would prune a tree."
Matisse developed this painting of what he described as “the terrace of the little cafe of the casbah” in the years following two visits to Morocco, in 1912 and 1913. As he worked on various studies he eliminated details he felt were extraneous to the painting’s overall balance. A balcony with a flowerpot and a mosque behind it are at upper left, at lower left is a still life of vegetables, and to the right is a man wearing a round turban, seen from behind. Matisse’s generous application of black paint helps unify the three sections of the painting across its abstract expanse.
Matisse: Radical Invention, 1913–1917, July 18–October 11, 2010
Director, Glenn Lowry: Matisse first conceived this painting in 1912, while he was visiting Morocco. But he didn't actually start the canvas until early 1916. Once he did, he continued working on it—with great focus and concentration—through the fall.
Curator, John Elderfield: The forms are difficult to decipher. I know some people who have thought that what Matisse says are melons and leaves are in fact the Moroccans, but Matisse is insistent that they are not. I think one can clearly see the figure whose back is towards us. And if we look at it carefully we can see that that figure's grown in size. To the right of it, that black area does seem to derive from drawings of an arched doorway with light hitting the bottom, but the top part is in shadow, leading into another architectural space. And the two elements at the top are ones which we can trace back to drawings he made in Morocco—the one at the right, of a sort of seated figure, and the one to the left, more puzzling, but somewhat amusingly, in one of the drawings—and he refers to this in his letter—is of a figure who has got his arms raised to look through binoculars. All that remains is the forearms and part of the body, and Matisse is quite happy to have carried it to that point of almost unintelligibility. But I don't think the painting asks us to be specific about these forms.
Curator, Stephanie D’Alessandro: I don't think so either. I think there's a level of memory and recollection, and maybe even nostalgia with this picture.
Glenn Lowry: One aspect of Morocco that stayed with Matisse was the harsh contrast between the midday sun and the shade, evoked here in the black background.
Conservator, Michael Duffy: You can see how it defines certain shapes. Particularly the shape in the middle, this curved shape, which is made up of ochre and white. The black edge on the left is painted over, so he actually defines the form further by overlaying the black paint. When you look closely at the black paint, you'll see that it covers areas of blue and pink underneath, and that gives the black a very warm color. And it's very typical of the way Matisse painted. Rather than blending his colors together, to achieve one color, he would typically layer colors, even black, over several layers, in order to build up a kind of rich, optical surface.
MoMA Audio: Collection, 2008
Curator Emeritus, John Elderfield: This painting was made in 1915–1916 and is a remembrance of visits that Matisse made to Morocco. And while the paintings made in Morocco are beautifully, limpidly colored, obviously the remembrance is rather of the great heat, of contrasts of color in the conditions of very bright light.
The Moroccans themselves are on the right on the terrace with their melons and gourds—the green and yellow forms at the left. We can see a figure with his back to us, and then, with more difficulty, figures in windows at the top. In the background is a mosque with a vase of blue and white flowers standing on the parapet.
Matisse said that he put black in his pictures to simplify the composition. And indeed, through the teens and into the 1920s, he regularly puts in a little dosing of black to hold everything else in place. I think, unquestionably, he was thinking of shadow, and of the kind of stifling midday sun in North Africa. There is that element of renunciation of color and wanting to put in an element of real gravity in the composition. It's hard to imagine any other color doing it in that same way.
The Museum of Modern Art , MoMA Highlights, New York: The Museum of Modern Art, revised 2004, originally published 1999, p. 79
The Moroccans marvelously evokes tropical sun and heat even while its ground is an enveloping black, what Matisse called "a grand black, . . . as luminous as the other colors in the painting." Utterly dense, this black evokes a space as tangible as any object, and allows a gravity and measured drama without the illusion of depth once necessary to achieve this kind of grandeur.
The painting, which Matisse described as picturing "the terrace of the little café of the casbah," is divided into three: at the upper left, an architectural section showing a balcony with flowerpot and the dome of a mosque behind; a still life, of four green-leafed yellow melons at the lower left; and a figural scene in which an Arab sits with his back to us. To his right is an arched doorway, and windows above contain vestigial figures. The form to his left is hard to decipher, but has been interpreted as a man's burnoose and circular turban.
During his visit to Morocco in 1912-13, Matisse had been inspired by African light and color. At the same time, he faced the challenge of Cubism, the leading avant-garde art movement of the period, and The Moroccans summarizes his memories of Morocco while also combining the intellectual rigor of Cubist syntax with the larger scale and richer palette of his own art.
Matisse Picasso, February 13–May 19, 2003
Narrator: In 1911 and again in 1912 Matisse traveled to Morocco. Three years later, he tapped those memories for this big souvenir picture, The Moroccans. By that time he was deeply involved in the Cubist vocabulary of reduced geometric form.
Curator, John Elderfield: What it shows is on the upper left a mosque with a vase of flowers on the right hand side. In the bottom left is a pavement with melons with their green leaves. And at the right, more difficult to figure out, various figures who are presumably sitting on some sort of terrace outside a cafe in Tangier. One can I think clearly understand the figure with his back to us, with a white turban and, blue shirt. And to the right, what looks like the top of an archway in shadow.
Matisse talked about the black as being a way of representing heat and light. And as one gets further south, one gets these very strong black and white contrasts. It's also trying to convey some of the sense of the intense light, and the almost tangible heat of Tangier.
Curator, Kirk Varnedoe: Certainly Picasso must have looked intensely at a major picture like this, and learned from it a new vocabulary of Cubism, more highly abstracted, more monumental. When you compare The Moroccans to Picasso's Three Musicians of 1921...what leaps out at you are certain similarities—the use of black for example. But Picasso unlike Matisse is not a traveler. Picasso often said, "If someone didn't come to the studio in the morning, I wouldn't have anything to paint in the afternoon."
Narrator: The Three Musicians most likely represent the artist and his friends. Picasso himself is at the center, identified by the harlequin costume and guitar he often used as his symbols. To the right, the man dressed as a monk with a stylized beard is probably Picasso's friend the poet Max Jacob, who had entered a monastery after the First World War. And the large white figure with the clarinet may be another poet friend, Guillaume Apollinaire, who had died from war wounds.
Kirk Varnedoe: The picture has a kind of gravity, a kind of sadness or melancholy, which is played off by small and amusing details, like the tiny little zig zags that represent the hand on the notes of music, or the dog that lies under the table to the left. So you imagine the music being played. Is it syncopated like a kind of bright jazz, and on the other hand melancholy like a threnody? And when you compare this in its detail, then you sense how monumental the Matisse is by comparison, and how in a certain sense impersonal it is. | <urn:uuid:2a578a9b-5330-4676-ba04-3ad5f024bf2d> | CC-MAIN-2013-20 | http://www.moma.org/collection/object.php?criteria=O%3AOD%3AE%3A79588&page_number=1&template_id=1&sort_order=1&background=black | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.969378 | 2,112 | 3.015625 | 3 |
Darfur’s people are a complex mosaic of between 40 and 90 ethnic groups, some of ‘African’ origin (mostly settled farmers), some Arabs. All Darfurians are Muslim. The Arabs began arriving in the 14th century and established themselves as mainly nomadic cattle and camel herders. Peaceful coexistence has been the norm, with inevitable disputes over resources between fixed and migratory communities resolved through the mediation of local leaders. For much of its history, the division between ‘Arab’ and ‘African’ has been blurred at best, with so much intermarriage that all Darfurians can claim mixed ancestry. Identities have been defined in different ways at different times, based on race, speech, appearance or way of life.
An Independent Sultanate
At the heart of Darfur is an extinct volcano in a mountainous area called Jebel Marra. Around it the land is famously fertile, and it was here that the earliest known inhabitants of Darfur lived – the Daju. Very little is known about them. The recorded history of Darfur begins in the 14th century, when the Daju dynasty was superseded by the Tunjur, who brought Islam to the region.
Darfur existed as an independent state for several hundred years. In the mid-17th century, the Keyra Fur Sultanate was established, and Darfur prospered.
In its heyday in the 17th and 18th centuries the Fur Sultanate’s geographical location made it a thriving commercial hub, trading with the Mediterranean in slaves, ivory and ostrich feathers, raiding its neighbours and fighting wars of conquest in the surrounding region.
Darfur under siege
In the mid-19th century, Darfur’s sultan was defeated by notorious slave trader Zubayr Rahma, who was in turn subjugated by the Ottoman Empire. At the time, this included Egypt and what is now northern Sudan. The collapse of the Keyra dynasty plunged Darfur into lawlessness. Roaming bandits and local armies preyed on vulnerable communities, and Islamic ‘Mahdist’ forces fighting British colonial control of the region sought to incorporate Darfur into a much larger Islamic republic. A period of almost constant war followed, until 1899 when the Egyptians – now under British rule – recognized Ali Dinar, grandson of one of the Keyra sultans, as Sultan of Darfur. This marked a de facto return to independence, and Darfur lived in peace for a few years.
Colonial ‘benign neglect’
Ali Dinar refused to submit to the wishes of either the French or the British, who were busy building their empires around his territory. Diplomatic friction turned into open warfare. Ali Dinar defied the British forces for six months, but was ambushed and killed, along with his two sons, in November 1916. In January 1917 Darfur was absorbed into the British Empire and became part of Sudan, making this the largest country in Africa.
The only aim of Darfur’s new colonial rulers was to keep the peace. Entirely uninterested in the region’s development (or lack thereof), no investment was forthcoming. In stark contrast to the north of Sudan, by 1935 Darfur had only four schools, no maternity clinic, no railways or major roads outside the largest towns. Darfur has been treated as an unimportant backwater, a pawn in power games, by its successive rulers ever since.
Independence brings war
The British reluctantly but peacefully granted Sudan independence in 1956. The colonialists had kept North and South Sudan separate, developing the fertile lands around the Nile Valley in the North, whilst neglecting the South, East and Darfur to the west. They handed over political power directly to a minority of northern Arab élites who, in various groupings, have been in power ever since. This caused the South to mutiny in 1955, starting the first North-South war. It lasted until 1972 when peace was signed under President Nimeiry. But the Government continually flouted the peace agreement. This, combined with its shift towards imposing radical political Islam on an unwilling people, and the discovery of oil, reignited conflict in the South in 1983.
Darfur, meanwhile, became embroiled in the various conflicts raging around it: not just internal wars by the centre over its marginalized populations – many of the soldiers who fought for the Government against the South were Darfurian recruits – but also regional struggles. The use of Darfur by Libya’s Colonel Qadafhi as a military base for his Islamist wars in Chad promoted Arab supremacism, inflamed ethnic tensions, flooded the region with weaponry and sparked the Arab-Fur war (1987-89), in which thousands were killed and hundreds of Fur villages burned. The people’s suffering was exacerbated by a devastating famine in the mid-1980s, during which the Government abandoned Darfurians to their fate.
Bashir seizes power
In 1989 the National Islamic Front (NIF), led by General Omar al-Bashir, seized power in Sudan from the democratically elected government of Sadiq al Mahdi, in a bloodless coup. The NIF revoked the constitution, banned opposition parties, unravelled steps towards peace and instead proclaimed jihad against the non-Muslim South, regularly using ethnic militias to do the fighting. Although depending on Muslim Darfur for political support, the NIF’s programme of ‘Arabization’ further marginalized the region’s ‘African’ population.
The regime harboured several Islamic fundamentalist organizations, including providing a home for Osama bin Laden from 1991 until 1996, when the US forced his expulsion. Sudan was implicated in the June 1995 assassination attempt on Egyptian President Mubarak. Its support for terrorists and increasing international isolation culminated in a US cruise-missile attack on a Sudanese pharmaceutical factory in 1998, following terrorist bombings of the US embassies in Nairobi and Dar es Salaam.
The Janjaweed: ‘counterinsurgency on the cheap’
Janjaweed fighters, with their philosophy of violent Arab supremacism, were first active in Darfur in the Arab-Fur war in the late 1980s. Recruited mainly from Arab nomadic tribes, demobilized soldiers and criminal elements, the word janjaweed means ‘hordes’ or ‘ruffians’, but also sounds like ‘devil on horseback’ in Arabic. The ruthlessly opportunistic Sudanese Government first armed, trained and deployed them against the Massalit people of Darfur in 1996-98. This was an established strategy by which the Government used ethnic militias to fight as proxy forces for them. It allowed the Government to fight local wars cheaply, and also to deny it was behind the conflict, despite overwhelming evidence to the contrary.
The Comprehensive Peace Agreement
When President George W Bush came to power in 2000, US policy shifted from isolationism to engagement with Sudan. After 11 September 2001 Bashir ‘fell into line’, started to co-operate with the US in their ‘war on terror’ and a peace process began in earnest in the South. After years of painstaking negotiations, and under substantial pressure from the US, in January 2005 a Comprehensive Peace Agreement (CPA) was signed between the Government and the Sudan People’s Liberation Movement/Army (SPLM/A), ending 21 years of bloody war which killed two million people, displaced another four million and razed southern Sudan to the ground.
A surprisingly favourable deal for the South, the CPA included a power-sharing agreement leading up to a referendum on independence for the South in 2011, a 50-50 share of the profits from its lucrative oilfields, national elections in 2009, and 10,000 UN peacekeepers to oversee the agreement’s implementation. But the ‘comprehensive’ deal completely ignored Darfur, catalyzing the conflict that is currently engulfing the region.
The rebels attack
Rebellion had been brewing in marginalized, poverty-stricken Darfur for years. After decades in the political wilderness, being left out of the peace negotiations was the final straw. Inspired by the SPLA’s success, rebel attacks against Government targets became increasingly frequent as two main rebel groups emerged – the Sudan Liberation Army (SLA) and the Justice and Equality Movement (JEM). By early 2003 they had formed an alliance. Attacks on garrisons, and a joint attack in April on an airbase that reduced several Government planes and helicopters to ashes, were causing serious damage and running rings around the Sudanese army.
Facing the prospect of its control over the entire country unravelling, in 2003 the Government decided to counterattack. Manipulating ethnic tensions that had flared up in Darfur around access to increasingly scarce land and water resources, they unleashed the Janjaweed to attack communities they claimed had links to the rebels.
Julie Flint and Alex de Waal, Darfur: a Short History of a Long War, Zed Books, 2005; Ruth Iyob and Gilbert M. Khadiagala, Sudan: The Elusive Quest for Peace, Lynne Rienner Publishers, 2006; Douglas H. Johnson, The Root Causes of Sudan’s Civil Wars, Indiana University Press, 2006; Gerard Prunier, Darfur: The Ambiguous Genocide, Cornell University Press, 2007; www.wikipedia.org
This first appeared in our award-winning magazine - to read more, subscribe from just £7 | <urn:uuid:9c91a95b-6523-43e4-8a59-09086a6ce25d> | CC-MAIN-2013-20 | http://newint.org/features/2007/06/01/history/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961208 | 1,972 | 3.703125 | 4 |
The Basics of Home Canning
I never missed the signs of summer coming to a close while I was growing up: shopping for school clothes, getting backpacks and pencils together...and the smell of boiling tomatoes, hot peppers, and pectin permeating the kitchen.
With eleven kids to care for, canning food was not only an event in my household, but a necessity. As soon as we saw the familiar sight of glass mason jars lining the kitchen cupboard, we knew it wouldn’t be long before we were recruited to pick tomatoes, peel peaches, or even pit cherries. Sometimes it seemed mom felt the need to can every living thing in our backyard (with the exception of the dog).
But those long days spent working and canning as a family built stronger relationships, even if it didn’t always build our food storage, like the time a fateful food fight left half our tomatoes on the ground instead of in the bottles.
No matter what produce you plan to stuff away into bottles for the winter, canning can be the ideal way to preserve those fresh foods from your garden or the local market and also provide one last activity to bring your family together before the summer ends.
Canning requires some different utensils that you might not already have in your kitchen. Some of the necessary equipment includes: a boiling-water canner (or a big pot), mason-type jars (different sizes are available, wide-mouth jars are easiest to fill), lids and rings (you can only use the lids once), a jar lifter, and a candy thermometer.
Washing and Sterilizing
This is an important step in the canning process in order to get rid of all bacteria that could contaminate your food. Wash canning jars, new lids, and metal rings in hot, soapy water or the dishwasher, and rinse them thoroughly. To sterilize the jars place them upright in the canner, cover them with hot water, and boil for 10 minutes. Follow the manufacturer’s instructions that come with your lids.
Foods can be processed a number of different ways including: boiling-water, steam-pressure, and freezing. The recipes in this article follow the boiling-water process. To begin the boiling-water method, place a rack on the bottom of the pot to keep jars from touching the canner. Fill the canner half full with hot water and heat to 140º F for raw-packed foods, or 180º F for hot-packed foods. With the jars filled and capped, lower them into the canner with a jar lifter. Add more boiling water, if needed, so the water level is at least one inch above jar tops. Turn the heat to its highest position until water boils vigorously. Set a timer for the minutes required for processing food (according to the recipe).
Add more boiling water, if needed, to keep the water level above the jars. When jars have been boiled for the recommended time, turn off the heat and remove the canner lid. Using a jar lifter, remove the jars and place them on a towel, leaving at least one-inch spaces between the jars during cooling.Start Canning!
If you are trying to can for the first time, it would be best to start with fruit, because most use the boiling-water method. When selecting what fruits to can, select firm, ripe fruit.
Fruits are often canned in light sugar syrups. Depending on your desired taste, the amount of sugar in the syrup varies with fruits. For the recipes here, use a heaping 1/3 cup to 3/4 cup per quart of water. Place the sugar in a quart measuring pitcher and add cold water. Stir until the sugar is dissolved.
These are one of the most successfully canned of all foods. Wash and peel. Halve or quarter and cut out the cores. Boil gently in the syrup liquid for five minutes. Pack the hot pears in hot jars. Add the hot liquid, leaving 1/2-inch headspace. Process for 20 minutes.
Dip peaches in boiling water and remove after a few seconds; slip off the skins. Cut the fruits in half and remove the pits. Place in a pan without crowding, cover with the desired liquid, and bring to a boil. Ladle the hot fruit into hot jars, packing the halves in layers, cut side down. Add the hot liquid, leaving a 1/2-inch headspace. Process for 20 minutes.
TomatoesTomatoes are a canning favorite and can easily be canned using the boiling-water method. To peel tomatoes: Using a small knife, cut a small X in the bottom of the tomatoes; do not cut the flesh. Ease the tomatoes one by one into a pot of boiling water. Leave ripe tomatoes in for about 15 seconds, barely ripe tomatoes for twice as long. Lift them out with a slotted spoon and drop into a bowl of ice water. Pull off the skin with the tip of a knife.
Wash, peel, and cut tomatoes into halves. Put in a pan without crowding, add water to cover, and boil gently for five minutes. Pack the hot tomatoes in hot jars. Add salt to taste. Add the hot cooking liquid, leaving 1/2-inch headspace. Process pints for 40 minutes in canner, quarts for 45 minutes.
Get Yourself into a Jam
Preserving jams can also be a rewarding and easy way to fill your food storage. Jams can be preserved through freezing or canning. For first-timers, frozen jam is generally easier to preserve, but below is also a good canned berry jam recipe for the adventurous.Strawberry Jam (frozen)
2 cups crushed strawberries (1 qt.)
4 cups granulated sugar
1 box fruit pectin
Wash berries, remove stems, and crush. Measure the 2 cups of crushed berries into a large bowl. Measure the sugar into another bowl and then stir it into fruit. Set aside for 10 minutes. Stir occasionally.
Stir 1 box of fruit pectin and 3/4 cup water together in a small saucepan. Bring the mixture to a boil on high heat, stirring constantly. Boil and stir for 1 minute, and remove from the heat. Stir the pectin mixture into the fruit mixture, stirring constantly until the sugar is completely dissolved and no longer grainy.
Pour quickly into clean plastic containers to within 1/2" of the top. Wipe off the top edges of the containers and cover with the lids. Let stand at room temperature for 24 hours to set, and then refrigerate for immediate use or freeze. (Do not double this recipe; make one batch at a time.)
Berry Jam (canned)
Peel, core, and finely grate:
8 ounces tart green apples
- 2 pounds blackberries, blueberries, cranberries, elderberries, or raspberries (stemmed)
- 1 tablespoon orange juice
3 cups sugar
Cook together, crushing one-quarter of the berries that are in the pot, leaving the rest whole (but do not crush raspberries). Boil rapidly, stirring frequently, to the jelling point. This is the point preserves will jell once cooled; a good visual indicator is when, after boiling high and foamy in the pan, the mixture settles, and suddenly its surface is covered with furiously boiling small bubbles. You can also check using a thermometer. Jelling point is 8 to 10 degrees higher than the boiling point of water. Remove from the heat and skim off any foam before ladling into hot jars. Leave 1/4-inch headspace, and process for 10 minutes. | <urn:uuid:710028ea-4dbc-4619-ac69-b371ec871acb> | CC-MAIN-2013-20 | http://www.ldsliving.com/story/5957-the-basics-of-home-canning | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920065 | 1,604 | 2.734375 | 3 |
Winter Around the World:
Whether you observe Yule, Christmas, Sol Invictus, or Hogmanay, the winter season is typically a time of celebration around the world. Traditions vary widely from one country to the next, but one thing they all have in common is the observance of customs around the time of the winter solstice. Here are some ways that residents of different countries observe the season.
Althought Australia is huge geographically, the population sits at under 20 million people. Many of them come from a blend of cultures and ethnic backgrounds, and celebration in December is often a mix of many different elements. Because Australia is in the southern hemisphere, December is part of the warm season. Residents still hhave Christmas trees, Father Christmas, Christmas Carols and gifts which are a familiar Christmas and gifts, as well as being visited by Father Christmas. Because it coincides with school holidays, it's not uncommon for Australians to celebrate the season on vacation away from home.
In China, only about two percent of the population observes Christmas as a religious holiday, although it is gaining in popularity as a commercial event. However, the main winter festival in China is New Year celebration that occurs at the end of January. Recently, it's become known as the Spring Festival, and is a time of gift-giving and feasting. A key aspect of the Chinese New Year is ancestor worship, and painings and portraits are brought out and honored in the family's home.
In Denmark, Christmas Eve dinner is a big cause for celebration. The most anticipated part of the meal is the traditional rice pudding, baked with a single almond inside. Whichever guest gets the almond in his pudding is guaranteed good luck for the coming year. Children leave out glasses of milk for the Juulnisse, which are elves that live in peoples' homes, and for Julemanden, the Danish version of Santa Claus.
The Finns have a tradition of resting and relaxing on Christmas Day. The night before, on Christmas Eve, is really the time of the big feast -- and leftovers are consumed the next day. On December 26, the day of St. Stephen the Martyr, everyone goes out and visits friends and relatives, weather permitting. One fun custom is that of Glogg parties, which involve the drinking of Glogg, a mulled wine made from Madeira, and the eating of lots of baked treats.
Christmas was typically not a huge holiday in Greece, as it is in North America. However, the recognition of St. Nicholas has always been important, because he was the patron saint of sailors, among other things. Hearth fires burn for several days between December 25 and January 6, and a sprig of basil is wrapped around a wooden cross to protect the home from the Killantzaroi, which are negative spirits that only appear during the twelve days after Christmas. Gifts are exchanged on January 1, which is St. Basil's day.
India's Hindu population typically observes this time of year by placing clay oil lamps on the roof in honor of the return of the sun. The country's Christians celebrate by decorating mango and banana trees, and adorning homes with red flowers, such as the poinsettia. Gifts are exchanged with family and friends, and baksheesh, or charity, is given to the poor and needy.
In Italy, there is the legend of La Befana, a kind old witch who travels the earth giving gifts to children. It is said that the three Magi stopped on their way to Bethlehem and asked her for shelter for a night. She rejected them, but later realized she'd been quite rude. However, when she went to call them back, they had gone. Now she travels the world, searching, and delivering gifts to all the children.
In Romania, people still observe an old fertility ritual which probably pre-dates Christianity. A woman bakes a confection called a turta, made of pastry dough and filled with melted sugar and honey. Before baking the cake, as the wife is kneading the dough, she follows her husband outdoors. The man goes from one barren tree to another, threatening to cut each down. Each time, the wife begs him to spare the tree, saying, "Oh no, I am sure this tree will be as heavy with fruit next spring as my fingers are with dough today." The man relents, the wife bakes the turta, and the trees are spared for another year.
In Scotland, the big holiday is that of Hogmanay. On Hogmanay, which is observed on December 31, festivities typically spill over into the first couple of days of January. There's a tradition known as "first-footing", in which the first person to cross a home's threshold brings the residents good luck for the coming year -- as long as the guest is dark-haired and male. The tradition stems from back when a red- or blonde-haired stranger was probably an invading Norseman. | <urn:uuid:22ae9cf3-0877-4766-aa2c-0e14907b8a55> | CC-MAIN-2013-20 | http://paganwiccan.about.com/od/yulethelongestnight/p/Winter_Customs.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.974051 | 1,022 | 3.015625 | 3 |
Constitution of Oregon: 2011 Version
Sec. 1. Election to accept or reject Constitution
2. Questions submitted to voters
3. Majority of votes required to accept or reject Constitution
4. Vote on certain sections of Constitution
5. Apportionment of Senators and Representatives
6. Election under Constitution; organization of state
7. Former laws continued in force
8. Officers to continue in office
9. Crimes against territory
10. Saving existing rights and liabilities
11. Judicial districts
Section 1. Election to accept or reject Constitution. For the purpose of taking the vote of the electors of the State, for the acceptance or rejection of this Constitution, an election shall be held on the second Monday of November, in the year 1857, to be conducted according to existing laws regulating the election of Delegates in Congress, so far as applicable, except as herein otherwise provided.
Section 2. Questions submitted to voters. Each elector who offers to vote upon this Constitution, shall be asked by the judges of election this question:
Do you vote for the Constitution? Yes, or No.
And also this question:
Do you vote for Slavery in Oregon? Yes, or No.
And in the poll books shall be columns headed respectively.
“Constitution, Yes.” “Constitution, No"
“Slavery, Yes." “Slavery, No."
And the names of the electors shall be entered in the poll books, together with their answers to the said questions, under their appropriate heads. The abstracts of the votes transmitted to the Secretary of the Territory, shall be publicly opened, and canvassed by the Governor and Secretary, or by either of them in the absence of the other; and the Governor, or in his absence the Secretary, shall forthwith issue his proclamation, and publish the same in the several newspapers printed in this State, declaring the result of the said election upon each of said questions. [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]
Section 3. Majority of votes required to accept or reject Constitution. If a majority of all the votes given for, and against the Constitution, shall be given for the Constitution, then this Constitution shall be deemed to be approved, and accepted by the electors of the State, and shall take effect accordingly; and if a majority of such votes shall be given against the Constitution, then this Constitution shall be deemed to be rejected by the electors of the State, and shall be void.–
Section 4. Vote on certain sections of Constitution. If this Constitution shall be accepted by the electors, and a majority of all the votes given for, and against slavery, shall be given for slavery, then the following section shall be added to the Bill of Rights, and shall be part of this Constitution:
“Sec. ___ “Persons lawfully held as slaves in any State, Territory, or District of the United States, under the laws thereof, may be brought into this State, and such Slaves, and their descendants may be held as slaves within this State, and shall not be emancipated without the consent of their owners.”
And if a majority of such votes shall be given against slavery, then the foregoing section shall not, but the following sections shall be added to the Bill of Rights, and shall be a part of this Constitution.
“Sec. ___ There shall be neither slavery, nor involuntary servitude in the State, otherwise than as a punishment for crime, whereof the party shall have been duly convicted.” [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]
Note: See sections 34 and 35 of Article I, Oregon Constitution.
Section 5. Apportionment of Senators and Representatives. Until an enumeration of the inhabitants of the State shall be made, and the senators and representatives apportioned as directed in the Constitution, the County of Marion shall have two senators, and four representatives.
Linn two senators, and four representatives.
Lane two senators, and three representatives.
Clackamas and Wasco, one senator jointly, and Clackamas three representatives, and Wasco one representative.
Yamhill one senator, and two representatives.
Polk one senator, and two representatives.
Benton one senator, and two representatives.
Multnomah, one senator, and two representatives.
Washington, Columbia, Clatsop, and Tillamook one senator jointly, and Washington one representative, and Washington and Columbia one representative jointly, and Clatsop and Tillamook one representative jointly.
Douglas, one senator, and two representatives.
Jackson one senator, and three representatives.
Josephine one senator, and one representative.
Umpqua, Coos and Curry, one senator jointly, and Umpqua one representative, and Coos and Curry one representative jointly. [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]
Section 6. Election under Constitution; organization of state. If this Constitution shall be ratified, an election shall be held on the first Monday of June 1858, for the election of members of the Legislative Assembly, a Representative in Congress, and State and County officers, and the Legislative Assembly shall convene at the Capital on the first Monday of July 1858, and proceed to elect two senators in Congress, and make such further provision as may be necessary to the complete organization of a State government.–
Section 7. Former laws continued in force. All laws in force in the Territory of Oregon when this Constitution takes effect, and consistent therewith, shall continue in force until altered, or repealed.–
Section 8. Officers to continue in office. All officers of the Territory of Oregon, or under its laws, when this Constitution takes effect, shall continue in office, until superseded by the State authorities.–
Section 9. Crimes against territory. Crimes and misdemeanors committed against the Territory of Oregon shall be punished by the State, as they might have been punished by the Territory, if the change of government had not been made.–
Section 10. Saving existing rights and liabilities. All property and rights of the Territory, and of the several counties, subdivisions, and political bodies corporate, of, or in the Territory, including fines, penalties, forfeitures, debts and claims, of whatsoever nature, and recognizances, obligations, and undertakings to, or for the use of the Territory, or any county, political corporation, office, or otherwise, to or for the public, shall inure to the State, or remain to the county, local division, corporation, officer, or public, as if the change of government had not been made. And private rights shall not be affected by such change.–
Section 11. Judicial districts. Until otherwise provided by law, the judicial districts of the State, shall be constituted as follows: The counties of Jackson, Josephine, and Douglas, shall constitute the first district. The counties of Umpqua, Coos, Curry, Lane, and Benton, shall constitute the second district.–The counties of Linn, Marion, Polk, Yamhill and Washington, shall constitute the third district.–The counties of Clackamas, Multnomah, Wasco, Columbia, Clatsop, and Tillamook, shall constitute the fourth district–and the County of Tillamook shall be attached to the county of Clatsop for judicial purposes.– | <urn:uuid:de5d2f23-ff7c-4876-969c-7c2f8ab3c154> | CC-MAIN-2013-20 | http://www.sos.state.or.us/bbook/state/constitution/constitution18.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939502 | 1,573 | 2.640625 | 3 |
What can be learned from the post-election crisis in Greece?
The traditional political establishment in Greece buckled under the weight of crippling austerity and a mass people’s movement when the country went to the polls May 6. Now that the voting is over, and attempts to form a government have failed, another election must be held, scheduled for June 17, raising new questions about the way forward for the Greek working class. The crisis deepens as panicking Greeks withdraw their savings from banks on the brink of collapse.
Since the fall of the U.S.-backed military dictatorship that ruled Greece from 1967-1974, two parties, PASOK and New Democracy, have dominated the political scene. However, both parties had their worst showing ever, and combined were only able to muster 32 percent of the vote, down from 77 percent in 2009. Instead, support grew for parties of both the left and the far right. With parliament deadlocked, unable to form a new governing coalition, a new election is pending and there is a distinct possibility of a protracted political crisis and a sharp polarization that provides an opportunity for the working class to decisively assert itself.
Background to the elections
Since the worldwide capitalist economic crisis began in 2007-2008, several countries in the eurozone, which all operate with the common euro currency, have experienced severe debt crises. These national economies are more intimately linked than ever before—which was supposed to be the benefit of the eurozone—so the problems of one immediately threatens the rest.
Germany, the strongest capitalist economy of the eurozone, along with the imperialist U.S., have been working hard to force economic restructuring on the most indebted countries, offering bailouts in exchange for severe cuts to social welfare programs and other austerity measures. They have worked through three main entities: the U.S.-dominated International Monetary Fund, the European Union and the European Central Bank, collectively referred to as the “Troika.” Over the last two years, the Troika has arranged for around €240 billion ($305 billion) in bailout funds for Greece to service its massive debt. In exchange, the Greek ruling class forced through devastating cuts that have led to repeated strikes and militant popular mobilizations.
The Troika has worked hand in hand with the Greek ruling class, which, while claiming to “understand” the opposition of the people, claims that austerity is a difficult but necessary step toward economic revival. The other option, they claim, is complete collapse. It is a story poor and working people across the world are familiar with, including in the United States.
Narrowly winning first place in the elections was New Democracy, a center-right party that was part of the existing government led by Lucas Papademos. An unelected banker, Papademos was appointed to lead the government through its unpopular debt deal. New Democracy campaigned on a platform of supporting the extreme austerity measures imposed by the Troika, and promised only to try to renegotiate some of the more painful terms of the debt deal.
The other pro-austerity party, the misnamed Pan-Hellenic Socialist Movement (PASOK), came in third for the first time in the party’s history. PASOK is led by Evangelos Venizelos, the finance minister under the two previous governments. Venizelos was one of the main architects of the austerity “memorandum” and offered only a pitiful pledge that he would ask the country’s creditors to give them three years, rather than two, to reach absurdly unrealistic economic benchmarks.
Gains on the left
The biggest surprise of the election was the second-place finish of the Coalition of the Radical Left (SYRIZA), with 16.8 percent of the vote. SYRIZA is a collection of small communist tendencies and a larger reformist party that split from the Communist Party of Greece after the fall of the Soviet Union. SYRIZA is led by Alexis Tsipras, a former Communist Youth leader who has received significant international press attention. SYRIZA calls for canceling the bailout deal but keeping Greece inside the eurozone and European Union. This, it says, can be achieved through negotiations with the Troika and through nationalization of the Greek banking sector.
Although there are revolutionary forces within SYRIZA, the dominant line at present is fundamentally social democratic. The “peaceful revolution” they have declared mutes the questions of socialism and working-class power, and raises hopes in a radically reformed capitalism. For example, Tsipras stated in a letter to high-ranking European Union officials, “We must urgently protect the economic and social stability of our country. … It is our duty to re-examine the whole framework of the existing strategy, given that it not only threatens social cohesion and stability in Greece but is a source of instability for the European Union.” While SYRIZA’s leadership wants to reverse austerity, its appeal to “social cohesion and stability” means “stability” under a reformed capitalism.
The second most popular party on the left was the Communist Party of Greece (KKE), which registered a modest increase of 1 percent from their previous election result, ending up with 8.5 percent. This was below the 10-12 percent that most opinion polls predicted. The KKE put forward a platform calling for the socialization of the means of production under a “working class-people’s power” government. Of the left groups in parliament, the Communist Party is the only one to call for Greece to leave the European Union, a bloc of the major imperialist and peripheral capitalist states of Europe.
The KKE has played a major role in the massive fight-back movement waged by the Greek working class and especially in its advocacy for general strikes. It intervenes through mass organizations like the All-Workers Militant Front (PAME) in the labor movement, the Greek Women’s Federation and the Students Struggle Front, among others.
The lowest scoring of the three left parties was Democratic Left, a split to the right from SYRIZA formed in 2010. It only received 6.1 percent of the vote, but its leader Fotis Kouvelis is often ranked as the most popular politician in Greece by opinion polls. Democratic Left rejects the memorandum, but makes sure to balance its criticism of austerity with pledges of absolute loyalty to the eurozone.
Major gains for far right and fascists
Far-right forces experienced major gains in the election as well. The semi-fascist Popular Orthodox Rally suffered as punishment for its participation in the previous government, but new forces emerged. Independent Greeks, a split from New Democracy, came in fourth with 10.6 percent. The party rejects the austerity memorandum on nationalistic grounds and relies on anti-German and anti-Turkish demagogy in place of a specific political program.
The story that has perhaps gotten the most foreign press attention is the entrance of the neo-Nazi Golden Dawn party into parliament with 7 percent of the vote, more than 20 times their score in 2009. Its logo is an ancient Greek symbol similar to a swastika, and until recently Adolf Hitler’s manifesto Mein Kampf was displayed prominently at the party’s headquarters.
Golden Dawn campaigned on a platform of expelling all immigrants from Greece and national chauvinist opposition to the Troika. While some voters were attracted to its racist rhetoric and acts of violence against immigrants, Golden Dawn bought the loyalty of others by operating food banks during a time of growing hunger.
The fascists’ success is a serious threat to the working class and all democratic forces in Greece. While its 7 percent may appear small, it is precisely under these polarized economic and political conditions, when the capitalist class cannot achieve stable rule through democratic means, that fascism has historically grown and taken power.
The main bourgeois parties cynically used the threat of Golden Dawn to present the false dilemma of austerity or a descent into fascism. But in reality, these mainstream parties’ promotion of anti-immigrant racism gave Golden Dawn political space to grow. Moreover, if it appeared that the working class could potentially become the ruling power in Greece, the bourgeoisie could accept, if not turn to, a fascist coup.
That the Greek ruling class has operated under fascist military rule before makes such a scenario all the more plausible.
It is up to the revolutionary left and the working class to develop a program and plan of action to smash fascism politically and in the streets.
A ‘government of the left’?
A central component of the SYRIZA campaign was its appeal for the formation of a “government of the left,” encompassing all the left forces opposed to the Troika. The formation of such a government was impossible given the election results and highly implausible given Greece’s undemocratic electoral laws governing coalitions. But SYRIZA’s call for a government of the left clearly resonated with much of the working class and contributed to its success. If SYRIZA were to emerge in first place in the June election, as presently projected, the left could achieve such a majority.
SYRIZA leader Tsipras and other social-democratic proponents of a government of the left argue that it would be able to cancel the memorandum, reverse the wave of austerity measures, potentially nationalize the banks and rebuild the Greek economy in a way that strengthens the working class. SYRIZA makes the case that the European ruling class would never let Greece default and exit the eurozone because of the economic havoc this would create in other heavily indebted states like Spain and Italy. In short, Tsipras pledges to reverse the balance of forces inside the eurozone; instead of the Troika forcing Greece into deeper austerity, Greece would leverage its power against the Troika.
While the Troika obviously wants to avoid a complete Greek default (lenders have already accepted a 53.5 percent write-down on the debt), they have had the last two years to prepare for this eventuality. The centerpiece of the European ruling class’ preparations is the European Financial Stability Facility, a $976 billion bailout fund, meant to act as a “firewall” to counter the immediate effects of a Greek bankruptcy. With this in place, there is a small but growing tendency of capitalist financiers who believe that if Greece were expelled, the eurozone would “end up stronger once the dust had settled.”
Tsipras insists that Greece can out-negotiate the international capitalists, rather than calling for the socialist reorganization of society. He raises unrealistic expectations among the oppressed in electoral and bourgeois political gamesmanship, rather than raising the possibility of a new class power.
Why revolution is necessary
By contrast, the KKE has called the “government of the left” idea a false hope that will lead to disillusionment. The KKE rejects possible participation in a left government, insisting that such a government will leave the capitalist state and the for-profit economic system intact, keep Greece bound to the imperialist institutions of the EU and NATO, and thus cannot resolve the central contradictions at the heart of the political crisis.
More broadly, they explain that the social-democratic program, which arose in the post-war period of capitalist expansion, cannot be achieved in the context of protracted capitalist crisis and neoliberal financial control. They have called the SYRIZA plan opportunist, betraying the long-term interests and political clarity of the working class in exchange for short-term gains for particular leftist parties.
This raises the age-old but still pivotal question of reform and revolution: Does the working class have the capacity to come to power, and how so? Can the capitalist system be reformed to resolve the exploitation at its center? How far can revolutionary organizations go at this time?
Several organizations in Greece—including the KKE—have made the case that the political crisis of the bourgeois class has matured to the point of a revolutionary situation, opening the possibility for the transfer of power to the working class in alliance with middle-class strata.
Revolutionaries, of course, fight for reforms that improve the conditions of the working class and facilitate the political struggle against the ruling class. But a central responsibility is to assess whether the conditions for revolution are approaching, to hasten their development and prepare for such an opportunity.
The basic contradiction in capitalist society is that the productive process is socialized, involving millions of workers, while ownership is private, concentrated in a tiny ruling class. The capitalists control the means of production and distribution, as well as countless financial mechanisms, to squeeze profits out of workers and maintain their political and economic power. The capitalist state (the police, military and courts) allows them to safeguard this system with force, while the government provides for its administration.
A change in administration, like the ascent of a government of the left, will not alter the fundamental underlying character of the state. This can only be achieved by the overthrow of the capitalist state and its replacement by a worker’s state based on independent organs of working-class power—a socialist revolution.
Elections and the revolutionary process
Some communist tendencies support the formation of a government of the left for this reason—not because it would solve the crisis, but because it would further polarize the country and hasten the development of a revolutionary situation.
There can be no doubt that the formation of a SYRIZA-KKE-Democratic Left coalition would cause considerable panic among the Greek and European ruling class, and new bouts of intense class struggle.
History has shown that revolutions can take many paths and tactical turns. In Venezuela, the election of President Hugo Chávez, a socialist presiding over a fundamentally capitalist state, undoubtedly gave a boost to the class struggle and the regroupment of revolutionary forces in the country.
In Nepal, Maoists waged a triumphant revolutionary war against the feudal king that resulted in a negotiated peace and the Maoists’ subsequent election to lead a bourgeois government. Their decision to dissolve their armed forces remains the subject of considerable debate among revolutionaries. Neither country, despite heroic advances, has established socialism.
But for a Greek left government to be a vehicle of revolution, instead of demobilization, demoralization and disillusionment, a left-wing government would need to have clear programmatic unity around the socialization of the means of production, centralized planning, workers’ political power, and so on. It would need to organize the people to take on the police and the military that their own left-unity government would be associated with and nominally leading.
Otherwise, when a revolutionary situation emerged it would only disorient the movement and contribute to the persistence of reformist illusions.
SYRIZA has been silent or worse on these critical questions. In the run-up to the elections, Tsipras said: “A government of the left is in need of industrialists and investors. It needs a healthy business climate.” In other words, his version of a government of the left would not challenge the capitalists’ right to exploit labor.
The capitalist establishment has reciprocated, and the Federation of Hellenic Enterprises (SEV), the Greek equivalent of the U.S. Chamber of Commerce, has called for the formation of a national unity government including SYRIZA.
Tsipras called the election results a “peaceful revolution,” a slogan that misleadingly suggests the electoral realm, rather than continued mass struggle, can provide a way out of the crisis for working people.
The revolutionary crisis and dual power
While support for SYRIZA is likely to increase in the coming election, it is doubtful the new election will produce a clear winner or workable coalition. In the face of the increased likelihood of exiting the eurozone, which would deepen the economic and political crisis, the class struggle will intensify.
With the bourgeoisie so thoroughly discredited, and the Greek masses so clearly calling for an alternative way, the revolutionary left has an opportunity to offer a program that provides not only short-term relief, but also a longer-term vision of a new economic and political system. The question is how to mobilize the working class and broadly unite the revolutionary forces in a struggle to achieve this.
Historically, a key phase in any revolutionary crisis is that of dual power. By organizing what is essentially a second, rival state built on organs of mass struggle, revolutionaries can show concretely what working-class or people’s power looks like and offers. In the Russian Revolution, this took the form of councils of workers and soldiers (called soviets). In China, the Red Army itself functioned as a government in the areas that it liberated. Revolutionaries have also convened constituent assemblies—to rewrite the constitution—as a way to articulate, and establish the legitimacy of, a new political vision.
Clearly, there are millions in Greece who are still holding out hope that the existing capitalist government and state, perhaps with left-wing leadership, can deliver the goods. To this end, a sophisticated political struggle, backed by a concrete plan of action, must be waged against Tsipras and the social-democratic fantasies he projects. In his “April Theses,” designed to guide the Bolsheviks through Russia’s revolutionary crisis, Lenin called for “patient, systematic, and persistent explanation … especially adapted to the practical needs of the masses.”
Can the struggle in the streets break the deadlock in parliament? Will alternatives to bourgeois state power be built? The Greek working class has found itself on the frontline of the international struggle against capitalism, and the answers to these questions will resonate around the world.
For revolutionaries in the United States, our main role is not to endorse this or that organization and its tactics from afar. Our chief responsibilities are 1) to explain that the Greek crisis is a result of the contradictions of capitalism, not reckless social spending, 2) to defend the unfolding Greek revolution, especially as it could escalate and be slanderously attacked in the imperialist media and even militarily assaulted by U.S.-NATO forces, and 3) to study and learn from the complex revolutionary process that our brothers and sisters are trying to navigate.
While their process is far more advanced than our own, their struggle is ours—and we have much to learn from it. | <urn:uuid:0bb2acb2-bd9e-4c58-b2aa-1fc1d3444f08> | CC-MAIN-2013-20 | http://crimsonsatellite.wordpress.com/2012/05/17/reform-and-revolution-in-greece/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.958174 | 3,748 | 2.640625 | 3 |
Australian Bureau of Statistics
6202.0 - Labour Force, Australia, Jun 2012
Released at 11:30 am (Canberra time), 12 July 2012
UNDERSTANDING THE AUSTRALIAN LABOUR FORCE USING ABS STATISTICS
In order to understand what is happening in Australian society, or our economy, it is helpful to understand people’s patterns of work, unemployment and retirement. ABS statistics can help to build this picture. Fifty years ago, the majority of Australians who worked were men working full-time. Most worked well into their 60s, sometimes beyond, and if they were not working most were out looking for work until that age. The picture now is very different. Far more people work part-time, or in temporary or casual jobs. Retirement ages vary much more, with a greater proportion of men not participating in the labour force once they are older than 55. Nowadays, 45% of working Australians are women, compared with just 30% fifty years ago. These are profound changes that have helped shape 21st Century Australia. This note explains some of the key labour force figures the ABS produces that can be used to obtain a better picture of the labour market.
Every month, the ABS runs a Labour Force Survey across Australia covering almost 30,000 homes as well as a selection of hotels, hospitals, boarding schools, colleges, prisons and Indigenous communities. Apart from the Census, the Labour Force Survey is the largest household collection undertaken by the ABS. Data are collected for about 60,000 people and these people live in a broad range of areas and have diverse backgrounds - they are a very good representation of the Australian population. From this information, the ABS produces a wide variety of statistics that paint a picture of the labour market. Most statistics are produced using established international standards, to ensure they can be easily compared with the rest of the world. The ABS has also introduced new statistics in recent years that bring to light further aspects of the labour market. It can be informative to look at all of these indicators to get a grasp of what is happening, particularly when the economy is changing quickly.
One thing to remember about the ABS labour force figures is that when a publication states that, for example, 11.4 million Australians are employed, the ABS has not actually checked with each and every one of these people. As with most statistics it produces, the ABS surveys a sample of people across Australia and then scales up the results – based on the latest population figures – to give a total for the whole country. Because the figures are from a sample, they are subject to possible error. The Labour Force Survey is a large one, so the error is minimised. The ABS provides information about the possible size of the error to help users understand how reliable the estimates are.
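To make the idea of scaling up a sample concrete, the sketch below shows the principle in miniature. All of the weights, counts and labels are invented for illustration; they are not ABS figures, and the actual estimation process uses far more detailed population benchmarks and publishes standard errors alongside the estimates.

```python
# Illustrative only: every figure here is invented, not ABS data.
# Each surveyed person carries a weight (roughly, how many people in the
# population that respondent represents), and population estimates are
# formed by summing those weights.

sample = [
    {"status": "employed",   "weight": 380.0},
    {"status": "unemployed", "weight": 410.0},
    {"status": "employed",   "weight": 395.0},
    {"status": "not_in_lf",  "weight": 405.0},
]

def weighted_total(records, status):
    """Estimated population count for one labour force status."""
    return sum(r["weight"] for r in records if r["status"] == status)

print(f"Estimated employed persons: {weighted_total(sample, 'employed'):,.0f}")
```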
[Figure not shown: a diagram breaking down the civilian population into the different groups of labour force participation, with each pixel representing about 1,000 people, as at September 2011.]
According to established international standards, everyone who works one hour or more for pay or profit is considered to be employed. This includes everyone from teenagers who work part-time after school, to a partially retired grandparent helping out at the school canteen. While it is unreasonable to expect a family to survive on the income of an hour of work per week, one could also argue that all work, no matter how small, contributes to the economy. This definition of 'one hour or more' - which is an international standard - means that the ABS's employment figures can be compared with the rest of the world.

Now it is, of course, easy to argue that someone who works 2 or 3 hours per week is not really “employed”. But a definition is required. And any cut-off point is open to debate. Imagine if the ABS defined being ‘employed’ as working 15 hours a week. Would it be reasonable to argue that someone who works 14.5 hours is unemployed, but that someone who works 15 hours is not? It is also a mistake to assume that all persons who work low hours would prefer to work longer hours, and therefore represent 'hidden' unemployment. Most people who work less than 15 hours a week are not seeking additional hours, although of course there are some who are. The issue of underemployment is further discussed below.
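As a minimal illustration of the 'one hour or more' rule, the function below classifies a person as employed if they worked at least one hour for pay or profit in the reference week. The function and field names are shorthand for the definition described above, not the ABS's actual survey processing logic.

```python
# Sketch of the international "one hour or more" employment rule.
# Names and inputs are illustrative only.

def is_employed(hours_worked_in_reference_week: float,
                worked_for_pay_or_profit: bool) -> bool:
    """True if the person worked at least one hour for pay or profit."""
    return worked_for_pay_or_profit and hours_worked_in_reference_week >= 1

print(is_employed(2, True))    # after-school job, 2 hours: True
print(is_employed(0, False))   # did not work at all: False
```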
Rather than open up such discussions, the ABS prefers to use the international standard, and it encourages people to consider other indicators to form a better picture of what is happening. Alongside the total employment figures, full-time and part-time estimates are provided to show the different kinds of employment, and a detailed breakdown by the number of hours worked is also provided to allow for customised definitions of 'employment'.
Commentators often refer to the rise in employment as the number of new jobs created each month. This can be misleading, because the ABS doesn't actually measure the number of jobs. This might sound like semantics, but if a person in the Labour Force Survey who is employed gains a second part-time job at the same time as their main job, this would have no impact on the employment estimate - the Labour Force Survey does not count jobs, it counts people.
It is also important to bear in mind that if the population grows faster than employment, the number of people in employment can rise even while the percentage of people with jobs falls. It is therefore often informative to look at the proportion of people in employment. This measure, called the employment to population ratio, is the number of employed people expressed as a percentage of the civilian population aged 15 and over. It removes the impact of population growth to give a better picture of labour market dynamics over time.
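As an illustration (using made-up round numbers rather than actual ABS estimates): if the civilian population aged 15 and over is 18.0 million and 11.4 million people are employed, the employment to population ratio is 11.4 / 18.0, or about 63%. If employment grows to 11.5 million while the population grows to 18.4 million, the ratio falls to 62.5% even though the number of employed people has risen.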
AGGREGATE MONTHLY HOURS WORKED
Instead of counting how many people are working, another way of looking at how much Australians are working is to count the total number of hours worked by everyone. This is measured by a statistic produced by the ABS called Aggregate monthly hours worked, and it is measured in millions of hours. This can sometimes be more revealing of what is happening in the labour market, particularly in a weakening economy where a fall in hours worked can usually be seen before any fall in the number of people employed.
PEOPLE WHO ARE NOT WORKING: THE UNEMPLOYED AND OTHERS
There are many reasons why Australians do not work. Some have retired and are not interested in going back to work. Some are staying home to look after children and plan on going back to work once the kids have grown older. Some are out canvassing for work every day while others have given up looking. The ABS separates all of these people into those who are unemployed and those who are not by asking two simple questions: “If you were given a job today, could you start straight away?” and “Have you taken active steps to look for work?” Only those who are ready to get back into work, and are taking active steps to find a job, are classed as unemployed.
Some people might like to work, but are not currently available to work - such as a parent who is busy looking after small children. Other people might want to work but have given up actively looking for work - such as a discouraged job seeker who only half-heartedly glances at the job ads in the newspaper but doesn't call or submit any applications. These people are not considered to be unemployed, but are regarded as being marginally attached to the labour force. They can be thought of as 'potentially unemployed' when, or if, their circumstances change, but are regarded as being on the fringe of labour force participation until then.
It is important to note that the ABS unemployment figures are not the same as the data that Centrelink collects on the number of people receiving unemployment benefits. The ABS bases its figures on asking people directly about their availability and steps to find work. In this way, policy decisions about, for example, the criteria for the receipt of unemployment benefits have no impact on the way that the unemployment figures are measured.
LABOUR FORCE AND PARTICIPATION RATE
The size of the labour force is a measure of the total number of people in Australia who are willing and able to work. It includes everyone who is working or actively looking for work - that is, the number of employed and unemployed together as one group. The percentage of the total population who are in the labour force is known as the participation rate.
The unemployment rate is the percentage of people in the labour force who are unemployed. This is a popular measure around the world for tracking a country’s economic health as it removes all the people who are not participating (such as those who are retired). Because the unemployment rate is expressed as a percentage, it is not directly influenced by population growth.
The underemployment rate is a useful companion to the unemployment rate. Instead of looking at the people who are unemployed, the underemployment rate captures those who are currently employed but are willing and able to work more hours. It highlights the proportion of the labour force who work part-time but would prefer to work full-time. This is sometimes referred to as the 'hidden' potential in the labour force.
The underemployment rate can be an important indicator of changes in the economic cycle. During an economic slowdown, some people lose their jobs, become unemployed and contribute to a rising unemployment rate. But while this is happening, there might well be others who remain working but have their hours reduced; for example from full-time to part-time. As long as they want to work more hours, they are classed as underemployed, and contribute to the underemployment rate.
LABOUR FORCE UNDERUTILISATION RATE
The labour force underutilisation rate combines the unemployment rate and the underemployment rate into a single figure that represents the percentage of the labour force that is willing and able to do more work. It includes people who are not currently working and want to start, and those who are currently working but want to - and can - work more hours. It provides an alternative – and more complete - picture of labour market supply than the unemployment rate, as changes in the underutilisation rate capture both changes in unemployment and underemployment, indicating the spare capacity in the Australian labour force.
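As a simple illustration (again with made-up round numbers rather than actual ABS estimates): suppose the labour force is 12.0 million people, of whom 11.4 million are employed and 0.6 million are unemployed, and suppose 0.9 million of the employed want, and are available for, more hours. The unemployment rate is then 0.6 / 12.0 = 5%, the underemployment rate is 0.9 / 12.0 = 7.5%, and the labour force underutilisation rate is 5% + 7.5% = 12.5%. If the civilian population aged 15 and over is 18.0 million, the participation rate is 12.0 / 18.0, or about 67%.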
For any queries regarding these measures or any other queries regarding the Labour Force Survey estimates, contact Labour Force on Canberra 02 6252 6525, or via email at email@example.com.
Protect your investment
Protected cropping can be as beneficial for small fruit production as it is for vegetables. Yields of some berries may be two to three times greater in protected cultivation than outside in the field. Build customer loyalty by having fresh fruit first in the spring, long into fall, and by growing the highest quality berries.
Strawberry plants, which are damaged by temperatures below 12F/-11C, require winter mulch. Although hay and straw are the traditional mulching materials, many strawberry growers use row covers because they require less labor to install and remove. Heavier row covers, weighing 1.25 oz./sq.yd. or more, are recommended.
Fall-bearing raspberries and blackberries that normally stop producing at the first frost will continue fruiting for months longer in an unheated hoophouse. They will fruit again earlier in spring than those in the field, commanding a much higher sales price.
Strawberry transplants from runners produced over the summer can be planted in an unheated hoophouse in September. They will produce fruit in the fall, continuing until December, and then fruit again in early spring.
Strawberry plasticulture is supplanting the traditional matted row system on many farms. Plant on black plastic mulch from mid-July to September and cover with row cover in fall. Fruits are harvested the following spring.
Protect fruits from marauding birds with Johnny's bird netting which won't damage fruit or bend branches.
Getting started with fruit
By Lynn Byczynski
For many market growers, fruits are the final frontier of horticultural expertise. Growing fruit is an interesting challenge for a vegetable grower because fruits require different systems for planting, cultivating, harvesting and post-harvest handling. But there are many reasons to take up the challenge.
- The primary reason is that people love fruit. Farmers market customers flock to vendors with berries and grapes for sale. CSA members develop stronger ties to farms that can supply a wide range of fresh produce. Chefs who tout their connections to local farmers are delighted to be able to list local fruit on their dessert menus. At home, even the pickiest eaters are usually happy to snack on berries and grapes.
- Consumption of berries and grapes is rapidly increasing worldwide, thanks to recent discoveries about the health benefits of these fruits. The pigments that give berries and red or purple grapes their deep colors contain phytochemicals that help prevent cancer, cardiovascular disease, and age-related mental decline. People feel good about eating grapes and berries!
- From the farmer's and gardener's perspective, berries and grapes are easier to grow than ever before. New varieties, production practices, and products are increasing the options for growers in every region. Berries are popular crops for the hoophouse, for example, because protection from wind and rain produces extraordinary yields of high-quality fruits. Plastic and paper mulches reduce the need for year-round weeding of these perennial plants. And because Johnny's offers plants in small quantities, growers can trial numerous commercial varieties without spending a lot of money.
- Although most berry and grape plants won't produce fruit for 1 to 3 years after planting, the wait is worthwhile. Commercial growers can charge a premium for fresh, ripe fruits. And home gardeners can save money by growing their own.
- What do you need to get started with fruits? First, if you aren't sure about the suitability of your climate for small fruits, contact your state Extension service for recommendations. Some regions of the country may not have enough cold (chilling hours) for certain varieties, while others may be too cold for the plants or too hot for the fruits. Good soil preparation is essential for successful fruit production. So is an irrigation system. Most small fruits don't compete well with weeds, so a mulch of hay, straw, or wood chips is beneficial. Grapes need a strong trellis, which should be erected when the vines are planted. A living mulch in the paths between rows will help reduce weed pressure and improve soil fertility. You’ll find products and information about living mulches in the cover crops section on the web and in the catalog.
By Lynn Byczynski
Growing grapes may appear complicated to the beginner, and with good reason. Although grapes will grow anywhere, there are many kinds of training and trellising systems, and choosing the right one requires some study before planting.
Training and trellising go hand-in-hand because the kind of structure you build to hold your grape vines will affect how you prune them. The structure, in turn, depends somewhat on the type of grapes you grow because some are more vigorous and need stronger supports.
In general, a grape trellis needs to be able to support the weight of the crop and withstand high winds. It also should be designed to last 20 years, as that's how long you can expect your vines to produce.
Home gardeners planting just a few vines can use a fence that fits into the landscape or, better still, an arbor that provides shade in summer as well as support for the grape vines. To get good fruit production from an arbor planting, pruning becomes the key. Texas Extension has a nicely illustrated manual on arbor training.
Commercial growers with larger aspirations need to set up a trellis in the field. The main ingredients for a vineyard trellis are strong end posts with braces, earth anchors, or deadmen; posts along the length of the trellis to support the wires; and high-tensile galvanized steel wire to support the vines.
The most common type of trellis is the single curtain trellis with either one or two wires and posts spaced 16 to 24 feet apart, depending on the training system. With this type of trellis, various training styles are possible. Another popular type of trellis, especially in northern areas, is the double curtain, which allows the vines to spread horizontally across two wires.
The recommended trellis and training system varies by climate. Northern growers with shorter growing seasons usually choose training systems that expose more leaf surface to the sun, but those can be inappropriate to warm climates. To learn more about the best training and trellising system for your location, check the list below of state viticulture guides and choose the state nearest your own. Or, contact your state Extension service for recommendations.
California: Viticulture and Enology Home Page
Colorado: Grape Growers Guide
Idaho, Oregon, Washington: Northwest Berry & Grape Information Network
Iowa: Viticulture Home Page
Kansas: Commercial Grape Production
Michigan: MSU Grape Information
Missouri: Home Fruit Production: Grape Training Systems
New York: Cornell Viticulture
Ohio: Midwest Grape Production Guide
Oklahoma: Viticulture and Enology
Pennsylvania: Wine Grape Network
South Dakota: Viticulture in South Dakota
Texas: Winegrape Network
Vermont: Cold Climate Grape Production
Wisconsin: Growing Grapes
By Lynn Byczynski
Strawberries are one of the most popular fruits in American gardens and market farms. They can be grown in many places, from hanging baskets to fields to hoophouses. The trick is to match the growing system to the type of strawberry you want to grow. Some varieties need plenty of space, whereas others can be grown in containers.
June-bearing varieties initiate fruit buds in fall and blossom the following spring. They are the earliest type to fruit. They produce one crop and then spend their energy sending out runners (also called daughter plants) that will fruit the following year. June-bearing strawberries are usually grown in a matted row system, in which the mother plants are planted in spring, spaced 18-24" apart in rows that are 3-4' apart. The first year, flowers are pinched off to stimulate the plants to send out runners that fill in the spaces within the row and between the rows. Plants produce fruit the second spring. A variation of this system is to prune runners to one or two per plant so that they stay in a line and don't spread out between the rows. This obviously requires a lot more labor, but may result in better yields because of reduced competition. Matted-row systems can be renovated to keep plants producing for many years. Another system is called the ribbon row system, in which strawberry crowns are planted in fall and allowed to bloom and fruit the following spring. As runners form, they are removed to increase fruit size. Once the crop is done, runners are allowed to develop and fill in the bed to a matted row system.
Day-neutral varieties produce fruit all summer. They can be grown as annuals: plant early in spring and pinch off flowers for two months to let the plants get established, and then let them fruit the rest of the summer. Day-neutral strawberries are good for container production on a deck or patio. Some varieties, including 'Seascape', will fruit on unrooted runners so they make attractive hanging baskets, with the runner plants cascading over the sides of the basket. Day-neutral strawberries can also be grown in a hill system, with 12 inches between plants.
Alpine strawberries produce small but intensely flavorful berries. They do not send out runners and are usually grown from seed. They are a good choice for strawberry pots and other containers, or as edging in the vegetable garden. They also can be grown with less than full sun, so they are a good choice for many home gardeners.
Region-specific growing information is available from most state Extension services. ATTRA has a publication on Organic Production of Strawberries.
By Lynn Byczynski
Strawberry quality, yield, and earliness are greatly improved in a hoophouse. Penn State researchers found that in their climate, hoophouse strawberries produced fruit 3 weeks earlier in spring than those grown outside, with about a 25% yield increase.
Most commercial hoophouse strawberries are grown using an annual plasticulture system that includes raised beds, drip irrigation, plastic mulch, and floating row cover. Plugs are planted in late summer on beds covered with plastic mulch, with drip tape beneath the mulch. As the weather gets cold, the young plants are covered with floating row cover to maintain the warmer soil temperatures needed for establishment. The plants grow slowly during winter in the protected environment of the hoophouse; then, as the weather warms, they flower and produce berries for several weeks. The crop is then finished for the year. Strawberry plants can either be removed to make way for other crops; or they can be left to produce a second year if berry prices or other factors justify tying up the space for a year.
Plugs are available from outside suppliers, or they can be produced on the farm in summer. To grow your own, detach unrooted daughter plants (runners) from the mother plant in July and stick them in potting mix in 72-cell flats under intermittent mist until roots protrude from the bottom of the cell. Then place on a greenhouse bench and grow until September, when they can be planted into the hoophouse. Plants that are rooted in July are likely to flower and fruit in fall in warmer climates, but that won't affect their yield the following spring.
For more information on hoophouse strawberries:
Growing Strawberries in High Tunnels in Missouri
Production of Vegetables, Strawberries, and Cut Flowers Using Plasticulture is a book about all aspects of horticultural plastics, and includes extensive information about hoophouse strawberries. | <urn:uuid:36ed1ab0-6236-4c74-aec9-8f12de0eb6cf> | CC-MAIN-2013-20 | http://www.johnnyseeds.com/t-catalog_extras_fruits.aspx?source=BlogJSSAdv0212 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.940725 | 2,392 | 3.421875 | 3 |
A patch is a piece of software designed to fix problems.
Hot patching is a mechanism for applying a patch to an application without requiring a shutdown or restart of the system or the program concerned. This addresses problems related to the unavailability of the service provided by the system or the program.
We came across a situation where a DLL was loaded by one of the critical processes in a system, and that process demanded high availability. The DLL has exported functions that perform the core operations within the process. We identified a critical flaw in one of the exported functions of the DLL. One constraint we had while trying to fix the problem was to avoid any downtime of this critical process.
We solved this problem by using the hot patch mechanism. We used an external process to inject a DLL [the hot patch] into the target process. This DLL has the corrected version of the defective function and also a hook into the GetProcAddress function. As we know, calling GetProcAddress returns the address of an exported function from the specified DLL. By hooking the GetProcAddress API, we monitored the requests for exported functions. When we found that a request was made for the flawed function, we simply returned the address of the updated function, which was part of the injected DLL.

By intercepting calls to the GetProcAddress API, we redirected requests from the flawed function to the corrected function.

We were able to solve this problem by using concepts like DLL injection and API hooking.
Hot Patch Structure
This hot patching structure has the following binaries:
- Hot Patch DLL: This DLL has only the exported functions that have been updated, which will be used as replacements for the flawed functions. It also has the hooking logic to hook the GetProcAddress API exported by Kernel32.DLL.
- Updater.exe: This process injects the hot patched DLL into the target process.
How to Inject a DLL into a Remote Process
One method of injecting a DLL into a remote process requires a thread in the remote process to call LoadLibrary on the desired DLL. Since we can't control the threads in a process other than our own, the solution is to create a new thread in the target process. This gives us full control over the code that this thread is going to execute.
We can create a thread in a remote process using the following Windows API:
HANDLE CreateRemoteThread(
    HANDLE hProcess,
    PSECURITY_ATTRIBUTES psa,
    SIZE_T dwStackSize,
    PTHREAD_START_ROUTINE pfnStartAddr,
    PVOID pvParam,
    DWORD dwCreationFlags,
    PDWORD pdwThreadId );
After creating a new thread in the target process, we need to make sure that there is a call to the LoadLibrary API that will load the DLL into the target process.
Following is the complete set of steps for injecting a DLL into the target process:
1. Use the VirtualAllocEx API to allocate memory in the remote process equal to the size of the DLL's full file path:

pDLLFilePath = ::VirtualAllocEx(hProcess, NULL, dwSizeOfFilePath, MEM_COMMIT, PAGE_READWRITE);

2. Use WriteProcessMemory to copy the DLL's full file path into the space allocated in step 1:

::WriteProcessMemory(hProcess, pDLLFilePath, (void*)szDLLFilePath, dwSizeOfFilePath, NULL);

3. Use the CreateRemoteThread API to create a thread in the remote process, passing in the address of the LoadLibrary API and the memory location of the DLL's full file path:

HANDLE hThread = ::CreateRemoteThread(
    hProcess, NULL, 0,
    (LPTHREAD_START_ROUTINE)::GetProcAddress(hKernel32, "LoadLibraryA"),
    pDLLFilePath, 0, NULL);

4. Use the VirtualFreeEx function to free the memory allocated in step 1 (once the remote thread has finished loading the DLL, for example after waiting on hThread; note that with MEM_RELEASE the size argument must be 0):

::VirtualFreeEx(hProcess, pDLLFilePath, 0, MEM_RELEASE);
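To make the flow clearer, here is a minimal sketch - not the article's attached source - showing how an injector such as Updater.exe might chain the four steps together. The function name InjectDll and its parameters are assumptions, and error handling is omitted for brevity:

#include <windows.h>
#include <cstring>

// Sketch: inject the DLL at szDLLFilePath into the process identified by dwPid.
bool InjectDll(DWORD dwPid, const char* szDLLFilePath)
{
    HANDLE hProcess = ::OpenProcess(PROCESS_CREATE_THREAD | PROCESS_QUERY_INFORMATION |
                                    PROCESS_VM_OPERATION | PROCESS_VM_WRITE | PROCESS_VM_READ,
                                    FALSE, dwPid);
    if (hProcess == NULL)
        return false;

    SIZE_T dwSizeOfFilePath = strlen(szDLLFilePath) + 1;

    // Step 1: allocate space for the DLL path inside the target process
    LPVOID pDLLFilePath = ::VirtualAllocEx(hProcess, NULL, dwSizeOfFilePath,
                                           MEM_COMMIT, PAGE_READWRITE);

    // Step 2: copy the DLL path into the allocated space
    ::WriteProcessMemory(hProcess, pDLLFilePath, szDLLFilePath, dwSizeOfFilePath, NULL);

    // Step 3: run LoadLibraryA in the target process with the path as its argument
    HMODULE hKernel32 = ::GetModuleHandleA("kernel32.dll");
    HANDLE hThread = ::CreateRemoteThread(hProcess, NULL, 0,
        (LPTHREAD_START_ROUTINE)::GetProcAddress(hKernel32, "LoadLibraryA"),
        pDLLFilePath, 0, NULL);

    // Let LoadLibrary finish before cleaning up
    ::WaitForSingleObject(hThread, INFINITE);

    // Step 4: free the memory allocated in step 1
    ::VirtualFreeEx(hProcess, pDLLFilePath, 0, MEM_RELEASE);

    ::CloseHandle(hThread);
    ::CloseHandle(hProcess);
    return true;
}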
How to Perform API Hooking Using the Import Section
Using a module's import section to perform hooking is quite robust and easy to implement. To hook a particular function, all we need to do is change that function's address in the module's import section.
Here we want to hook the GetProcAddress API exported by Kernel32.dll.
Following is the complete set of steps for API hooking:
- Locate the module's import section (for example, by calling the ImageDirectoryEntryToData function with IMAGE_DIRECTORY_ENTRY_IMPORT).
- Loop through the import section looking for the DLL that exports the API we want to hook. In this case, we would be searching for Kernel32.dll.
- Once the DLL's import descriptor is located, get the address of the array of IMAGE_THUNK_DATA structures that contains the information about the imported symbols:

pThunk = (PIMAGE_THUNK_DATA)((PBYTE)hModule + pImportDesc->FirstThunk);

- Once the entry (ppfn) for the function to be hooked is located in that array, use the following APIs to overwrite it with the address of the new function (if the write fails because the page is write-protected, change the protection with VirtualProtect first and then restore it):

WriteProcessMemory(GetCurrentProcess(), ppfn, &pNewFunction, sizeof(pNewFunction), NULL);
VirtualProtect(ppfn, sizeof(pNewFunction), dwOldProtect, &dwOldProtect);
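Putting these steps together, here is a minimal sketch of a HookTheAPI-style function as it might be written inside the injected DLL: it walks the main module's import table and swaps the IAT entry for GetProcAddress with our replacement. The use of ImageDirectoryEntryToData, the restriction to the main EXE module, and the helper names are assumptions for illustration, not necessarily what the article's attached source does:

#include <windows.h>
#include <dbghelp.h>                       // ImageDirectoryEntryToData
#pragma comment(lib, "dbghelp.lib")

FARPROC WINAPI MyGetProcAddress(HMODULE hmod, PCSTR pszProcName);  // our hook (defined later)

void HookTheAPI()
{
    HMODULE hModule = ::GetModuleHandle(NULL);   // patch the main EXE's imports
    ULONG ulSize = 0;

    // Locate the import section of the module
    PIMAGE_IMPORT_DESCRIPTOR pImportDesc = (PIMAGE_IMPORT_DESCRIPTOR)
        ::ImageDirectoryEntryToData(hModule, TRUE, IMAGE_DIRECTORY_ENTRY_IMPORT, &ulSize);
    if (pImportDesc == NULL)
        return;

    PROC pfnOld = (PROC)::GetProcAddress(::GetModuleHandleA("kernel32.dll"), "GetProcAddress");
    PROC pfnNew = (PROC)MyGetProcAddress;

    // Find the descriptor for Kernel32.dll
    for (; pImportDesc->Name != 0; pImportDesc++)
    {
        PSTR pszModName = (PSTR)((PBYTE)hModule + pImportDesc->Name);
        if (lstrcmpiA(pszModName, "kernel32.dll") != 0)
            continue;

        // Walk the IMAGE_THUNK_DATA array (the IAT) for this DLL
        PIMAGE_THUNK_DATA pThunk = (PIMAGE_THUNK_DATA)((PBYTE)hModule + pImportDesc->FirstThunk);
        for (; pThunk->u1.Function != 0; pThunk++)
        {
            PROC* ppfn = (PROC*)&pThunk->u1.Function;
            if (*ppfn != pfnOld)
                continue;

            // Overwrite the entry with the address of our hook,
            // making the page writable for the duration of the write
            DWORD dwOldProtect;
            ::VirtualProtect(ppfn, sizeof(PROC), PAGE_READWRITE, &dwOldProtect);
            ::WriteProcessMemory(::GetCurrentProcess(), ppfn, &pfnNew, sizeof(pfnNew), NULL);
            ::VirtualProtect(ppfn, sizeof(PROC), dwOldProtect, &dwOldProtect);
            return;
        }
    }
}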
When Are We Going to Perform this Hooking
Immediately after the DLL is injected into the remote process, LoadLibrary calls the DllMain of the injected DLL with the DLL_PROCESS_ATTACH reason, and that is where we perform the hooking:

BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        HookTheAPI();
        break;
    }
    return TRUE;
}
Implementing Our Own GetProcAddress
MyGetProcAddress intercepts all the calls made to the GetProcAddress API, monitors whether there is a request for the faulting function and, if so, returns the address of the corrected function:

FARPROC WINAPI MyGetProcAddress(HMODULE hmod, PCSTR pszProcName)
{
    // Addresses of the flawed function and of its corrected replacement
    FARPROC pOldFunAdd = GetProcAddress(GetModuleHandle("MySubsystem.dll"), "Foo");
    FARPROC pNewFunAdd = GetProcAddress(GetModuleHandle("MySubsystem.HP.dll"), "Foo");

    // Address the caller actually asked for
    FARPROC pRetFunAdd = GetProcAddress(hmod, pszProcName);

    if ((NULL != pOldFunAdd) && (NULL != pNewFunAdd))
    {
        // If the request is for the faulting function,
        // return the address of the corrected function instead
        if (pOldFunAdd == pRetFunAdd)
            pRetFunAdd = pNewFunAdd;
    }
    return pRetFunAdd;
}
The attached binaries demonstrate hot patching implemented using DLL injection and function hooking. To keep things simple and easy to understand, the demonstration uses very basic functionality; please ignore code optimizations for now:
- MySubsystem.DLL: This DLL exports the following functions:
  - RandomNumber: Returns a random number.
  - SleepTime: Specifies the time to sleep before making the next call to RandomNumber.
- MyProcess.exe: This process loads MySubsystem.DLL and displays the random numbers in the console, sleeping between calls for the amount of time returned by MySubsystem.DLL.
- Change Request: Display only the random numbers that are even; odd numbers should not be shown on the console. This has to be accomplished without restarting MyProcess.exe.
- MySubsystem.HP.DLL: This is the hot patch DLL that contains the corrected MyRandomNumber function required by the change request. It also contains the hooking logic and the hooked replacement function.
- Updater.exe: This process injects MySubsystem.HP.DLL into MyProcess.exe.
Run MyProcess.exe and you will see that both even and odd random numbers are displayed. Then run Updater.exe and you will see only even numbers displayed; all the odd numbers are skipped.
One limitation of this approach is that only exported functions can be hot patched.
- 6th October, 2010: Initial version | <urn:uuid:484ee5ee-b4ac-47b3-aa40-115ba57265c3> | CC-MAIN-2013-20 | http://www.codeproject.com/Articles/116253/Hot-Patching-Made-Easy?PageFlow=FixedWidth | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00001-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.841218 | 1,787 | 3.234375 | 3 |
Eric Weisz, better known to the world as Harry Houdini, was born on this date in 1874. Famous for his feats as an escape artist and magician, Houdini also became one of the most crusading anti-spiritualists of the 1920s. Because of his familiarity with the illusions of stage magic and sleight of hand, Houdini was particularly adept at spotting the trickery commonly used by the so-called psychics and spirit mediums who were then hawking their services to the credulous grieving public as conduits to the afterlife. (Can you imagine anyone being foolish enough to fall for such predatory charlatanry today?)
That turn in Houdini’s career led him to collaborate with Scientific American on a lengthy exposé of spiritualism. Scientific American had offered a $5,000 reward to any medium who could satisfy its panel of investigators, which included Houdini, two of the magazine’s editors (J. Malcolm Bird and Austin C. Lescaboura) and others, that his or her paranormal gifts were genuine.
Unfortunately, although the magazine’s panel did reveal many frauds during the few years of its tenure, the whole episode ended very badly. The problem—which in retrospect is quite evident to anyone who has read through the Scientific American archives of that time, as I did—was that at least one of the editors, Malcolm Bird, was not so secretly a believer in the afterlife and very much wanted a psychic to succeed.
Matters came to a head in 1924 when the team was evaluating a psychic whom it called “Margery” in print, though we now know her to be Mina Crandon, the comely young wife of a Boston socialite and surgeon. You can read Houdini’s own account of the messy business that resulted, but in short: Mina Crandon’s seance tricks, perhaps aided by her personal charm, bamboozled Bird and the rest of the Scientific American group, with the exception of Houdini, who was not initially at their meetings. They were prepared to award her the prize, but Houdini protested that he needed to see for himself. At the seance, Houdini saw through the deception and called “Margery” on it, much to the annoyance of Bird, who angrily resisted exposing her con game. The arguments that followed led to the dissolution of the ghostbusting squad, and the $5,000 was never awarded.
What Houdini’s account does not say, but which I have heard as a perhaps unreliable rumor, is that the dispute between Houdini and Bird actually turned into a physical brawl. (Ahh, the two-fisted SciAm editors of yesteryear….) Also, when I attended James Randi’s The Amaz!ng Meeting in Las Vegas in 2003 and talked about these events, magician and mad debunker Penn Jillette told me that he had heard directly from Mina Crandon’s granddaughter that Mina had been sleeping with several members of the team, including Bird. (Oh, the scandal!) That just might help to account for Bird’s umbrage at Houdini’s harsh quashing of Mina’s scam.
But that is not the anti-spiritualist episode I would like to talk about today.
Rather, turn to a particular occasion when Houdini was trying to disabuse Sir Arthur Conan Doyle of his own spiritualist inclinations. Conan Doyle was not the paragon of rationality and reason that one might assume the creator of Sherlock Holmes would be: he had a soft spot for mediums. (It tends to make one think of Holmes’s famous dictum that “when you have eliminated the impossible, whatever remains, however improbable, must be the truth” in a somewhat different light.) Nevertheless, because Houdini and Conan Doyle had come to be friends, Houdini wanted to open the author’s eyes to the psychic frauds, and so he staged a demonstration that he hoped would do the trick.
Michael Shermer recently described what happened in his February “Skeptic” column for Scientific American:
In the spring of 1922 Conan Doyle visited Houdini in his New York City home, whereupon the magician set out to demonstrate that slate writing—a favorite method among mediums for receiving messages from the dead, who allegedly moved a piece of chalk across a slate—could be done by perfectly prosaic means. Houdini had Conan Doyle hang a slate from anywhere in the room so that it was free to swing in space. He presented the author with four cork balls, asking him to pick one and cut it open to prove that it had not been altered. He then had Conan Doyle pick another ball and dip it into a well of white ink. While it was soaking, Houdini asked his visitor to go down the street in any direction, take out a piece of paper and pencil, write a question or a sentence, put it back in his pocket and return to the house. Conan Doyle complied, scribbling, “Mene, mene, tekel, upharsin,” a riddle from the Bible’s book of Daniel, meaning, “It has been counted and counted, weighed and divided.”
How appropriate, for what happened next defied explanation, at least in Conan Doyle’s mind. Houdini had him scoop up the ink-soaked ball in a spoon and place it against the slate, where it momentarily stuck before slowly rolling across the face, spelling out “M,” “e,” “n,” “e,” and so forth until the entire phrase was completed, at which point the ball dropped to the ground.
Houdini then explained that he had done the whole thing through simple trickery and implored Conan Doyle to give up his spiritualist beliefs. Alas, he failed: not only did Conan Doyle continue to believe in mediums but he suspected that Houdini knowingly or unknowingly used his own supernatural gifts in the performance of his escape acts.
Here is my question for the hive mind: How did Houdini do it? Magicians are of course famously reluctant to reveal how they do their tricks, and it’s not clear that Houdini showed the secret to Conan Doyle. (Perhaps that’s why Conan Doyle refused to be convinced.) Rather than ask a magician to break his professional code, I thought I would ask you readers to suggest how Houdini accomplished his “ghostly” slate writing.
Here are my own uninformed guesses about elements of the trick, which still probably don’t quite cohere into a full explanation.
- My sense is that the slate hung from wherever it was placed by wires attached to its four corners so that it could swing freely but also hang level. I suspect that a marionette-like arrangement of those wires could in theory allow a ball to be rolled across the slate’s surface as required.
- Conan Doyle’s cutting into one of the cork balls to prove it had not been tampered with does not preclude his selected ball from being tampered with or replaced subsequently, when he is not looking.
- Sending Conan Doyle down the street to write his secret message would give him a sense of privacy, but of course it also would allow Houdini (and unknown cronies?) a chance to reset the slate and the balls as they chose.
- Any good pickpocket could probably remove that page with Conan Doyle’s message from his pocket, look at it and return it without him being the wiser.
That’s my best hypothesis about how Houdini did it. What’s yours?
Update (added 3/25, 8:23 a.m.): Via Twitter, P. Kerim Friedman tells me that these two books (here and here) may hold the answer, although those explanations aren’t immediately available online. So I’d still like to hear your theories.
“How Did Houdini Trick Conan Doyle?” by Retort, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.
Green Power is electricity generated from renewable energy sources that are environmentally friendly such as solar, wind, biomass, and hydro power. New York State and the Public Service Commission have made a commitment to promote the use of Green Power and foster the development of renewable energy generation resources.
GREEN POWER IN NEW YORK
Electricity comes from a variety of sources such as natural gas, oil, coal, nuclear, hydro power, biomass, wind, solar, and solid waste. Green Power is electricity generated from renewable energy sources such as:
Solar: Solar energy systems convert sunlight directly into electricity.
Biomass: Organic wastes such as wood, other plant materials and landfill gases are used to generate electricity.
Wind: Modern wind turbines use large blades to catch the wind, spin turbines, and generate electricity.
Hydropower: Small installations on rivers and streams use running or falling water to drive turbines that generate electricity.
NY’s ENERGY MIX
[Pie chart: the mix of energy sources used to generate New York’s electricity in 2003.] Buying Green Power will help to increase the percentage of electricity that is produced using cleaner energy sources.
You have the power to make a difference
For only a few pennies more a day, you can choose Green Power and make a world of difference for generations to come.
- Produces fewer environmental impacts than fossil fuel energy.
- Helps to diversify the fuel supply, increasing the reliability of the NY State electric system and contributing to more stable energy prices.
- Reduces use of imported fossil fuels, keeping dollars spent on energy in the State’s economy.
- Creates jobs and helps the economy by spurring investments in environmentally-friendly facilities.
- Creates healthier air quality and helps to reduce respiratory illness.
If just 10% of New York’s households choose Green Power for their electricity supply, it would prevent nearly 3 billion pounds of carbon dioxide, 10 million pounds of sulfur dioxide, and nearly 4 million pounds of nitrogen oxides from getting into our air each year. Green Power helps us all breathe a little easier.
Your Energy…Your Choice
Your electric service is made up of two parts, supply and delivery. In New York’s competitive electric market, you can now shop for your electric supply. You can support cleaner, sustainable energy solutions by selecting Green Power for some or all of your supply. No matter what electric supply you choose,your utility is still responsible for delivering your electricity safely and reliably, and will provide you with customer service and respond to emergencies.
What happens when you choose to buy Green Power?
The Green Power you buy is supplied to the power grid that delivers the electricity to all customers in your region. Your Green Power purchase supports the development of more environmentally-friendly electricity generation. You are helping to create a cleaner, brighter New York for future generations. You will continue to receive the safe, reliable power you’ve come to depend on.
Switching to Green Power is as easy as:
1. Use the list below to contact the Green Power service providers in your area.
2. Compare the Green Power programs.
3. Choose the Green Energy Service Company program that is right for you.
Using New York’s power to change the future
Energy conservation, energy efficiency and renewable energy are critical elements in New York’s economic, security and energy policies. New York State is committed to ensuring that we all have access to reliable electricity by helping consumers use and choose energy wisely. Recently, the state launched two initiatives – one designed to educate the public about the environmental impacts of energy production, and one to encourage the development of Green Power programs.
The Environmental Disclosure Label
NY RENEWABLE ENERGY SERVICE INITIATIVES
The New York State Public Service Commission is supporting development of renewable energy service programs in utility service territories across the state. These programs are spurring the development of new sources of renewable energy and the sale of Green Power to New York consumers. As a result, Green Power service providers are now offering a variety of renewable energy service options. Most New York consumers now have the opportunity to choose Green Power.
Suppliers Offering Green Energy Products
Green Power can be arranged through the following suppliers (which may not operate in all utility territories). The PSC has created this list of providers and does not recommend particular companies or products.
|Agway Energy Services||1-888-982-4929||www.agwayenergy.com|
|Amerada Hess (Commercial and Industrial only)||1-800-HessUSA (437-7872)||www.hess.com|
|Community Energy, Inc.||1-866-Wind-123||
|Constellation New Energy (Commercial and Industrial only)||1-866-237-7693||www.integrysenergy.com|
|Energy Cooperative of New York||1-800-422-1475||www.ecny.org|
|Green Mountain Energy Company||1-800-810-7300||www.greenmountain.com|
|Integrys Energy NY||1-518-482-4615||
|Juice Energy, Inc||1-888-925-8423||www.juice-inc.com|
|NYSEG Solutions, Inc.||1-800-567-6520||www.nysegsolutions.com|
|Pepco Energy Services, Inc. (NYC commercial and industrial only)||||
|Just Energy (GeoPower – Con Ed territory)||1-866-587-8674||www.justenergy.com|
|Just Energy (GeoGas – Con Ed, KeySpan, NFG territories)||1-866-587-8674||www.justenergy.com|
|Central Hudson Gas and Electric||1-800-527-2714||www.centralhudson.com|
|National Grid||1-800-642-4272 (upstate); 1-800-930-5003 (Long Island)||
|New York State Electric and Gas||1-800-356-9734||www.nyseg.com|
|Orange and Rockland||1-877-434-4100||www.oru.com|
|Rochester Gas and Electric||1-877-743-9463||www.rge.com|
|Long Island Power Authority (LIPA)||1-800-490-0025||www.lipower.org|
Dr. David Dubin
After Einstein’s death, his brain was preserved for future study. Scientists were naturally curious to see how the brain of this genius compared with the brain of a person of ordinary intelligence. Would there be an abundance of neurons (grey matter) or some unusual wiring of the neurons that distinguished his brain? When the brain was dissected, however, the only difference was that the number of cells that were not neurons (white matter) was dramatically increased. It is also true that, from an evolutionary point of view, as brains became larger and “smarter”, what increased was not the percentage of neurons but of white matter. What does this mean?
When most of us—scientists and lay people alike—imagine the brain, we think of neurons, those cells carrying information in the form of electrical impulses. Neurons are the ‘brains of the brain’, so to speak, and the rest of the cells were thought to be there only for support. But neurons account for only 15 percent of the brain, while these so-called ‘support’ cells occupy 85 percent.
The group name for white matter cells, glia—derived from the word ‘glue’—reflects their lowly status. First seen clearly by anatomists in the late 1800’s, glia were initially thought to be little more than structural support for neurons, because, like scaffolding, glial cells literally hold neurons in place.
It was later found that glia can speed transmission of electrical signals and also deliver energy to neurons and remove neuronal waste products. While appearing to be a little more interesting than originally thought, glia still seemed about as sexy as wire insulation, food delivery and waste management devices.
Unlike neurons, there is no electrical activity within glia to send messages and information. It was therefore assumed that glia were deaf and dumb, incapable of communicating with either neurons or other glia, and therefore not particularly compelling as a focus of research. A good analogue would be the under-appreciated dark matter in astronomy. Dark matter is undetectable because it emits no electromagnetic radiation as the matter in the “visible” universe does. The existence of dark matter was eventually inferred from its gravitational effects on visible matter. While we had believed that the visible universe is the universe, the ordinary matter of our visible universe accounts for less than five percent of the total; dark matter accounts for more than 20 percent.
Today, the pace of knowledge about glia has begun to accelerate, as outlined in an exciting new book, The Other Brain, by Dr. R. Douglas Fields[i] (the title refers to the 85 percent of the brain that is glial). Fields is a neuroscientist specializing in glial cell research, and the information in his book is so new that it isn’t found in standard medical textbooks. Two review articles in the May, 2010 issue of the research journal Nature Neuroscience attest to how much still needs to be learned, and how potentially revolutionary are the implications.
So, are glial cells really dumb as a doorknob? First, we are just now learning that glial cells do communicate, not through synapses but through “gap junctions”. These gap junctions are protein channels connecting one cell to another, like a spaceship docking at the mother station. Glia can pass messages among themselves by using calcium as a chemical messenger instead of sending an electric signal as neurons do. In his research, Fields showed that after a 15-second delay, changes in response to a neuronal firing were seen in the surrounding glial cells. As Fields puts it, glial cells are “listening in” on what neurons are doing, something virtually no one in neuroscience thought possible.
Contrary to all established dogma, it is now known that glia not only communicate directly from one cell to an adjacent one, but also with cells very far away. Glia are even able to “jump” over barriers like a ping pong ball going over a net. And whereas neurons transmit their signals in linear lines, like telephone wires, glia communicate, as Fields puts it, by “broadcasting signals widely, like cell phones.”
Glia are also critical to the growth of neurons. Neuron cells grown in the lab without accompanying glial cells were found to have many fewer synapses than neurons grown with glial cells. Glia seem to play a central role in the number of synapses a neuron develops.
Contrary to what scientists thought, glia also have neurotransmitters; in fact, the same ones that neurons do. And there are receptors for these neurotransmitters both inside and on the outer surface of glial cells. Glial cells have receptors for glutamate, the principal stimulating neurotransmitter in the cortex, and GABA, which acts as a “brake” to calm down neurons. In other words, glia can excite or depress neurons and stimulate or calm the brain, just like medications.
And, unlike neurons, glia can move. They have enormous cellular “fingers” like the elastic Mr. Fantastic of comic book fame, and can move between and on neurons. This constantly changes the circuitry of the brain. These glial fingers also form around synapses. They secrete substances that remodel tissue or stimulate neuron growth during development and repair of the brain making it likely that they function in a similar role during learning in the healthy brain.
Glia repair injury, defend against disease, nurse neurons back to health and act as guide dogs for the re-growth of injured nerve fibers. Glia detect and react to bacteria and viruses, “gobble up” pathogens and release toxic chemicals to kill bacteria. And new research suggests that immature glial cells can act like stem cells and mature glia can stimulate stem cells dormant in the adult brain to form replacement neurons and glia. This could have implications for repair of the nervous system, including new possibilities for treating spinal cord injuries.
This is about as far removed from mere insulation, food delivery and waste management services as can be imagined. Glia are a lot smarter than we thought they were. A 2005 study shows a correlation between organization of fibers made of glial cells and IQ. Finding a greater proportion of glial cells in Einstein’s brain is not so surprising after all.
We still know very little about glia—even the basics such as how many kinds of glial cells there are and what they look like in detail. Their discovery, however, broadens our appreciation of the complexity of the brain. The brain, with its 100 billion neurons and an average of 10,000 synapses per neuron, has more potential connections than the atoms of our galaxy!
We don’t know yet if diet, exercise, supplements and other factors affect glial cells. However, the implications for health and illness—seizures, infections, cancer, addictions, mental illness and diseases such as Parkinson’s and multiple sclerosis may be far-reaching and profound.
As Fields says near the end of his book, “Here are cells that can build the brain of a fetus, direct the connection of its growing axons to wire up the nervous system, repair it after it is injured, sense impulses crackling through axons and hear synapses speaking, control the signals neurons use to communicate with one another at synapses, provide the energy source and substrates for neurotransmitters to neurons, couple large areas of synapses and neurons into functional groups, integrate and propagate the information they receive from neurons through their own private network, release neurotoxic or neuroprotective factors, plug and unplug synapses, move themselves in and out of the synaptic cleft, give birth to new neurons, communicate with the vascular and immune systems, insulate the neuronal lines of communication, and control the speed of impulse traffic through them. And some people ask, ‘Could these cells have anything to do with higher brain function?’ How could they possibly not?”
[i] FIELDS, R. Douglas, The Other Brain, Simon & Schuster, December 2009, 384 p.
People with obsessive-compulsive disorder, or OCD, have recurrent thoughts and behaviors that can be crippling. What follows is a discussion of the biology of the disorder and several aspects of treatment.
Obsessive compulsive disorder is not a single disorder; rather, it is a cluster of conditions. In OCD, sufferers might obsess and be anxious and compulsive about hoarding, cleaning, ordering and checking. Patients can also exhibit body dysmorphic disorder (BDD), where they imagine possessing a defect in physical appearance. Other diseases that overlap with OCD include Tourette’s syndrome and hypochondria. OCD also has a genetic component and runs in families; relatives of someone with OCD are 8 times more likely to present symptoms.
The areas of the brain that appear involved with OCD are the orbito-frontal cortex (OFC), a center for decision-making, and the thalamus, which filters and relays information. In these brain regions, the neurotransmitter glutamate is responsible for neuronal signaling. It is thought that deficits in glutamate production and function might contribute to OCD and other counter-productive behavior, including making decisions based on inappropriately perceived danger.
Obsessive Compulsive Disorder Treatment
The neurotransmitter serotonin may play an important role in whether someone gets obsessive compulsive disorder. Researchers have found a defect in the gene that makes a protein that “mops up” serotonin from between neurons. When there’s too much of this protein there is not enough serotonin, and that’s what is found in some with OCD. This is why Serotonin Re-uptake Inhibitors (SRIs) such as Prozac, which makes serotonin more available to the brain, are perhaps the most popular OCD treatment.
Another commonly used OCD treatment is exposure and response prevention (ERP), where the patient is exposed to stimuli that trigger the repetitive behavior but do not allow the patient to actually perform the compulsive behavior. Eventually the patient can learn that nothing bad happens when they don’t act out their compulsion.
Unfortunately, ERP is a stressful treatment for patients to endure. And significant numbers of patients drop out of treatment. Various drugs, such as the SRIs, are now being used in conjunction with ERP.
Anxiety usually is significant part of obsessive compulsive disorder. While anxiety does not appear to be the actual cause of OCD, anxiety can drive persistent thoughts and behaviors. A reduction in anxiety can be important in the treatment of OCD. Various modalities for treating anxiety include medication, neurofeedback (both traditional and LENS Neurofeedback), and/or behavioral approaches.
When anxiety is successfully brought under control, there are not only fewer obsessive thoughts, but those obsessive thoughts that do persist become less prominent. Instead of being the dominant focus, compulsive thoughts become background music as opposed to a loud concert. These thoughts demand less attention, and this makes it easier to control compulsive behavior.
Earth System Science Partnership (ESSP)
The ESSP is a partnership for the integrated study of the Earth System, the ways that it is changing, and the implications for global and regional sustainability.
The urgency of the challenge is great: In the present era, global environmental changes are both accelerating and moving the earth system into a state with no analogue in previous history.
To learn more about the ESSP, click on the links to access the Strategy Paper, brochure and a video presentation by the Chair of the ESSP Scientific Committee, Prof. Dr. Rik Leemans of Wageningen University, The Netherlands.
The Earth System is the unified set of physical, chemical, biological and social components, processes and interactions that together determine the state and dynamics of Planet Earth, including its biota and its human occupants.
Earth System Science is the study of the Earth System, with an emphasis on observing, understanding and predicting global environmental changes involving interactions between land, atmosphere, water, ice, biosphere, societies, technologies and economies.
ESSP Transitions into 'Future Earth' (31/12/2012)
On 31st December 2012, the ESSP will close and transition into 'Future Earth' as it develops over the next few years. During this period, the four global environmental change research programmes (DIVERSITAS, IGBP, IHDP, WCRP) will continue close collaboration with each other. 'Future Earth' is currently being planned as a ten-year international research initiative for global sustainability (www.icsu.org/future-earth) that will build on decades of scientific excellence of the four GEC research programmes and their scientific partnership.
Click here to read more.
Global Carbon Budget 2012
Carbon dioxide emissions from fossil fuel burning and cement production increased by 3 percent in 2011, with a total of 34.7 billion tonnes of carbon dioxide emitted to the atmosphere. These emissions were the highest in human history and 54 percent higher than in 1990 (the Kyoto Protocol reference year). In 2011, coal burning was responsible for 43 percent of the total emissions, oil for 34 percent, gas for 18 percent and cement for 5 percent.
For the complete 2012 carbon budget and trends, access the Global Carbon Project website.
GWSP International Conference - CALL for ABSTRACTS
The GWSP Conference on "Water in the Anthropocene: Challenges for Science and Governance" will convene in Bonn, Germany, 21 - 24 May 2014.
The focus of the conference is to address the global dimensions of water system changes due to anthropogenic as well as natural influences. The Conference will provide a platform to present global and regional perspectives on the responses of water management to global change in order to address issues such as variability in supply, increasing demands for water, environmental flows, and land use change. The Conference will help build links between science and policy and practice in the area of water resources management and governance, related institutional and technological innovations and identify ways that research can support policy and practice in the field of sustainable freshwater management.
Learn more about the Conference here.
Global Carbon Project (GCP) Employment Opportunity - Executive Director
The Global Carbon Project (GCP) is seeking to employ a highly motivated and independent person as Executive Director of the International Project Office (IPO) in Tsukuba, Japan, located at the Centre for Global Environmental Research at the National Institute for Environmental Studies (NIES). The successful candidate will work with the GCP Scientific Steering Committee (SSC) and other GCP offices to implement the science framework of the GCP. The GCP is seeking a person with excellent working knowledge of the policy-relevant objectives of the GCP and a keen interest in devising methods to integrate social and policy sciences into the understanding of the carbon-climate system as a coupled human/natural system. Read More.
Inclusive Wealth Report
The International Human Dimensions Programme on Global Environmental Change (IHDP) announces the launch of the Inclusive Wealth Report 2012 (IWR 2012) at the Rio +20 Conference in Brazil. The report presents a framework that offers a long-term perspective on human well-being and sustainability, based on a comprehensive analysis of nations' productive base and their link to economic development. The IWR 2012 was developed on the notion that current economic indicators such as Gross Domestic Product (GDP) and the Human Development Index (HDI) are insufficient, as they fail to reflect the state of natural resources or ecological conditions, and focus exclusively on the short-term, without indicating whether national policies are sustainable.
Future Earth: Global platform for sustainability research launched at Rio +20
Rio de Janeiro, Brazil (14 June 2012) - An alliance of international partners from global science, research funding and UN bodies launched a new 10-year initiative on global environmental change research for sustainability at the Forum on Science and Technology and Innovation for Sustainable Development. Future Earth - research for global sustainability, will provide a cutting-edge platform to coordinate scientific research which is designed and produced in partnership with governments, business and, more broadly, society. More details.
APN's 2012 Call for Proposals
The Asia-Pacific Network for Global Change Research (APN) announces the call for proposals for funding from April 2013. The proposals can be submitted under two separate programmes: regional global change research and scientific capacity development. More details.
State of the Planet Declaration
Planet Under Pressure 2012 was the largest gathering of global change scientists leading up to the United Nations Conference on Sustainable Development (Rio +20) with over 3,000 delegates at the conference venue and over 3,500 that attended virtually via live web streaming. The plenary sessions and the Daily Planet news show continue to draw audiences worldwide as they are available On Demand. An additional number of organisations, including 150 Science and Technology Centres worldwide streamed the plenary sessions at Planet Under Pressure-related events reaching an additional 12,000 viewers.
The first State of the Planet Declaration was issued at the conference.
Global Carbon Budget 2010
Global carbon dioxide emissions increased by a record 5.9 per cent in 2010 following the dampening effect of the 2008-2009 Global Financial Crisis (GFC), according to scientists working with the Global Carbon Project (GCP). The GCP annual analysis reports that the impact of the GFC on emissions has been short-lived owing to strong emissions growth in emerging economies and a return to emissions growth in developed economies.
Planet Under Pressure 2012 Debategraph
Debategraph and Planet Under Pressure Conference participants and organisers are collaborating to distill the main arguments and evidence, risks and policy options facing humanity into a dynamic knowledge map to help convey and inform the global deliberation at United Nations Rio +20 and beyond.
Join the debate! (http://debategraph.org/planet)
Integrated Global Change Research
The ESSP and partners - the German National Committee on Global Change Research (NKGCF), International Council for Science (ICSU) and the International Social Science Council (ISSC) is conducting a new study on 'Integrated Global Change Research: Co-designing knowledge across scientific fields, national borders and user groups'. An international workshop (funded by the German Research Foundation) convened in Berlin, 7 - 9 March 2012, designed to elucidate the dimensions of integration, to identify and analyse best practice examples, to exchange ideas about new concepts of integration, to discuss emerging challenges for science, and to begin discussions about balancing academic research and stakeholder involvement.
The Future of the World's Climate
The Future of the World's Climate (edited by Ann Henderson-Sellers and Kendal McGuffie) offers a state-of-the-art overview - based on the latest climate science modelling data and projections available - of our understanding of future climates. The book is dedicated to Stephen H Schneider, a world leader in climate interpretation and communication. The Future of the World's Climate summarizes our current understanding of climatic prediction and examines how that understanding depends on a keen grasp of integrated Earth system models and human interaction with climate. This book brings climate science up to date beyond the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report. More details.
Social Scientists Call for More Research on Human Dimensions of Global Change
Scientists across all disciplines share great concern that our planet is in the process of crossing dangerous biophysical tipping points. The results of a new large-scale global survey among 1,276 scholars from the social sciences and the humanities demonstrates that the human dimensions of the problem are equally important but severely under-addressed.
The survey conducted by the International Human Dimensions Programme on Global Environmental Change (IHDP-UNU) Secretariat in collaboration with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the International Social Science Council (ISSC), identifies the following as highest research priority areas:
1) Equity/equality and wealth/resource distribution;
2) Policy, political systems/governance, and political economy;
3) Economic systems, economic costs and incentives;
4) Globalization, social and cultural transitions.
Food Security and Global Environmental Change
Food security and global environmental change, a synthesis book edited by John Ingram, Polly Ericksen and Diana Liverman of GECAFS has just been published. The book provides a major, accessible synthesis of the current state of knowledge and thinking on the relationship between GEC and food security. Click here for further information.
GECAFS is featured in the latest UNESCO-SCOPE-UNEP Policy Brief - No. 12 entitled Global Environmental Change and Food Security. The brief reviews current knowledge, highlights trends and controversies, and is a useful reference for policy planners, decision makers and stakeholders in the community.
GWSP Digital Water Atlas
The Global Water System Project (GWSP) has launched its Digital Water Atlas. The purpose and intent of the Digital Water Atlas is to describe the basic elements of the Global Water System, the interlinkages of the elements and changes in the state of the Global Water System by creating a consistent set of annotated maps. The project will especially promote the collection, analysis and consideration of social science data on the global basis. Click here to access the GWSP Digital Water Atlas.
The ESSP office was carbon neutral in its office operations and travel in 2011. The ESSP supported the Gujarat wind project in India. More details.
The Global Carbon Project has published an ESSP commissioned report, "carbon reductions and offsets" with a number of recommendations for individuals and institutions who want to participate in this voluntary market. Click here to learn more and to download the report from the GCP website.
The ESSP is a joint initiative of four global environmental change programmes: | <urn:uuid:120b0d29-fee9-445b-a798-b67b2cbeb131> | CC-MAIN-2013-20 | http://www.essp.org/index.php?id=10&L=0%252F%252Fassets%252Fsnipp%20%E2%80%A6%2F%2Fassets%2Fsnippets%2Freflect%2Fsnippet.reflect.php%3Freflect_base%3D | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.894252 | 2,191 | 2.984375 | 3 |
Tips for helping children behave
(especially in public)
I thought it was time to write down some of the tips I’ve acquired that concern children and behavior, as I’ve learned what I have about the brain and how it works. What I learned is that while you are thinking about something, you feel as you do about it. If you want to change how you feel, changing your thoughts helps us do that immediately. Just as I say, “ Imagine the tip of your left pinky finger… where the nail meets the skin…” Do you think of it? Did you think of your pinky because I mentioned it? You did that because that is how quickly our brain responds.
Tip 1. Keep a picture or small item (from a special time or event) that represents something special to the child with you.
Yes. Helping a child regain composure during a meltdown may be painful and embarrassing, not to mention the discomfort your child perceives at that time. There seems to be great debate over how to handle a situation like a meltdown like let them ride it out or leave immediately or yell or whatever… if you have tried these techniques I’d like you to pause and ask yourself how helpful your response was in the moment.
Keeping a picture of a grandparent or a party or a vacation or an item or toy that they really want for their birthday or special holiday… (Big breath…) you get the picture… it helps take your child’s thoughts to a more pleasant place. You can make this even more effective by asking them questions about it, even if you know the answers ; )
Ask questions like, “Hey remember this? Where/who is this again? Do you remember that thing in the picture…? What do you think happened right before this? What is your favorite thing about this? Can you make up a sentence/song/rhyme/or draw a picture about the picture? Can you spell something in the picture?
These questions will take their focus to something better. Once they’ve calmed down and appear to be past the issue, ask them what happened and explain why that behavior isn’t helpful or necessary.
Tip 2. A simple game of ‘I spy’ with extra OMG
This tip is great if you find yourself out and are caught empty handed without anything handy to keep your kids busy. Look around you and spot something ANYTHING that either you don’t see often or haven’t seen in awhile (not a person) and get excited while smiling and saying, “OMG you are never going to find what I am looking at! It is soooo _____________” This usually leads to a game of I spy which can be changed as your child develops I spy colors… I spy words that start with ___... I spy numbers …
Tip 3. Text Message back and forth
I noticed a long time ago how technology has changed the interpersonal dynamic. Being more of a “find the solution girl”, I established some rules like no cell phone at the table for meals. Please know my daughter was 4 when I did this knowing that when she is grown she will have a phone and I probably will want to use our dining times to connect. I figured our meals are relatively short and whatever needed me could wait the 30 minutes. Also, it was a perfect opportunity to demonstrate the behavior I hope to create.
This is what led me to texting with my child. I remember when she was beginning to read that I looked for every practical experience for her to do so… signs, menus, airport terminals… everything. But sometimes there is nothing to read and nowhere to go as you get stuck waiting in line or for an appointment or for whatever presents a time where you can text and you need to keep kids busy and have fun… text them. Hand them your phone and say here… and let them read your message for them. Then let them respond to you via text. Hand the phone back and forth…after a few texts you may be happy to learn what you do… and you will both have fun while doing it.
Tip 4. Start them on a story.
This is one I use all the time and changes just as much as I use it. I simply start a story using something that I see for example… “Once there was this really cool girl who sat down to write some really helpful stuff for parents…” then they have to look around and continue the story using something they see. Example “But that girl decided to step away from her desk and go play with the fairies living in her backyard…”
Start stories about the cars you see driving or the foods you see in the market or the stars in the sky… about the waves in the ocean and the mermaids that live deep below.
When you run out of time for your part and you’ve got them interested, tell them to draw a picture of it or write a story about it.
How smart and well behaved your children are now. Thanks for reading. You do make a difference in your child’s life. Much Love. | <urn:uuid:f9d8d1b0-2683-4967-8c9a-42f8d9002ae5> | CC-MAIN-2013-20 | http://www.serioushypnotherapy.com/blog/2012/05/10/Tips-for-helping-children-behave-especially-in-public.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.966947 | 1,065 | 2.734375 | 3 |
Kidney Disease of Diabetes
On this page:
- The Burden of Kidney Failure
- The Course of Kidney Disease
- Diagnosis of CKD
- Effects of High Blood Pressure
- Preventing and Slowing Kidney Disease
- Dialysis and Transplantation
- Good Care Makes a Difference
- Points to Remember
- Hope through Research
- For More Information
The Burden of Kidney Failure
Each year in the United States, more than 100,000 people are diagnosed with kidney failure, a serious condition in which the kidneys fail to rid the body of wastes.1 Kidney failure is the final stage of chronic kidney disease (CKD).
Diabetes is the most common cause of kidney failure, accounting for nearly 44 percent of new cases.1 Even when diabetes is controlled, the disease can lead to CKD and kidney failure. Most people with diabetes do not develop CKD that is severe enough to progress to kidney failure. Nearly 24 million people in the United States have diabetes, 2 and nearly 180,000 people are living with kidney failure as a result of diabetes.1
People with kidney failure undergo either dialysis, an artificial blood-cleaning process, or transplantation to receive a healthy kidney from a donor. Most U.S. citizens who develop kidney failure are eligible for federally funded care. In 2005, care for patients with kidney failure cost the United States nearly $32 billion.1
African Americans, American Indians, and Hispanics/Latinos develop diabetes, CKD, and kidney failure at rates higher than Caucasians. Scientists have not been able to explain these higher rates. Nor can they explain fully the interplay of factors leading to kidney disease of diabetes—factors including heredity, diet, and other medical conditions, such as high blood pressure. They have found that high blood pressure and high levels of blood glucose increase the risk that a person with diabetes will progress to kidney failure.
1United States Renal Data System. USRDS 2007 Annual Data Report. Bethesda, MD: National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, U.S. Department of Health and Human Services; 2007.
2National Institute of Diabetes and Digestive and Kidney Diseases. National Diabetes Statistics, 2007. Bethesda, MD: National Institutes of Health, U.S. Department of Health and Human Services, 2008.
The Course of Kidney Disease
Diabetic kidney disease takes many years to develop. In some people, the filtering function of the kidneys is actually higher than normal in the first few years of their diabetes.
Over several years, people who are developing kidney disease will have small amounts of the blood protein albumin begin to leak into their urine. This first stage of CKD is called microalbuminuria. The kidney's filtration function usually remains normal during this period.
As the disease progresses, more albumin leaks into the urine. This stage may be called macroalbuminuria or proteinuria. As the amount of albumin in the urine increases, the kidneys' filtering function usually begins to drop. The body retains various wastes as filtration falls. As kidney damage develops, blood pressure often rises as well.
Overall, kidney damage rarely occurs in the first 10 years of diabetes, and usually 15 to 25 years will pass before kidney failure occurs. For people who live with diabetes for more than 25 years without any signs of kidney failure, the risk of ever developing it decreases.
Diagnosis of CKD
People with diabetes should be screened regularly for kidney disease. The two key markers for kidney disease are eGFR and urine albumin.
eGFR. eGFR stands for estimated glomerular filtration rate. Each kidney contains about 1 million tiny filters made up of blood vessels. These filters are called glomeruli. Kidney function can be checked by estimating how much blood the glomeruli filter in a minute. The calculation of eGFR is based on the amount of creatinine, a waste product, found in a blood sample. As the level of creatinine goes up, the eGFR goes down.
Kidney disease is present when eGFR is less than 60 milliliters per minute.
The American Diabetes Association (ADA) and the National Institutes of Health (NIH) recommend that eGFR be calculated from serum creatinine at least once a year in all people with diabetes.
Urine albumin. Urine albumin is measured by comparing the amount of albumin to the amount of creatinine in a single urine sample. When the kidneys are healthy, the urine will contain large amounts of creatinine but almost no albumin. Even a small increase in the ratio of albumin to creatinine is a sign of kidney damage.
Kidney disease is present when urine contains more than 30 milligrams of albumin per gram of creatinine, with or without decreased eGFR.
The ADA and the NIH recommend annual assessment of urine albumin excretion to assess kidney damage in all people with type 2 diabetes and people who have had type 1 diabetes for 5 years or more.
If kidney disease is detected, it should be addressed as part of a comprehensive approach to the treatment of diabetes.
Effects of High Blood Pressure
High blood pressure, or hypertension, is a major factor in the development of kidney problems in people with diabetes. Both a family history of hypertension and the presence of hypertension appear to increase chances of developing kidney disease. Hypertension also accelerates the progress of kidney disease when it already exists.
Blood pressure is recorded using two numbers. The first number is called the systolic pressure, and it represents the pressure in the arteries as the heart beats. The second number is called the diastolic pressure, and it represents the pressure between heartbeats. In the past, hypertension was defined as blood pressure higher than 140/90, said as "140 over 90."
The ADA and the National Heart, Lung, and Blood Institute recommend that people with diabetes keep their blood pressure below 130/80.
Hypertension can be seen not only as a cause of kidney disease but also as a result of damage created by the disease. As kidney disease progresses, physical changes in the kidneys lead to increased blood pressure. Therefore, a dangerous spiral, involving rising blood pressure and factors that raise blood pressure, occurs. Early detection and treatment of even mild hypertension are essential for people with diabetes.
Preventing and Slowing Kidney Disease
Blood Pressure Medicines
Scientists have made great progress in developing methods that slow the onset and progression of kidney disease in people with diabetes. Drugs used to lower blood pressure can slow the progression of kidney disease significantly. Two types of drugs, angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs), have proven effective in slowing the progression of kidney disease. Many people require two or more drugs to control their blood pressure. In addition to an ACE inhibitor or an ARB, a diuretic can also be useful. Beta blockers, calcium channel blockers, and other blood pressure drugs may also be needed.
An example of an effective ACE inhibitor is lisinopril (Prinivil, Zestril), which doctors commonly prescribe for treating kidney disease of diabetes. The benefits of lisinopril extend beyond its ability to lower blood pressure: it may directly protect the kidneys' glomeruli. ACE inhibitors have lowered proteinuria and slowed deterioration even in people with diabetes who did not have high blood pressure.
An example of an effective ARB is losartan (Cozaar), which has also been shown to protect kidney function and lower the risk of cardiovascular events.
Any medicine that helps patients achieve a blood pressure target of 130/80 or lower provides benefits. Patients with even mild hypertension or persistent microalbuminuria should consult a health care provider about the use of antihypertensive medicines.
In people with diabetes, excessive consumption of protein may be harmful. Experts recommend that people with kidney disease of diabetes consume the recommended dietary allowance for protein, but avoid high-protein diets. For people with greatly reduced kidney function, a diet containing reduced amounts of protein may help delay the onset of kidney failure. Anyone following a reduced-protein diet should work with a dietitian to ensure adequate nutrition.
Intensive Management of Blood Glucose
Antihypertensive drugs and low-protein diets can slow CKD. A third treatment, known as intensive management of blood glucose or glycemic control, has shown great promise for people with diabetes, especially for those in the early stages of CKD.
The human body normally converts food to glucose, the simple sugar that is the main source of energy for the body's cells. To enter cells, glucose needs the help of insulin, a hormone produced by the pancreas. When a person does not make enough insulin, or the body does not respond to the insulin that is present, the body cannot process glucose, and it builds up in the bloodstream. High levels of glucose in the blood lead to a diagnosis of diabetes.
Intensive management of blood glucose is a treatment regimen that aims to keep blood glucose levels close to normal. The regimen includes testing blood glucose frequently, administering insulin throughout the day on the basis of food intake and physical activity, following a diet and activity plan, and consulting a health care team regularly. Some people use an insulin pump to supply insulin throughout the day.
A number of studies have pointed to the beneficial effects of intensive management of blood glucose. In the Diabetes Control and Complications Trial supported by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), researchers found a 50 percent decrease in both development and progression of early diabetic kidney disease in participants who followed an intensive regimen for controlling blood glucose levels. The intensively managed patients had average blood glucose levels of 150 milligrams per deciliter-about 80 milligrams per deciliter lower than the levels observed in the conventionally managed patients. The United Kingdom Prospective Diabetes Study, conducted from 1976 to 1997, showed conclusively that, in people with improved blood glucose control, the risk of early kidney disease was reduced by a third. Additional studies conducted over the past decades have clearly established that any program resulting in sustained lowering of blood glucose levels will be beneficial to patients in the early stages of CKD.
Dialysis and Transplantation
When people with diabetes experience kidney failure, they must undergo either dialysis or a kidney transplant. As recently as the 1970s, medical experts commonly excluded people with diabetes from dialysis and transplantation, in part because the experts felt damage caused by diabetes would offset benefits of the treatments. Today, because of better control of diabetes and improved rates of survival following treatment, doctors do not hesitate to offer dialysis and kidney transplantation to people with diabetes.
Currently, the survival of kidneys transplanted into people with diabetes is about the same as the survival of transplants in people without diabetes. Dialysis for people with diabetes also works well in the short run. Even so, people with diabetes who receive transplants or dialysis experience higher morbidity and mortality because of coexisting complications of diabetes-such as damage to the heart, eyes, and nerves.
Good Care Makes a Difference
People with diabetes should
- have their health care provider measure their A1C level at least twice a year. The test provides a weighted average of their blood glucose level for the previous 3 months. They should aim to keep it at less than 7 percent.
- work with their health care provider regarding insulin injections, medicines, meal planning, physical activity, and blood glucose monitoring.
- have their blood pressure checked several times a year. If blood pressure is high, they should follow their health care provider's plan for keeping it near normal levels. They should aim to keep it at less than 130/80.
- ask their health care provider whether they might benefit from taking an ACE inhibitor or ARB.
- ask their health care provider to measure their eGFR at least once a year to learn how well their kidneys are working.
- ask their health care provider to measure the amount of protein in their urine at least once a year to check for kidney damage.
- ask their health care provider whether they should reduce the amount of protein in their diet and ask for a referral to see a registered dietitian to help with meal planning.
Points to Remember
- Diabetes is the leading cause of chronic kidney disease (CKD) and kidney failure in the United States.
- People with diabetes should be screened regularly for kidney disease. The two key markers for kidney disease are estimated glomerular filtration rate (eGFR) and urine albumin.
- Drugs used to lower blood pressure can slow the progression of kidney disease significantly. Two types of drugs, angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs), have proven effective in slowing the progression of kidney disease.
- In people with diabetes, excessive consumption of protein may be harmful.
- Intensive management of blood glucose has shown great promise for people with diabetes, especially for those in the early stages of CKD.
Hope through Research
The number of people with diabetes is growing. As a result, the number of people with kidney failure caused by diabetes is also growing. Some experts predict that diabetes soon might account for half the cases of kidney failure. In light of the increasing illness and death related to diabetes and kidney failure, patients, researchers, and health care professionals will continue to benefit by addressing the relationship between the two diseases. The NIDDK is a leader in supporting research in this area.
Several areas of research supported by the NIDDK hold great potential. Discovery of ways to predict who will develop kidney disease may lead to greater prevention, as people with diabetes who learn they are at risk institute strategies such as intensive management of blood glucose and blood pressure control.
Participants in clinical trials can play a more active role in their own health care, gain access to new research treatments before they are widely available, and help others by contributing to medical research. For information about current studies, visit www.ClinicalTrials.gov.
For More Information
National Diabetes Information Clearinghouse
1 Information Way
Bethesda, MD 20892-3560
National Kidney Foundation
30 East 33rd Street
New York, NY 10016
Phone: 1-800-622-9010 or 212-889-2210
National Kidney and Urologic Diseases Information Clearinghouse
The National Kidney and Urologic Diseases Information Clearinghouse (NKUDIC) is a service of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). The NIDDK is part of the National Institutes of Health of the U.S. Department of Health and Human Services. Established in 1987, the Clearinghouse provides information about diseases of the kidneys and urologic system to people with kidney and urologic disorders and to their families, health care professionals, and the public. The NKUDIC answers inquiries, develops and distributes publications, and works closely with professional and patient organizations and Government agencies to coordinate resources about kidney and urologic diseases.
Publications produced by the Clearinghouse are carefully reviewed by both NIDDK scientists and outside experts.
This publication is not copyrighted. The Clearinghouse encourages users of this publication to duplicate and distribute as many copies as desired.
NIH Publication No. 08-3925
Page last updated: September 2, 2010 | <urn:uuid:36a053bc-082c-49ae-841c-dc1010205aef> | CC-MAIN-2013-20 | http://www.kidney.niddk.nih.gov/KUDiseases/pubs/kdd/index.aspx?control=Alternate | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.932766 | 3,214 | 3.28125 | 3 |
Since many ancestors of Americans were foreign born, naturalization records
are another a source of genealogical information that you might want to
investigate. Naturalization is the process through which a foreign born
person becomes a citizen of the United States and is eligible to vote.
Not all immigrants became citizens as it is not required. Many obtained
their citizenship because of pride in their new country and a desire to
participate in democratic elections, a privilege perhaps not accorded
to them in their country of birth. Others became citizens for more materialistic
reasons, such as the right to acquire free land through homesteading.
During times of war, there was often hostility towards people from the
enemy country and immigrants may have obtained citizenship to show their
loyalty to the U.S., especially if they had children serving in the U.S.
Was Your Ancestor Naturalized?
Before beginning a search for a naturalization record, it may save hours
of futile research if you try to determine if there is evidence that the
individual you are researching did become a citizen. There are several
ways to do this:
Even with the above information, keep in mind the following caveats:
- Location of Birth Was the person foreign born? Usually
there's no need to be naturalized if born in the U.S.
- Census The 1900 and 1910 censuses ask if a person is
naturalized and 1920 further asks the year of naturalization. Indirectly,
the 1820 and 1830 census provide a clue with the question "number of
foreigners in each household not naturalized."
- Homesteading Land The person had to have initiated the
naturalization process to be eligible for free land through homesteading.
- Voter Registration Lists Is he/she listed as a voter?
- Occupation Did this person hold a job that required
- Not all foreign born individuals applied for citizenship and a child
born abroad is still a U.S. citizen if his/her parents are. During much
of our history, the wife and children automatically became citizens
when the husband/father took out citizenship papers.
- Naturalization was one of many census questions. The person who provided
the answer may not have known in fact if someone else had been naturalized.
An individual may have said yes because he felt it was the right thing
to say or he intended to begin the process.
- A Declaration of Intent, not final papers, was all that was required
- Not everyone who became a citizen registered to vote. Also, some states
allowed people who had filed a Declaration of Intent to vote even if
they had not received their final papers.
What Is the Procedure?
By now you might be getting the idea that naturalization documents are
not necessarily as easy to use as some other records, such as the census.
Generally, for most of our history there are two rules that apply to naturalizations:
- It was a legal process handled through the courts.
- It was usually a two-part procedure, the first being a Declaration
of Intent indicating that the person intended to become a citizen (voluntary
after 1952). This may have included as part of the document or as a
separate certificate or record information on the individual's date
and place of arrival into the United States. After a required period
of residency (five years, with some exceptions) the individual would
then file a Petition for Naturalization and, if granted, would receive
a Certificate of Naturalization. Both or either the Declaration and/or
the Petition may contain valuable genealogical information.
The procedures and requirements differed greatly depending on the location
and the time period. The first important information the researcher needs
to establish is whether the naturalization was before 1906 or afterwards.
In 1906 the naturalization process was simplified and taken over by the
federal government. It is much easier to find out where to look and what
to expect if it took place after 1906.
Where are Records Located?
Prior to 1906, naturalization could take place in any court having common
law jurisdiction. The court could be federal, state, or local and be called
by many names circuit, supreme, civil, equity, district, common
pleas, chancery, superior. In some cases a municipal, police, criminal,
or probate court did not actually have the right to handle naturalization
but they issued certificates anyway. Prior to 1905, over 5,000 courts
had been handling naturalization. By 1908 that number was reduced to just
over 2,000 courts and the Department of Labor began issuing A Directory
of Courts Having Jurisdiction in Naturalization Proceedings. This
directory, available on microfilm through the Family History Library,
can help you determine which court your ancestor may have used. Naturalizations
can now be handled in either federal or local courts. Since 1929, most
naturalizations have been at federal courts, but earlier records are more
likely to be at a local court because it was closer to the individual.
Prior to 1906, the biggest problem confronting a researcher is where
to find the record. The two procedures did not have to take place in the
same court so the immigrant could have filed a Declaration soon after
his arrival in New York, or perhaps he lived in Ohio for a while and filed
his Declaration there, hoping to qualify for free land. Then, after settling
on land in South Dakota, he may have submitted his Petition to a local
court. The Family History Library has microfilm copies of many pre-1930
records. If your ancestor lived in an urban area, there are many rolls
of films relating to Chicago (1871-1930), New York (1792-1906), Philadelphia
(1793-1911) and New England (1791-1906).
The good news is that copies of all naturalization records from 1906
to 1956 are at the Immigration and Naturalization Service in Washington,
DC. This does not guarantee success though. Since you are dealing with
a government agency, be prepared for a long wait. I had a copy of one
certificate of citizenship which gave the court, location, date, and name
of the immigrant, but the INS was never able to locate the file. You may
also be able to obtain copies of the file from the court, but some courts
will refer you to INS in Washington.
Naturalizations after 1956 are kept at the local INS office. Some records
are being transferred to National Archives branches or state archives.
See their page "Naturalization
Records" for information about records at the National Archives.
What Do the Records Contain?
In 1906, the process was standardized and uniform forms were issued.
The forms have been revised periodically, but generally contain at least
the following information:
- Declaration of Intention The court date and location;
the individual's name, age, occupation, personal description, birth
date and location, and residence; their date and vessel of arrival and
last foreign residence. From 1929 to 1941, it asked for the spouse's
name, marriage date and place, and birth information, plus names, dates,
and places of birth and residence of each child. It also includes a
picture of the applicant. After 1941, it requests the spouse's name
(no details on birth) and doesn't mention children. After 1929, the
last foreign residence is omitted. A separate Certificate of Arrival
giving details of arrival was required for arrivals after 1906, with
- Petition for Naturalization The court date and location;
name, residence, occupation, birth date and place; immigration departure
date and place; U.S. arrival port, date, and ship; date and place of
Declaration of Intention; spouse's name, birth date and place; children's
names, dates and places of birth; residence, witnesses, and oath of
allegiance. From 1929 to 1941 it also asked race, marriage date and
place, date of spouse's entry into the U.S. and naturalization information,
last foreign residence, and name used on arrival. After 1941, a personal
description was added, as well as details of any trips longer than six
months out of the U.S.
- The actual certificate This is the document given to
the new citizen and the one a researcher is most likely to find in old
family papers. It contains little information: court, date. and name
of new citizen. It may contain other information, but the Declaration
and Petition are the papers the researcher should try to locate.
Prior to 1906
There is no predicting what you might find in naturalization papers prior
to 1906. Until 1828, the immigrant had to report to a court to register.
This report was supposed to contain information on the birthplace, age,
and nationality. These alien registry books were separate volumes in many
areas, especially in the northeast. The registry may be found in later
records combined with the Declaration. After 1911, the immigrant was issued
a certificate of arrival.
A Declaration of Intention was usually required, again with exceptions.
It may contain little more than the name of the immigrant, but may also
have some of the details incorporated in the post-1906 form described
above. These are also called "first papers."
Early Petitions are part of the court record and may even be recorded
in separate ledgers called "second papers" or "final record." Information
varies greatly. Certificates of Naturalization were given to the new citizen.
The information was recorded but duplicates of the certificate were not
kept on file.
Spouses and children may derive their citizenship from their husband/father
and not have to go through the procedure themselves. Up until 1922, a
foreign born woman who married an American citizen became naturalized
upon marriage or, if her husband was foreign born, when he became a citizen.
No separate filings were required. Prior to 1906, they usually were not
even mentioned in the husband's petition.
After 1922, a woman had to be naturalized on her own. However, from 1907
to 1922, if a woman married an unnaturalized alien, she took his citizenship.
This created one particularly bizarre situation for a woman who was born
in Poland in September 1901. In November of that same year she came to
the U.S. with her parents. Her father obtained citizenship in 1906 and
she automatically became a citizen as well. In 1918 she married a man
who had immigrated from Russia in 1913, but was not yet a citizen. She
lost her citizenship because of this rule. In November 1922, her husband
became a citizen. This did not help her because on September 22, 1922,
the law was changed to say that any alien woman who married an American
does not become a U.S. citizen automatically. She applied on her own and
again became a U.S. citizen in 1942!
Children under the age of 21 automatically become citizens by the naturalization
of a parent. However, there are many exceptions to this law regarding
residence, whether or not a Declaration is required, what happens if the
parent dies or becomes insane, adopted children, illegitimate children,
step-children, and children born abroad.
Obtaining citizenship generally has been made easier for aliens who served
in the U.S. military. Filing of the Declaration of Intention was often
not required and the period of residency eliminated or reduced. However,
in 1894 the law was changed and during times of peace no one (except Indians)
could serve in the military unless he or she was a U.S. citizen or had
filed a Declaration of Intention. Aliens were allowed to serve during
times of war and to become naturalized.
Some states had laws forbidding aliens from owning land unless they had
filed a declaration. Homesteaders were able to qualify for free public
land after filing the declaration. The National Archives has homestead
records prior to May 1, 1908 and Bureau of Land Management after that
date. BLM can be accessed at http://www.blm.gov/nhp/index.htm.
Obstacles to Research
Besides identifying the court (or courts) that handled the various steps
in the procedure, there are other pitfalls. Some immigrants filed the
Declaration, perhaps for homesteading, but did not follow through with
the final papers. If they could vote and obtain land with the Declaration
only, they had no need to complete the process. Others were allowed to
skip the declaration and only had to file the final petition. In addition,
fraud occurred on a large scale. Thousands of fraudulent certificates
were issued in 1868 in New York because votes were needed in an election.
These certificates had no court records documenting the citizenship. If
you cannot locate the naturalization record in the court where it was
supposed to have occurred, your ancestor may have had a fraudulent certificate.
For further information, see the National Archives and Records Administration
Records" page; the LDS Research Outline on the U.S. (p. 38-41)
and the excellent 43-page booklet American Naturalization Processes
and Procedures 1790-1985 by John J. Newman (Indianapolis: Family History
Section, Indiana Historical Society 1985). | <urn:uuid:ccabc7cf-f689-491d-aeaa-78ca824ea216> | CC-MAIN-2013-20 | http://genealogy.com/genealogy/31_donna.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962552 | 2,808 | 3.96875 | 4 |
RIVERSIDE, Calif. – Biologists at the University of California, Riverside have found that voluntary activity, such as daily exercise, is a highly heritable trait that can be passed down genetically to successive generations.
Working on mice in the lab, they found that activity level can be enhanced with "selective breeding" – the process of breeding plants and animals for particular genetic traits. Their experiments showed that mice that were bred to be high runners produced high-running offspring, indicating that the offspring had inherited the trait for activity.
"Our findings have implications for human health," said Theodore Garland Jr., a professor of biology, whose laboratory conducted the multi-year research. "Down the road people could be treated pharmacologically for low activity levels through drugs that targeted specific genes that promote activity. Pharmacological interventions in the future could make it more pleasurable for people to engage in voluntary exercise. Such interventions could also make it less comfortable for people to sit still for long periods of time."
In humans, activity levels vary widely from couch-potato-style inactivity to highly active athletic endeavors.
"We have a huge epidemic of obesity in Western society, and yet we have little understanding of what determines variation among individuals for voluntary exercise levels," Garland said.
Study results appear online Sept. 1 in the Proceedings of the Royal Society B.
The researchers began their experiments in 1993 with 224 mice whose levels of genetic variation bore similarity to those seen in wild mouse populations. The researchers randomly divided the base population of mice into eight separate lines – four lines bred for high levels of daily running, with the remaining four used as controls – and measured how much distance the mice voluntarily ran per day on wheels attached to their cages.
With a thousand mice born every generation and four generations of mice each year, the researchers were able to breed highly active mice in the four high-runner lines by selecting the highest running males and females from every generation to be the parents of the next generation. In the control lines, breeders were chosen with no selection imposed, meaning that the mice either changed or did not change over time purely as a result of random genetic drift.
By studying the differences among the replicate lines, the researchers found that mice in the four high-runner lines ran 2.5-3-fold more revolutions per day as compared with mice in the four control lines. They also found that female and male mice evolved differently: females increased their daily running distance almost entirely by speed; males, on the other hand, increased speed but they also ran more minutes per day.
The study is an example of an "experimental evolution" approach applied rigorously to a problem of biomedical relevance. Although this approach is common with microbial systems and fruit flies, it has rarely been applied to vertebrates due to their longer generation times and greater costs of maintenance. The results of such studies can inform biologists about fundamental evolutionary processes as well as "how organisms work" in a way that may lead to new therapeutic strategies.
"This study of experimental evolution confirms some previous observations and raises new questions," said Douglas Futuyma, a distinguished professor of ecology and evolution at Stony Brook University, New York, who was not involved in the research. "It shows that 'there are many ways to skin a cat': different ways in which a species may evolve a similar adaptive characteristic – running activity, in this case. Garland and coauthors go further by beginning to explore the detailed ways in which an adaptive feature, such as muscle size or metabolic rate, may be realized and by showing sex differences in the response to selection. It would be fascinating to know, and challenging to find out, if any one of these different responses is adaptively better than others."
Garland was joined in the research by Scott Kelly, Jessica Malisch, Erik Kolb, Robert Hannon, Brooke Keeney, Shana Van Cleave and Kevin Middleton, all of whom work in his lab.
The study was supported primarily by a grant to Garland from the National Science Foundation.
Details of the experimental set-up
The mice run on wheels attached to their cages. Wheel running is a completely voluntary behavior for the mice. They can sit in their cages and not run at all. If they do run, they can get off the wheels at any time. For the experiments, each mouse was given access to the wheels for only six days of their lives. A computer recorded every minute how much distance (revolutions) the mice ran for the six days. The researchers selected breeders depending on how much distance the mice ran on days 5 and 6.
About Theodore Garland Jr.
Garland received his doctoral degree in biological sciences from UC Irvine. Before joining UCR in 2001, he was a faculty member at the University of Wisconsin-Madison. He is trained in comparative physiology and evolutionary biology, as well as quantitative genetics with emphasis on exercise physiology. He is co-editor of Experimental evolution: concepts, methods, and applications of selection experiments (University of California Press, 2009). On the editorial boards of several scientific journals, he is the author/coauthor of nearly 200 peer-reviewed publications.
The University of California, Riverside (www.ucr.edu) is a doctoral research university, a living laboratory for groundbreaking exploration of issues critical to Inland Southern California, the state and communities around the world. Reflecting California's diverse culture, UCR's enrollment of about 18,000 is expected to grow to 21,000 students by 2020. The campus is planning a medical school and has reached the heart of the Coachella Valley by way of the UCR Palm Desert Graduate Center. The campus has an annual statewide economic impact of more than $1 billion.
A broadcast studio with fiber cable to the AT&T Hollywood hub is available for live or taped interviews. To learn more, call (951) UCR-NEWS.
AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system. | <urn:uuid:6554a304-041a-4b0e-a425-a083b169c3a0> | CC-MAIN-2013-20 | http://www.eurekalert.org/pub_releases/2010-09/uoc--cfe090110.php | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.959553 | 1,241 | 3.484375 | 3 |
When someone tells that there was no successful French tank, especially in WWI - don't you believe him!
The Renault FT or Automitrailleuse à chenilles Renault FT modèle 1917, inexactly known as the FT-17 or FT17, was a French light tank; it is among the most revolutionary and influential tank designs in history. The FT was the first operational tank with an armament in a fully rotating turret, and its configuration with the turret on top, engine in the back and the driver in front became the conventional one, repeated in most tanks until today; at the time it was a revolutionary innovation, causing armour historian Steven Zaloga to describe the type as "the world's first modern tank".
Studies on the production of a new light tank were started in May 1916 by the famous car producer Louis Renault. The evidence strongly suggests that Renault himself drew up the preliminary design, being unconvinced that a sufficient power/weight ratio could be achieved for the medium tanks requested by the military. One of his most talented designers, Rodolphe Ernst-Metzmaier, prepared the final drawings.
Though the project was far more advanced than the two first French tanks about to enter production, the Schneider CA1 and the heavy St. Chamond, Renault had at first great trouble getting it accepted. Even after the first British use of tanks, on 15 September 1916, when the French people called for the deployment of their own chars, the production of the light tank was almost cancelled in favour of that of a superheavy tank (the later Char 2C). However, with the unwavering support of Brigadier General Jean-Baptiste Eugène Estienne (1860–1936), the "Father of the Tanks", and the successive French Commanders in Chief, who saw light tanks as a more feasible and realistic option, Renault was at last able to proceed with the design. However, competition with the Char 2C was to last until the very end of the war.
The prototype was slowly refined during the first half of 1917. Early production FTs were often plagued by radiator fan belt and cooling system problems, a characteristic that persisted throughout World War I. Only 84 were produced in 1917 but 2,697 were delivered before the end of the war. At least 3,177 were produced in total, perhaps more; some estimates go as high as 4,000 for all versions combined. However, 3,177 is the delivery total to the French Army; 514 were perhaps directly delivered to the U.S. Army and three to Italy - giving a probable total production number of 3,694.
The tanks had at first a round cast turret; later either an octagonal turret or an even later rounded turret of bent steel plate (called Berliet turret after one of the many coproducing factories). The latter two could carry a Puteaux SA 18 gun, or a 7.92 mm Hotchkiss machine gun. In the U.S., this tank was built on a licence as the Six Ton Tank Model 1917 (950 built, 64 before the end of the war).
There is a most persistent myth about the name of the tank: "FT" is often supposed to have meant Faible Tonnage, or, even more fanciful: Franchisseur de Tranchées (trench crosser). In reality, every Renault prototype was given a combination code; it just so happened it was the turn of "FT".
Another mythical name is "FT-18" for the guntank. A 1918 maintenance manual describes the FT as the Char d'Assault 18HP, a reference to the horsepower of the engine.
FTs captured and re-used by the Germans in World War II were re-designated Panzerkampfwagen FT 18. Either of these might have led to the confusion. Also in "FT 75 BS", the "BS" does not mean Batterie de Support but "Blockhaus Schneider", a reference to the short 75mm Schneider gun with which it was fitted.
The FT was widely used by the French and the US in the later stages of World War I, after 31 May 1918. It was cheap and well-suited for mass production. It reflected an emphasis on quantity, both on a tactical level: Estienne proposed to overwhelm the enemy defences by a "swarm" of light tanks, and on a geostrategic level: the Entente was thought to be able to gain the upper hand by outproducing the Central Powers. A goal was set of 12,260 to be manufactured (4,440 of which in the USA) before the end of 1919.
After the war, FTs were exported to many countries (Poland, Finland, Estonia, Lithuania, Romania, Yugoslavia, Czechoslovakia, Switzerland, Belgium, Netherlands, Spain, Brazil, Turkey, Iran, Afghanistan and Japan). As a result, FT tanks were used by most nations having armoured forces, invariably as their first tank type, including the United States. They took part in many later conflicts, such as the Russian Civil War, Polish-Soviet War, Chinese Civil War, Rif War and Spanish Civil War.
FT tanks were also used in the Second World War, among others in Poland, Finland, France and Kingdom of Yugoslavia, although they were completely obsolete by then. In 1940 the French army still had eight battalions equipped with 63 FTs each and three independent companies with ten each, for a total organic strength of 534, all with machine guns.
Many smaller units, partially raised after the invasion, also used the tank. This has given rise to the popular myth that the French had no modern equipment at all; in fact they had more modern tanks than the Germans; the French suffered from tactical and strategic weaknesses rather than from equipment deficiencies. When the German drive to the Channel cut off the best French units, as an expediency measure the complete French materiel reserve was sent to the front; this included 575 FTs. Earlier 115 sections of FT had been formed for airbase-defence. The Wehrmacht captured 1,704 FTs. A hundred were again used for airfield defence, about 650 for patrolling occupied Europe. Some of the tanks were also used by the Germans in 1944 for street-fighting in Paris. By this time they were hopelessly out of date.
The FT was the ancestor of a long line of French tanks: the FT Kégresse, the NC1, the NC2, the Char D1 and the Char D2. The Italians produced as their standard tank the FIAT 3000, a moderately close copy of the FT:
The Soviet Red Army captured fourteen burnt-out Renaults from White Russian forces, and rebuilt them at the Krasnoye Sormovo Factory in 1920. The Soviets claimed to have originally manufactured these Russkiy Reno tanks, but they actually produced only one exact copy, named 'Freedom Fighter Comrade Lenin'. When Stalin began the arms race of the Thirties, the first completely Soviet-designed tank was the T-18, a derivation of the Renault with sprung suspension:
In all, the FT was used by Afghanistan, Belgium, Brazil, the Republic of China, Czechoslovakia, Estonia, Finland, France, Nazi Germany, Iran, Japan, Lithuania, the Netherlands, Poland, Romania, the Russian White movement, the Soviet Union, Spain, Sweden, Switzerland, Turkey, Norway, the United Kingdom, the United States and the Kingdom of Yugoslavia.
Two interesting variants: | <urn:uuid:8802c369-de05-468a-876f-993aff93f25f> | CC-MAIN-2013-20 | http://www.dieselpunks.org/profiles/blogs/renault-ft-tank | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.975053 | 1,545 | 2.859375 | 3 |
Copyright (c) Arvin S. Quist
INTRODUCTION TO CLASSIFICATION
THE NEED FOR CLASSIFICATION
A government is responsible for the survival of the nation and its people. To ensure that survival, a government must sometimes stringently control certain information that (1) gives the nation a significant advantage over adversaries or (2) prevents adversaries from having an advantage that could significantly damage the nation. Governments protect that special information by classifying it; that is, by giving it a special designation, such as "Secret," and then restricting access to it (e.g., by need-to-know requirements and physical security measures).
This right of a government to keep certain information concerning national security (secrets) from most of the nation's citizens is nearly universally accepted. Since antiquity, governments have protected information that gave them an advantage over adversaries. In wartime, when a nation's survival is at stake, the reasons for secrecy are most apparent, the secrecy restrictions imposed by the government are most widespread, and acceptance of those restrictions by the citizens is broadest. In peacetime, there are fewer reasons for secrecy in government, generally the government classifies less information, and citizens are less willing to accept security restrictions on information.
MAJOR AREAS OF CLASSIFIED INFORMATION
The information that is classified by most democracies, whether in peacetime or wartime, is usually limited to information that concerns the nation's defense or its foreign relations--military and diplomatic information. Most of that information falls within five major areas: (1) military operations, (2) weapons technology, (3) diplomatic activities, (4) intelligence activities, and (5) cryptology. The latter two areas might be considered to be special parts of the first three areas. That is, intelligence and cryptology are "service" functions for the primary areas--military operations, weapons technology, and diplomatic activities. From a historical perspective, the classification of weapons technology became widespread only in the 20th century. Classification of information about military operations and diplomatic activities has been practiced for millennia.
Examples of military-operations information that is frequently classified include information concerning the strength and deployment of forces, troop movements, ship sailings, the location and timing of planned attacks, tactics and strategy, and supply logistics. Obviously, if an enemy learned the major details of an impending attack, that attack would be less successful than if it came as a surprise to the enemy.* Information possessed by a government about an adversary's military activities or capabilities must be protected to preserve the ability to predict those activities or to neutralize those capabilities. If the adversary knew that the government had this information, the adversary would change those plans or capabilities. Military-operations information is usually classified for only a limited time. After an operation is over, most of the important information is known to the enemy.
Weapons technology is classified to preserve the advantage of surprise in the first use of a new weapon,† to prevent an adversary from developing effective countermeasures against a new weapon,‡ or to prevent an adversary from using that technology against its originator (by developing a similar weapon). A major factor in that latter reason for classifying weapons technology is "lead time." Classifying advanced weapons-technology information prevents an adversary from using that information to shorten the time required to produce similar weapons systems for its own use. Consequently, assuming continued advancements in a weapons technology by the initial developer of that technology, the adversary's weapons systems will not be as effective as those of the nation that initially developed that technology, and the adversary will be at a disadvantage.
With respect to lead time, when weapons systems can be significantly improved, then information on "obsolete" weapons is much less sensitive than information on newer weapons. Thus, information on muzzle-loading rifle technology was not as sensitive as that on breech-loading rifle technology, which was not as sensitive as information on lever-action rifle technology, . . . semiautomatic rifle . . . automatic rifle . . . machine gun. However, with respect to nuclear weapons, a "rogue" nation or terrorist group can probably achieve its objectives just as easily with "crude" kiloton nuclear weapons that might require a ship or truck to transport as with sophisticated megaton nuclear weapons that might fit into a (large) suitcase. Thus, "obsolete" nuclear-weapons technology should continue to be protected, especially with respect to technologies concerning production of highly enriched uranium or other nuclear-weapon materials.
Weapons technology includes scientific and technical information related to that technology. World War I marked the start of the "modern" period when science and technology affected the development of weapons systems to a greater degree than any time previously. That interrelationship became even more pronounced in World War II, with notable scientific and technological successes: the atomic bomb, radar, and the proximity fuse. World War II, particularly with respect to the atomic bomb, marked the first time that the progress of military technology was significantly influenced by scientists, as contrasted to advances by engineers or by scientists working as engineers.
With respect to classification, the more that applied scientific or technical information is uniquely applicable to weapons, the more likely that this information will be classified. Generally, basic research is not classified unless it represents a major breakthrough leading to a completely new weapons system. An example of that circumstance was the rigid classification during World War II, and for several years thereafter, of much basic scientific research related to atomic energy (nuclear weapons).
The need for secrecy in diplomatic negotiations and relations has long been recognized. A nation's ability to obtain favorable terms in negotiations with other countries would be diminished if its negotiating strategy and goals were known in advance to the other countries.* The effectiveness of military-assistance agreements between nations would be impaired if an adversary knew of them and could plan to neutralize them. In New York Times v. United States, the "Pentagon Papers" case, U.S. Supreme Court Justice Stewart recognized the importance of secrecy in foreign policy and national defense matters:
It is elementary that the successful conduct of international diplomacy and the maintenance of an effective national defense requires both confidentiality and secrecy. Other nations can hardly deal with this Nation in an atmosphere of mutual trust unless they know that their confidences will be kept . . .. In the area of basic national defense the frequent need for absolute secrecy is, of course, self evident.
During the term of the first president, it was established that some need for secrecy in diplomatic matters would remain even after negotiations were completed. President Washington, in 1796, refused a request by the House of Representatives for documents prepared for U.S. treaty negotiations with England and gave the following as one reason for refusal:
The nature of foreign negotiations requires caution, and their success must often depend on secrecy; and even when brought to a conclusion a full disclosure of all the measures, demands, or eventual concessions which may have been proposed or contemplated would be extremely impolitic; for this might have a pernicious influence on future negotiations, or produce immediate inconvenience, perhaps danger and mischief, in relation to other powers.
It has been said that President Nixon initially was not going to attempt to stop the New York Times and other newspapers from publishing the "Pentagon Papers." However, the executive branch was then in secret diplomatic negotiations with China, and Henry Kissinger "is said to have persuaded the president that the Chinese wouldn't continue their secret parleys if they saw that Washington couldn't keep its secrets."
Intelligence information includes information gathering and covert operations. Collecting military and diplomatic information about other nations involves the use of photoreconnaissance airplanes and satellites, communication intercepts, the review of documents obtained openly, and other overt methods. However, information gathering also includes the use of undercover agents, confidential sources, and other covert methods. For those covert activities, secrecy is usually imposed on the identity of agents or sources, on information about intelligence methods and capabilities, and on much of the information received from the covert sources. Few clandestine agents could be recruited (or, in some instances, would live long) if their identity were not a closely guarded secret. Information provided by a clandestine agent must frequently be classified because, if a government knew that some of its information was compromised, it might be able to determine the identity of the person (agent) who provided the information to its adversary. Successful intelligence-gathering methods must be protected so that the adversary does not know the degree of their success and is not stimulated to develop countermeasures to stop the flow of information. Intelligence information from friendly nations is generally classified by the recipient country. Allies would be less willing to share intelligence information if they knew that it would not be protected against disclosure.
Cryptology encompasses methods to code and transmit secret messages and methods to intercept and decode messages. Writing messages in code, or cryptography,* has been practiced for thousands of years. One of the earliest preserved texts of a coded message is an inscription carved on an Egyptian tomb in about 1900 B.C. The earliest known pottery glaze formula was written in code on a Mesopotamian cuneiform tablet in about 1500 B.C. The Spartans established a system of military cryptography by the 5th century B.C. Persia later used cryptography for political purposes. Cryptography began its steady development in western civilization starting about the 13th century, primarily in Italy. By the early 16th century, Venice's ruling Council of Ten had an elaborate organization for enciphering and deciphering messages.
Restrictions on cryptologic information are necessary to protect U.S. communications. Diplomatic negotiations could not successfully be conducted at locations other than the seat of government if safe communications could not be established. Cryptologic information must also be protected to prevent an adversary from learning of a nation's capabilities to intercept and decode messages. If an adversary learns that its communications are not secure, it will use another method, which will require additional time and effort to defeat.[‡] The Allies' World War II success in breaking the German codes contributed to shortening that war. That success was kept secret until 1974, about 34 years after the German code had been broken and about 29 years after World War II had ended. The U.S. Army's success in breaking a World War II U.S.S.R. code (the Venona project, which began in 1943 and continued until 1980) was not made public until about 1995. That was about 50 years after the first such message had been deciphered (and about 45 years after the U.S.S.R. had learned through espionage of the Army's success).
BASIS FOR CLASSIFICATION IN THE UNITED STATES
The need for governmental secrecy was directly recognized in the U.S. Constitution. Article I, Sect. 5, of the Constitution explicitly authorizes secrecy in government by stating that "Each House shall keep a Journal of its Proceedings, and from time to time publish the same, excepting such Parts as in their Judgment require Secrecy." Also included in the Constitution, in Article I, Sect. 9, is a statement that "a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time." A U.S. Court of Appeals has determined that the phrase "from time to time" was intended to authorize expenditures for certain military or foreign relations matters that were intended to be kept secret for a time.
The Constitution does not explicitly provide for secrecy by the Executive Branch of the U.S. Government. However, the authority of that Executive Branch to keep certain information secret from most U.S. citizens is implicit in its executive responsibilities, which include the national defense and foreign relations. This presidential authority has been upheld by the Supreme Court in a number of cases. Judicial decisions have also relied on a common-law privilege for a government to withhold information concerning national defense and foreign relations. Congress, by two statutes, the Freedom of Information Act and the Internal Security Act of 1950, has implicitly recognized the president's authority to classify information (see Chapter 3).
At this time in the United States, information is classified either by presidential authority, currently Executive Order 12958, or by statute, the Atomic Energy Act of 1954, as amended (Atomic Energy Act). Classification under Executive Orders and under the Atomic Energy Act is extensively discussed in Chapters 3 and 4, respectively.
CLASSIFICATION AND SECURITY
Classification has been variously described as the "cornerstone" of national security, the "mother" of security, and the "kingpin" of an information security system. Classification identifies the information that must be protected against unauthorized disclosure. Security determines how to protect information after it is classified. Security includes both personnel security and physical security.
The initial classification determination, establishing what should not be disclosed to adversaries and the level of protection required, is probably the most important single factor in the security of all classified projects and programs. None of the expensive personnel-clearance and information-control provisions (physical security aspects) of an information security system comes into effect until information has been classified; classification is the pivot on which the whole subsequent security system turns (excluding security for other reasons, such as to prevent theft of materials).19 Therefore, it is important to classify only information that truly warrants protection in the interest of national security.
Since the mid 1970s, several classification experts have remarked on the increasing emphasis by some government agencies on physical-security matters, which has been accompanied by a decreased emphasis on the classification function. One of the founders (and the first chairman) of the National Classification Management Society (NCMS), who was also an Atomic Energy Commission Contractor Classification Officer, has expressed concern about the tendency to emphasize the word "security" at the expense of the word "classification" with respect to security classification of information.17 In the mid 1980s another charter member of the NCMS pointed out that, although the status of classification still remained high in the Department of Energy (DOE), the situation had changed within the Department of Defense, where Classification Management had been organizationally placed under Security. Even the NCMS, founded as a classification organization, appears to be changing to become increasingly oriented towards security matters rather than classification matters. It is noteworthy that the marked emphasis by the U.S. Government in recent years on physical-security measures has not been accompanied by any significant increased emphasis on classification matters.
The previous paragraph was written in 1989, and the trend described in that paragraph has continued. The classification function at DOE headquarters is now a part of the security organization as is the classification function at many DOE operations offices and DOE-contractor organizations. That function generally used to be part of a technical or other non-security organization. The NCMS has also continued to become more security-oriented.
With respect to classification as a profession (or lack of recognition thereof), it is interesting to note some comments and a recommendation in the Report of the Commission on Protecting and Reducing Government Secrecy. In this 1997 report, that Commission noted the "all-important initial decision of whether to classify at all," and that "this first step of the classification management process . . . tends to be the weakest link in the process of identifying, marking, and then protecting the information." The Commission further stated that "the importance of the initial decision to classify cannot be overstated." However, the Commission then stated that "classification and declassification policy and oversight . . . should be viewed primarily as information management issues which require personnel with subject matter and records management expertise." Although recommending that "The Federal Government . . . [should] create, support, and promote an information systems security career field within the Government," the Commission made no similar recommendation for security classification of information as a profession or career. Res ipsa loquitur.
[*] "When a nation is at war many things that might be said in time of peace are such a hindrance to its effort that their utterance will not be endured so long as men fight and that no Court could regard them as protected by any constitutional right" [Schenck v. United States, 249 U.S. 47, 52 (1919) (J. Holmes)].
[†] Since the September 11, 2001, terrorist attacks against the World Trade Center towers and the Pentagon, the United States considers itself to be in a war against terrorism. One consequence has been a significant shift in opinion, not only of the general public but also of some strong supporters of freedom-of-information matters, towards favoring more control of information that might aid terrorists. This increased control, especially pertaining to weapons of mass destruction, includes (1) establishing broader criteria for identifying information that is classified or "sensitive"; (2) permitting reclassification of declassified information, and (3) restricting further governmental distribution of documents already released to the public.
*However, during the Greek and Roman eras in the Mediterranean, when the infantry was paramount and both sides were approximately equally equipped with respect to weapons, many battles were fought without attempts to maintain secrecy of troop movements or with respect to surprise attacks (B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, p. 17).
†"Secret" weapons have proven decisive in warfare. One example of the decisive impact of a new weapon was at the battle of Crecy in 1346. At this battle, the English used their "secret" weapon, the longbow, to defeat the French decisively. Although the French had a two-to-one superiority in numbers (about 40,000 to 20,000), the French lost about 11,500 men, while the English lost only about 100 men (W. S. Churchill, A History of the English-Speaking Peoples, Vol. 1, Dodd, Mead and Co., New York, 1961, pp. 332-351; B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, pp. 37-40).
‡In World War II, the Germans developed an acoustic torpedo designed to home in on a ship's propellers. However, the Allies obtained advance information about this torpedo so that when it was first used by the Germans, countermeasures were already in place (B. and F. M. Brodie, From Crossbows to H-Bombs, Indiana University Press, Bloomington, Ind., 1973, p. 222).
*In 1921, the United States, Britain, France, Italy, and Japan held a conference to limit their naval armaments. The United States had broken Japan's diplomatic code and thereby knew the lowest naval armaments that Japan would accept. Therefore, U.S. negotiators had merely to wait out Japan's negotiators to reach terms favorable to the United States (J. Bamford, The Puzzle Palace, Houghton, Mifflin Co., Boston, 1982, pp. 9-10).
*The breaking of codes is termed cryptanalysis.
[‡] Even "friendly" nations get upset if they know that one of their codes has been broken. As noted earlier in this chapter, the United States deciphered Japan's diplomatic code in 1921. Herbert O. Yardley, who was principally responsible for breaking this code, wrote a book, The American Black Chamber, published in 1931, which included information on this matter. Yardley's book did not contribute to developing friendly United States-Japanese relations. A consequence of this revelation was enactment of a U.S. statute that made it a crime for anyone who, by virtue of his employment by the United States, obtained access to a diplomatic code or a message in such code and published or furnished to another such code or message, "or any matter which was obtained while in the process of transmission between any foreign government and its diplomatic mission in the United States" (48 Stat. 122, June 10, 1933, codified at 18 U.S.C. Sect. 952.)
B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, p. 172. Hereafter this book is cited as "Brodie."
Brodie, p. 233.
New York Times v. United States, 403 U.S. 713, 728 (1971).
J. D. Richardson, A Compilation of Messages and Papers of the Presidents. 1789-1897, U.S. Government Printing Office, Washington, D.C., Vol. I, at 194-195 (1896).
Richard Gid Powers, "Introduction," in Secrecy--The American Experience, by Daniel Patrick Moynihan, Yale University Press, New Haven, Conn., 1998, p. 32.
D. Kahn, The Codebreakers, MacMillan, Inc., New York, 1967, p. 71. Hereafter cited as "Kahn."
Kahn, p. 75.
Kahn, p. 82.
Kahn, p. 86.
Kahn, p. 106.
Kahn, p. 109.
See, for example, F. W. Winterbotham, The Ultra Secret, Harper & Row, New York, 1974.
Halperin v. CIA, 629 F.2d 144, 154-162 (D.C. Cir., 1980).
U.S. Constitution, Article II, sect. 2.
See, for example, Totten v. United States, 92 U.S. 105 (1875); United States v. Reynolds, 345 U.S. 1 (1952); Weinberger v. Catholic Action of Hawaii, 454 U.S. 139 (1981).
F. E. Rourke, Secrecy and Publicity: Dilemmas of Democracy, Johns Hopkins Press, Baltimore, 1961, pp. 63-64.
D. B. Woodbridge, "Footnotes," J. Natl. Class. Mgmt. Soc. 12 (2), 120-124 (1977), p.122.
R. J. Boberg, "Panel--Classification Management Today," J. Natl. Class. Mgmt. Soc. 5 (2), 56-60 (1969), p. 57.
E. J. Suto, "History of Classification," J. Natl. Class. Mgmt. Soc. 12 (1), 9-17 (1976), p.13.
James J. Bagley, "NCMS - Now and the Future," J. Natl. Class. Mgmt. Soc. 25, 20-29 (1989), p. 28.
T. S. Church, "Panel--Science and Technology, and Classification Management," J. Natl. Class. Mgmt. Soc. 2, 39-45 (1966), p. 40.
W. N. Thompson, "Security Classification Management Coordination Between Industry and DOD," J. Natl. Class. Mgmt. Soc. 4 (2), 121-128 (1969), p. 121.
W. N. Thompson, "User Agency Security Classification Management and Program Security," J. Natl. Class. Mgmt. Soc. 8, 52-53 (1972), p. 52.
Department of Defense Handbook for Writing Security Classification Guidance, DoD 5200.1-H, U.S. Department of Defense, Mar. 1986, p. 1-1.
F. J. Daigle, "Woodbridge Award Acceptance Remarks," J. Natl. Class. Mgmt. Soc. 21, 110-112 (1985), p. 111.
D. C. Richardson, "Management or Enforcement," J. Natl. Class. Mgmt. Soc. 23, 13-20 (1987).
Report of the Commission on Protecting and Reducing Government Secrecy, S. Doc. 105-2, Daniel Patrick Moynihan, Chairman; Larry Combest, Vice Chairman, Commission on Protecting and Reducing Government Secrecy, U.S. Government Printing Office, Washington, D.C., 1997. Hereafter cited as the "Moynihan Report."
Moynihan Report, p. 19.
Moynihan Report, p. 35.
Moynihan Report, p. 44.
Moynihan Report, p. 111. | <urn:uuid:647c5d85-6ea3-4124-aa9a-e2a2146530fb> | CC-MAIN-2013-20 | http://www.fas.org/sgp/library/quist/chap_1.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.94447 | 5,023 | 3.4375 | 3 |
We should remember that... of the 6,000 stars [that] the average human eye could see in the entire sky, probably not more than thirty – or one-half of one percent – are less luminous than the Sun; that probably, of the 700-odd stars nearer than ten parsecs, at least 96% are less luminous than the Sun. There is not even ONE real yellow giant – such as Capella, Pollux, or Arcturus – nearer than ten parsecs and only about four main sequence A stars.
– Dutch astronomer Willem Jacob Luyten (1899 - 1994)
The closest star to us is, of course, our own Sun. It's unusual because it's a solitary yellow dwarf, while most of the stars nearby are in binary or even multiple systems. What makes our star really special though, is that it provides the energy for the only life in the Universe that we know of.
Less Than Ten Light Years
The nearest star to the Sun is Proxima Centauri (also known as alpha Centauri C), which is a red dwarf, and is 4.2 light years distant. It has two stellar companions, the yellow dwarf Rigil Kentaurus (alpha Centauri A) and an orange dwarf, alpha Centauri B. They take up joint second place in our list, at 4.35 light years.
Barnard's Star, a red dwarf, is just under six light years away. Next comes Wolf 359, another red dwarf. Yet another red dwarf, Lalande 21185, was thought to be the fourth-closest star when its co-ordinates were published by Joseph-Jérôme Lefrançais de Lalande (1732 - 1807) in 1801. This was before Barnard's Star and Wolf 359 were discovered. Lalande 21185 cannot be seen by the naked eye because at 7th magnitude it is too dim; however, it counts as sixth-closest to the Sun at 8.3 light years.
The seventh-closest is a star most denizens of Earth would recognise: Sirius (alpha Canis Majoris), with eighth place taken by its companion Sirius B, sometimes referred to as 'the Pup'. Sirius B is classified as a white dwarf, but it is one of the biggest known: in fact, its mass is comparable to that of our own Sun.
Completing the top ten stellar neighbours are BL Ceti, a red dwarf flare star, and its binary partner UV Ceti, which are 8.7 light years away from our Sun. Flare stars unleash bright flashes of light as well as streams of charged particles. Some of the stars studied have flares of such enormous intensity that they can increase the brightness of the star by up to 10%. The flares are only brief, like a camera flash, but would be detrimental to any nearby planets.
Next in line is Ross 154, one of many discovered in 1925 by American astronomer Frank Elmore Ross (1874 - 1960). Ross 154 is a UV Ceti-type flare star 9.7 light years distant, and is the last of the stars within ten light years of our Solar System.
|Rank|Star|Other Name or Designation|Type|Constellation|Distance (light years)|
|#1|Proxima Centauri|alpha Centauri C|Red dwarf|Centaurus|4.2|
|#2|Rigil Kentaurus|alpha Centauri A|Yellow dwarf|Centaurus|4.35|
|#2|alpha Centauri B|HD 128621|Orange dwarf|Centaurus|4.35|
|#4|Barnard's Star|Proxima Ophiuchi|Red dwarf|Ophiuchus|5.98|
|#5|Wolf 359|CN Leonis|Red dwarf|Leo|7.7|
|#6|Lalande 21185|HD 95735|Red dwarf|Ursa Major|8.3|
|#7|Sirius|alpha Canis Majoris|Blue-white main sequence star|Canis Major|8.5|
|#7|Sirius B|alpha Canis Majoris B|White dwarf|Canis Major|8.5|
|#9|BL Ceti|Luyten 726-8 A|Red dwarf flare star|Cetus|8.7|
|#9|UV Ceti|Luyten 726-8 B|Red dwarf flare star|Cetus|8.7|
|#11|Ross 154|V1216 Sgr|Red dwarf flare star|Sagittarius|9.7|
Between Ten and Twelve Light Years
At 10.3 light years is another entry in the Ross catalogue: Ross 248, a red dwarf flare star. Due to the wide variety of periods at which this star flares (4.2 years, 120 days, and five other catalogued outbursts between 60 and 291 days apart), astronomers suspect that Ross 248 has an undetected companion which is causing the erratic flaring. Next is Epsilon Eridani, which has a dust disc and a suspected extrasolar planet system, the closest detected up to the time of writing, 2012. The two candidate planets are not thought to be hospitable to life (as we know it) because their proposed orbits are so far from the star. If the planets do exist, they are likely to be frigid worlds like our outermost planet Neptune.
The French astronomer Nicolas Louis de Lacaille (1713 - 62) went on a 1751-4 expedition to the Cape of Good Hope, effectively a blank canvas sky for him to map. Using the planet Mars as a point of reference, his observations were the foundations for working out the lunar and solar parallax. Finding himself somewhat of a celebrity upon his return to Paris, de Lacaille hid from public attention in Mazarin College, writing up his findings. Barely taking care of himself, de Lacaille suffered from gout and was prone to over-working to the point of exhaustion. Unfortunately his catalogue, Coelum Australe Stelliferum, which described 14 new constellations and 42 nebulous objects among almost 10,000 southern stars, wasn't published until after he died at the age of just 49 years. One of those stars, Lacaille 9352, ranks as the 14th-closest to our Sun at 10.7 light years distance.
EZ Aquarii is a triple star system situated at 11.3 light years away. EZ Aquarii A, B and C are all red dwarfs, and they may all be flare stars; however, not much is known about the smallest component (B). They are so dim (magnitude +13) that specialist equipment is required to view them. The system was labelled Luyten 789-6 by Dutch astronomer Willem Jacob Luyten, whose interest in astronomy had been sparked by viewing the predicted return of Halley's Comet in 1910, as an 11-year old schoolboy. In 1925 Luyten lost an eye in an accident but this tragedy did not wreck his chosen career. He was already working at the Harvard College Observatory, having been offered a post by the new director Harlow Shapley, whose own profile had been raised due to his participation in the Shapley-Curtis Debate of 1920. Luyten 'observed and measured more stellar images than anyone else', according to his biography at the National Academy of Sciences. He took up teaching at the University of Minnesota in 1931, and when he retired in 1967 he was given the title of Astronomer Emeritus which he held until his death at the ripe old age of 95.
Procyon is a binary system which registers at +0.3 magnitude. The system consists of a yellow-white main sequence subgiant star, Procyon A, and a white dwarf companion, Procyon B, which was detected by Arthur von Auwers in 1862.
|Rank|Star|Other Name or Designation|Type|Constellation|Distance (light years)|
|#12|Ross 248|HH Andromedae|Red dwarf flare star|Andromeda|10.3|
|#13|Epsilon Eridani|Sadira|Orange dwarf|Eridanus|10.5|
|#14|Lacaille 9352|HD 217987|Red dwarf|Piscis Austrinus|10.7|
|#15|Ross 128|FI Virginis|Red dwarf|Virgo|10.9|
|#16|EZ Aquarii A|Luyten 789-6 A|Red dwarf|Aquarius|11.3|
|#16|EZ Aquarii B|Luyten 789-6 B|Red dwarf|Aquarius|11.3|
|#16|EZ Aquarii C|Luyten 789-6 C|Red dwarf|Aquarius|11.3|
|#19|Procyon A|alpha Canis Minoris|Yellow-white subgiant|Canis Minor|11.4|
|#19|Procyon B|alpha2 Canis Minoris|White dwarf|Canis Minor|11.4|
The binary system 61 Cygni has two orange dwarf components of 6th magnitude at 11.41 light years away. Its distance was the first to be measured of any star. These two stars claim joint 21st place in our list of close stellar neighbours. Another pair of red dwarf stars, Struve 2398 A and B, positioned at just 11.5 light years distant, are the next nearest. They were studied by Russian-German astronomer Prof Friedrich von Struve (1793 - 1864), director of the Dorpat Observatory (now the Tartu Observatory) in Estonia, who listed them in his Catalogus novus stellarum duplicium (Double Star Catalogue) of 1827.
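That pioneering distance measurement relied on the parallax method: the star's tiny apparent shift against the background sky as the Earth orbits the Sun. The sketch below shows the arithmetic; the 0.286-arcsecond figure is back-calculated from the 11.41-light-year distance quoted above rather than taken from the article, and the Proxima value is likewise only illustrative.

```python
# Distance from annual parallax: d (parsecs) = 1 / p (arcseconds).
LY_PER_PARSEC = 3.2616  # light years per parsec

def distance_in_light_years(parallax_arcsec):
    """Convert an annual parallax angle into a distance in light years."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Illustrative values only: 0.286" is implied by the quoted 11.41 ly for 61 Cygni,
# and 0.769" corresponds roughly to Proxima Centauri's 4.2 ly.
print(distance_in_light_years(0.286))  # ~11.4
print(distance_in_light_years(0.769))  # ~4.2
```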
Groombridge 34 is a pair of variable red dwarfs. Newly-discovered variable stars are given upper-case letter designations, so Groombridge 34 A and B are also known as GX Andromedae and GQ Andromedae respectively. The Epsilon Indi system is fascinating because it contains the closest-known brown dwarfs. Brown dwarfs are approximately the same size as Jupiter, but their mass is at least ten times greater, possibly up to 50 times. These bodies are neither star nor planet, but 'failed' stars. Other titles have been proposed, as it's hardly encouraging to keep referring to them as 'failed stars'. Suggestions so far include planetar (which sounds like something from the science fiction genre) and substar (that 'sub' prefix isn't much of an improvement). Since 2004, planets have been discovered orbiting brown dwarfs (although not, as yet, in the Epsilon Indi system), so their profile has been raised. Hopefully they are in line for a better class in the future.
DX Cancri is a solo red dwarf flare star which can brighten to five times its usual luminosity during an outburst. It is thought by some astronomers that DX Cancri is a member of the Castor Moving Group, which was suggested in 1990 by JP Anosova and VV Orlov at the Astronomical Observatory in Leningrad State University, Russia. A moving group is the term for a collection of stars which share the same origin. Although they are not gravitationally bound to each other, they are on the same path on their journey through the galaxy, like an unravelled, stretched-out cluster. The Castor Moving Group is named after the luminary of Gemini, and includes the stars Alderamin (alpha Cephei), Fomalhaut, Vega, psi Velorum and Zubenelgenubi (alpha Librae).
Tau Ceti is one of the few nearby stars which are visible to the naked eye, albeit in the dim constellation Cetus, the Whale. Tau Ceti shot to fame in 1960, when Frank Drake launched Project Ozma, aiming to detect non-natural signals from space. Drake chose two stars which were similar to our Sun, Tau Ceti and Epsilon Eridani, for his project, which evolved to become SETI, the Search for Extra-terrestrial Intelligence. In December 2012, it was announced that five planets had been discovered orbiting Tau Ceti, with one of them possibly residing in the system's habitable zone.
Red dwarf GJ 1061 in the southern constellation Horologium is the last of the stars within 12 light years. GJ 1061 is really small, even on the dwarf star scale: it registers just over ten percent of the Sun's mass. It is so dim (+13 mag) that you'd need a decent-sized telescope to view it, but just in case you ever get the opportunity, its co-ordinates are 03h 36m RA, −44° 30' 46" Dec.
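To put those magnitude figures in perspective, the magnitude scale is logarithmic: a difference of 5 magnitudes corresponds to a brightness factor of 100. A quick worked example follows; the +6 naked-eye limit is a typical dark-sky assumption rather than a figure from the article.

```python
# Flux ratio between two apparent magnitudes: 10 ** (0.4 * (m_faint - m_bright)).
def times_fainter(m_faint, m_bright):
    """How many times fainter an object of magnitude m_faint is than one of m_bright."""
    return 10 ** (0.4 * (m_faint - m_bright))

NAKED_EYE_LIMIT = 6.0  # assumed dark-sky naked-eye limit

print(times_fainter(13.0, NAKED_EYE_LIMIT))  # ~630: why GJ 1061 needs a telescope
print(times_fainter(13.0, 0.3))              # ~120,000 times fainter than Procyon
```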
|Rank|Star|Other Name or Designation|Type|Constellation|Distance (light years)|
|#21|61 Cygni A|V1803 Cyg A|Orange dwarf|Cygnus|11.41|
|#21|61 Cygni B|V1803 Cyg B|Orange dwarf|Cygnus|11.41|
|#23|Struve 2398 A|NSV 11288|Red dwarf|Draco|11.5|
|#23|Struve 2398 B|Gliese 725 B|Red dwarf|Draco|11.5|
|#25|Groombridge 34 A|GX Andromedae|Red dwarf|Andromeda|11.6|
|#25|Groombridge 34 B|GQ Andromedae|Red dwarf|Andromeda|11.6|
|#27|Epsilon Indi|HD 209100|Orange dwarf + two brown dwarfs|Indus|11.8|
|#28|DX Cancri|LHS 248|Red dwarf flare star|Cancer|11.8|
|#29|Tau Ceti|HD 10700|Yellow dwarf|Cetus|11.88|
|#30|GJ 1061|LHS 1565|Red dwarf|Horologium|11.99|
Nearby Stars in Fantasy and Science Fiction
Stars which are close to our own Solar System have inspired imaginative writers going back hundreds of years. Here is just a sample:
Proxima Centauri: the 1990s TV series Babylon 5 featured the planet Proxima III, which hosts an Earth Alliance colony.
Alpha Centauri A: prolific author Isaac Asimov wrote about the water world Alpha of the Alpha Centauri A system in the Foundation and Earth book of his Foundation series.
Alpha Centauri B: Witburg is a rocky planet orbiting Alpha Centauri B in the 2002 online role-playing game Earth & Beyond.
Barnard's Star: Timemaster, a 1992 novel by Robert L Forward, bases its plot in the Barnard's Star system.
Wolf 359: Star Trek fans will recognise Wolf 359 as the system where Starfleet's armada was practically wiped out by the hive-minded Borg.
Lalande 21185: in the 1951 novel Rogue Queen penned by L Sprague de Camp, the planet Ormazd which orbits Lalande 21185 is investigated by Earth's space authority.
Sirius: Micromégas is one of the earliest known science fiction stories; it was written in 1752 by François-Marie Arouet (better known by his pen name Voltaire). The Micromégas of the story was an extremely tall alien visitor to Earth who hailed from one of the planets in the Sirius system.
Sirius B: in Seed of Light, a 1959 novel by Edmund Cooper, the Sirius A star is barren but Sirius B has a hospitable planet, Sirius B III, out of its five attendant worlds. The plot revolves around the people sent there to save the human race after the Earth has been devastated.
BL Ceti: Larry Niven wrote A Gift From Earth in 1968, a part of his Known Space collection of multiple works. The plot involves the twin red dwarf stars BL Ceti and UV Ceti, which are important signposts for the eventual destination.
UV Ceti: a space station called Eldorado, part of the 'Great Circle' route, is based at UV Ceti in the 1981 story Downbelow Station by CJ Cherryh.
Ross 154: the planet Tei Tenga in the Ross 154 system is where the United Aerospace Armed Forces (UAFF) had a couple of military research bases in the video game Doom.
Ross 248: Diadem is an icy world in orbit around Ross 248 in Alastair Reynolds's story Glacial. Following a failed attempt at human colonisation, an investigation a century later reveals that the planet is sentient and it uses cold-blooded annelids, burrowing through its ice-mantle, to 'think'.
Epsilon Eridani: Les Grognards d'Éridan (The Napoleons Of Eridanus), written by French author Claude Avice in 1970, features a detachment of soldiers from the Napoleonic era who are abducted by aliens and transported to the Epsilon Eridani system to fight their battles for them. Also, Epsilon Eridani was the parent star of the planet Reach in the extremely successful Xbox game Halo: Reach.
Lacaille 9352: in the fictional universe of the Hyperion Cantos dreamed up by Dan Simmons, the inhospitable planet Sibiatu's Bitterness orbits the star Lacaille 9352.
Ross 128: Across the Sea of Suns, written in 1984 by Gregory Benford, features a race of alien aquatic creatures which live under the ice-mantle of the frozen world Pocks, a member of the Ross 128 system.
EZ Aquarii A, B and C: the character Sheldon in The Big Bang Theory regularly lists 'the closest stars to me' when ascending and descending stairs. Once, in disguise, he spoke the words 'EZ Aquarii B, EZ Aquarii C,' while passing Amy on the stairs, and was dismayed that she recognised him.
Procyon A/B: His Master's Voice was written by Polish author Stanislaw Lem in 1968. This book focuses on the attempts by highly intelligent Earthlings to understand a message from the Procyon system.
61 Cygni: The region surrounding 61 Cygni is known as the 'Darkling Zone' in the popular TV series Blake's 7.
Groombridge 34: The Groombridge 34 system features in the Halo series of Xbox games.
Epsilon Indi: New New York on Epsilon Indi III has a portal to the Earth, via a created wormhole, in the 1996 Starplex book by Robert J Sawyer.
Tau Ceti: Time for the Stars, written in 1956 by Robert A Heinlein, explores the telepathic bond between twins over the vastness of space. Tau Ceti III, in the Star Trek universe, is a hospitable M-class planet. One of the bountiful fruits which grows there is the Kaferian apple. While they can be eaten raw, they are much more tasty stewed with Talaxian spices and served in a pie, as recommended in the vegetarian options at Quark's Bar on the space station Deep Space Nine. | <urn:uuid:622701f4-46ba-4d33-be1d-268e00f16e5d> | CC-MAIN-2013-20 | http://www.h2g2.com/approved_entry/A87768382 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.924929 | 3,970 | 3.578125 | 4 |
Some people love them and some people hate them, but whatever your tastes you should probably know a little about the vignette effect in photography. A vignette photo has edges that fade to either white or black, typically gradually, although it can be an effect that is used dramatically as well. This effect can be created in camera with certain lenses that are known to produce it, sometimes undesirably. Vignettes can also be created in the darkroom during the printing process. However, in this day and age the most common way to create a vignette is during the post-processing phase in a photo editing program such as Photoshop.
In spite of our very wide field of view, our eyes don't see with perfect focus at all points in a scene. We see the center of a scene with ideal light sensitivity and perfect focus, and focus and brightness fall off increasingly toward the periphery. A vignette therefore more or less mimics the way our eyes actually see a scene. Although a photograph with precise focus and exposure throughout may be desirable in some scenarios, it can also seem boring, artificial and even contrived in others.
Vignetting can act as a way to frame the intended subject in a photo in either a subtle or dramatic way and really make it stand out to great creative effect. By causing the periphery to gradually fade away, not only will the photo be more like actual vision, but it will also immediately draw the eye to the main subject and away from unimportant elements in the background or periphery. It can also be used as a finishing touch on a photo once the exposure and composition are perfect.
Typically a vignette, as previously mentioned, is used in a gradual or subtle way since our eyes don’t actually make the borders of our vision very dark and fuzzy. When used in this way, a vignette will generally not be consciously detectable by the untrained eye. However, there are times when a more dramatic vignette is a useful effect, such as in a black and white landscape to add a touch of drama or in an intense portrait to create a darker mood.
Older cheaper cameras such as Holgas often had poor optics, so many old photos have vignetting that was created in-camera unintentionally. This vintage look, which was previously not desirable, is now often intentionally recreated during post-processing with software like Photoshop.
How to Create a Vignette in Photoshop
There are a few ways to create a vignette in Photoshop, but there are a few bits of information that are important to know first. A vignette should always be the very last step in the editing process. Deciding to crop the image after adding the vignette, for example, can spoil the result, since the vignette was positioned to highlight the subject as framed. If you are not completely sure that you are absolutely done editing your photo in other ways, you can duplicate the original image layer so that you can go back to it if needed.
The easiest way to create a vignette in Photoshop is to first create a new layer and select the elliptical marquee tool. Create a circle or oval shape in the desired area on your photo.
Next you will need to choose to feather your selection anywhere from 50 to 80 pixels by going to Select>Modify>Feather.
You can also choose the square marquee or even use the lasso tool to create a custom and creative shape. At this point, go to Select>Inverse so that the selection covers everything outside your shape, then fill it with black or white depending on the look you ultimately want to achieve. If it is too dark at this point, you can easily lighten it by lowering the layer opacity.
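For readers who prefer to script the effect, the same recipe (a feathered elliptical selection, inverted and filled with black) can be reproduced outside Photoshop. The sketch below assumes the Python Pillow library and a hypothetical input file named photo.jpg; it is an illustration of the idea, not the article's Photoshop workflow.

```python
from PIL import Image, ImageChops, ImageDraw, ImageFilter

img = Image.open("photo.jpg").convert("RGB")
w, h = img.size

# White ellipse on black: the area that stays bright. Everything outside fades.
mask = Image.new("L", (w, h), 0)
ImageDraw.Draw(mask).ellipse(
    [int(w * 0.1), int(h * 0.1), int(w * 0.9), int(h * 0.9)], fill=255
)

# Blurring the mask plays the role of Photoshop's 50-80 pixel feather.
mask = mask.filter(ImageFilter.GaussianBlur(radius=60))

# Multiplying the photo by the mask darkens the periphery and leaves the centre alone.
vignetted = ImageChops.multiply(img, Image.merge("RGB", (mask, mask, mask)))

# Blend back toward the original to taste, like lowering layer opacity.
result = Image.blend(img, vignetted, alpha=0.8)
result.save("photo_vignette.jpg")
```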
Another easy and quick way to add vignetting to your photos in Photoshop involves the use of the Lens Correction filter.
First you need to open your photo and, as previously mentioned, be sure to finish all photo edits prior to adding the vignetting. Open the Lens Correction filter via the path Filter>Distort>Lens Correction. You will see various options within the Lens Correction menu, but we are going to focus on the Vignette sliders. The Amount slider makes the vignette darker or lighter and the Midpoint slider adjusts the mid-point of the vignette.
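Conceptually, the Amount and Midpoint sliders describe a radial falloff centred on the frame. Photoshop's exact curve isn't documented here, so the little function below is only a sketch of how such controls typically behave: negative amounts darken the corners, and the midpoint sets where the falloff begins to bite.

```python
def vignette_gain(r, amount=-0.6, midpoint=0.5):
    """Brightness multiplier at normalised radius r (0 = centre, 1 = corner).

    amount < 0 darkens the corners, amount > 0 lightens them;
    midpoint sets how far out the falloff starts.
    """
    if r <= midpoint:
        return 1.0
    t = (r - midpoint) / (1.0 - midpoint)  # 0 at the midpoint, 1 at the corner
    return max(0.0, 1.0 + amount * t * t)  # smooth quadratic falloff

for r in (0.0, 0.5, 0.75, 1.0):
    print(f"r={r:.2f}  gain={vignette_gain(r):.2f}")  # 1.00, 1.00, 0.85, 0.40
```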
The first method of vignetting in Photoshop is preferable if you want to place the vignette in a custom position on the photo for creative effect, meaning not oval and dead center. The second method is better for a precise, centered vignette, much like what would be created naturally by a lens.
A Quick Note About Opinions on Vignettes
One thing to be aware of is that there are photographers who object to the use of vignetting. Many see it as a form of cheating and as a way to make a bad photograph better. They argue that the composition itself should be what creates all of the emphasis on the subject, not a post-processing technique. However, the viewer of the photo, unless they possess a critical and trained eye, really doesn't care about why they like a photo. They just do. Ultimately photography is an art form, and artists are as varied as art itself. The most important thing is to produce the images that you want to produce.
Rachael Towne is a photographer, digital artist and creator of photoluminary.com | <urn:uuid:470fb366-7f9e-4b0b-b930-8347e6141651> | CC-MAIN-2013-20 | http://www.lightstalking.com/how-a-vignette-can-make-your-photographs-pop-and-how-to-do-it-in-photoshop | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946789 | 1,123 | 2.6875 | 3 |
In December 1992, Toutatis made a close approach to Earth. At the time, it was an average of about 4 million kilometers (2.5 million miles) from Earth. Images of Toutatis were acquired using radar observations carried out at the Goldstone Deep Space Communications Complex in California's Mojave desert. For most of the work, a 400,000-watt coded radio transmission was beamed at Toutatis from the Goldstone main 70-meter (230-foot) antenna. The echoes, which took as little as 24 seconds to travel to Toutatis and back, were received by the new 34-meter (112-foot) antenna and relayed back to the 70-meter (230-foot) station, where they were decoded and processed into images.
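The 24-second figure is simply the light travel time of the radar pulse out to the asteroid and back. A quick check of the arithmetic is below; the 3.6-million-kilometre value is back-calculated from the 24-second echo, since the article itself quotes only the 4-million-kilometre average.

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def round_trip_seconds(distance_km):
    """Time for a radar pulse to reach the target and return."""
    return 2.0 * distance_km / C_KM_PER_S

print(round_trip_seconds(4.0e6))  # ~26.7 s at the quoted 4-million-km average
print(round_trip_seconds(3.6e6))  # ~24 s, matching the shortest echoes reported
```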
The images of Toutatis reveal two irregularly shaped, cratered objects about 4 and 2.5 kilometers (2.5 and 1.6 miles) in average diameter which are probably in contact with each other. These "contact binaries" may be fairly common since another one, 4769 Castalia, was observed in 1989 when it passed near the Earth. Numerous surface features on Toutatis, including a pair of half-mile-wide craters, side by side, and a series of three prominent ridges -- a type of asteroid mountain range -- are presumed to result from a complex history of impacts.
Toutatis is one of the strangest objects in the solar system, with a highly irregular shape and an extraordinarily complex "tumbling" rotation. Both its shape and rotation are thought to be the outcome of a history of violent collisions. "The vast majority of asteroids, and all the planets, spin about a single axis, like a football thrown in a perfect spiral, but Toutatis tumbles like a flubbed pass," said Dr. Scott Hudson of Washington State University. One consequence of this strange rotation is that Toutatis does not have a fixed north pole like the Earth. Instead, its north pole wanders along a curve on the asteroid about every 5.4 days. "The stars viewed from Toutatis wouldn't repeatedly follow circular paths, but would crisscross the sky, never following the same path twice," Hudson said.
"The motion of the Sun during a Toutatis year, which is about four Earth years, would be even more complex," he continued. "In fact, Toutatis doesn't have anything you could call a 'day.' Its rotation is the result of two different types of motion with periods of 5.4 and 7.3 Earth days, that combine in such a way that Toutatis's orientation with respect to the solar system never repeats."
The rotations of hundreds of asteroids have been studied with optical telescopes. The vast majority of them appear to be in simple rotation with a fixed pole and periods typically between one hour and one day, the scientists said, even though the violent collisions these objects are thought to have experienced would mean that every one of them, at some time in the past, should have been tumbling like Toutatis.
Internal friction causes most asteroids to settle into simple rotation in a relatively short time. However, Toutatis rotates so slowly that this "damping" process would take much longer than the age of the solar system. This means that the rotation of Toutatis is a remarkable, well-preserved relic of the collision-related evolution of an asteroid.
On September 29, 2004, Toutatis will pass by Earth at a range of four times the distance between the Earth and the Moon, the closest approach of any known asteroid or comet between now and 2060. One consequence of the asteroid's frequent close approaches to Earth is that its trajectory more than several centuries from now cannot be predicted accurately. In fact, of all the Earth-crossing asteroids, the orbit of Toutatis is thought to be one of the most chaotic.
High Resolution Goldstone Images
These are 8 of the "high resolution" Goldstone images that are being used to produce a higher-resolution 3D model of Toutatis. (From Ostro et al., Science 270:80-83, 1995--© Copyright 1995 by the AAAS)
High Resolution Image
This is a close up of one of the high-resolution images of Toutatis. (From Ostro et al., Science 270:80-83, 1995--© Copyright 1995 by the AAAS)
Topographic Map of Toutatis
This is a topographic map of Toutatis. It is based upon the shape model of Phil Stooke which he produced using rendered images of the shape model by R.S. Hudson and S.J. Ostro and modified by him to fit the radar images. As with all maps, it is the cartographer's interpretation; not all features are necessarily certain given the limited data available. This interpretation stretches the data as far as possible. (Courtesy A. Tayfun Oner)
Shaded Relief Map of Asteroid 4179 Toutatis
This is a shaded relief map of asteroid 4179 Toutatis. As with all maps, it is the cartographer's interpretation and not all features are necessarily certain given the limited data available. This interpretation stretches the data as far as is feasible. (Courtesy Phil Stooke, NSSDC, and NASA)
4 Views of Toutatis
This image shows four frames of asteroid Toutatis that were obtained on December 8-10 and 13, 1992. On each day, the asteroid was in a different orientation with respect to Earth. One large crater can be seen in the December 9 image (upper right) that measures about 700 meters (2,300 feet) in diameter. (Courtesy JPL/NASA)
Computer Model of Toutatis
These images show a computer model of the Earth-orbit-crossing asteroid Toutatis. These views of the asteroid show shallow craters, linear ridges, and a deep topographic "neck" whose geologic origin is not known. It may have been sculpted by impacts into a single, coherent body, or Toutatis might actually consist of two separate objects that came together in a gentle collision. (Courtesy Scott Hudson, Washington State University)
Spin State of Toutatis
This image shows the shape and non-principal-axis spin state of asteroid 4179 Toutatis rendered at a particular instant. The red, green, and blue axes are the principal axes of inertia; the magenta axis is the angular momentum vector; the yellow axis is the spin vector. If a flashlamp were attached to the short axis of inertia (the red axis) and flashed every 15 minutes for a month, it would trace out the intricate path indicated by the small spheres stacked end-to-end. If this process were continued forever, the path would never repeat. Toutatis's spin state differs radically from those of the vast majority of solar system bodies that have been studied. (© Copyright 1995 by the AAAS)
Spin State of Toutatis on Nine Successive Days
This image shows the non-principal-axis spin state of asteroid 4179 Toutatis at one-day intervals (from left to right, top to bottom). The red, green, and blue axes are the principal axes of inertia; the magenta axis is the angular momentum vector; the yellow axis is the spin vector. Toutatis does not spin about a single axis. Instead, its spin vector traces a curve around the asteroid's surface once every 5.41 days. During this time the object rotates once about its long axis, and every 7.35 days, on average, the long axis precesses about the angular momentum vector. The combination of these two motions with different periods give Toutatis its bizarre "tumbling" rotation. (Courtesy Scott Hudson, Washington State University) | <urn:uuid:5e6b2814-db1f-46bc-9374-445f6d3a9edb> | CC-MAIN-2013-20 | http://www.solarviews.com/eng/toutatis.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00002-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.940871 | 1,623 | 3.78125 | 4 |
Culinary arts is the art of preparing and cooking foods. The word "culinary" is defined as something related to, or connected with, cooking. A culinarian is a person working in the culinary arts. A culinarian working in restaurants is commonly known as a cook or a chef. Culinary artists are responsible for skilfully preparing meals that are as pleasing to the palate as to the eye. They are required to have a knowledge of the science of food and an understanding of diet and nutrition. They work primarily in restaurants, delicatessens, hospitals and other institutions. Kitchen conditions vary depending on the type of business: restaurant, nursing home, etc. The table arts, or the art of dining, are sometimes also called the culinary arts.
Careers in culinary arts
Related careers
Below is a list of the wide variety of culinary arts occupations.
- Consulting and Design Specialists – Work with restaurant owners in developing menus, the layout and design of dining rooms, and service protocols.
- Dining Room Service – Manage a restaurant, cafeterias, clubs, etc. Diplomas and degree programs are offered in restaurant management by colleges around the world.
- Food and Beverage Controller – Purchase and source ingredients in large hotels as well as manage the stores and stock control.
- Entrepreneurship – Deepen and invest in businesses, such as bakeries, restaurants, or specialty foods (such as chocolates, cheese, etc.).
- Food and Beverage Managers – Manage all food and beverage outlets in hotels and other large establishments.
- Food Stylists and Photographers – Work with magazines, books, catalogs and other media to make food visually appealing.
- Food Writers and Food Critics – Communicate with the public on food trends, chefs and restaurants though newspapers, magazines, blogs, and books. Notables in this field include Julia Child, Craig Claiborne and James Beard.
- Research and Development Kitchens – Develop new products for commercial manufacturers and may also work in test kitchens for publications, restaurant chains, grocery chains, or others.
- Sales – Introduce chefs and business owners to new products and equipment relevant to food production and service.
- Instructors – Teach aspects of culinary arts in high school, vocational schools, colleges, recreational programs, and for specialty businesses (for example, the professional and recreational courses in baking at King Arthur Flour).
Occupational outlook
The occupational outlook for chefs, restaurant managers, dietitians, and nutritionists is fairly good, with "as fast as the average" growth. Increasingly, a college education with formal qualifications is required for success in this field. The culinary industry continues to be male-dominated, with the latest statistics showing only 19% of all 'chefs and head cooks' being female.
Notable culinary colleges around the world
- JaganNath Institute of Management Sciences, Rohini, Delhi, India
- College of Tourism & Hotel Management, Lahore, Punjab, Pakistan
- Culinary Academy of India, Hyderabad, Andhra Pradesh, India
- ITM School of Culinary Arts, Mumbai, Maharashtra, India
- Welcomgroup Graduate School of Hotel Administration, Manipal, Karnataka, India
- Institute of Technical Education (College West) – School of Hospitality, Singapore
- ITM (Institute of Technology and Management) – Institute of Hotel Management, Bangalore, Karnataka, India
- Apicius International School of Hospitality, Florence, Italy
- Le Cordon Bleu, Paris, France
- École des trois gourmandes, Paris, France
- HRC Culinary Academy, Bulgaria
- Institut Paul Bocuse, Ecully, France
- Mutfak Sanatlari Akademisi, Istanbul, Turkey
- School of Culinary Arts and Food Technology, DIT, Dublin, Ireland
- Scuola di Arte Culinaria Cordon Bleu, Florence, Italy
- Westminster Kingsway College (London)
- University of West London (London)
- School of Restaurant and Culinary Arts, Umeå University (Sweden)
- Camosun College (Victoria, BC)
- Canadore College (North Bay, ON)
- The Culinary Institute of Canada (Charlottetown, PE)
- Georgian College (Owen Sound, ON)
- George Brown College (Toronto, ON)
- Humber College (Toronto, ON)
- Institut de tourisme et d'hôtellerie du Québec (Montreal, QC)
- Niagara Culinary Institute (Niagara College, Niagara-on-the-Lake, ON)
- Northwest Culinary Academy of Vancouver (Vancouver, BC)
- Nova Scotia Community College (Nova Scotia)
- Pacific Institute of Culinary Arts (Vancouver, BC)
- Vancouver Community College (Vancouver, BC)
- Culinary Institute of Vancouver Island (Nanaimo, BC)
- Sault College (Sault Ste. Marie, ON)
- Baltimore International College, Baltimore, Maryland
- California Culinary Academy, San Francisco, California
- California School of Culinary Arts, Pasadena, California
- California State, Pomona, California
- California State University Hospitality Management Education Initiative
- Chattahoochee Technical College in Marietta, Georgia
- Cooking and Hospitality Institute of Chicago
- Coosa Valley Technical College, Rome, Georgia
- Culinard, the Culinary Institute of Virginia College
- Cypress Community College Hotel, Restaurant Management, & Culinary Arts Program in Anaheim
- Classic Cooking Academy, Scottsdale, Arizona
- Center for Kosher Culinary Arts, Brooklyn, New York
- Culinary Institute of America in Hyde Park, New York
- Culinary Institute of America at Greystone in St. Helena, California
- The Culinary Institute of Charleston, South Carolina
- L'Ecole Culinaire in Saint Louis, Missouri and Memphis, Tennessee
- Glendale Community College (California)
- International Culinary Centers in NY and CA which include:
- Institute for the Culinary Arts at Metropolitan Community College, Omaha, Nebraska
- Johnson & Wales University, College of Culinary Arts
- Kendall College in Chicago, Illinois
- Lincoln College of Technology
- Manchester Community College in Connecticut
- New England Culinary Institute in Vermont
- Orlando Culinary Academy
- Pennsylvania Culinary Institute
- The Restaurant School at Walnut Hill College, Philadelphia, Pennsylvania,
- Scottsdale Culinary Institute
- Secchia Institute for Culinary Education: Grand Rapids Community College, Grand Rapids, MI
- The Southeast Culinary and Hospitality College in Bristol, Virginia
- Sullivan University Louisville, Kentucky
- Los Angeles Trade–Technical College
- Texas Culinary Academy
- Central New Mexico Community College, Albuquerque, NM
- AUT University (Auckland University of Technology)
- MIT (Manukau Institute of Technology)
- Wintec, Waikato Institute of Technology
Further reading
- Beal, Eileen. Choosing a career in the restaurant industry. New York: Rosen Pub. Group, 1997.
- Institute for Research. Careers and jobs in the restaurant business: jobs, management, ownership. Chicago: The Institute, 1977.