Taking Play Seriously
By ROBIN MARANTZ HENIG
Published: February 17, 2008

On a drizzly Tuesday night in late January, 200 people came out to hear a psychiatrist talk rhapsodically about play -- not just the intense, joyous play of children, but play for all people, at all ages, at all times. (All species too; the lecture featured touching photos of a polar bear and a husky engaging playfully at a snowy outpost in northern Canada.) Stuart Brown, president of the National Institute for Play, was speaking at the New York Public Library's main branch on 42nd Street. He created the institute in 1996, after more than 20 years of psychiatric practice and research persuaded him of the dangerous long-term consequences of play deprivation. In a sold-out talk at the library, he and Krista Tippett, host of the public-radio program ''Speaking of Faith,'' discussed the biological and spiritual underpinnings of play. Brown called play part of the ''developmental sequencing of becoming a human primate. If you look at what produces learning and memory and well-being, play is as fundamental as any other aspect of life, including sleep and dreams.''

The message seemed to resonate with audience members, who asked anxious questions about what seemed to be the loss of play in their children's lives. Their concern came, no doubt, from the recent deluge of eulogies to play. Educators fret that school officials are hacking away at recess to make room for an increasingly crammed curriculum. Psychologists complain that overscheduled kids have no time left for the real business of childhood: idle, creative, unstructured free play. Public health officials link insufficient playtime to a rise in childhood obesity. Parents bemoan the fact that kids don't play the way they themselves did -- or think they did. And everyone seems to worry that without the chance to play stickball or hopscotch out on the street, to play with dolls on the kitchen floor or climb trees in the woods, today's children are missing out on something essential.

The success of ''The Dangerous Book for Boys'' -- which has been on the best-seller list for the last nine months -- and its step-by-step instructions for activities like folding paper airplanes is testament to the generalized longing for play's good old days. So were the questions after Stuart Brown's library talk; one woman asked how her children will learn trust, empathy and social skills when their most frequent playing is done online. Brown told her that while video games do have some play value, a true sense of ''interpersonal nuance'' can be achieved only by a child who is engaging all five senses by playing in the three-dimensional world.

This is part of a larger conversation Americans are having about play. Parents bobble between a nostalgia-infused yearning for their children to play and fear that time spent playing is time lost to more practical pursuits. Alarming headlines about U.S. students falling behind other countries in science and math, combined with the ever-more-intense competition to get kids into college, make parents rush to sign up their children for piano lessons and test-prep courses instead of just leaving them to improvise on their own; playtime versus résumé building. Discussions about play force us to reckon with our underlying ideas about childhood, sex differences, creativity and success. Do boys play differently than girls? Are children being damaged by staring at computer screens and video games?
Are they missing something when fantasy play is populated with characters from Hollywood's imagination and not their own? Most of these issues are too vast to be addressed by a single field of study (let alone a magazine article). But the growing science of play does have much to add to the conversation. Armed with research grounded in evolutionary biology and experimental neuroscience, some scientists have shown themselves eager -- at times perhaps a little too eager -- to promote a scientific argument for play. They have spent the past few decades learning how and why play evolved in animals, generating insights that can inform our understanding of its evolution in humans too. They are studying, from an evolutionary perspective, to what extent play is a luxury that can be dispensed with when there are too many other competing claims on the growing brain, and to what extent it is central to how that brain grows in the first place. Scientists who study play, in animals and humans alike, are developing a consensus view that play is something more than a way for restless kids to work off steam; more than a way for chubby kids to burn off calories; more than a frivolous luxury. Play, in their view, is a central part of neurological growth and development -- one important way that children build complex, skilled, responsive, socially adept and cognitively flexible brains. Their work still leaves some questions unanswered, including questions about play's darker, more ambiguous side: is there really an evolutionary or developmental need for dangerous games, say, or for the meanness and hurt feelings that seem to attend so much child's play? Answering these and other questions could help us understand what might be lost if children play less.
Source: http://query.nytimes.com/gst/fullpage.html?res=9404E7DA1339F934A25751C0A96E9C8B63&scp=2&sq=taking%20play%20seriously&st=cse
Take the complexity of technology and stir in the complexity of the legal system and what do you get? Software licenses! If you've ever attempted to read one you know how true this is, but you have to know a little about software licensing even if you can't parse all of the fine print.

By: Chris Peters
March 10, 2009

A software license is an agreement between you and the owner of a program which lets you perform certain activities which would otherwise constitute an infringement under copyright law. The software license usually answers questions about what you are and are not allowed to do with the program. The price of the software and the licensing fees, if any, are sometimes discussed in the licensing agreement, but usually they're described elsewhere.

If you read the definitions below and you're still scratching your head, check out Categories of Free and Non-Free Software, which includes a helpful diagram.

Free vs Proprietary: When you hear the phrase "free software" or "free software license," "free" is referring to your rights and permissions ("free as in freedom" or "free as in free speech"). In other words, a free software license gives you more rights than a proprietary license. You can usually copy, modify, and redistribute free software without paying a fee or obtaining permission from the developers and distributors. In most cases "free software" won't cost you anything, but that's not always the case – in this instance the word free is making no assertion whatsoever about the price of the software. Proprietary software puts more restrictions and limits on your legal permission to copy, modify, and distribute the program.

Free, Open-Source or FOSS? In everyday conversation, there's not much difference between "free software," "open source software," and "FOSS (Free and Open-Source Software)." In other words, you'll hear these terms used interchangeably, and the proponents of free software and the supporters of open-source software agree with one another on most issues. However, the official definition of free software differs somewhat from the official definition of open-source software, and the philosophies underlying those definitions differ as well. For a short description of the difference, read Live and Let License. For a longer discussion from the "free software" side, read Why Open Source Misses the Point of Free Software. For the "open-source" perspective, read Why Free Software is Too Ambiguous.

Public domain and copyleft. These terms refer to different categories of free, unrestricted licensing. A copyleft license allows you all the freedoms of a free software license, but adds one restriction. Under a copyleft license, you have to release any modifications under the same terms as the original software. In effect, this blocks companies and developers who want to alter free software and then make their altered version proprietary. In practice, almost all free and open-source software is also copylefted. However, technically you can release "free software" that isn't copylefted. For example, if you developed software and released it under a "public domain" license, it would qualify as free software, but it isn't copyleft. In effect, when you release something into the public domain, you give up all copyrights and rights of ownership.

Shareware and freeware. These terms don't really refer to licensing, and they're confusing in light of the discussion of free software above.
Freeware refers to software (usually small utilities at sites such as Tucows.com) that you can download and install without paying. However, you don't have the right to view the source code, and you may not have the right to copy and redistribute the software. In other words, freeware is proprietary software. Shareware is even more restrictive. In effect, shareware is trial software. You can use it for a limited amount of time (usually 30 or 60 days) and then you're expected to pay to continue using it.

End User Licensing Agreement (EULA). When you acquire software yourself, directly from a vendor or retailer, or directly from the vendor's Web site, you usually have to indicate by clicking a box that you accept the licensing terms. This "click-through" agreement that no one ever reads is commonly known as a EULA. If you negotiate a large purchase of software with a company, and you sign a contract to seal the agreement, that contract usually replaces or supersedes the EULA.

Most major vendors of proprietary software offer some type of bulk purchasing and volume licensing mechanism. The terms vary widely, but if you order enough software to qualify, the benefits in terms of cost and convenience are significant. Also, not-for-profits sometimes qualify for it with very small initial purchases. Some of the benefits of volume licensing include:

- Lower cost. As with most products, software costs less when you buy more of it.
- Ease of installation. Without volume licenses, you usually have to enter a separate activation code (also known as a product key or license key) for each installed copy of the program. On the other hand, volume licenses provide you with a single, organisation-wide activation code, which makes it much easier to find when you need to reinstall the software.
- Easier tracking of licenses. Keeping track of how many licenses you own, and how many copies you've actually installed, is a tedious, difficult task. Many volume licensing programs provide an online account which is automatically updated when you obtain or activate a copy of that company's software. These accounts can also coordinate licensing across multiple offices within your organisation.

To learn more about volume licensing from a particular vendor, check out some of the resources below:

Qualified not-for-profits and libraries can receive donated volume licenses for Microsoft products through TechSoup. For more information, check out our introduction to the Microsoft Software Donation Program, and the Microsoft Software Donation Program FAQ. For general information about the volume licensing of Microsoft software, see Volume Licensing Overview. If you get Microsoft software from TechSoup or other software distributors who work with not-for-profits, you may need to go to the eOpen Web site to locate your Volume license keys. For more information, check out the TechSoup Donation Recipient's Guide to the Microsoft eOpen Web Site.

Always check TechSoup Stock first to see if there's a volume licensing donation program for the software you're interested in. If TechSoup doesn't offer that product or if you need more copies than you can find at TechSoup, search for "volume licensing not-for-profits software" or just "not-for-profits software." For example, when we have an inventory of Adobe products, qualifying and eligible not-for-profits can obtain four individual products or one copy of Creative Suite 4 through TechSoup.
If we're out of stock, or you've used up your annual Adobe donation, you can also check TechSoup's special Adobe donation program and also Adobe Solutions for Nonprofits for other discounts available to not-for-profits. For more software-hunting tips, see A Quick Guide to Discounted Software Programs.

Pay close attention to the options and licensing requirements when you acquire server-based software. You might need two different types of license – one for the server software itself, and a set of licenses for all the "clients" accessing the software. Depending on the vendor and the licensing scenario, "client" can refer either to the end users themselves (for example, employees, contractors, clients, and anyone else who uses the software in question) or their computing devices (for example, laptops, desktop computers, smartphones, PDAs, etc.). We'll focus on Microsoft server products, but similar issues can arise with other server applications.

Over the years, Microsoft has released hundreds of server-based applications, and the licensing terms are slightly different for each one. Fortunately, there are common license types and licensing structures across different products. In other words, while a User CAL (Client Access License) for Windows Server is distinct from a User CAL for SharePoint Server, the underlying terms and rights are very similar. The TechSoup product pages for Microsoft software do a good job of describing the differences between products, so we'll focus on the common threads in this article. Moreover, Microsoft often lets you license a single server application in more than one way, depending on the needs of your organisation. This allows you the flexibility to choose the licenses that best reflect your organisation's usage patterns and thereby cost you the least amount of money. For example, for Windows Server and other products you can acquire licenses on a per-user basis (for example, User CALs) or per-device basis (for example, Device CALs).

The license required to install and run most server applications usually comes bundled with the software itself. So you can install and run most applications "out of the box," as long as you have the right number of client licenses (see the section below for more on that). However, when you're running certain server products on a computer with multiple processors, you may need to get additional licenses. For example, if you run Windows Server 2008 DataCenter edition on a server with two processors, you need a separate license for each processor. SQL Server 2008 works the same way. This type of license is referred to as a processor license. Generally you don't need client licenses for any application that's licensed this way.

Client Licenses for Internal Users

Many Microsoft products, including Windows Server 2003 and Windows Server 2008, require client access licenses for all authenticated internal users (for example, employees, contractors, volunteers, etc.). On the other hand, SQL Server 2008 and other products don't require any client licenses. Read the product description at CTXchange if you're looking for the details about licensing a particular application.

User CALs: User CALs allow each user access to all the instances of a particular server product in an organisation, no matter which device they use to gain access.
In other words, if you run five copies of Windows Server 2008 on five separate servers, you only need one User CAL for each person in your organisation who accesses those servers (or any software installed on those servers), whether they access a single server, all five servers, or some number in between. Each user with a single CAL assigned to them can access the server software from as many devices as they want (for example, desktop computers, laptops, smartphones, etc.). User CALs are a popular licensing option.

Device CALs: Device CALs allow access to all instances of a particular server application from a single device (for example, a desktop computer, a laptop, etc.) in your organisation. Device CALs only make sense when multiple employees use the same computer. For example, in 24-hour call centres different employees on different shifts often use the same machine, so Device CALs make sense in this situation.

Choosing a licensing mode for your Windows Server CALs: With Windows Server 2003 and Windows Server 2008, you use a CAL (either a User CAL or a Device CAL) in one of two licensing modes: per seat or per server. You make this decision when you're installing your Windows Server products, not when you acquire the CALs. The CALs themselves don't have any mode designation, so you can use either a User CAL or a Device CAL in either mode. Per seat mode is the default mode, and the one used most frequently. The description of User CALs and Device CALs above describes the typical per seat mode. In "per server" mode, Windows treats each license as a "simultaneous connection." In other words, if you have 40 CALs, Windows will let 40 authenticated users have access. The 41st user will be denied access. However, in per server mode, each CAL is tied to a particular instance of Windows Server, and you have to acquire a new set of licenses for each new server you build that runs Windows. Therefore, per server mode works for some small organisations with one or two servers and limited access requirements.

You don't "install" client licenses the way you install software. There are ways to automate the tracking of software licenses indirectly, but the server software can't refuse access to a user or device on licensing grounds. The licenses don't leave any "digital footprint" that the server software can read. An exception to this occurs when you license Windows Server in per server mode. In this case, if you have 50 licenses, the 51st authenticated user will be denied access (though anonymous users can still access services).

Some key points to remember about client licensing: The licensing scenarios described in this section arise less frequently, and are too complex to cover completely in this article, so they're described briefly below along with more comprehensive resources.

You don't need client licenses for anonymous, unauthenticated external users. In other words, if someone accesses your Web site, and that site runs on Internet Information Server (IIS), Microsoft's Web serving software, you don't need a client license for any of those anonymous users. If you have any authenticated external users who access services on your Windows-based servers, you can obtain CALs to cover their licensing requirements. However, the External Connector License (ECL) is a second option in this scenario. The ECL covers all use by authenticated external users, but it's a lot more expensive than a CAL, so only get one if you'll have a lot of external users; a rough cost sketch follows.
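To make that tradeoff concrete, here is a minimal sketch (not an official licensing calculator) of how an organisation might compare the options. The prices are placeholders: the £1 and £76 defaults simply echo the administrative fees quoted in the next paragraph, and real licensing costs vary by product, vendor, and programme.

```python
# Hypothetical comparison of Microsoft-style client licensing costs.
# All prices are placeholders; the 1.0/76.0 defaults echo the £1 User CAL
# and £76 ECL admin fees the article cites for donated Windows Server 2008.

def cheapest_internal_option(num_users, num_devices,
                             user_cal_price=1.0, device_cal_price=1.0):
    """Per seat mode: license every internal user, or every shared device."""
    user_cost = num_users * user_cal_price        # one User CAL per person
    device_cost = num_devices * device_cal_price  # one Device CAL per machine
    if user_cost <= device_cost:
        return "User CALs", user_cost
    return "Device CALs", device_cost

def external_option(num_external_users, user_cal_price=1.0, ecl_price=76.0):
    """Authenticated external users: individual User CALs vs one flat-fee ECL."""
    cal_cost = num_external_users * user_cal_price
    if cal_cost <= ecl_price:
        return "User CALs", cal_cost
    return "External Connector License", ecl_price

# A 24-hour call centre: 90 staff sharing 30 machines -> Device CALs win.
print(cheapest_internal_option(num_users=90, num_devices=30))
# A handful of external users -> CALs; hundreds -> the flat-fee ECL wins.
print(external_option(5))    # ('User CALs', 5.0)
print(external_option(500))  # ('External Connector License', 76.0)
```

On these assumptions the ECL only pays for itself once the number of external users passes the breakeven point (76 users at these prices), which matches the advice above to buy User CALs for a handful of external users.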
For example, even if you get your licenses through the CTXchange donation program, an ECL for Windows Server 2008 has a £76 administrative fee, while a User CAL for Windows Server 2008 carries a £1 admin fee. If only a handful of external users access your Windows servers, you're better off acquiring User CALs. Also, an ECL only applies to external users and devices. In other words, if you have an ECL, you still have to get a CAL for all employees and contractors.

Even though Terminal Services (TS) is built into Windows Server 2003 and 2008, you need to get a separate TS CAL for each client (i.e. each user or each device) that will access Terminal Services in your organisation. This TS license is in addition to your Windows Server CALs.

Microsoft's System Centre products (a line of enterprise-level administrative software packages) use a special type of license known as a management license (ML). Applications that use this type of licensing include System Center Configuration Manager 2007 and System Center Operations Manager 2007. Any desktop or workstation managed by one of these applications needs a client management license. Any server managed by one of these applications requires a server management license, and there are two types of server management licenses – standard and enterprise. You need one or the other but not both. There are also special licensing requirements if you're managing virtual instances of Windows operating systems. For more information, see TechSoup's Guide to System Center Products and Licensing and Microsoft's white paper on Systems Center licensing.

Some Microsoft server products have two client licensing modes, standard and enterprise. As you might imagine, an Enterprise CAL grants access to more advanced features of a product. Furthermore, with some products, such as Microsoft Exchange, the licenses are additive. In other words, a user needs both a Standard CAL AND an Enterprise CAL in order to access the advanced features. See Exchange Server 2007 Editions and Client Access Licenses for more information.

With virtualisation technologies, multiple operating systems can run simultaneously on a single physical server. Every time you install a Microsoft application, whether on a physical hardware system or a virtual hardware system, you create an "instance" of that application. The number of "instances" of a particular application that you can run using a single license varies from product to product. For more information see the Volume Licensing Briefs, Microsoft Licensing for Virtualization and the Windows Server Virtualization Calculator. For TechSoup Stock products, see the product description for more information.

There are a lot of nuances to Microsoft licensing, and also a lot of excellent resources to help you understand different scenarios.

About the Author: Chris is a former technology writer and technology analyst for TechSoup for Libraries, which aims to provide IT management guidance to libraries. His previous experience includes working at Washington State Library as a technology consultant and technology trainer, and at the Bill and Melinda Gates Foundation as a technology trainer and tech support analyst. He received his M.L.S. from the University of Michigan in 1997. Originally posted here.

Copyright © 2009 CompuMentor. This work is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.
Source: http://www.ctt.org/resource_centre/getting_started/learning/understanding_licenses
Hold the salt: UCLA engineers develop revolutionary new desalination membrane
Process uses atmospheric pressure plasma to create filtering 'brush layer'
Desalination can become more economical and used as a viable alternate water resource.
By Wileen Wong Kromhout
Originally published in UCLA Newsroom

Researchers from the UCLA Henry Samueli School of Engineering and Applied Science have unveiled a new class of reverse-osmosis membranes for desalination that resist the clogging which typically occurs when seawater, brackish water and waste water are purified. The highly permeable, surface-structured membrane can easily be incorporated into today's commercial production system, the researchers say, and could help to significantly reduce desalination operating costs. Their findings appear in the current issue of the Journal of Materials Chemistry.

Reverse-osmosis (RO) desalination uses high pressure to force polluted water through the pores of a membrane. While water molecules pass through the pores, mineral salt ions, bacteria and other impurities cannot. Over time, these particles build up on the membrane's surface, leading to clogging and membrane damage. This scaling and fouling places higher energy demands on the pumping system and necessitates costly cleanup and membrane replacement. The new UCLA membrane's novel surface topography and chemistry allow it to avoid such drawbacks.

"Besides possessing high water permeability, the new membrane also shows high rejection characteristics and long-term stability," said Nancy H. Lin, a UCLA Engineering senior researcher and the study's lead author. "Structuring the membrane surface does not require a long reaction time, high reaction temperature or the use of a vacuum chamber. The anti-scaling property, which can increase membrane life and decrease operational costs, is superior to existing commercial membranes."

The new membrane was synthesized through a three-step process. First, researchers synthesized a polyamide thin-film composite membrane using conventional interfacial polymerization. Next, they activated the polyamide surface with atmospheric pressure plasma to create active sites on the surface. Finally, these active sites were used to initiate a graft polymerization reaction with a monomer solution to create a polymer "brush layer" on the polyamide surface. This graft polymerization is carried out for a specific period of time at a specific temperature in order to control the brush layer thickness and topography.

"In the early years, surface plasma treatment could only be accomplished in a vacuum chamber," said Yoram Cohen, UCLA professor of chemical and biomolecular engineering and a corresponding author of the study. "It wasn't practical for large-scale commercialization because thousands of meters of membranes could not be synthesized in a vacuum chamber. It's too costly. But now, with the advent of atmospheric pressure plasma, we don't even need to initiate the reaction chemically. It's as simple as brushing the surface with plasma, and it can be done for almost any surface."

In this new membrane, the polymer chains of the tethered brush layer are in constant motion. The chains are chemically anchored to the surface and are thus more thermally stable, relative to physically coated polymer films. Water flow also adds to the brush layer's movement, making it extremely difficult for bacteria and other colloidal matter to anchor to the surface of the membrane.
"If you've ever snorkeled, you'll know that sea kelp move back and forth with the current or water flow," Cohen said. "So imagine that you have this varied structure with continuous movement. Protein or bacteria need to be able to anchor to multiple spots on the membrane to attach themselves to the surface — a task which is extremely difficult to attain due to the constant motion of the brush layer. The polymer chains protect and screen the membrane surface underneath." Another factor in preventing adhesion is the surface charge of the membrane. Cohen's team is able to choose the chemistry of the brush layer to impart the desired surface charge, enabling the membrane to repel molecules of an opposite charge. The team's next step is to expand the membrane synthesis into a much larger, continuous process and to optimize the new membrane's performance for different water sources. "We want to be able to narrow down and create a membrane selection system for different water sources that have different fouling tendencies," Lin said. "With such knowledge, one can optimize the membrane surface properties with different polymer brush layers to delay or prevent the onset of membrane fouling and scaling. "The cost of desalination will therefore decrease when we reduce the cost of chemicals [used for membrane cleaning], as well as process operation [for membrane replacement]. Desalination can become more economical and used as a viable alternate water resource." Cohen's team, in collaboration with the UCLA Water Technology Research (WaTeR) Center, is currently carrying out specific studies to test the performance of the new membrane's fouling properties under field conditions. "We work directly with industry and water agencies on everything that we're doing here in water technology," Cohen said. "The reason for this is simple: If we are to accelerate the transfer of knowledge technology from the university to the real world, where those solutions are needed, we have to make sure we address the real issues. This also provides our students with a tremendous opportunity to work with industry, government and local agencies." A paper providing a preliminary introduction to the new membrane also appeared in the Journal of Membrane Science last month. Published: Thursday, April 08, 2010
Source: http://www.environment.ucla.edu/water/news/article.asp?parentid=6178
This section provides primary sources that document how Indian and European men and one English and one Indian woman have described the practice of sati, or the self-immolation of Hindu widows. Although they are all critical of self-immolation, Francois Bernier, Fanny Parks, Lord William Bentinck, and Rev. England present four different European perspectives on the practice of sati and what it represents about Indian culture in general, and the Hindu religion and Hindu women in particular. They also indicate increasing negativism in European attitudes toward India and the Hindu religion in general. It would be useful to compare the attitudes of Bentinck and England as representing the secular and sacred aspects of British criticism of sati. A comparison of Bentinck’s minute with the subsequent legislation also reveals differences in tone between private and public documents of colonial officials. Finally, a comparison between Fanny Parks and the three men should raise discussion on whether or not the gender and social status of the writer made any difference in his or her appraisal of the practice of self-immolation.

The three sources by Indian men and one by an Indian woman illustrate the diversity of their attitudes toward sati. The Marathi source illuminates the material concerns of relatives of the Hindu widow who is urged to adopt a son, so as to keep a potentially lucrative office within the extended family. These men are willing to undertake intense and delicate negotiations to secure a suitably related male child who could be adopted. This letter also documents that adoption was a legitimate practice among Hindus, and that Hindu women as well as men could adopt an heir. Ram Mohan Roy’s argument illustrates a rationalist effort to reform Hindu customs with the assistance of British legislation. Roy illustrates one of the many ways in which Indians collaborated with British political power in order to secure change within Indian society. He also enabled the British to counter the arguments of orthodox Hindus about the scriptural basis for the legitimacy of self-immolation of Hindu widows. The petition of the orthodox Hindu community in Calcutta, the capital of the Company’s territories in India, documents an early effort of Indians to keep the British colonial power from legislating on matters pertaining to the private sphere of Indian family life. Finally, Pandita Ramabai reflects the ways in which ancient Hindu scriptures and their interpretation continued to dominate debate. Students should consider how Ramabai’s effort to raise funds for her future work among child widows in India might have influenced her discussion of sati.

Two key issues should be emphasized. First, both Indian supporters and European and Indian opponents of the practice of self-immolation argue their positions on the bodies of Hindu women, and all the men involved appeal to Hindu scriptures to legitimate their support or opposition. Second, the voices of Indian women were filtered through the sieve of Indian and European men and a very few British women until the late 19th century.

- How do the written and visual sources portray the Hindu women who commit self-immolation? Possible aspects range from physical appearance and age, motivation, evidence of physical pain (that even the most devoted woman must suffer while burning to death), to any evidence of the agency or autonomy of the Hindu widow in deciding to commit sati. Are any differences discernible, and if so, do they seem related to the gender or nationality of the observer or the time period in which they were observed?
- How are the brahman priests who preside at the self-immolation portrayed in Indian and European sources? What might account for any similarities and differences?
- What reasons are used to deter Hindu widows from committing sati? What do these reasons reveal about the nature of family life in India and the relationships between men and women?
- What do the reasons that orthodox Hindus provide to European observers and to Indian reformers reveal about the significance of sati for the practice of the Hindu religion? What do their arguments reveal about orthodox Hindu attitudes toward women and the family?
- How are Hindu scriptures used in various ways in the debates before and after the prohibition of sati?
- What is the tone of the petition from 800 Hindus to their British governor? Whom do they claim to represent? What is their justification for the ritual of self-immolation? What is their attitude toward the Mughal empire whose Muslim rulers had preceded the British? How do the petitioners characterize those Hindus who support the prohibition on sati? How do the petitioners envision the proper relationship between the state and the practice of religion among its subjects?
- Who or what factors do European observers, British officials, and Indian opponents of sati hold to be responsible for the continuance of the practice of sati?
- What were the reasons that widows gave for committing sati? Were they religious, social or material motives? What is the evidence that the widows were voluntarily committing sati before 1829? What reasons did the opponents of sati give for the decisions of widows to commit self-immolation? What reasons did opponents give for widows who tried to escape from their husbands’ pyres?
- What are the reasons that Lord Bentinck and his Executive Council cite for their decision to declare the practice of sati illegal? Are the arguments similar to or different from his arguments in his minute a month earlier? What do these reasons reveal about British attitudes toward their role or mission in India? Do they use any of the arguments cited by Ram Mohan Roy or Pandita Ramabai?
- What do these sources, both those who oppose sati and those who advocate it, reveal about their attitudes to the Hindu religion in particular and Indian culture in general?
Source: http://chnm.gmu.edu/wwh/modules/lesson5/lesson5.php?menu=1&c=strategies&s=0
March 30, 2012

CDC Releases New Report on Autism Prevalence in U.S.

Researchers at the Johns Hopkins Bloomberg School of Public Health contributed to a new Centers for Disease Control and Prevention (CDC) report that estimates the prevalence of Autism Spectrum Disorders (ASD) as affecting 1 in 88 U.S. children overall, and 1 in 54 boys. This is the third such report by the CDC’s Autism and Developmental Disabilities Monitoring Network (ADDM), which has used the same surveillance methods for more than a decade. Previous ADDM reports estimated the rate of ASDs at 1 in 110 children in the 2009 report that looked at data from 2006, and 1 in 150 children in the 2007 report, which covered data from 2002. The current prevalence estimate, which analyzed data from 2008, represents a 78 percent increase since 2002, and a 23 percent increase since 2006.

ASDs include diagnoses of autistic disorder, Asperger disorder, and Pervasive Developmental Disorder-Not Otherwise Specified (PDD-NOS). ASDs encompass a wide spectrum of conditions, all of which affect communication, social and behavioral skills. The causes of these developmental disorders are not completely understood, although studies show that both environment and genetics play an important and complex role. There is no known cure for ASDs, but studies have shown that behavioral interventions, particularly those begun early in a child’s life, can greatly improve learning and skills.

The latest CDC report, “Prevalence of Autism Spectrum Disorders – Autism and Developmental Disabilities Monitoring Network, 14 Sites, United States, 2008,” provides autism prevalence estimates from different areas of the United States, including Maryland. The purpose of the report is to provide high-quality data on the extent and distribution of ASDs in the U.S. population, to promote better planning for health and educational services, and to inform the further development of research on the causes, progression, and treatments.

“We continue observing increases in prevalence since the inception of the project in 2000,” said Li-Ching Lee, PhD, a psychiatric epidemiologist with the Bloomberg School’s Departments of Epidemiology and Mental Health and the principal investigator for the prevalence project’s Maryland site. “In Maryland, we found 27 percent of children with ASDs were never diagnosed by professionals. So, we know there are more children out there and we may see the increase continue in coming years.”

The new report, which focuses on 8-year-olds because that is an age where most children with ASD have been identified, shows that the number of those affected varies widely among the 14 participating states, with Utah having the highest overall rate (1 in 47) and Alabama the lowest (1 in 210). Across all sites, nearly five times as many boys as girls are affected. Additionally, growing numbers of minority children are being diagnosed, with a 91 percent increase among black non-Hispanic children and a 110 percent increase for Hispanic children. Researchers say better screening and diagnosis may contribute to those increases among minority children.

The overall rate in Maryland is 1 in 80 children; 1 in 49 boys and 1 in 256 girls. In Maryland, the prevalence has increased 85 percent from 2002 to 2008. The increase was 41 percent between 2004 and 2008, and 35 percent between 2006 and 2008.
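As a quick sanity check on how such figures relate, here is a small sketch converting the report's rounded "1 in N" estimates into rates and percent changes. Because the published "1 in N" values are rounded, the recomputed increases only approximate the report's exact percentages, which are presumably derived from unrounded prevalence data.

```python
# Convert rounded "1 in N" ASD prevalence estimates into rates and
# percent changes. The 2002/2006/2008 figures come from the article;
# rounding in "1 in N" means these recomputed increases only approximate
# the report's published 78% and 23% figures.

estimates = {2002: 150, 2006: 110, 2008: 88}  # prevalence: 1 in N children

rates = {year: 1.0 / n for year, n in estimates.items()}
for year, rate in sorted(rates.items()):
    print(f"{year}: {rate * 1000:.1f} per 1,000 children")

def pct_increase(old, new):
    return (new - old) / old * 100

print(f"2002 -> 2008: +{pct_increase(rates[2002], rates[2008]):.0f}%")  # ~70%
print(f"2006 -> 2008: +{pct_increase(rates[2006], rates[2008]):.0f}%")  # ~25%
```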
The data were gathered through collaboration with the Maryland State Department of Education and participating schools in Anne Arundel, Baltimore, Carroll, Cecil, Harford and Howard counties, as well as clinical sources such as Kennedy Krieger Institute, Mt. Washington Pediatric Hospital, and University of Maryland Medical System. While the report focuses on the numbers, its authors acknowledge that the reasons for the increase are not completely understood and that more research is needed. They note that the increase is likely due in part to a broadened definition of ASDs, greater awareness among the public and professionals, and the way children receive services in their local communities. “It’s very difficult, if not impossible, to tease these factors apart to quantify how much each of these factors contributed to the increase,” Dr. Lee said. But whatever the cause, “This report paints a picture of the magnitude of the condition across our country and helps us understand how communities identify children with autism. One thing the data tell us with certainty – there are more children and families that need help,” said CDC Director Thomas Frieden, MD, MPH. Researchers also identified the median age of ASD diagnosis, documented in records. In Maryland, that age was 5 years and 6 months, compared with 4 years, 6 months nationally. Across all sites, children who have autistic disorder tend to be identified earlier, while those with Asperger Disorder tend to be diagnosed later. Given the importance of early intervention, ADDM researchers carefully track at what age children receive an ASD diagnosis. “Unfortunately, most children still are not diagnosed until after they reach age 4. We’ve heard from too many parents that they were concerned long before their child was diagnosed. We are working hard to change that,” said Coleen Boyle, PhD, MSHyg, director of CDC’s National Center on Birth Defects and Developmental Disabilities. To see the full report: http://www.cdc.gov/mmwr/preview/mmwrhtml/ss6103a1.htm?s_cid=ss6103a1_w To the Community Report with state statistics: http://www.cdc.gov/ncbddd/autism/documents/ADDM-2012-Community-Report.pdf Media contact for Johns Hopkins Bloomberg School of Public Health: Natalie Wood-Wright at 410-614-6029 or email@example.com
Source: http://www.jhsph.edu/news/news-releases/2012/lee-autism-prevalence.html
Throughout life there are many times when outside influences change or influence decision-making. The young child has inner motivation to learn and explore, but as he matures, finds outside sources to be a motivating force for development, as well. Along with being a beneficial influence, there are moments when peer pressure can overwhelm a child and lead him down a challenging path. And, peer pressure is a real thing – it is not only observable, but changes the way the brain behaves.

As a young adult, observational learning plays a part in development through observing and then doing. A child sees another child playing a game in a certain way and having success, so the observing child tries the same behavior. Albert Bandura was a leading researcher in this area. His famous Bobo doll studies found that the young child is greatly influenced by observing others' actions. When a child sees something that catches his attention, he retains the information, attempts to reproduce it, and then feels motivated to continue the behavior if it is met with success. Observational learning and peer pressure are two different things – one being the observing of behaviors and the child then attempting to reproduce them of his own free will. Peer pressure is the act of one child coercing another to follow suit. Often the behavior being pressured is questionable or taboo, such as smoking cigarettes or drinking alcohol.

Peer Pressure and the Brain

Recent studies find that peer pressure influences the way our brains behave, which leads to better understanding of the impact of peer pressure on the developing child. According to studies from Temple University, peer pressure affects brain signals involved in risk and reward processing, especially when the teen's friends are around. Compared to adults in the study, teenagers were much more likely to take risks they would not normally take on their own when with friends. Brain signals were more activated in the reward center of the brain, firing greatest during at-risk behaviors.

Peer pressure can be difficult for young adults to deal with, and learning ways to say “no” or avoid pressure-filled situations can become overwhelming. Resisting peer pressure is not just about saying “no,” but about how the brain functions. Children that have stronger connections among regions in their frontal lobes, along with other areas of the brain, are better equipped to resist peer pressure. During adolescence, the frontal lobes of the brain develop rapidly, causing axons in the region to gain a coating of fatty myelin, which insulates them and allows the frontal lobes to communicate more effectively with other brain regions. This helps the young adult to develop the judgment and self-control needed to resist peer pressure.

Along with the frontal lobes contributing to how the brain handles peer pressure, other studies find that the prefrontal cortex plays a role in how teens respond to peer pressure. Just as with the previous study, children that were not exposed to peer pressure had greater connectivity within the brain as well as abilities to resist peer pressure.

Working through Peer Pressure

The teenage years are exciting years. The young adult is often going through physical changes due to puberty, adjusting to new friends and educational environments, and learning how to make decisions for themselves.
Adults can offer a helping and supportive hand to young adults when dealing with peer pressure by considering the following:

Separation: Understanding that this is a time for the child to separate and learn how to be his own individual is important. It is hard to let go and allow the child to make mistakes for himself, especially when you want to offer input or change plans and actions, but allowing the child to go down his own path is important. As an adult, offering a helping hand if things go awry and being there to offer support is beneficial.

Talk it Out: As an adult, take a firm stand on rules and regulations with your child. Although you cannot control whom your child selects as friends, you can set rules and limits for your own child. Setting specific goals, rules, and limits encourages respect and trust, which must be earned in response. Do not be afraid to start talking with your child early about ways to resist peer pressure. Focus on how it will build your child's confidence when he learns to say “no” at the right time and reassure him that it can be accomplished without feeling guilty or losing self-confidence.

Stay Involved: Keep family dinner as a priority, make time each week for a family meeting or game time, and plan family outings and vacations regularly. Spending quality time with kids models positive behavior and offers lots of opportunities for discussions about what is happening at school and with friends.

If at any time there are concerns a child is becoming involved in questionable behavior due to peer pressure, ask for help. Involving others – such as a family doctor, youth advisor, or other trusted friend – in helping a child cope with peer pressure does not mean that the adult is not equipped to help the child; it simply brings more support to a child who may be on the brink of heading down the wrong path.

By Sarah Lipoff. Sarah is an art educator and parent. Visit Sarah's website here.
Source: http://www.funderstanding.com/category/child-development/brain-child-development/
Vol. 17 Issue 6

One-Legged (Single Limb) Stance Test

The One-Legged Stance Test (OLST) [1,2] is a simple, easy and effective method to screen for balance impairments in the older adult population. You may be asking yourself, "how can standing on one leg provide you with any information about balance, after all, we do not go around for extended periods of time standing on one leg?" True, as a rule we are a dynamic people, always moving, our world always in motion, but there are instances where we do need to maintain single limb support. The most obvious times are when we are performing our everyday functional activities. Stepping into a bath tub or up onto a curb would be difficult, if not impossible, to do without the ability to maintain single limb support for a given amount of time. The ability to switch from two- to one-leg standing is required to perform turns, climb stairs and dress. As we know, the gait cycle requires a certain amount of single limb support in order to be able to progress ourselves along in a normal pattern. When the dynamics of the cycle are disrupted, loss of balance leading to falls may occur. This is especially true in older individuals whose gait cycle is altered due to normal and potentially abnormal changes that occur as a result of aging.

The One-Legged Stance Test measures postural stability (i.e., balance) and is more difficult to perform due to the narrow base of support required to do the test. Along with five other tests of balance and mobility, reliability of the One-Legged Stance Test was examined for 45 healthy females 55 to 71 years old and found to have "good" intraclass correlation coefficients (ICC range = 0.95 to 0.99). Within-rater ICCs ranged from 0.73 to 0.93 [3].

To perform the test, the patient is instructed to stand on one leg without support of the upper extremities or bracing of the unweighted leg against the stance leg. The patient begins the test with the eyes open, practicing once or twice on each side with his gaze fixed straight ahead. The patient is then instructed to close his eyes and maintain balance for up to 30 seconds [1]. The number of seconds that the patient/client is able to maintain this position is recorded. Termination or a fail test is recorded if 1) the foot touches the support leg; 2) hopping occurs; 3) the foot touches the floor; or 4) the arms touch something for support.

Normal ranges with eyes open are: 60-69 yrs, 22.5 ± 8.6 s; 70-79 yrs, 14.2 ± 9.3 s. Normal ranges with eyes closed are: 60-69 yrs, 10.2 ± 8.6 s; 70-79 yrs, 4.3 ± 3.0 s [4].

Briggs and colleagues reported balance times on the One-Legged Stance Test in females age 60 to 86 years for dominant and nondominant legs. Given the results of these data, there appears to be some difference in whether individuals use their dominant versus their nondominant leg in the youngest and oldest age groups. When using this test, having patients choose what leg they would like to stand on would be appropriate, as you want to record their "best" performance.
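As an illustration only, not a clinical tool, the scoring logic above reduces to a timed trial that ends at the first fail event or at 30 seconds, compared against the age- and eyes-condition norms just quoted. Here is a minimal sketch; the ">1 SD below the mean" flag is an arbitrary illustrative cutoff, not a published criterion.

```python
# Minimal sketch of One-Legged Stance Test (OLST) scoring.
# Normative means/SDs (seconds) are the values quoted in the article.

NORMS = {  # (age_band, eyes): (mean_s, sd_s)
    ("60-69", "open"): (22.5, 8.6),
    ("70-79", "open"): (14.2, 9.3),
    ("60-69", "closed"): (10.2, 8.6),
    ("70-79", "closed"): (4.3, 3.0),
}

MAX_TIME_S = 30.0  # trial is capped at 30 seconds

FAIL_EVENTS = {"foot touches support leg", "hopping",
               "foot touches floor", "arms used for support"}

def olst_score(seconds_until_event, event, age_band, eyes):
    """Score one trial: time to the first fail event, capped at 30 s,
    plus a simple flag for scores more than 1 SD below the age norm."""
    assert event in FAIL_EVENTS or event == "time limit reached"
    score = min(seconds_until_event, MAX_TIME_S)
    mean_s, sd_s = NORMS[(age_band, eyes)]
    flagged = score < mean_s - sd_s  # illustrative cutoff, not a published one
    return score, flagged

print(olst_score(6.0, "hopping", "70-79", "closed"))  # (6.0, False)
print(olst_score(2.0, "hopping", "70-79", "open"))    # (2.0, True)
```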
It has been reported in the literature that individuals increase their chances of sustaining an injury due to a fall by two times if they are unable to perform a One-Legged Stance Test for five seconds [5]. Other studies utilizing the One-Legged Stance Test have been conducted in older adults to assess static balance after strength training [6], and performance of activities of daily living and platform sway tests [7]. Interestingly, subscales of other balance measures such as the Tinetti Performance Oriented Mobility Assessment [8] and Berg Balance Scale [9] utilize unsupported single limb stance times of 10 seconds and 5 seconds, respectively, for older individuals to be considered to have "normal" balance.

Thirty percent to 60 percent of community-dwelling elderly individuals fall each year, with many experiencing multiple falls [10]. Because falls are the leading cause of injury-related deaths in older adults and a significant cause of disability in this population, prevention of falls and subsequent injuries is a worthwhile endeavor [11]. The One-Legged Stance Test can be used as a quick, reliable and easy way for clinicians to screen their patients/clients for fall risks and is easily incorporated into a comprehensive functional evaluation for older adults.

References
1. Briggs, R., Gossman, M., Birch, R., Drews, J., & Shaddeau, S. (1989). Balance performance among noninstitutionalized elderly women. Physical Therapy, 69(9), 748-756.
2. Anemaet, W., & Moffa-Trotter, M. (1999). Functional tools for assessing balance and gait impairments. Topics in Geriatric Rehab, 15(1), 66-83.
3. Franchignoni, F., Tesio, L., Martino, M., & Ricupero, C. (1998). Reliability of four simple, quantitative tests of balance and mobility in healthy elderly females. Aging (Milan), 10(1), 26-31.
4. Bohannon, R., Larkin, P., Cook, A., & Singer, J. (1984). Decrease in timed balance test scores with aging. Physical Therapy, 64, 1067-1070.
5. Vellas, B., Wayne, S., Romero, L., Baumgartner, R., et al. (1997). One-leg balance is an important predictor of injurious falls in older persons. Journal of the American Geriatric Society, 45, 735-738.
6. Schlicht, J., Camaione, D., & Owen, S. (2001). Effect of intense strength training on standing balance, walking speed, and sit-to-stand performance in older adults. Journal of Gerontological Medicine and Science, 56A(5), M281-M286.
7. Frandin, K., Sonn, U., Svantesson, U., & Grimby, G. (1996). Functional balance tests in 76-year-olds in relation to performance, activities of daily living and platform tests. Scandinavian Journal of Rehabilitative Medicine, 27(4), 231-241.
8. Tinetti, M., Williams, T., & Mayewski, R. (1986). Fall risk index for elderly patients based on number of chronic disabilities. American Journal of Medicine, 80, 429-434.
9. Berg, K., et al. (1989). Measuring balance in the elderly: Preliminary development of an instrument. Physio Therapy Canada, 41(6), 304-311.
10. Rubenstein, L., & Josephson, K. (2002). The epidemiology of falls and syncope. Clinical Geriatric Medicine, 18, 141-158.
11. National Safety Council. (2004). Injury Facts. Itasca, IL: Author.

Dr. Lewis is a physical therapist in private practice and president of Premier Physical Therapy of Washington, DC. She lectures exclusively for GREAT Seminars and Books, Inc. Dr. Lewis is also the author of numerous textbooks. Her Website address is www.greatseminarsandbooks.com. Dr. Shaw is an assistant professor in the physical therapy program at the University of South Florida dedicated to the area of geriatric rehabilitation. She lectures exclusively for GREAT Seminars and Books in the area of geriatric function.

APTA Encouraged by Cap Exceptions
New process grants automatic exceptions to beneficiaries needing care the most

Calling it "a good first step toward ensuring that Medicare beneficiaries continue to have coverage for the physical therapy they need," Ben F Massey, Jr, PT, MA, president of the American Physical Therapy Association (APTA), expressed optimism that the new exceptions process will allow a significant number of Medicare patients to receive services exceeding the $1,740 annual financial cap on Medicare therapy coverage. The new procedure, authorized by Congress in the recently enacted Deficit Reduction Act (PL 109-171), will be available to Medicare beneficiaries on March 13 under rules released this week by the Centers for Medicare and Medicaid Services (CMS). "APTA is encouraged by the new therapy cap exceptions process," Massey said. "CMS has made a good effort to ensure that Medicare beneficiaries who need the most care are not harmed by an arbitrary cap." As APTA recommended, the process includes automatic exceptions and also grants exceptions to beneficiaries who are receiving both physical therapy and speech language pathology (the services are currently combined under one $1,740 cap). "We have yet to see how well Medicare contractors will be able to implement and apply this process. Even if it works well, Congress only authorized this new process through 2006. Congress must address this issue again this year, and we are confident that this experience will demonstrate to legislators that they must completely repeal the caps and provide a more permanent solution for Medicare beneficiaries needing physical therapy," Massey continued. The therapy caps went into effect on Jan. 1, 2006, limiting Medicare coverage on outpatient rehabilitation services to $1,740 for physical therapy and speech therapy combined and $1,740 for occupational therapy. The American Physical Therapy Association is a national professional organization representing more than 65,000 members. Its goal is to foster advancements in physical therapy practice, research and education.

New Mouthwash Helps With Pain

Doctors in Italy are studying whether a new type of mouthwash will help alleviate pain for patients suffering from head and neck cancer who were treated with radiation therapy, according to a new study (International Journal of Radiation Oncology*Biology*Physics, Feb. 1, 2006). Fifty patients, suffering from various forms of head and neck cancer and who received radiation therapy, were observed during the course of their radiation treatment. Mucositis, or inflammation of the mucous membrane in the mouth, is the most common side effect, yet no additional therapy has been identified that successfully reduces the pain. This study sought to discover if a mouthwash made from the local anesthetic tetracaine was able to alleviate the discomfort associated with head and neck cancer and if there would be any negative side effects of the mouthwash. The doctors chose to concoct a tetracaine-based mouthwash instead of a lidocaine-based version because it was found to be four times more effective, worked faster and produced a prolonged relief. The tetracaine was administered by a mouthwash approximately 30 minutes before and after meals, or roughly six times a day. Relief of oral pain was reported in 48 of the 50 patients. Sixteen patients reported that the mouthwash had an unpleasant taste or altered the taste of their food.
Problems of Philosophy, Chapter 5 - Knowledge by Acquaintance and Knowledge by Description

After distinguishing two types of knowledge, knowledge of things and knowledge of truths, Russell devotes this fifth chapter to an elucidation of knowledge of things. He further distinguishes two types of knowledge of things: knowledge by acquaintance and knowledge by description. We have knowledge by acquaintance when we are directly aware of a thing, without any inference. We are immediately conscious of and acquainted with a color or the hardness of a table before us, our sense-data. Since acquaintance with things is logically independent of any knowledge of truths, we can be acquainted with something immediately without knowing any truth about it. I can know the color of a table "perfectly and completely when I see it" and not know any truth about the color in itself.

The other type of knowledge of things is called knowledge by description. When we say we have knowledge of the table itself, a physical object, we refer to a kind of knowledge other than immediate, direct knowledge. "The physical object which causes such-and-such sense-data" is a phrase that describes the table by way of sense-data; we have only a description of the table. Knowledge by description is predicated on something with which we are acquainted, sense-data, and some knowledge of truths, like knowing that "such-and-such sense-data are caused by the physical object." Thus, knowledge by description allows us to infer knowledge about the actual world via the things that can be known to us, things with which we have direct acquaintance (our subjective sense-data). According to this outline, knowledge by acquaintance forms the bedrock for all of our other knowledge.

Sense-data are not the only things with which we can be immediately acquainted. For how, Russell asks, would we recall the past if we could only know what was immediately present to our senses? Beyond sense-data, we also have "acquaintance by memory." Remembering what we were immediately aware of leaves us still immediately aware of that past, perceived thing, and we may therefore access many past things with the same requisite immediacy. Beyond sense-data and memories, we possess "acquaintance by introspection." When we are aware of an awareness, as in the case of hunger, "my desiring food" becomes an object of acquaintance. Introspective acquaintance is a kind of acquaintance with our own minds that may be understood as self-consciousness. However, this self-consciousness is really more like a consciousness of a feeling or a particular thought; the awareness rarely includes the explicit use of "I," which would identify the Self as a subject. Russell sets aside this strand of knowledge, knowledge of the Self, as a probable but unclear dimension of acquaintance.

Russell summarizes our acquaintance with things as follows: "We have acquaintance in sensation with the data of the outer senses, and in introspection with the data of what may be called the inner sense—thoughts, feelings, desires, etc.; we have acquaintance in memory with things which have been data either of the outer senses or of the inner sense. Further, it is probable, though not certain, that we have acquaintance with Self, as that which is aware of things or has desires towards things." All these objects of acquaintance are particulars, concrete, existing things. Russell notes that we can also have acquaintance with abstract, general ideas called universals.
He addresses universals more fully later, in Chapter 9.

Russell devotes the rest of the chapter to explaining how the more complicated theory of knowledge by description actually works. The most conspicuous things known to us by description are physical objects and other people's minds. We have knowledge by description when we know "that there is an object answering to a definite description, though we are not acquainted with any such object." Russell offers several illustrations in the service of understanding knowledge by description. He claims that it is important to understand this kind of knowledge because our use of language depends so heavily on it. When we use common words or proper names, we are really relying on the meanings implicit in descriptive knowledge; the thought connoted by the use of a proper name can only be explicitly expressed through a description or proposition.

Bismarck, or "the first Chancellor of the German Empire," is Russell's most cogent example. Imagine that a proposition, or statement, is made about Bismarck. If Bismarck is the speaker, granting that he has a kind of direct acquaintance with his own self, he might have voiced his name in order to make a self-referential judgment, of which his name is a constituent. In this simplest case, the "proper name has the direct use which it always wishes to have, as simply standing for a certain object, and not for a description of the object."

If the speaker was one of Bismarck's friends, someone who knew him directly, then we say that the speaker had knowledge by description. The speaker is acquainted with sense-data which he infers correspond to Bismarck's body. The body or physical object representing the mind is "only known as the body and the mind connected with these sense-data," which is the vital description. Since the sense-data corresponding to Bismarck change from moment to moment and with perspective, the speaker knows which of various descriptions are valid.

Still more removed from direct acquaintance, imagine that someone like you or I comes along and makes a statement about Bismarck that is a description based on a "more or less vague mass of historical knowledge," such as that Bismarck was "the first Chancellor of the German Empire." In order to make a valid description applicable to the physical object, Bismarck's body, we must find a relation between some particular with which we have acquaintance and the physical object, the particular with which we wish to have an indirect acquaintance. We must make such a reference in order to secure a meaningful description.

To usefully distinguish particulars from universals, Russell posits the example of "the most long-lived of men," a description which consists wholly of universals. We assume that the description must apply to some man, but we have no way of inferring any judgment about him. Russell remarks, "all knowledge of truths, as we shall show, demands acquaintance with things which are of an essentially different character from sense-data, the things which are sometimes called 'abstract ideas', but which we shall call 'universals'." The description composed only of universals gives no knowledge by acquaintance with which we might anchor an inference about the longest-lived man.
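Although the chapter keeps the discussion informal, Russell's broader theory of descriptions (worked out in his 1905 essay "On Denoting," not in this chapter) supplies the logical form behind such phrases, and a sketch of it may help. With F and G standing in for illustrative predicates, a sentence of the form "the F is G" is analyzed as the claim that there is exactly one F, and that it is G:

    ∃x [ F(x) ∧ ∀y (F(y) → y = x) ∧ G(x) ]

Reading F as "was first Chancellor of the German Empire," the description picks out Bismarck by quantifiers and predicates alone, with no irreducible proper name, which is why a speaker with no acquaintance with Bismarck can still understand a proposition about him.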
A further statement about Bismarck, like "The first Chancellor of the German Empire was an astute diplomatist," is a statement that contains particulars and asserts a judgment that we can only make in virtue of some acquaintance (like something heard or read). Statements about things known by description function in our language as statements about the "actual thing described"; that is, we intend to refer to that thing. We intend to say something with the direct authority that only Bismarck himself could have when making a statement about himself, something with which he has direct acquaintance. Yet there is a spectrum of removal from acquaintance with the relevant particulars: from Bismarck himself, "there is Bismarck to people who knew him; Bismarck to those who only know of him through history," and at the far end of the spectrum, "the longest lived of men." At the latter end, we can only make propositions that are logically deducible from universals; at the former end, we come as close as possible to direct acquaintance and can make many propositions identifying the actual object.

It is now clear how knowledge gained by description is reducible to knowledge by acquaintance. Russell calls this observation his fundamental principle in the study of "propositions containing descriptions": "Every proposition which we can understand must be composed wholly of constituents with which we are acquainted."

Indirect knowledge of some particulars seems necessary if we are to attach meanings to the words we commonly use. When we say something referring to Julius Caesar, we clearly have no direct acquaintance with the man. Rather, we are thinking of such descriptions as "the man who was assassinated on the Ides of March" or "the founder of the Roman Empire." Since we have no way of being directly acquainted with Julius Caesar, knowledge by description allows us to gain knowledge of "things which we have never experienced." It allows us to overstep the boundaries of our private, immediate experiences and engage a public knowledge and a public language.

This theory of knowledge by acquaintance and knowledge by description was a famous epistemological problem-solver for Russell. Its innovative character allowed him to shift to his moderate realism, a realism ruled by a more definite categorization of objects. It is a theory of knowledge that considers our practice of language to be meaningful and worthy of detailed analysis. Russell contemplates how we construct a sense of meaning about objects remote from our experience. The realm of acquaintance offers the most secure references for our understanding of the world. Knowledge by description allows us to draw inferences from our realm of acquaintance but leaves us in a more vulnerable position: since knowledge by description also depends on truths, we are prone to error about our descriptive knowledge if we are somehow mistaken about a proposition that we have taken to be true.

Critics of this theory have held that Russell's hypothesis of knowledge by description is confusing. His comments when defining sense-data, that the physical world is unknowable to us, contradict his theory of knowledge by description: they imply that "knowledge by description" is not really a form of knowledge, since we can only know those things with which we are acquainted, and we cannot be acquainted with physical objects.
Russell's theory amounts to the proposition that our acquaintance with mental objects is related, in a distant way, to physical objects, rendering us obliquely acquainted with the physical world. Sense-data are our subjective representations of the external world, and they negotiate this indirect contact. While innovative, Russell's theory of knowledge by description is not an attractive theory of knowledge: our impressions of the real world, on his view, amount to muddy representations of reality. Though we have direct access to these representations, it seems impossible to have any kind of direct experience of reality; our grasp of reality, rather, rests on unconscious, inferential pieces of reasoning.
When he shot President Lincoln, John Wilkes Booth was 26 years old and one of the nation's most famous actors. (Charles DeForest Fredericks/National Portrait Gallery)

John Wilkes Booth, a Maryland native, spent the war performing in theatrical productions. But the conflict was never far from his mind. In a letter to his mother, he expressed chagrin that he hadn't joined the Confederate army, writing, "I have … begun to deem myself a coward, and to despise my own existence." He was outraged by the reelection of Lincoln, whom he viewed as the instigator of all the country's woes.

The month after the inauguration, Booth learned that Lincoln would be attending a performance at Ford's Theatre on April 14. That night, he crept into Lincoln's theater box and shot him in the back of the head. It was the first time a president had been murdered. "Wanted" posters were issued for Booth, and on April 26, he was cornered in a tobacco barn and shot by a federal sergeant, acting against orders to bring him in alive.

Several months later, Charles Creighton Hazewell, a frequent Atlantic contributor, sought to make sense of the assassination—speculating that the plot may have been hatched in Canada (where a number of secessionist schemes had originated) and hinting at evidence that the plan had been endorsed at the highest levels of the Confederate government. —Sage Stossel
The fall of a public man by the hand of an assassin always affects the mind more strongly than it is affected by the fall of thousands of men in battle; but in strictness, Booth, vile as his deed was, can be held to have been no worse, morally, than was that old gentleman who insisted upon being allowed the privilege of firing the first shot at Fort Sumter. Ruffin’s act is not so disgusting as Booth’s; but of the two men, Booth exhibited the greater courage,—courage of the basest kind, indeed, but sure to be attended with the heaviest risks, as the hand of every man would be directed against its exhibitor. Had the Rebels succeeded, Ruffin would have been honored by his fellows; but even a successful Southern Confederacy would have been too hot a country for the abode of a wilful murderer. Such a man would have been no more pleasantly situated even in South Carolina than was Benedict Arnold in England. And as he chose to become an assassin after the event of the war had been decided, and when his victim was bent upon sparing Southern feeling so far as it could be spared without injustice being done to the country, Booth must have expected to find his act condemned by every rational Southern man as a worse than useless crime, as a blunder of the very first magnitude. Had he succeeded in getting abroad, Secession exiles would have shunned him, and have treated him as one who had brought an ineffaceable stain on their cause, and also had rendered their restoration to their homes impossible. The pistol-shot of Sergeant Corbett saved him from the gallows, and it saved him also from the denunciations of the men whom he thought to serve. He exhibited, therefore, a species of courage that is by no means common; for he not only risked his life, and rendered it impossible for honorable men to sympathize with him, but he ran the hazard of being denounced and cast off by his own party … All Secessionists who retain any self-respect must rejoice that one whose doings brought additional ignominy on a cause that could not well bear it has passed away and gone to his account. It would have been more satisfactory to loyal men, if he had been reserved for the gallows; but even they must admit that it is a terrible trial to any people who get possession of an odious criminal, because they may be led so to act as to disgrace themselves, and to turn sympathy in the direction of the evil-doer … Therefore the shot of Sergeant Corbett is not to be regretted, save that it gave too honorable a form of death to one who had earned all that there is of disgraceful in that mode of dying to which a peculiar stigma is attached by the common consent of mankind. Whether Booth was the agent of a band of conspirators, or was one of a few vile men who sought an odious immortality, it is impossible to say. We have the authority of a high Government official for the statement that “the President’s murder was organized in Canada and approved at Richmond”; but the evidence in support of this extraordinary announcement is, doubtless for the best of reasons, withheld at the time we write. There is nothing improbable in the supposition that the assassination plot was formed in Canada, as some of the vilest miscreants of the Secession side have been allowed to live in that country … But it is not probable that British subjects had anything to do with any conspiracy of this kind. 
The Canadian error was in allowing the scum of Secession to abuse the “right of hospitality” through the pursuit of hostile action against us from the territory of a neutral … That a plan to murder President Lincoln should have been approved at Richmond is nothing strange; and though such approval would have been supremely foolish, what but supreme folly is the chief characteristic of the whole Southern movement? If the seal of Richmond’s approval was placed on a plan formed in Canada, something more than the murder of Mr. Lincoln was intended. It must have been meant to kill every man who could legally take his place, either as President or as President pro tempore. The only persons who had any title to step into the Presidency on Mr. Lincoln’s death were Mr. Johnson, who became President on the 15th of April, and Mr. Foster, one of the Connecticut Senators, who is President of the Senate … It does not appear that any attempt was made on the life of Mr. Foster, though Mr. Johnson was on the list of those doomed by the assassins; and the savage attack made on Mr. Seward shows what those assassins were capable of. But had all the members of the Administration been struck down at the same time, it is not at all probable that “anarchy” would have been the effect, though to produce that must have been the object aimed at by the conspirators. Anarchy is not so easily brought about as persons of an anarchical turn of mind suppose. The training we have gone through since the close of 1860 has fitted us to bear many rude assaults on order without our becoming disorderly. Our conviction is, that, if every man who held high office at Washington had been killed on the 14th of April, things would have gone pretty much as we have seen them go, and that thus the American people would have vindicated their right to be considered a self-governing race. It would not be a very flattering thought, that the peace of the country is at the command of any dozen of hardened ruffians who should have the capacity to form an assassination plot, the discretion to keep silent respecting their purpose, and the boldness and the skill requisite to carry it out to its most minute details: for the neglect of one of those details might be fatal to the whole project. Society does not exist in such peril as that.
On January 9th, citizens living in southern Sudan will vote on a referendum to secede from the northern part of the country. A clock in the town of Juba, the political center of southern Sudan, counts down to this referendum, symbolic of the locals' excitement to part from the hegemonic north. Nearby, the ongoing Darfur genocide that continues to plague the area is not an isolated event. It is all related, part of two brutal civil wars that have been tearing the nation apart for decades; as of late, literally.

Sudan has traditionally been seen by many as the bridge between the Arab and the African worlds—one not particularly easy to cross. The north and the south of Sudan are just about as culturally and religiously different from each other as you could possibly imagine. In the north, Arab culture dominates, and the majority religion is Islam. In the south, the predominant culture is more traditionally sub-Saharan African, and the primary religions are animist belief systems and Christianity. Ever since the country gained independence from Britain in 1956, the cultural and religious systems of the north have been heavily imposed on the whole of Sudan, resulting in southern resistance and the ongoing strife.

In particular, this imposition of a differing set of beliefs can in large part be attributed to the current Sudanese president, Omar al-Bashir. Al-Bashir rose to power in 1989 through a bloodless coup and, this past April, won the first ostensibly democratic election the nation has held in 24 years. I hesitate to call the election democratic because many believe that al-Bashir, who is notorious for his corruption, rigged it in his favor. While there is no proof, it is reasonable to suspect that a leader who came to power through a coup holds significant sway over any subsequent election. Whether he is rightfully in power or not, al-Bashir has imposed northern ideals throughout the whole nation, a primary cause of the Sudanese civil wars. Many attribute the Darfur genocide, just a single episode in the extensive bloodshed since Sudan's independence, to al-Bashir. Because of these accusations, he has been indicted for war crimes, the only sitting head of state in such a predicament. To drive home his impositional tendencies further, al-Bashir has said that if the south secedes, he will impose Shari'a in the north, in an effort to make northern Sudan officially an Islamic state.

My first response to this situation was to wonder: how did two peoples so immensely different from one another end up together in the first place? This is not the same as the American Civil War, where regional differences led to ideological differences, which in turn led to secession. In the Sudanese case, ideological and cultural differences existed long before the country gained independence. Thus, one should look to colonialism as the primary cause of Sudan's problems. It seems to me that Sudan's independence process was dangerously arbitrary, occurring at the time of mass European decolonization in Africa. It is as if Britain backed out of the region and drew a national border at random. And now, after more than half a century, the people want that to change.

Despite the referendum scheduled for next month, the potential new border still has not been set. Money, of course, is a factor. Sudan is one of the most oil-rich nations of Africa, but most of the country's oil is found in the south.
On the one hand, the north might not want to draw a new boundary that gives the south all of the resource wealth, a potential cause for even more strife. On the other hand, some see oil as the one interest that could keep the two sides friendly if they do end up splitting: mutual desire for the oil wealth may bring the two sides together diplomatically if the split happens peacefully. As you can see, this situation is extremely complex, far more so than the south simply saying "we want to secede" and secession then happening.

To better understand the context, one needs to consider the past, but one should also consider the future: what will happen if the current nation of Sudan does in fact split? I wonder particularly about those who have their roots in the south but live in the north. Since the referendum was announced, many of these people have moved back to the south, but a fair number still remain in the north. What will happen to these primarily non-Muslim people (and Muslims alike) if the north does in fact impose Shari'a at al-Bashir's whim? Al-Bashir would go from an imposer of northern Arab and Islamic values to being completely intolerant of this significant minority in his newly allotted half of Sudan, and the results would be tragic.

What message would a Sudanese split send to the rest of Africa, and to the rest of the world? The African Union fears that a Sudanese split would incite other secessionists around the continent. Other nations undergoing similar domestic and regional conflicts of interest may feel not only that they have a right to secede, but may even feel encouraged to do so. Is this kind of outright division the right answer to such a complicated historical struggle? Is there even a right answer? Experts seem to agree that the nation will inevitably split. Whether this bifurcation happens via a timely, democratic, and peaceful referendum or through continuing bloodshed is a matter that only time will tell.

I will certainly be following this issue in the coming weeks, and I wrote this article before the scheduled referendum in the hope of sparking more interest in it. I urge you to follow it in the news; the results affect a much wider area than Sudan alone. Stay tuned for my next column, where I will compare and contrast two leaders in South America on opposite sides of the political spectrum and compare their respective political systems to that of the United States.
Ki Tisa (Mitzvot)

For more teachings on this portion, see the archives to this blog, below at March 2006.

This week's parasha is best known for the dramatic and richly meaningful story of the Golden Calf and the Divine anger, of Moses' pleading on behalf of Israel, and the eventual reconciliation in the mysterious meeting of Moses with God in the Cleft of the Rock—subjects about which I've written at length, from various aspects, in previous years. Yet the first third of the reading (Exod 30:11-31:17) is concerned with various practical mitzvot, mostly focused on the ritual worship conducted in the Temple, which tend to be skimmed over in light of the intense interest in the Calf story. As this year we are concerned specifically with the mitzvot in each parasha, I shall focus on this section. These include: the giving by each Israelite [male] of a half-shekel to the Temple; the making of the laver, from which the priests wash their hands and feet before engaging in Divine service; the compounding of the incense and of the anointing oil; and the Shabbat. I shall focus here upon the washing of the hands.

Hand-washing is a familiar Jewish ritual: it is, in fact, the first act performed by pious Jews upon awakening in the morning (some people even keep a cup of water next to their beds, so that they may wash their hands before taking even a single step); one also performs a ritual washing of the hands before eating bread, before each of the daily prayers, etc. The section here dealing with the laver in the Temple (Exod 30:17-21) is also one of the four portions from the Torah recited by many each morning, as part of the section of the liturgy known as korbanot, chapters of Written and Oral Torah reminiscent of the ancient sacrificial system, which precede Pesukei de-Zimra.

Sefer ha-Hinukh, at §106, explains the washing of hands as an offshoot of the honor due to the Temple and its service—one of many laws intended to honor, magnify, and glorify the Temple. Even if the priest was already pure and clean, he had to wash (literally, "sanctify") his hands before engaging in avodah. This simple gesture of purification served as a kind of separation between the Divine service and everyday life. It added a feeling of solemnity, of seriousness, a sense that one was engaged in something higher, in some way separate from the mundane activities of regular life. (One hand-washing by kohanim, in the morning, was sufficient, unless they left the Temple grounds or otherwise lost the continuity of their sacred activity.) Our own netilat yadaim, whether before prayer or breaking bread, may be seen as a kind of halakhic carryover from the Temple service, albeit on the level of Rabbinic injunction.

What is the symbolism of purifying one's hands? Water, as a flowing element, a solvent that washes away many of the things with which it comes in contact, is at once a natural symbol of purity and of the renewal of life. Mayim Hayyim—living waters—is an age-old association. Torah is compared to water; water, constantly flowing, is constantly returning to its source. At the End of Days, "the land will be filled with knowledge of the Lord, like waters going down to the sea." A small part of this is hinted at in this simple, everyday gesture.

"See that this nation is Your people"

But I cannot pass over Ki Tisa without some comment on the incident of the Golden Calf and its ramifications.
This week, reading through the words of the parasha in preparation for a shiur (what Ruth Calderon, founder of Alma, a secularist-oriented center for the study of Judaism in Tel Aviv, called "barefoot reading"—that is, naïve, without preconceptions), I discovered something utterly simple that I had never noticed before in quite the same way.

At the beginning of the Calf incident, God tells Moses, who has been up on the mountain with Him, "Go down, for your people have spoiled" (32:7). A few verses later, when God asks leave of Moses (!) to destroy them, Moses begs for mercy on behalf of the people with the words "Why should Your anger burn so fiercely against Your people…" (v. 11). That is, God calls them Moses' people, while Moses refers to them as God's people. Subsequent to this exchange, each of them refers to the people repeatedly in the third person, as "the people" or "this people" (העם; העם הזה). Neither of them refers to them as "my people," as God did in the initial revelation to Moses at the burning bush (Exodus 3:7 and passim), nor with the dignified title "the children of Israel"—as if both felt a certain alienation, a distance, from this tumultuous, capricious bunch. Only towards the end, after God agrees not to destroy them but still states "I will not go up with them," promising instead to send an angel, does Moses say "See, that this nation is Your people" (וראה כי עמך הגוי הזה; 33:13).

What does all this signify? Reading the peshat carefully, one reaches an inevitable conclusion: God wished to nullify His covenant with the people Israel. It is in this that there lies the true gravity, and uniqueness, of the Golden Calf incident. We are not speaking here, as we read elsewhere in the Bible—for example, in the two great Imprecations (tokhahot) in Lev 26 and Deut 28, or in the words of the prophets during the First Temple—merely of threats of punishment, however harsh, such as drought, famine, pestilence, enemy attacks, or even exile and slavery. There, the implicit message is that, after a period of punishment, a kind of moral purgation through suffering, things will be restored as they were. Here, the very covenant itself, the very existence of an intimate connection with God, hangs in the balance. God tells Moses, "I shall make of you a people," i.e., instead of them.

This, it seems to me, is the point of the second phase of this story. Moses breaks the tablets; he and his fellow Levites go through the camp killing all those most directly implicated in worshipping the Calf; God recants and agrees not to destroy the people. However, "My angel will go before them," but "I will not go up in your midst" (33:2, 3). This should have been of some comfort; yet this tiding is called "this bad thing," and the people mourn and remove the ornaments they had been wearing until then. Evidently, they understood the absence of God's presence or "face" as a grave step; His being with them was everything. That is the true importance of the Sanctuary in the desert and the Tent of Meeting, where Moses speaks with God in the pillar of cloud (33:10). God was present with them there in a tangible way, in a certain way continuing the epiphany at Sinai. All that was threatened by this new declaration.

Moses' second round of appeals to God, in Exod 33:12-23, focuses on bringing God, as it were, to a full reconciliation with the people.
This is the significance of the Thirteen Qualities of Mercy, of what I have called the Covenant in the Cleft of the Rock, the "faith of Yom Kippur" as opposed to that of Shavuot (see HY I: Ki Tisa; and note Prof. Jacob Milgrom's observation that this chapter stands in the exact center, in a literary sense, of the unit known as the Hextateuch—Torah plus the Book of Joshua). But I would add two important points.

One: this is the first place in the Torah where we read about sin followed by reconciliation. After Adam and Eve ate of the fruit of the Garden, they were punished without hope of reprieve; indeed, their "punishment" reads very much like a description of some basic aspects of the human condition itself. Cain, after murdering Abel, was banished, made to wander the face of the earth. The sin of the brothers in selling Joseph, and their own sense of guilt, is a central factor in their family dynamic from then on, but there is nary a word of God's response or intervention. It would appear that God's initial expectation in the covenant at Sinai was one of total loyalty and fidelity. The act of idolatry was an unforgivable breach of the covenant—much as adultery is generally perceived as a fundamental violation of the marital bond. Moses, in persuading God to recant of His jealousy and anger and to give the faithless people another chance, is thus introducing a new concept: a covenant that includes the possibility of even the most serious transgressions being forgiven, and the knowledge that human beings are fallible, that teshuvah and forgiveness are essential components of any economy of men living before a demanding God.

The second, truly astonishing point is the role played by Moses in all this. Moshe Rabbenu, "the man of God," is not only the great teacher of Israel, the channel through which they learn the Divine Torah, but also, as it were, one who teaches God Himself. It is God who "reveals His Qualities of Mercy" at the Cleft of the Rock; but without Moses cajoling, arguing, persuading (and note the numerous midrashim around this theme), "were it not for my servant Moses who stood in the breach," all this would not have happened. It was Moses who elicited this response and who, so to speak, pushed God Himself to this new stage in His relation with Israel—to give up His expectations of perfection from His covenanted people, and to understand that living within a covenant means not rigid adherence to a set of laws, but a living relationship with real people, taking the bad with the good. (Again, the parallel to human relationships is obvious.)
America's oil and natural gas industry is committed to protecting the environment and to continuously improving its hurricane preparation and response plans. After any hurricane or tropical storm, the goal is to return to full operations as quickly and as safely as possible. For the 2012 hurricane season, the industry continues to build upon critical lessons learned from 2008's major hurricanes, Gustav and Ike, as well as other powerful storms, such as 2005's Katrina and Rita and 2004's Ivan.

API plays two primary roles for the industry in preparing for hurricanes. First, it helps the industry gain a better understanding of the environmental conditions in and around the Gulf of Mexico during hurricane or tropical storm activity and then assists industry in using that knowledge to make offshore and onshore facilities less vulnerable. Second, API collaborates with member companies, other industries and with federal, state and local governments to prepare for hurricanes and return operations as quickly and as safely as possible. API member companies also independently work to improve preparedness for hurricanes and other natural or manmade disasters. They have, for example, reviewed and updated emergency response plans, established redundant communication paths and made pre-arrangements with suppliers to help ensure they have adequate resources during an emergency. The API Subcommittee on Offshore Structures, the International Association of Drilling Contractors, and the Offshore Operators Committee serve as a liaison to regulatory agencies, coordinate industry review of critical design standards and provide a forum for sharing lessons learned from previous hurricanes. These combined efforts are critical, since the Gulf of Mexico accounted for about 23 percent of the oil and 8 percent of total natural gas produced in the United States (approximately 82 percent of the oil supply comes from deepwater facilities), and the Gulf Coast region is home to almost half of the U.S. refining capacity.

Upstream (Exploration and Production)

During the major 2005 hurricanes, waves were higher and winds were stronger than anticipated in deeper parts of the Gulf, so the industry moved away from viewing it as a uniform body of water. Evaluating the effects of those and other storms helped scientists discover that the Central Gulf of Mexico was more prone to hurricanes because it acts as a gathering spot for warm currents that can strengthen a storm. The revised wind, wave and water current measurements ("metocean" data) prompted API to reassess its recommended practices (RPs) for industry operations in the region.

The upstream segment continues to integrate the updated environmental (metocean) data on how powerful storms affect conditions in the Gulf of Mexico into its offshore structure design standards. This effort led to the publication in 2008 of an update to RP 2SK, Design and Analysis of Stationkeeping Systems for Floating Structures, which provides guidance for design and operation of Mobile Offshore Drilling Unit (MODU) mooring systems in the Gulf of Mexico during the hurricane season. API RP 95J, Gulf of Mexico Jack-up Operations for Hurricane Season, which recommends locating jack-up rigs on more stable areas of the sea floor and positioning platform decks higher above the sea surface, was also updated.
API publications are available at our website (Search and Order). API in the past six years also has issued a number of bulletins to help better prepare for and bring production back online after Gulf hurricanes. These include:

- Bulletin 2TD, Guidelines for Tie-downs on Offshore Production Facilities for Hurricane Season, which is aimed at better securing separate platform equipment.
- Bulletin 2INT-MET, Interim Guidance on Hurricane Conditions in the Gulf of Mexico, which provides updated metocean data for four regions of the Gulf, including wind velocities, deepwater wave conditions, ocean current information, and surge and tidal data.
- Bulletin 2INT-DG, Interim Guidance for Design of Offshore Structures for Hurricane Conditions, which explains how to apply the updated metocean data during design.
- Bulletin 2INT-EX, Interim Guidance for Assessment of Existing Offshore Structures for Hurricane Conditions, which assists owners/operators and engineers with existing facilities.
- Bulletin 2HINS, Guidance on Post-hurricane Structural Inspection of Offshore Structures, which provides guidance on determining if a structure sustained hurricane-induced damage that affects the safety of personnel, the primary structural integrity, or its ability to perform the purpose for which it was intended.

Production and Hurricanes (steps industry takes to prepare for and return after a storm):

- Days in advance of a tropical storm or hurricane moving toward or near their drilling and production operations, companies will evacuate all non-essential personnel and begin the process of shutting down production.
- As the storm gets closer, all personnel will be evacuated from the drilling rigs and platforms, and production is shut down. Drillships may relocate to a safe location. Operations in areas not forecast to take a direct hit from the storm often will be shut down as well because storms can change direction with little notice.
- After a storm has passed and it is safe to fly, operators will initiate "flyovers" of onshore and offshore facilities to evaluate damage from the air. For onshore facilities, these flyovers can identify flooding, facility damage, road or other infrastructure problems, and spills. Offshore flyovers look for damaged drilling rigs, platform damage, spills, and possible pipeline damage.
- Many offshore drilling rigs are equipped with GPS locator systems, which allow federal officials and drilling contractors to remotely monitor the rigs' locations before, during and after a hurricane. If a rig is pulled offsite by the storm, locator systems allow crews to find and recover it as quickly and as safely as possible.
- Once safety concerns are addressed, operators will send assessment crews to offshore facilities to physically assess them for damage.
- If facilities are undamaged, and ancillary facilities, like the pipelines that carry the oil and natural gas, are undamaged and ready to accept shipments, operators will begin restarting production. Drilling rigs will commence operations.

Refineries and Pipelines

Despite sustaining unprecedented damage and supply outages during the 2005 and 2008 hurricanes, the industry quickly and safely brought refining and pipeline operations back online, delivering to consumers near-record levels of gasoline and record levels of distillate (diesel and heating oil) in 2008.
The oil and oil-product pipelines operating on or near the Gulf of Mexico continue to review their assets and operations to minimize the potential impacts of storms and shorten the time it takes to recover. While there have been some shortages caused by hurricanes, supply disruptions have been temporary despite extensive damage to supporting infrastructure, such as electric power generation and distribution, production shut-ins and refinery shutdowns. Pipelines need a steady supply of crude oil or refined products to keep product flowing to its intended destinations.

To prepare for future severe storms, refiners and pipeline companies have:

- Worked with utilities to clarify priorities for electric power restoration critical to restarting operations and to help minimize significant disruptions to fuel distribution and delivery.
- Secured backup power generation equipment and worked with federal, state and local governments to ensure that pipelines and refineries are considered "critical" infrastructure for back-up power purposes.
- Established redundant communications systems to support continuity of operations and locate employees.
- Worked with vendors to pre-position food, water and transportation, and updated emergency plans to secure other emergency supplies and services.
- Provided additional training for employees who have participated in various exercises and drills.
- Reexamined and improved emergency response and business continuity plans.
- Strengthened onshore buildings and elevated equipment where appropriate to minimize potential flood damage.
- Worked with the states and local emergency management officials to provide documentation and credentials for employees who need access to disaster sites where access is restricted during an emergency.
- Participated in industry conferences to share best practices and improvement opportunities.

Refineries and hurricanes (steps industry takes to prepare for and return after a storm):

- Refiners, in the hours before a large storm makes landfall, will usually evacuate all non-essential personnel and begin shutting down or reducing operations.
- Operations in areas not forecast to take a direct hit from the storm often are shut down or curtailed as a precaution because storms can change direction with little notice.
- Once safe, teams come in to assess damage. If damage or flooding has occurred, it must be repaired and dealt with before the refinery can be brought back on-line.
- Other factors that can cause delays in restarting refineries include the availability of crude oil, electricity to run the plant and water used for cooling the process units.
- Refineries are complex. It takes more than a flip of a switch to get a refinery back up and running. Once a decision has been made that it is safe to restart, it can take several days before the facility is back to full operating levels. This is because the process units and associated equipment must be returned to operation in a staged manner to ensure a safe and successful startup.
- If facilities are undamaged or necessary repairs have been made, and ancillary facilities - like the pipelines that carry the oil and natural gas - are undamaged and ready to accept shipments, operators will begin restarting production.

Pipelines and hurricanes (steps industry takes to prepare for and return after a storm):

- Pipeline operations can be impacted by storms, primarily through power outages, but also by direct damage.
- Damaged offshore pipelines require the hiring of divers, repairs and safety inspections before supplies can flow. Damaged onshore pipelines must be assessed, repaired and inspected before resuming operations.
- Without power, crude oil and petroleum products cannot be moved through pipelines. Operators routinely hold or lease back-up generators but need time to get them onsite.
- If no product is put into pipelines because Gulf Coast/Gulf of Mexico crude or natural gas production has been curtailed, or because of refinery shutdowns, the crude and products already in the pipelines cannot be pushed out the other end.
- Wind damage to aboveground tanks at storage terminals can also impact supplies into the pipeline.

2008: The hurricane season was very active, with 16 named storms, of which eight became hurricanes and five of those were major hurricanes. For the U.S. oil and natural gas industry, the two most serious storms of 2008 were Hurricane Ike, which made landfall in mid-September near Baytown, Texas, and Hurricane Gustav, which made landfall on September 1 in Louisiana. Hurricane Gustav, a strong Category 2 storm, kept off-line oil and natural gas delivery systems and production platforms that had not yet been fully restored from a smaller storm two weeks earlier, and brought significant flooding as far north as Baton Rouge. Hurricane Ike, another strong Category 2 hurricane, caused significant portions of the production, processing, and pipeline infrastructure along the Gulf Coast in East Texas and Louisiana to shut down. Ike caused significant destruction to electric transmission and distribution lines, and these damages delayed the restart of major processing plants, pipelines, and refineries. As many as 3.7 million customers were without electric power following the storm, with about 2.5 million in Texas alone. At the peak of disruptions, more than 20 percent of total U.S. refinery capacity was idled.

The Minerals Management Service (MMS) - now called the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE) - estimated that 2,127 of the 3,800 total oil and natural gas production platforms in the Gulf of Mexico were exposed to hurricane conditions, with winds greater than 74 miles per hour, from Hurricanes Gustav and Ike. A total of 60 platforms were destroyed as a result of the two storms; some platforms previously reported as having extensive damage were reassessed and determined to be destroyed. The destroyed platforms had produced 13,657 barrels of oil and 96.5 million cubic feet of natural gas daily, or 1.05 percent of the oil and 1.3 percent of the natural gas produced daily in the Gulf of Mexico.
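Those percentages imply Gulf-wide production totals that can be sanity-checked with two divisions. Here is a minimal sketch in Python, using only the figures quoted above (the variable names are ours):

    # If the destroyed platforms' output was 1.05% of Gulf oil production
    # and 1.3% of Gulf natural gas production, the implied daily totals
    # for the whole Gulf follow by simple division.
    destroyed_oil_bbl_per_day = 13_657   # barrels of oil per day
    destroyed_gas_mmcf_per_day = 96.5    # million cubic feet of gas per day

    implied_gulf_oil = destroyed_oil_bbl_per_day / 0.0105
    implied_gulf_gas = destroyed_gas_mmcf_per_day / 0.013

    print(f"Implied Gulf oil output: {implied_gulf_oil:,.0f} barrels/day")  # about 1.3 million
    print(f"Implied Gulf gas output: {implied_gulf_gas:,.0f} MMcf/day")     # about 7,400

The implied totals, roughly 1.3 million barrels of oil and 7.4 billion cubic feet of natural gas per day, are broadly consistent with the Gulf's share of U.S. production described at the top of this article.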
: Hurricane Ivan was the strongest hurricane of the 2004 season and among one of the most powerful Atlantic hurricanes on record. It moved across the Gulf of Mexico to make landfall in Alabama. Ivan then looped across Florida and back into the Gulf, regenerating into a new tropical system, which moved into Louisiana and Texas. The MMS estimated approximately 150 offshore facilities and 10,000 miles of pipelines were in the direct path of Ivan. Seven platforms were destroyed and 24 others damaged. The oil and natural gas industry submitted numerous damage reports to MMS, including for mobile drilling rigs, offshore platforms, producing wells, topside systems including wellheads and production and processing equipment, risers, and pipeline systems that transport oil and gas ashore from offshore facilities.
History of Initiative & Referendum in Arizona

The history of initiative and referendum in Arizona began when the state acquired statewide initiative, referendum, and recall rights at the time of statehood in 1912. The first initiative in the state was for women's suffrage. It was a landslide victory, passing by a margin of greater than two to one on Nov. 5, 1912.

Then, in 1914, Arizona saw 15 qualified initiatives, a record that stood until 2006, when 19 initiatives reached the ballot. Four of the 1914 initiatives passed because of the efforts of organized labor. One prohibited blacklisting of union members; a second established an "old age and mothers' pension"; another established a state government contract system; and a fourth limited businesses' employment of non-citizens. Lastly, the voters in 1914 passed an initiative that barred the governor and legislature from amending or repealing initiatives. In response, the legislature tried to pass a constitutional amendment that would make it more difficult to pass initiatives. Because this amendment needed the approval of voters, the Arizona Federation of Labor waged a campaign against the measure. The amendment was narrowly defeated in 1916.

[A year-by-year table appeared here listing the number of propositions on the Arizona ballot and how many were approved or defeated; it covered all ballot measures, not just initiated measures, and its data rows were not preserved. See also: Arizona ballot measures.]

Arizonans owe many of their reforms to John Kromko. Kromko, like most Arizonans, is not a native; he was born near Erie, Pennsylvania, in 1940 and moved to Tucson in the mid-1960s. He was active in protests against the Vietnam War, and in the 1970s and 1980s he was elected to the lower house of the state legislature several times. By night, he was a computer-programming instructor; by day, he was Arizona's "Mr. Initiative."

Kromko's first petition was a referendum drive, on free-speech grounds, to stop a Tucson city council ordinance banning topless dancing. In 1976 Kromko was among the handful of Arizonans who, in cooperation with the People's Lobby Western Bloc campaign, succeeded in putting on the state ballot an initiative to phase out nuclear power. The initiative lost at the polls, but Kromko's leadership on the issue got him elected to his first term in the legislature.

Repealing the sales tax on food

Once elected, Kromko set his sights on abolishing the sales tax on food, a "regressive" tax that hits the poor hardest. Unsuccessful in the legislature, Kromko launched a statewide initiative petition and got enough signatures to put food-tax repeal on the ballot. The legislature, faced with the initiative, acted to repeal the tax.

After the food-tax victory, Kromko turned to voter registration reform. Again the legislature was unresponsive, so he launched an initiative petition. He narrowly missed getting enough signatures in 1980, and he failed to win re-election that year. Undaunted, he revived the voter registration campaign and turned to yet another cause: Medicaid funding. Arizona in 1981 was the only state without Medicaid, since the legislature had refused to appropriate money for the state's share of this federal program. In 1982, with an initiative petition drive under way and headed for success, the legislature got the message and established a Medicaid program. Kromko and his allies on this issue, the state's churches, were satisfied and dropped their petition drive.
Motor Voter initiative

The voter registration initiative, now under the leadership of Les Miller, a Phoenix attorney, and the state Democratic Party, gained ballot placement and voter approval. In the ensuing four years, this "Motor Voter" initiative increased by over 10 percent the proportion of Arizona's eligible population registered to vote.

Late legislative career

Kromko, re-elected to the legislature in 1982, took up his petitions again in 1983 to prevent construction of a freeway in Tucson that would have smashed through several residential neighborhoods. The initiative would merely have made freeway plans subject to voter approval, but Tucson officials, seeing the campaign as the death knell for their freeway plans, blocked its placement on the ballot through various legal technicalities. Kromko and neighborhood activists fighting to save their homes refused to admit defeat. They began a new petition drive in 1984, qualified their measure for the ballot, and won voter approval for it in November 1985. Arizona's moneyed interests poured funds into a campaign to unseat Kromko in 1986. Kromko not only survived but also fought back by supporting a statewide initiative to limit campaign contributions, sponsored by his colleague in the legislature, Democratic State Representative Reid Ewing of Tucson. Voters passed the measure by a two-to-one margin.

Kromko's initiative exploits have made him the most effective Democratic political figure, besides former governor Bruce Babbitt, in this perennially Republican-dominated state. And Babbitt owes partial credit for one of his biggest successes - enactment of restrictions on the toxic chemical pollution of drinking water - to Kromko. Early in 1986 Kromko helped organize an environmentalist petition drive for an anti-toxics initiative, while Babbitt negotiated with the legislature for passage of a similar bill. When initiative backers had enough signatures to put their measure on the ballot, the legislature bowed to the pressure and passed Babbitt's bill. Even today, Kromko is still active in politics, writing letters to the editor about immigration policies.

Petition drive problems in 2008

2008 was a tough year for ballot initiatives in Arizona. Nine citizen initiatives filed signatures to qualify for the November 2008 Arizona ballot by the state's July 3 petition drive deadline. In the end, only six of the initiatives were certified; three were disqualified as a result of a historically high number of problems with flawed petition signatures. When the November vote was held, only one of the six that qualified for the ballot was approved.

Criticisms of process

After 19 initiatives were proposed in 2006, legislators worried about "ballot fatigue," or overuse of the initiative system. This led legislators to consider steps to limit or otherwise exert more control over the initiative process. Ironically, any attempt to alter the initiative and referendum process would require an amendment to the state constitution, and would thus itself be put to voters as a referendum.

This article is significantly based on an article published by the Initiative & Referendum Institute, and is used with their permission. Their article, in turn, relies on research in David Schmidt's book, Citizen Lawmakers: The Ballot Initiative Revolution. Portions of this article were also taken from Wikipedia, the free encyclopedia, under the GNU license.
- ↑ Arizona Daily Star, "'Clown' takes some serious initiative," July 20, 2007
- ↑ Arizona Republic, "'Flawed' election petitions face review," September 13, 2008
- ↑ Phoenix New Times, "Citizen initiatives have been kicked off the ballot this year in record numbers, and the problems could go much deeper than invalid signatures," August 21, 2008
- ↑ Arizona Republic, "Legislators seeking more control over initiatives," Feb. 13, 2007
- ↑ History of Arizona's initiative
- ↑ David Schmidt, Citizen Lawmakers: The Ballot Initiative Revolution, Temple University Press, 352 pp., ISBN 0877229031, October 1991
General Chemistry/Periodicity and Electron Configurations

Blocks of the Periodic Table

The Periodic Table does more than just list the elements. The word periodic means that within each row, or period, there is a pattern of characteristics in the elements. This is because the elements are listed in part by their electron configuration. The alkali metals and alkaline earth metals have one and two valence electrons (electrons in the outer shell) respectively. These elements lose electrons to form bonds easily, and are thus very reactive. These elements make up the s-block of the periodic table. The p-block, on the right, contains common non-metals such as chlorine and helium. The noble gases, in the column on the far right, almost never react, since they have eight valence electrons, which makes them very stable. The halogens, directly to the left of the noble gases, readily gain electrons and react with metals. The s and p blocks make up the main-group elements, also known as representative elements. The d-block, which is the largest, consists of transition metals such as copper, iron, and gold. The f-block, on the bottom, contains rarer metals including uranium. Elements in the same group or family have the same configuration of valence electrons, making them behave in chemically similar ways.

Causes for Trends

Certain phenomena cause the periodic trends to occur. You must understand them before learning the trends.

Effective Nuclear Charge

The effective nuclear charge is the amount of positive charge acting on an electron. It is the number of protons in the nucleus minus the number of electrons in between the nucleus and the electron in question. Basically, the nucleus attracts an electron, but other electrons in lower shells repel it (opposites attract, likes repel).

Shielding Effect

The shielding (or screening) effect is closely related to effective nuclear charge. The core electrons repel the valence electrons to some degree. The more electron shells there are (a new shell for each row in the periodic table), the greater the shielding effect is. Essentially, the core electrons shield the valence electrons from the positive charge of the nucleus.

Electron-Electron Repulsions

When two electrons are in the same shell, they repel each other slightly. This effect is mostly canceled out by the strong attraction to the nucleus, but it does cause electrons in the same shell to spread out a little. Lower shells experience this effect more because they are smaller and allow the electrons to interact more.

Coulomb's Law

Coulomb's law is an equation that determines the amount of force with which two charged particles attract or repel each other. It is F = kq₁q₂/r², where q₁ and q₂ are the amounts of charge (+1e for protons, −1e for electrons), r is the distance between them, and k is a constant. You can see that doubling the distance quarters the force. Also, a large number of protons attracts an electron with much more force than just a few protons would.
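To make these two ideas concrete, here is a minimal Python sketch (my own illustration; the charge values, distances, and the crude core-electron subtraction are not from this text):

```python
# Coulomb's law: F = k * q1 * q2 / r^2
K = 8.9875e9   # Coulomb's constant, N*m^2/C^2
E = 1.602e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K * q1 * q2 / r ** 2

# Doubling the distance quarters the force:
f_near = coulomb_force(E, E, 1e-10)  # two unit charges 1 angstrom apart
f_far = coulomb_force(E, E, 2e-10)   # same charges, twice as far
print(f_near / f_far)                # -> 4.0

# Crude effective nuclear charge: protons minus intervening core electrons.
def z_eff(protons, core_electrons):
    return protons - core_electrons

print(z_eff(11, 10))  # sodium: the lone valence electron feels roughly +1
print(z_eff(17, 10))  # chlorine: valence electrons feel roughly +7
```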
Trends in the Periodic Table

Most of the elements occur naturally on Earth. However, all elements beyond uranium (number 92) are called trans-uranium elements and never occur outside of a laboratory. Most of the elements occur as solids or gases at STP. STP is standard temperature and pressure, which is 0°C and 1 atmosphere of pressure. There are only two elements that occur as liquids at STP: mercury (Hg) and bromine (Br). Bismuth (Bi) is the last stable element on the chart; all elements after bismuth are radioactive and decay into more stable elements, and some elements before bismuth are radioactive as well.

Atomic Radius

Leaving out the noble gases, atomic radii are larger on the left side of the periodic chart and grow progressively smaller as you move right across a period. Conversely, radii increase as you move down a group. Atomic radii decrease along a period due to greater effective nuclear charge. Atomic radii increase down a group due to the shielding effect of the additional core electrons and the presence of another electron shell.

Ionic Radius

For nonmetals, ions are bigger than atoms, as the ions have extra electrons. For metals, it is the opposite. Extra electrons (in negative ions, called anions) cause additional electron-electron repulsions, making the electrons spread out farther. Fewer electrons (in positive ions, called cations) cause fewer repulsions, allowing the electrons to sit closer to the nucleus.

Ionization Energy

Ionization energy is the energy required to strip an electron from an atom (in the gas state). Ionization energy is also a periodic trend within the periodic table. Moving left to right within a period or upward within a group, the first ionization energy generally increases. As the atomic radius decreases, it becomes harder to remove an electron that is closer to a more positively charged nucleus. Ionization energy decreases going left across a period because there is a lower effective nuclear charge keeping the electrons attracted to the nucleus, so less energy is needed to pull one out. It decreases going down a group due to the shielding effect. Remember Coulomb's law: as the distance between the nucleus and electrons increases, the force decreases at a quadratic rate. Ionization energy is considered a measure of the tendency of an atom or ion to surrender an electron, or of the strength of the electron binding; the greater the ionization energy, the more difficult it is to remove an electron. The ionization energy may be an indicator of the reactivity of an element. Elements with a low ionization energy tend to be reducing agents and form cations, which in turn combine with anions to form salts.

Electron Affinity

Electron affinity is the opposite of ionization energy. It is the energy released when an electron is added to an atom. Electron affinity is highest in the upper right of the table (excluding the noble gases) and lowest in the lower left. However, electron affinity is actually negative for the noble gases. They already have a complete valence shell, so there is no room in their orbitals for another electron. Adding an electron would require creating a whole new shell, which takes energy instead of releasing it. Several other elements have extremely low electron affinities because they are already in a stable configuration, and adding an electron would decrease stability. Electron affinity follows the same trends for the same reasons as ionization energy.

Electronegativity

Electronegativity is how strongly an atom attracts electrons within a bond. It is measured on a scale with fluorine at 4.0 and francium at 0.7. Electronegativity decreases from upper right to lower left, because of atomic radius, the shielding effect, and effective nuclear charge, in the same manner that ionization energy decreases.

Metallic Character

Metallic elements are shiny, usually gray or silver colored, and good conductors of heat and electricity. They are malleable (can be hammered into thin sheets) and ductile (can be stretched into wires). Some metals, like sodium, are soft and can be cut with a knife. Others, like iron, are very hard.
Non-metallic atoms are dull, usually colorful or colorless, and poor conductors. They are brittle when solid, and many are gases at STP. Metals give away their valence electrons when bonding, whereas non-metals take electrons. The metals are towards the left and center of the periodic table, in the s-block, d-block, and f-block. Poor metals and metalloids (somewhat metal, somewhat non-metal) are in the lower left of the p-block. Non-metals are on the right of the table. Metallic character increases from right to left and from top to bottom; non-metallic character is just the opposite. This follows from the other trends: ionization energy, electron affinity, and electronegativity.
Per Square Meter

Warm-up: Relationships in Ecosystems (10 minutes)
1. Begin this lesson by presenting the PowerPoint, "Per Square Meter."
2. After the presentation, ask students to think of animal relationships that correspond to each of the following types: competition, predation, parasitism, and mutualism.
a. For example, two animals that compete for food are lions and cheetahs (they compete for zebras and antelopes).
3. Record the different types of relationships on the board.

Activity One: My Own Square Meter (30 minutes)
1. Have students go outside and pick a small area (about a square meter each) to explore. It is preferable that this area be grassy or 'natural'. The school playground might be a good spot.
2. Each student should keep a list of both the living organisms and man-made products found in their area (e.g., grass, birds, insects, flowers, sidewalk, etc.). Students are allowed to collect a few specimens from this area to show to the class. If students do not have jars, they can draw their observations. *See Reproducible #1

Activity Two: Who Lives in Our Playground? (10 minutes)
1. After listing, collecting, and drawing specimens, students should return to the classroom and present their findings.
a. Have the students sit in a circle. Each student should read his or her list of findings out loud. If they collected specimens or drew observations, have them present them to the class.
2. Make a list of these findings on the board. Only write repeated findings once (to avoid writing grass as many times as there are students). Keep one list of living organisms and one list of man-made products.
3. For now, focus on the list of living organisms. As a class, help students name possible relationships between the organisms. See if they can find one of each type of relationship. For example, a bee on a flower is an example of mutualism because the nectar from the flower nourishes the bee and, in return, the bee pollinates the flower.

Activity Three: Humans and the Environment: Human Effect on One Square Meter (15 minutes)
1. Now that students have focused on the animal relationships of their square meter, it is time to examine the effect of humans on the natural environment. Focus on the human-made product list. Ask students to consider the possible relationships between the human-made products and the environment. Prompt a brief class discussion on the effects of man-made products on the environment. Use the following questions as guidelines.
a. What is the effect of an empty drink bottle (or any other piece of trash) in a grassy field? Will it decompose? Will it be used by an animal as a habitat or food? Answer: Trash is an invasive man-made product. Most trash is non-biodegradable and is harmful to the environment and to ecosystem relations. Therefore, it is a harmful addition to the square meter.
b. Who left the bottle there? Do you think they are still thinking about it? Did they leave it there on purpose? Why did they leave it there? Answer: Most people litter thoughtlessly. They are not thinking about their actions and how they may affect the environment or ecosystems. It is important that people recognize that litter has a major effect on the environment.
c. What about a bench? Does a park bench have the same effect on the environment as a piece of trash? Answer: A park bench can be considered a positive human-made product. A park bench has little negative effect on the environment and even helps humans further appreciate ecosystems.
The park bench may even provide shelter or a perch for the ecosystem's living organisms.
d. Is there a difference between positive human-made products and negative ones? What are some examples of each? Answer: Yes, there is a difference between positive and negative human-made products. Positive products have minimal effect on the functioning of ecosystems, whereas negative products have major effects on ecosystems. An example of a positive human-made product would be a solar-powered house. An example of a negative human-made product would be a car that produces a lot of pollution.

Wrap-Up: Our Classroom Eco-Web (20-30 minutes)
1. Have students create classroom artwork by illustrating the relationships within their ecosystems.
2. Each student should draw at least two components of his or her square meter.
3. After everyone has finished their illustrations, create a web relating the illustrations. Draw arrows between illustrated components with written indications of the type of relationship exemplified.
4. Post the finished product in the classroom so that students can see the interconnectedness of the earth's ecosystems.

Extension: Exploring Aquatic Ecosystems (Ongoing Activity)
Students can explore another type of ecosystem by creating a classroom aquarium or terrarium. The supplies for both of these mini ecosystems can be found at your local pet store. Students should help set up and maintain the aquarium or terrarium throughout the year. Periodically, students should observe how the mini ecosystem is progressing, note changes, and assess the relationships between the organisms of the ecosystem. This way, students are able to directly participate in the functioning of a natural system. Another related activity might be to take your students on a field trip to an ecosystem different from that of your school. If you live near a river, lake, or ocean, take them there to explore different ecological relations. If you live in a city, examples of diverse ecosystems can be found at the local zoo or aquarium.
From the time of Aristotle (384-322 BC) until the late 1500s, gravity was believed to act differently on different objects.
- Drop a metal bar and a feather at the same time… which one hits the ground first?
- Obviously, common sense will tell you that the bar will hit first, while the feather slowly flutters to the ground.
- In Aristotle's view, this was because the bar was being pulled harder (and faster) by gravity because of its physical properties.
- Because everyone sees this when they drop different objects, it wasn't questioned for almost 2000 years.

Galileo Galilei was the first major scientist to refute (prove wrong) Aristotle's theories.
- In his famous (at least to physicists!) experiment, Galileo went to the top of the Leaning Tower of Pisa and dropped a wooden ball and a lead ball, both the same size but different masses.
- They both hit the ground at the same time, even though Aristotle would say that the heavier metal ball should hit first.
- Galileo had shown that the different rates at which some objects fall are due to air resistance, a type of friction.
- Get rid of friction (air resistance) and all objects will fall at the same rate.
- Galileo said that the acceleration of any object (in the absence of air resistance) is the same.
- To this day we follow the model that Galileo created.

ag = g = 9.81m/s²
ag = g = acceleration due to gravity

Since gravity is just an acceleration like any other, it can be used in any of the formulas that we have used so far.
- Just be careful about using the correct sign (positive or negative) depending on the problem.

Examples of Calculations with Gravity

Example 1: A ball is thrown up into the air at an initial velocity of 56.3m/s. Determine its velocity after 4.52s have passed.
In the question the velocity upwards is positive, and I'll keep it that way. That just means that I have to make sure that I use gravity as a negative number, since gravity always acts down.
vf = vi + at = 56.3m/s + (-9.81m/s²)(4.52s)
vf = 12.0 m/s
This value is still positive, but smaller. The ball is slowing down as it rises into the air.

Example 2: I throw a ball down off the top of a cliff so that it leaves my hand at 12m/s. Determine how fast it is going 3.47 seconds later.
In this question I gave a downward velocity as positive. I might as well stick with this, but that means I have defined down as positive. That means gravity will be positive as well.
vf = vi + at = 12m/s + (9.81m/s²)(3.47s)
vf = 46 m/s
Here the number is getting bigger. It's positive, but in this question I've defined down as positive, so it's speeding up in the positive direction.

Example 3: I throw up a ball at 56.3 m/s again. Determine how fast it is going after 8.0s.
We're defining up as positive again.
vf = vi + at = 56.3m/s + (-9.81m/s²)(8.0s)
vf = -22 m/s
Why did I get a negative answer?
- The ball reached its maximum height, where it stopped, and then started to fall down.
- Falling down means a negative velocity.

There are a few rules that you have to keep track of. Let's look at the way an object thrown up into the air moves.
As the ball is going up…
- It starts at the bottom at the maximum speed.
- As it rises, it slows down.
- It finally reaches its maximum height, where for a moment its velocity is zero.
- This is exactly halfway through the flight time.
As the ball is coming down…
- The ball begins to speed up, but downwards.
- When it reaches the same height that it started from, it will be moving at the same speed as it was originally thrown at.
- It takes just as long to go up as it takes to come down.

Example 4: I throw my ball up into the air (again) at a velocity of 56.3 m/s.
a) Determine how much time it takes to reach its maximum height.
- It reaches its maximum height when its velocity is zero. We'll use that as the final velocity.
- Also, if we define up as positive, we need to remember to define down (like gravity) as negative.
a = (vf - vi) / t
t = (vf - vi) / a = (0 - 56.3m/s) / (-9.81m/s²)
t = 5.74s
b) Determine how high it goes.
- It's best to try to avoid using the number you calculated in part (a), since if you made a mistake, this answer will be wrong also.
- If you can't avoid it, then go ahead and use it.
vf² = vi² + 2ad
d = (vf² - vi²) / 2a = (0 - 56.3²) / (2 × -9.81m/s²)
d = 1.62e2 m
c) Determine how fast it is going when it reaches my hand again.
- Ignoring air resistance, it will be going as fast coming down as it was going up.

You might have heard people in movies say how many "gee's" they were feeling.
- All this means is that they are comparing the acceleration they are feeling to regular gravity.
- So, right now, you are experiencing 1g… regular gravity.
- During lift-off the astronauts in the space shuttle experience about 4g's.
- That works out to about 39m/s².
- Gravity on the moon is about 1.7m/s² = 0.17g
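As a quick numerical check on these worked examples, here is a short Python sketch (the function names are my own, not from the lesson):

```python
G = -9.81  # acceleration due to gravity in m/s^2, taking up as positive

def final_velocity(vi, t, a=G):
    """v_f = v_i + a*t"""
    return vi + a * t

def time_to_peak(vi, a=G):
    """Solve v_f = v_i + a*t for t, with v_f = 0 at the peak."""
    return -vi / a

def max_height(vi, a=G):
    """Solve v_f^2 = v_i^2 + 2*a*d for d, with v_f = 0 at the peak."""
    return -vi**2 / (2 * a)

vi = 56.3  # m/s, thrown straight up
print(final_velocity(vi, 4.52))  # Example 1: ~12.0 m/s, still rising
print(final_velocity(vi, 8.0))   # Example 3: ~-22 m/s, now falling
print(time_to_peak(vi))          # Example 4a: ~5.74 s
print(max_height(vi))            # Example 4b: ~162 m (1.62e2 m)
```

Because the motion is symmetric, final_velocity(vi, 2 * time_to_peak(vi)) returns about -56.3 m/s, confirming Example 4c: the ball comes back at the same speed it left with.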
March 30, 2011 by Valerie Elkins

The short answer is keizu. The longer answer is not so easy. There are several reasons why it is difficult for those of Japanese ancestry living outside of Japan to trace their lineage. One of the main reasons is a lack of understanding of the language. I am not going to sugarcoat it: learning Japanese is hard, BUT learning how to pronounce it is not.

There are 5 basic vowel sounds in Japanese. They are always pronounced the same, unlike in English! Vowel lengths are all uniformly short:
- a ~ as in 'father'
- e ~ as in 'bet'
- i ~ as in 'beet'
- u ~ as in 'boot'
- o ~ as in 'boat'

You do not need to know everything in Japanese, but learning some genealogical terms is helpful. Here is a glossary of Japanese genealogical terms to begin building your vocabulary.
- koseki ~ household register; includes everyone in a household under the head of house (who usually was male)
- koseki tohon ~ certified copy which records everything from the original record
- koseki shohon ~ certified copy which records only parts of the original
- joseki ~ expired register in which all persons originally entered have been removed because of death, change of residence, etc. A joseki file is ordinarily available for 80 years after its expiration.
- kaisei genkoseki ~ revised koseki
- honseki ~ permanent residence or registered address (i.e., a person may move to Tokyo but their records remain in their hometown city hall)
- genseki ~ another name for honseki
- kakocho ~ Buddhist death register
- kaimyo ~ Buddhist name given to a deceased person and recorded in the kakocho
- homyo ~ Buddhist name given to living converts, similar to kaimyo
- kuni ~ country or nation
- ken ~ prefecture
- shi ~ city
- gun ~ county
- to ~ metropolitan prefecture (Tokyo-to); similar to ken
- do ~ urban prefecture (Hokkaido); similar to ken
- fu ~ urban prefecture (Kyoto-fu, Osaka-fu); similar to ken
- ku ~ ward in some large cities (Sapporo, Sendai, Tokyo), divided into towns (cho)
- cho ~ town
- aza ~ unorganized district
- machi ~ town within a city (shi) or ward (ku), or town within a county (gun)
- chome ~ smaller division of a town (cho) in some neighborhoods
- mura or son ~ village within a county (gun)
- koshu or hittousha or setainushi ~ head of household, the head of the family
- zen koshu ~ former head of household
- otto ~ husband
- tsuma ~ wife
- chichi or fu ~ father
- haha or bo ~ mother
- sofu ~ grandfather
- sobo ~ grandmother
- otoko or dan or nan ~ male, man, son
- onna or jo ~ female, woman, daughter
- ani or kei or kyou ~ older brother
- otouto or tei ~ younger brother
- ane or shi ~ older sister
- imouto or mai ~ younger sister
- mago or son ~ grandchild
- himago or souson ~ great-grandchild
- oi ~ nephew
- mei ~ niece
- youshi ~ adopted child or son
- youjo ~ adopted daughter
- muko youshi ~ a man without sons may adopt his eldest daughter's husband as his own son; the young man takes his wife's surname and is listed on her family's koseki
- seimei or shimei ~ full name, family name
- shussei or shusshou ~ birth
- shibou ~ deceased
- nen or toshi ~ year
- gatsu, getsu or tsuki ~ month
- hi or nichi or ka ~ day
- ji or toki ~ hour, time
- sai or toshi ~ age
- issei ~ person born in Japan who later emigrated elsewhere
- nisei ~ child/generation of issei, born outside of Japan
- sansei ~ child/generation of nisei, born outside of Japan
- yonsei ~ child/generation of sansei, born outside of Japan
- gosei ~ child/generation of yonsei, born outside of Japan

There is another Japanese term you really need to know. It is ganbatte, which means 'hang in there' or 'do your best'; either one will work.
Cleopatra, queen of Egypt and lover of Julius Caesar and Mark Antony, takes her life following the defeat of her forces against Octavian, the future first emperor of Rome.

Cleopatra, born in 69 B.C., was made Cleopatra VII, queen of Egypt, upon the death of her father, Ptolemy XII, in 51 B.C. Her brother was made King Ptolemy XIII at the same time, and the siblings ruled Egypt under the formal title of husband and wife. Cleopatra and Ptolemy were members of the Macedonian dynasty that had governed Egypt since the death of Alexander the Great in 323 B.C. Although Cleopatra had no Egyptian blood, she alone in her ruling house learned Egyptian. To further her influence over the Egyptian people, she was also proclaimed the daughter of Re, the Egyptian sun god.

Cleopatra soon fell into dispute with her brother, and civil war erupted in 48 B.C. Rome, the greatest power in the Western world, was also beset by civil war at the time. Just as Cleopatra was preparing to attack her brother with a large Arab army, the Roman civil war spilled into Egypt. Pompey the Great, defeated by Julius Caesar in Greece, fled to Egypt seeking solace but was immediately murdered by agents of Ptolemy XIII. Caesar arrived in Alexandria soon after and, finding his enemy dead, decided to restore order in Egypt.

During the preceding century, Rome had exercised increasing control over the rich Egyptian kingdom, and Cleopatra sought to advance her political aims by winning the favor of Caesar. She traveled to the royal palace in Alexandria and was allegedly carried to Caesar rolled in a rug, which was offered as a gift. Cleopatra, beautiful and alluring, captivated the powerful Roman leader, and he agreed to intercede in the Egyptian civil war on her behalf. In 47 B.C., Ptolemy XIII was killed after a defeat by Caesar's forces, and Cleopatra was made dual ruler with another brother, Ptolemy XIV. Julius and Cleopatra spent several amorous weeks together, and then Caesar departed for Asia Minor, where he declared "Veni, vidi, vici" (I came, I saw, I conquered), after putting down a rebellion.

In June 47 B.C., Cleopatra bore a son, whom she claimed was Caesar's and named Caesarion, meaning "little Caesar." Upon Caesar's triumphant return to Rome, Cleopatra and Caesarion joined him there. Under the auspices of negotiating a treaty with Rome, Cleopatra lived discreetly in a villa that Caesar owned outside the capital. After Caesar was assassinated in March 44 B.C., she returned to Egypt. Soon after, Ptolemy XIV died, likely poisoned by Cleopatra, and the queen made her son co-ruler with her as Ptolemy XV Caesar.

With Julius Caesar's murder, Rome again fell into civil war, which was temporarily resolved in 43 B.C. with the formation of the second triumvirate, made up of Octavian, Caesar's great-nephew and chosen heir; Mark Antony, a powerful general; and Lepidus, a Roman statesman. Antony took up the administration of the eastern provinces of the Roman Empire, and he summoned Cleopatra to Tarsus, in Asia Minor, to answer charges that she had aided his enemies. Cleopatra sought to seduce Antony, as she had Caesar before him, and in 41 B.C. arrived in Tarsus on a magnificent river barge, dressed as Venus, the Roman goddess of love. Successful in her efforts, Antony returned with her to Alexandria, where they spent the winter in debauchery. In 40 B.C., Antony returned to Rome and married Octavian's sister Octavia in an effort to mend his strained alliance with Octavian. The triumvirate, however, continued to deteriorate.
In 37 B.C., Antony separated from Octavia and traveled east, arranging for Cleopatra to join him in Syria. In their time apart, Cleopatra had borne him twins, a son and a daughter. According to Octavian's propagandists, the lovers were then married, which violated the Roman law restricting Romans from marrying foreigners. Antony's disastrous military campaign against Parthia in 36 B.C. further reduced his prestige, but in 34 B.C. he was more successful against Armenia. To celebrate the victory, he staged a triumphal procession through the streets of Alexandria, in which he and Cleopatra sat on golden thrones, and Caesarion and their children were given imposing royal titles. Many in Rome, spurred on by Octavian, interpreted the spectacle as a sign that Antony intended to deliver the Roman Empire into alien hands.

After several more years of tension and propaganda attacks, Octavian declared war against Cleopatra, and therefore Antony, in 31 B.C. Enemies of Octavian rallied to Antony's side, but Octavian's brilliant military commanders gained early successes against his forces. On September 2, 31 B.C., their fleets clashed at Actium in Greece. After heavy fighting, Cleopatra broke from the engagement and set course for Egypt with 60 of her ships. Antony then broke through the enemy line and followed her. The disheartened fleet that remained surrendered to Octavian. One week later, Antony's land forces surrendered.

Although they had suffered a decisive defeat, it was nearly a year before Octavian reached Alexandria and again defeated Antony. In the aftermath of the battle, Cleopatra took refuge in the mausoleum she had commissioned for herself. Antony, informed that Cleopatra was dead, stabbed himself with his sword. Before he died, another messenger arrived, saying Cleopatra still lived. Antony had himself carried to Cleopatra's retreat, where he died after bidding her to make her peace with Octavian. When the triumphant Roman arrived, she attempted to seduce him, but he resisted her charms. Rather than fall under Octavian's domination, Cleopatra committed suicide on August 30, 30 B.C., possibly by means of an asp, a venomous Egyptian serpent and symbol of divine royalty.

Octavian then executed her son Caesarion, annexed Egypt into the Roman Empire, and used Cleopatra's treasure to pay off his veterans. In 27 B.C., Octavian became Augustus, the first and arguably most successful of all Roman emperors. He ruled a peaceful, prosperous, and expanding Roman Empire until his death in 14 A.D. at the age of 75.
N'kisi & the N'kisi Project

N'kisi (pronounced "in-key-see") is a captive-bred, eight- or nine-year-old, hand-raised African Grey Parrot whose owner, Aimée Morgana, believes uses language. She doesn't think he just sounds out words. She thinks he communicates with her in language, which would in effect make N'kisi a rational parrot. For example, N'kisi utters "pretty smell medicine" when he wants to describe the aromatherapy oils that Aimée uses.* Furthermore, Aimée says her parrot has a fine sense of humor and knows how to laugh. Imagine having conversations with a humorous parrot. Think of all the things you could talk and joke about, besides aromatherapy. You could discuss the fame that would come to anyone who had a parrot that can think and converse in intelligent discourse, like pretty smell medicine and look at my pretty naked body.* And when some nasty skeptic makes fun of you, the two of you can joke about it.

I'm afraid this story stretches the boundaries of reasonable credibility, though stories of rational parrots go back at least to the 17th century. John Locke, for example, relates a tale of a Portuguese-speaking parrot of some note in his Essay Concerning Human Understanding (II.xxvii.8). These cases are more likely cases of self-deception, delusion, and gullibility than of language-using parrots.

Listen to this audio clip of N'kisi, Aimée, and a toy that "talks" when a button is pushed. First listen without reading the transcript. Some of it is intelligible, especially after the fourth or fifth repetition, but it is difficult to understand the "conversation," especially with the toy making its sounds as Aimée stimulates her parrot. Some of the tape sounds like gibberish until you are told what to listen for. When you listen while reading the transcript, something amazing happens: you can hear just what you're reading. Why is that? The same thing happens when you listen to audio tapes played backward. When you just listen without anyone telling you what to listen for, you usually don't understand anything intelligible. But as soon as someone shows or tells you what to listen for, you can hear the message. Such is the power of suggestion and the way of audio perception. Hearing is a constructive process, like vision, in that bits of sensory data are "filled in" by the brain to produce a visual or auditory perception that is clear and distinct, and in accord with your expectations.

Consider the following from an interview with Dr. Irene Pepperberg, Morgana's inspiration, who has been studying Alex, an African Grey Parrot, for many years:

We were doing demos at the Media Lab [at MIT] for our corporate sponsors; we had a very small amount of time scheduled and the visitors wanted to see Alex work. So we put a number of differently colored letters on the tray that we use, put the tray in front of Alex, and asked, "Alex, what sound is blue?" He answers, "Ssss." It was an "s", so we say "Good birdie" and he replies, "Want a nut." Well, I don't want him sitting there using our limited amount of time to eat a nut, so I tell him to wait, and I ask, "What sound is green?" Alex answers, "Ssshh." He's right, it's "sh," and we go through the routine again: "Good parrot." "Want a nut." "Alex, wait. What sound is orange?" "ch." "Good bird!" "Want a nut." We're going on and on and Alex is clearly getting more and more frustrated. He finally gets very slitty-eyed and he looks at me and states, "Want a nut. Nnn, uh, tuh."
Not only could you imagine him thinking, "Hey, stupid, do I have to spell it for you?" but the point was that he had leaped over where we were and had begun sounding out the letters of the words for us. This was in a sense his way of saying to us, "I know where you're headed! Let's get on with it," which gave us the feeling that we were on the right track with what we were doing.*

Dr. Pepperberg thinks the bird is responding cognitively to her questions rather than simply responding to a stimulus. She thinks the bird is getting frustrated, but she has stipulated earlier in the interview:

I never claim that Alex has full-blown language; I never would. I'm not going to be able to put Alex on a "T" stand and have you interview him the way you interview me.

So, whereas you or I might say "give me the nut or this interview is over" were we parrots with intentionality and language, the parrot's movements and sounds have to be less direct and more complex, so that they have to be interpreted for us by Pepperberg. In her view, Alex is "clearly getting more frustrated" and his frustration culminates with a "very slitty-eyed" expression. But this is Pepperberg's interpretation, as is her hearing the bird sound out the letters of the word 'nut'. It could have been a stutter for all we know, but Pepperberg is facilitating Alex's communication by telling us what she hears. The final paragraph indicates that Pepperberg is having a hard time drawing the line between imagining what a parrot might be thinking and projecting those thoughts into the parrot's movements and sounds. She's also having a hard time getting grant money (NIH turned her down), so she started her own private foundation, the Alex Foundation.

When news of N'kisi broke on the pages of BBC online, there was no mention in the article by Alex Kirby of the parrot having conversations with people other than Aimée Morgana. (The story was originally told in USA Today in the February 12, 2001, edition.) Despite the headline "Parrot's oratory stuns scientists," there was no evidence given that the parrot had stunned anyone during a conversation. It seems that Aimée is to her parrot what the facilitator is to her client in facilitated communication, except that the parrot is actually providing data to interpret and is more like Clever Hans, the horse that responded to unconscious movements of his master, than a disabled human who may not be providing any content or direction at all to the facilitator. It is Aimée who gives intentionality to the parrot's sounds. She is the one who attributes 'laughter' to his shrieks and conscious awareness to his responses, though those responses could be due to any one of many stimuli, consciously or unconsciously provided by Aimée or items in the immediate environment.

Nevertheless, Dr. Jane Goodall, who studies chimpanzees, met N'kisi and said that he provides an "outstanding example of interspecies communication." There is some evidence, however, that much of the work with language-using primates also mistakes subjective validation by scientists for complex linguistic abilities of their animal subjects (Wallman 1992).

According to Mr. Kirby, N'kisi not only uses language but has been tested for telepathy and he passed the test with flying colors:

In an experiment, the bird and his owner were put in separate rooms and filmed as the artist opened random envelopes containing picture cards. Analysis showed the parrot had used appropriate keywords three times more often than would be likely by chance.
Kirby doesn't provide any details about the experiment, so a reader might misinterpret this claim as implying that this parrot did about twice as well as people did in the ganzfeld telepathy experiments. In those experiments, subjects in separate rooms were monitored as one tried to telepathically send information from a picture or video to the other. Typically, there was a 20% chance of guessing what the item was, but results as high as 38% were reported in some meta-analyses. If the parrot scored three times better than chance, then he would have gotten 60% correct. The odds against a parrot randomly blurting out words that match up 60% of the time with pictures being looked at simultaneously in another room are so high that there is virtually no way this could happen by chance. However, as you might suspect, Kirby's claim is a bit misleading.

I assume that Kirby was writing about an experiment that was part of the N'kisi Project, a joint effort by Morgana and Rupert Sheldrake to test not only the parrot's language-using abilities but his telepathic talents as well. Sheldrake has already validated the telepathic abilities of a dog and thinks the "findings [of this experiment] are consistent with the hypothesis that N'kisi was reacting telepathically to Aimée's mental activity."* The full text of Sheldrake's study, published in the peer-reviewed Journal of Scientific Exploration, is available online. The title of the paper would send most journal editors to their grave, killed by laughter: "Testing a Language-Using Parrot for Telepathy." Fortunately for Sheldrake and his associates, there will always be a sympathetic editor for another story like that of J. B. Rhine and the telepathic horse, "Lady Wonder." At least Sheldrake's protocols show some measure of sophistication, unlike Rhine's. Even so, as the editor at the Journal of Scientific Exploration commented: "once again, we have suggestive results, a level of statistical significance that is less than compelling, and the devout wish that further work with refined protocols will ensue."* So, we'll just have to wait and see whether further study of N'kisi supports the telepathic hypothesis.

Anyway, here is how Sheldrake set up the experiment. He first compiled a list of 30 words from the bird's vocabulary that "could be represented by visual images." A package of 167 photos from a stock supplier was used for the test. Since only 20 of the photos corresponded to words on the list, the word list was reduced to 20. The word 'camera' was removed from the list because N'kisi "used it so frequently to comment on the cameras used in the tests themselves." Thus, they were left with 19 words.

During the tests, N'kisi remained in his cage in Aimée's apartment in Manhattan, New York. There was no one in the room with him. Meanwhile, Aimée went to a separate enclosed room on a different floor. N'kisi could not see or hear her, and in any case, Aimée said nothing, as confirmed by the audio track recorded on the camera that filmed her continuously. The distance between Aimée and N'kisi was about 55 feet. Aimée could hear N'kisi through a wireless baby monitor, which she used to gain "feedback" to help her to adjust her mental state as image sender. Both Aimée and N'kisi were filmed continuously throughout the test sessions by two synchronized cameras on time-coded videotape. The cameras were mounted on tripods and ran continuously without interruption throughout each session. N'kisi was also recorded continuously on a separate audio tape recorder.
(Sheldrake and Morgana 2003)

According to Sheldrake:

We conducted a total of 147 two-minute trials. The recordings of N'kisi during these trials were transcribed blind by three independent transcribers....He scored 23 hits: the key words he said corresponded to the target pictures....If N'kisi said a key word that did not correspond to the photograph, that was counted as a miss, and if he said a key word corresponding to the photograph, that was a hit. (Sheldrake and Morgana 2003)

However, sixty of the trials were discarded because in those trials N'kisi either was silent or uttered things that were not key words, i.e., showed no signs of telepathy. A few other trials were discarded because the transcribers did not agree on what N'kisi said. In short, Sheldrake's statistical conclusions are based on the results of 71 of the trials. I'll let the reader decide whether it was proper to omit 40% of the data because the parrot didn't utter a word on the key word list during those trials. Some might argue that those sessions should be counted as misses, and that ignoring so much data where the parrot clearly did not indicate any sign of telepathy is strong evidence that Sheldrake was more interested in confirming his biases than in getting at the truth.

N'kisi's misses were listed at 94. Ten of the 23 hits were on the picture that corresponded to the word 'flower', which N'kisi uttered 23 times during the trials. The flower image, selected randomly, was used in 17 trials. The image corresponding to water was used in 10 of the trials. The bird said 'water' in twelve trials and got 2 hits. It seems oddly biased that almost one-third of the images and more than half the hits came from just 2 of the 19 pictures. One of the peer reviewers thought that the flower word and picture played so heavy a role in the outcome that the paper's results were distorted, and that the paper should not be published. The other reviewer accepted Sheldrake's observation that even if you throw out the flower data, you still get some sort of statistical significance. This may be true. However, since the bird allegedly had a vocabulary of some 950 words at the time of the test, omitting sessions where the bird said nothing or said something not on the key list is unjustifiable. Furthermore, there is no evidence that it is reasonable to assume that when the parrot is by itself uttering words, it is trying to communicate telepathically with Morgana. Or are we to accept Sheldrake's assumption that the parrot turns his telepathic interest off and on, and that it was on only when he uttered a word on the key list? That assumption is no more valid than Morgana's belief that the telepathy doesn't work as well when she makes an effort to send a telepathic message to her parrot.

In any case, I wonder why Sheldrake didn't do a baseline study, where the parrot was videotaped for two minutes at a time while Morgana was taking an aromatherapy bath or meditating or doing something unrelated to the key word pictures. Had he made several hundred such clips, he could then have randomly selected 71 and compared them to the 71 clips he used for his analysis. If there was no significant difference between the randomly selected clips and the ones that emerged during the experiment, then the telepathy hypothesis would not be supported. On the other hand, if he found a robust statistically significant difference, then the telepathy hypothesis would be supported.
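A null comparison of this kind is easy to simulate. The sketch below is my own illustration, not Sheldrake's procedure, and the session data are invented: it shuffles the image targets relative to the parrot's recorded utterances (silent sessions included) and counts how often chance pairings do at least as well as the observed pairing.

```python
import random

# Hypothetical data for illustration: the key words uttered in each
# two-minute session (an empty set is a silent session) and the key
# word for the image viewed during that session.
uttered = [{"flower"}, {"water"}, {"flower", "phone"}, set(),
           {"flower"}, {"water", "hug"}, set(), {"phone"}]
targets = ["flower", "flower", "water", "phone",
           "flower", "hug", "water", "flower"]

def hits(uttered, targets):
    """Count sessions where the parrot said the target key word."""
    return sum(t in words for words, t in zip(uttered, targets))

observed = hits(uttered, targets)

# Null hypothesis: utterances are unrelated to the images, so every
# pairing of sessions with targets is equally likely.
n_shuffles = 10_000
at_least_as_good = 0
for _ in range(n_shuffles):
    shuffled = random.sample(targets, len(targets))  # a random re-pairing
    if hits(uttered, shuffled) >= observed:
        at_least_as_good += 1

print(observed, at_least_as_good / n_shuffles)  # hit count, permutation p-value
```

Run on the real 147 sessions, with no trials discarded, a permutation test like this would answer the question directly: how often would a parrot that simply repeats favorite words match the pictures this well by chance?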
I suggest he do something along these lines when he attempts to replicate his parrot telepathy test.

In some trials, N'kisi repeated a given key word. For example, in one trial N'kisi said "phone" three times, and in another he said "flower" ten times, and in the tabulation of data the numbers of times he said these words are shown in parentheses as: phone (3); flower (10). For most of the statistical analyses, repetitions were ignored, but in one analysis the numbers of words that were said more than once in a given trial were compared statistically with those said only once for both hits and misses. For each trial, the key word or words represented in the photograph were tabulated. Some images had only one key word, but others had two or more. For example, a picture of a couple hugging in a pool of water involved two key words, "water" and "hug." (Sheldrake and Morgana 2003)

He calculated 51 hits and 126 misses when repetitions were included. I'm not going to bother with any more detail because by now the overall picture should be clear. Once the statisticians went to work on the data, they were able to provide support for the claim that the data were consistent with the telepathic hypothesis. But nowhere in Sheldrake's paper can I find a claim that the parrot did three times better than expected by chance. In any case, I have to agree with the editor who published Sheldrake's parrot paper: the results have a statistical significance that is less than compelling. However, unlike that editor, my devout wish is that when such studies as these are published in the future, responsible journalists continue to ignore them and recognize them for the rubbish they are. On the other hand, if you happen to think your parrot is psychic, drop Dr. Sheldrake a line. He's set up a page just for you.

Sheldrake has responded to this article. His comments and my responses are posted here.

books and articles

Grey parrots use reasoning where monkeys and dogs can't: "Christian Schloegl and his team at the University of Vienna let six parrots choose between two containers, one containing a nut. Both containers were shaken, one eliciting a rattling sound and the other nothing. The parrots preferred the container that rattled, even if only the empty container was shaken....Thus, grey parrots seem to possess ape-like reasoning skills...."
This is the second book written by Lee Lehman, and it presents the astrological dignities in a very detailed manner. It was published in 1989 by Whitford Press.

In Chapter 1 - Two Unsung Revolutions in Astrology the author explains how the Copernican Revolution changed the way astrologers understand dignities. At page 18 one can find a table with traditional and modern essential dignities.

Chapter 2 - Using Traditional Rulerships. Here you'll find many practical examples of charts analyzed using traditional dignities. Presented are five countries (Confederate States of America, Italy, Iran, Switzerland, USSR), five corporations (General Motors, Ford, Chrysler, Coca-Cola, Pepsi), five individuals (Jane Austen, Lewis Carroll, Arthur Conan Doyle, Niccolo Machiavelli, Mark Twain) and one horary chart. Of course, it is always nice to see how the theory applies in practice, but I was expecting these examples to emphasize the different results that appear when analyzing the charts with traditional versus modern dignities. Unfortunately, this does not happen; the charts are analyzed using only traditional dignities.

In Chapter 3 - The Origin of Rulerships: A Botanical Interlude you can find out which planet or sign rules every plant. You'll see that the onion is ruled by Mars, beans by Venus, holly by Saturn, etc. Also, there is a table with the medicinal uses of Jupiter-ruled plants. I didn't test these, but they may be helpful.

Chapter 4 - Modern "Rulerships": Do They Work? The author is trying to prove that modern rulerships don't work well and to find arguments for this. She points out that: "when modern astrologers discuss the modern rulerships the criterion appears to be: Which body (planet, asteroid or comet) has qualities which most resembles the sign in question?" So, modern rulerships are assigned according to whether a planet's qualities are similar to the sign's qualities, not according to the planet's strength in a sign. See another quotation: "We haven't any evidence that the ancients thought that Pisces and Jupiter were synonymous. It was a question of the strength of Jupiter in Pisces, not the similarity of Jupiter and Pisces." Now, I think the idea is pretty clear. I must say that I totally agree with this point of view. Then the charts of Marie Curie, Jiddu Krishnamurti, Adolf Hitler and the death of Dracula are analyzed. This time, Lee Lehman compares the chart interpretations under modern and traditional rulerships. The results are pretty good and the reading enjoyable. Only one problem, from my point of view: the "Death of Dracula" chart, where Lee writes things like "I have been fascinated by charts of people who are, so to speak, energy sucks" and "Scorpio Sun (life of the vampire)". Hey, I am from Romania, and I tell you there are no vampires. Dracula is just a myth attached to a Romanian prince, Vlad III of Wallachia. It is true that he was cruel and liked to kill people by impaling them on a sharp pole, but everything else is imagination.

Chapter 5 – The Meaning of Each of the Essential Dignities. In this chapter you'll find some general characteristics for the five essential dignities: ruler, exaltation, triplicity, term and face. At page 127 is a table with key words associated with these dignities. Starting from these key words Lee Lehman gives many descriptive explanations for the dignities, but it just seems too much! The same things are explained over and over again, and it seemed pretty boring to me.
In Chapter 6 – A Statistical Interlude the author tries to determine the influence of terms (both Chaldean and Egyptian) by making a few tests. She selected a number of charts from different categories (suicides, scientists, sports champions) and counted the terms for each planet. In the end, we can see that the planet that rules the category (for example, Mars for sports champions) obtained more points than it would on a normal pattern. Even though the results apparently validate the importance of terms, I won't give too much credit to such a test. Why? Because I don't see terms as important enough to determine that a person belongs to one category or another. For example, more points in the term of Saturn won't drive you to suicide, because there can be many other (not even major) aspects that can change this influence. Probably I just don't believe terms are so important, and if Lee Lehman is making those tests, it is clear that she also has doubts.

Chapter 7 – Detriment, Falls and Peregrines consists of several pages with short descriptions of every planet's detriment and fall.

In Chapter 8 – Conclusions there are the final words.

MY EVALUATION: 6

Conclusion. If I had to say quickly, at first impression, some words about this book, I think they would be: "too much noise for nothing." But then, if you think for a moment, you realize that you can't say "for nothing," because dignities are a very important part of astrology, and one could write a whole interesting book about this subject. So, back to my reasoning: why this impression? Why "too much noise for nothing"? Maybe because this book presents the five dignities briefly, associated with some main characteristics and with ideas repeated in different chapters, while the rest of the book stays only near the subject. You can read about history, botany, statistics, all connected with dignities, but the book doesn't seem to touch the essential points. It is a surface play. It doesn't have those clear, rational statements that give you a better understanding of the subject. If an intermediate astrologer reads this book, I don't think he will have much to learn and to integrate into his astrological system. Maybe I am a little too harsh, but it is my purpose here to criticize and to present a clear point of view about the astrological books I read. My evaluation is 6.
Mali has been gripped by civil war since January 2012, when separatists in Mali’s northern Azawad region began demanding independence from the southern, Bamako-based government. After forcing the Malian military from the north, however, the separatist forces soon became embroiled in a conflict of their own, between the original Mouvement National pour la Libération de l’Azawad (MNLA) and extremist Islamist splinter factions closely linked with Al-Qaeda. On 11 January 2013, France responded to Mali’s urgent request for international assistance and initiated ‘Operation Serval’ to aid the recapture of Azawad and defeat the extremist groups. From 18 January, West African states began reinforcing French forces with at least 3,300 extra troops.

In a BBC ‘From Our Own Correspondent’ editorial, Hugh Schofield wrote of ‘la Francafrique’, France’s considerable interests in West Africa held over from the end of formal empire. In fits and starts, France has sought to extract itself from la Francafrique and to seek a new relationship with the continent. But in the complex world of post-colonial relationships, such a move is difficult. France retains strong economic, political, and social links with West Africa, and Paris, Marseille, and Lyon are home to large expatriate African communities. Opinion at the Élysée Palace, too, has shifted wildly over the years. Jacques Chirac, at least according to Schofield, was ‘a dyed-in-the-wool Gaullist’, and an ideological successor to the young François Mitterrand who, in 1954, defiantly pronounced that ‘L’Algérie, c’est la France’. Nicolas Sarkozy, on the other hand, dramatically distanced himself both from Chirac and from the la Francafrique role.

The problem is, at least in part, topographical in nature. West Africa’s geography is dangerous, vast, and difficult to subordinate. On the eve of much of West Africa’s independence from France in 1961, R. J. Harrison Church spoke of the so-called Dry Zone, the area running horizontally from southern Mauritania across central Mali and Niger, as the great “pioneer fringe” of the region’s civilization. David Hilling, in his 1969 Geographical Journal examination, added that by “taming” the Saharan interior, France gained an important strategic advantage over its British rivals in the early twentieth century, enjoying access to resources unavailable along the coast. But, as A. T. Grove discussed in his 1978 review, “colonising” West Africa was much easier said than done, and the French left a West Africa mired in dispute, open to incursions, and still heavily reliant on the former imperial power. The French relationship with the region’s extreme geography was difficult at best; political boundaries were similar to those of the Arabian Peninsula, and the Rub‘ al-Khali in particular: fluid, ill-defined, and not always recognised by local peoples. European-set political boundaries only exacerbated tensions between indigenous constituencies who had little or no say in the border demarcations. French and African efforts to dam the Niger River, for instance, were hampered by high costs, arduous terrain, and political instability well into the 1960s. On independence, the French left what infrastructure they could, mostly in West Africa’s capital and port cities; the vast interiors were often left to their own devices. As a result, France has maintained a large military, economic, and social presence in the region ever since.
The difficulty is that such areas under weak political control, such as the Malian, Somalian, and Sudanese deserts, have become havens for individuals who wish to operate outside international and national law.

R. J. Harrison Church, 1961, ‘Problems and Development of the Dry Zone of West Africa’, The Geographical Journal 127, 187-99.
David Hilling, 1969, ‘The Evolution of the Major Ports of West Africa’, The Geographical Journal 135, 365-78.
A. T. Grove, 1978, ‘Geographical Introduction to the Sahel’, The Geographical Journal 144, 407-15.
Ieuan Griffiths, 1986, ‘The Scramble for Africa: Inherited Political Boundaries’, The Geographical Journal 152, 204-16.
‘Le Mali attend le renfort des troupes ouest-africaines’, Radio France Internationale, 19 January 2013, accessed 19 January 2013.
Hugh Schofield, ‘France and Mali: An “ironic” relationship’, BBC News, 19 January 2013, accessed 19 January 2013.
Karuk Tribe: Learning from the First Californians for the Next California

Editor's Note: This is part of a series, Facing the Climate Gap, which looks at grassroots efforts in California low-income communities of color to address climate change and promote climate justice. This article was published in collaboration with GlobalPossibilities.org.

The three sovereign entities in the United States are the federal government, the states and indigenous tribes, but according to Bill Tripp, a member of the Karuk Tribe in Northern California, many people are unaware of both the sovereign nature of tribes and the wisdom they possess when it comes to issues of climate change and natural resource management.

“A lot of people don’t realize that tribes even exist in California, but we are stakeholders too, with the rights of indigenous peoples,” says Tripp.

Tripp is an Eco-Cultural Restoration specialist at the Karuk Tribe Department of Natural Resources. In 2010, the tribe drafted an Eco-Cultural Resources Management Plan, which aims to manage and restore “balanced ecological processes utilizing Traditional Ecological Knowledge supported by Western Science.” The plan addresses environmental issues that affect the health and culture of the Karuk tribe and outlines ways in which tribal practices can contribute to mitigating the effects of climate change.

Before climate change became a hot topic in the media, many indigenous and agrarian communities, because of their dependence upon and close relationship to the land, began to notice troubling shifts in the environment, such as intense drought, frequent wildfires, scarcer fish flows and erratic rainfall. There are over 100 government-recognized tribes in California, representing more than 700,000 people. The Karuk is the second-largest Native American tribe in California, with over 3,200 members, and its tribal lands include over 1.48 million acres within and around the Klamath and Six Rivers National Forests in Northwest California.

Tribes like the Karuk are among the hardest hit by the effects of climate change, despite their traditionally low-carbon lifestyles. The Karuk in particular have experienced dramatic environmental changes in their forestlands and fisheries as a result of both climate change and misguided federal and regional policies. The Karuk have long depended upon the forest to support their livelihood, cultural practices and nourishment. While wildfires have always been a natural aspect of the landscape, recent studies have shown that fires in northwestern California forests have risen dramatically in frequency and size due to climate-related and human influences. According to the California Natural Resources Agency, fires in California are expected to increase by 100 percent due to higher temperatures and the longer dry seasons associated with climate change.

Some of the most damaging human influences on the Karuk include logging activities, which have depleted old-growth forests, and fire suppression policies created by the U.S. Forest Service in the 1930s that have limited cultural burning practices. Tripp says these policies have been detrimental to tribal traditions and the forest environment. “It has been huge to just try to adapt to the past 100 years of policies that have led us to where we are today. We have already been forced to modify our traditional practices to fit the contemporary political context,” says Tripp.
Further, the construction of dams along the Klamath River by PacifiCorp (a utility company) has impeded access to salmon and other fish that are central to the Karuk diet. Fishing regulations have also had a negative impact.

Though the Karuk’s dependence on the land has left them vulnerable to the projected effects of climate change, it has also given them and other indigenous groups incredible knowledge to impart to western climate science. Historically, though, tribes have been largely left out of policy processes and decisions. The Karuk decided to challenge this historical pattern of marginalization by formulating their own Eco-Cultural Resources Management Plan. The Plan provides over twenty “Cultural Environmental Management Practices” that are based on traditional ecological knowledge and the “World Renewal” philosophy, which emphasizes the interconnectedness of humans and the environment.

Tripp says the Plan was created in the hope that knowledge passed down from previous generations will help strengthen Karuk culture and teach the broader community to live in a more ecologically sound way. “It is designed to be a living document… We are building a process of comparative learning, based on the principles and practices of traditional ecological knowledge, to revitalize culturally relevant information as passed through oral transmission and intergenerational observations,” says Tripp.

One of the highlights of the plan is the re-establishment of traditional burning practices in order to decrease fuel loads and the risk of more severe wildfires when they do happen. Traditional burning was used by the Karuk to burn off specific types of vegetation and promote continued diversity in the landscape. Tripp notes that these practices are an example of how humans can play a positive role in maintaining a sound ecological cycle in the forests. “The practice of utilizing fire to manage resources in a traditional way not only improves the use quality of forest resources, it also builds and maintains resiliency in the ecological process of entire landscapes,” explains Tripp.

Another crucial aspect of the Plan is the life cycle of fish, like salmon, that are central to Karuk food traditions and ecosystem health. Traditionally, the Karuk regulated fishing schedules to allow the first salmon to pass, ensuring that those most likely to survive made it to prime spawning grounds. There were also designated fishing periods and locations to promote successful reproduction. Tripp says regulatory agencies have established practices that are harmful to this cycle. “Today, regulatory agencies permit the harvest of fish that would otherwise be protected under traditional harvest management principles, and close the harvest season when the fish least likely to reach the very upper river reaches are passing through,” says Tripp.

The Karuk tribe is now working closely with researchers from universities such as the University of California, Berkeley and the University of California, Davis, as well as public agencies, so that this traditional knowledge can one day be accepted by mainstream and academic circles dealing with climate change mitigation and adaptation practices. According to the Plan, these land management practices are more cost-effective than those currently used by public agencies; if implemented, they would greatly reduce taxpayer cost burdens and create employment.
The Karuk hope to create a workforce development program that will hire tribal members to implement the plan’s goals, such as multi-site cultural burning practices. The Plan still has a long way to go to full realization and federal recognition: under the National Indian Forest Resources Management Act and the National Environmental Policy Act, it must go through a formal review process. Beyond that, the Karuk Tribe is still solidifying funding to pursue its goals.

The work of California’s environmental stewards will always be in demand, and the Karuk are taking the lead in showing how community wisdom can be used to generate an integrated approach to climate change. Such integrated, community-engaged policy approaches are rare throughout the state but are emerging in other areas. In Oakland, for example, the Oakland Climate Action Coalition engaged community members and a diverse group of social justice, labor, environmental, and business organizations to develop an Energy and Climate Action Plan that outlines specific ways for the City to reduce greenhouse gas emissions and create a sustainable economy.

In the end, Tripp hopes the Karuk Plan will not only inspire others and address the global environmental plight, but also help to maintain the very core of his people. In his words: “Being adaptable to climate change is part of that, but primarily it is about enabling us to maintain our identity and the people in this place in perpetuity.”

Dr. Manuel Pastor is Professor of Sociology and American Studies & Ethnicity at the University of Southern California, where he also directs the Program for Environmental and Regional Equity and co-directs USC’s Center for the Study of Immigrant Integration. His most recent books include Just Growth: Inclusion and Prosperity in America’s Metropolitan Regions (Routledge, 2012; co-authored with Chris Benner), Uncommon Common Ground: Race and America’s Future (W.W. Norton, 2010; co-authored with Angela Glover Blackwell and Stewart Kwoh), and This Could Be the Start of Something Big: How Social Movements for Regional Equity are Transforming Metropolitan America (Cornell, 2009; co-authored with Chris Benner and Martha Matsuoka).

FineWeb-Edu Micro

This dataset is a subset of FineWeb-Edu Sample-10BT, keeping only passages that are at least 1,000 tokens long; it totals about 1 million tokens.
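As a rough illustration, a subset like this could be rebuilt from the parent sample with the Hugging Face datasets library. The repo and config names below follow the public FineWeb-Edu release, and the filter leans on the parent's per-row token_count column; treat both as assumptions rather than the exact recipe used here.

```python
# Sketch: keep >=1000-token passages from FineWeb-Edu Sample-10BT
# until roughly 1M tokens are collected. Repo/config names and the
# reliance on the parent's `token_count` column are assumptions.
from datasets import load_dataset

stream = load_dataset("HuggingFaceFW/fineweb-edu",
                      name="sample-10BT", split="train", streaming=True)

kept, total = [], 0
for row in stream:
    if row["token_count"] >= 1000:
        kept.append(row)
        total += row["token_count"]
        if total >= 1_000_000:  # ~1M-token budget
            break

print(f"kept {len(kept)} passages, {total:,} tokens")
```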

This dataset was primarily created to evaluate different RAG chunking mechanisms in Chonkie.
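A minimal usage sketch for that chunking-evaluation use case, assuming only the dataset's text column: the repo id is a hypothetical placeholder, and the naive word-window chunker simply stands in for whichever Chonkie chunker is under test.

```python
# Usage sketch: measure chunk-length statistics over this dataset.
# The repo id is a hypothetical placeholder; the fixed-size word
# chunker below is a stand-in for a real Chonkie chunker.
from statistics import mean
from datasets import load_dataset

ds = load_dataset("user/fineweb-edu-micro", split="train")  # hypothetical id

def word_window_chunks(text, size=200, overlap=40):
    """Naive word-window chunker; swap in a Chonkie chunker here."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

lengths = []
for row in ds:
    lengths.extend(len(c.split()) for c in word_window_chunks(row["text"]))

print(f"{len(lengths)} chunks, mean length {mean(lengths):.1f} words")
```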
