This article describes the formula syntax and usage of the SUMIF function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Excel.

You use the SUMIF function to sum the values in a range (range: Two or more cells on a sheet. The cells in a range can be adjacent or nonadjacent.) that meet criteria that you specify. For example, suppose that in a column that contains numbers, you want to sum only the values that are larger than 5. You can use the following formula:

In this example, the criteria is applied to the same values that are being summed. If you want, you can apply the criteria to one range and sum the corresponding values in a different range. For example, the formula =SUMIF(B2:B5, "John", C2:C5) sums only the values in the range C2:C5 where the corresponding cells in the range B2:B5 equal "John."

Note To sum cells based on multiple criteria, see the SUMIFS function.

SUMIF(range, criteria, [sum_range])

The SUMIF function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.):

- range Required. The range of cells that you want evaluated by criteria. Cells in each range must be numbers or names, arrays, or references that contain numbers. Blank and text values are ignored.
- criteria Required. The criteria, in the form of a number, expression, cell reference, text, or function, that defines which cells will be added. For example, criteria can be expressed as 32, ">32", B5, "32", "apples", or TODAY(). Important: Any text criteria or any criteria that includes logical or mathematical symbols must be enclosed in double quotation marks ("). If the criteria is numeric, double quotation marks are not required.
- sum_range Optional. The actual cells to add, if you want to add cells other than those specified in the range argument. If the sum_range argument is omitted, Excel adds the cells that are specified in the range argument (the same cells to which the criteria is applied).

Remarks:

- You can use the wildcard characters — the question mark (?) and asterisk (*) — in the criteria argument. A question mark matches any single character; an asterisk matches any sequence of characters. If you want to find an actual question mark or asterisk, type a tilde (~) preceding the character.
- The SUMIF function returns incorrect results when you use it to match strings longer than 255 characters or the string #VALUE!.
- The sum_range argument does not have to be the same size and shape as the range argument. The actual cells that are added are determined by using the upper leftmost cell in the sum_range argument as the beginning cell, and then including the cells that correspond in size and shape to the range argument:

| If range is | And sum_range is | Then the actual cells are |

However, when the range and sum_range arguments in the SUMIF function do not contain the same number of cells, worksheet recalculation may take longer than expected.

Use the embedded workbook shown here to work with examples of this function. You can inspect and change existing formulas, enter your own formulas, and read further information about how the function works. These examples demonstrate how SUMIF adds the values in a range (range: Two or more cells on a sheet. The cells in a range can be adjacent or nonadjacent.) that meet the criteria that you specify. To work in depth with this workbook, you can download it to your computer and open it in Excel. For more information, see the article Download an embedded workbook from SkyDrive and open it on your computer.
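The matching rules described above (numeric criteria, comparison strings like ">5", text with * and ? wildcards, and an optional separate sum range) can be sketched outside Excel. The following is a minimal, hypothetical Python `sumif` helper written to mirror those semantics for illustration; it is not Excel itself, and the operator table is my own construction:

```python
from fnmatch import fnmatchcase

def sumif(rng, criteria, sum_range=None):
    """Minimal sketch of Excel's SUMIF semantics.

    rng       -- list of cell values to test against the criteria
    criteria  -- a number, a comparison string like ">5", or a text
                 pattern (supports the * and ? wildcards)
    sum_range -- optional list of values to add; defaults to rng,
                 as when sum_range is omitted in Excel
    """
    if sum_range is None:
        sum_range = rng
    # Two-character operators must be checked before one-character ones.
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b,
           "<>": lambda a, b: a != b, ">": lambda a, b: a > b,
           "<": lambda a, b: a < b, "=": lambda a, b: a == b}

    def matches(value):
        if isinstance(criteria, str):
            for op, fn in ops.items():
                if criteria.startswith(op):
                    return (isinstance(value, (int, float))
                            and fn(value, float(criteria[len(op):])))
            return isinstance(value, str) and fnmatchcase(value, criteria)
        return value == criteria

    # Blank and text values in the sum range are ignored, as in Excel.
    return sum(s for r, s in zip(rng, sum_range)
               if matches(r) and isinstance(s, (int, float)))

# Mirrors =SUMIF(B2:B5, "John", C2:C5) from the example above:
print(sumif(["John", "Mary", "John", "Ann"], "John", [10, 20, 30, 40]))  # 40
```

The same helper also covers the single-range form: `sumif([1, 7, 3, 9], ">5")` adds only the values larger than 5.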
http://office.microsoft.com/en-us/excel-help/sumif-function-HP010342932.aspx
In this module we need to be a little bit more precise about temperature and heat energy than we have been so far. Heat energy is usually measured in terms of calories. The calorie was originally defined as the amount of energy required to raise one gram of water one degree Celsius at a pressure of one atmosphere. This definition is not complete because the amount of energy required to raise one gram of water one degree Celsius varies with the original temperature of the water by as much as one percent. Since 1925 the calorie has been defined as 4.184 joules, the amount of energy required to raise the temperature of one gram of water from 14.5 degrees Celsius to 15.5 degrees Celsius. For our purposes here we can ignore the fact that the effect of one calorie of energy varies depending on the temperature of the water.

Newton's model of cooling can be thought of, more precisely, as involving two steps. The picture above shows a brick whose length is four centimeters. We mentally divide the brick into two unequal pieces. The lefthand piece has a length of one centimeter and the righthand piece has a length of three centimeters. Heat is flowing across the mental boundary between the two pieces from left to right at the rate of A calories per hour. As a result the average temperature of the lefthand piece is changing at the rate of -kA degrees Celsius per hour. The constant k depends on the composition of the brick and its cross-sectional area. The average temperature of the righthand piece is changing at the rate of kA / 3 degrees Celsius per hour. The three in the denominator comes from the fact that since the righthand piece is three times the length of the lefthand piece, its mass is three times as big.
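The two rates above follow from conservation of energy: the same A calories per hour leave the lefthand piece and enter the righthand piece, but the righthand piece has three times the mass, so its temperature changes a third as fast. A small numerical sketch; the values of k and A below are illustrative, not taken from the module:

```python
def temp_change_rate(heat_flow, k, mass_ratio=1.0):
    """Rate of temperature change (degrees Celsius per hour) of a piece
    of the brick exchanging heat at `heat_flow` calories per hour.

    `mass_ratio` is the piece's mass relative to the reference piece:
    the same heat flow changes a three-times-heavier piece's
    temperature three times more slowly.
    """
    return k * heat_flow / mass_ratio

k = 0.5   # illustrative; depends on the brick's composition and cross-section
A = 12.0  # illustrative; calories per hour crossing the mental boundary

left_rate = -temp_change_rate(A, k)                  # -kA: the left piece cools
right_rate = temp_change_rate(A, k, mass_ratio=3.0)  # kA/3: the right piece warms
print(left_rate, right_rate)  # -6.0 2.0
```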
http://www.math.montana.edu/frankw/ccp/modeling/continuous/heatflow2/energytemp.htm
Drones come in many shapes and sizes, but now they can also be 3D printed! To make these drones, the [Decode] group used a selective laser sintering process, which is pretty interesting in itself. Once the printing process is done, these little planes are built with only five structural and aerodynamic components. Because of their simplicity, these drones can reportedly be assembled and ready to fly with no tools in only ten minutes! This design was done by a group of students at the University of Southampton in the UK, working under the [Engineering and Physical Sciences Research Council]. Besides this particular plane, they concentrate their efforts on building autonomous drones under 20 kilograms. Using a 3D sintering process with this design allowed them to make the plane how they wanted, regardless of the ease of machining the parts.
http://hackaday.com/2011/08/03/a-3d-printed-aerial-drone/
H.A.R.T. (Healthy Advocacy Response Team)

Dating violence is a pattern of coercive and abusive tactics employed by one partner in a relationship to gain power and control over the other partner. It can take many forms, including physical violence, coercion, threats, intimidation, isolation, and emotional, sexual or economic abuse. Violence often starts with little things that can be denied, ignored, or forgiven. But, from there, a pattern of violence can grow quickly. Warning signs include a partner who:

- Exhibits jealousy when you talk to others.
- Tries to control where you go, whom you go with, what you wear, say, or do.
- Attempts to isolate you from loved ones. May try to cut you off from all resources, friends, and family.
- Uses force or dominance in sexual activity.
- Degrades or puts you down. Runs down accomplishments that you achieve.
- Acts like Dr. Jekyll and Mr. Hyde. May be kind one minute and exploding the next; charming in public and cruel in private.
- Threatens to use physical force. Breaks or strikes objects to intimidate you.
- Physically restrains you from leaving the room, pushes, and/or shoves you.
- Has hit other partners in the past but assures you that the violence was provoked.

What to Do

- Notice how you feel. Are you depressed? Do you feel more free to be yourself when your partner is not around?
- Notice what you do. Do you find yourself making excuses for your partner? Do you spend less time with friends and family? Do you change how you act to avoid making your partner angry?
- Talk to friends. Often a friend or family member can see things more clearly. Do they see abuse in your relationship?

Take Steps to Stay Safe

- Be clear about behavior you won't accept and stick to your limits.
- Trust your feelings. If something feels uncomfortable, pay attention.
- Have a support system. Stay in touch with friends, family, and/or a counselor.
- Avoid drinking and drug use.

YOU HAVE THE RIGHT TO BE RESPECTED AND RESPECT YOURSELF!!!
http://lavc.edu/sexualassaultpolicy/datingviolence.html
Three questions we often hear are:

- What is Dyslexia?
- Does my child have Dyslexia?
- What is the best Dyslexia program?

We have worked with Dr. Linda Silverman from the Gifted Development Center in Denver. Dr. Silverman uses the term visual-spatial learner to describe a student who learns differently. We call these kids right-brained learners. We have found that the keys to helping a visual-spatial or right-brained learner succeed include:

- Identifying how they learn best
- Identifying other co-existing conditions -- visual-spatial learners and right-brained learners often have visual processing, auditory processing and/or attention issues
- If they are a right-brained learner, determining which right-brained program is best for the child, given whatever co-existing challenges they have

Let's take two cases -- Matthew and Michael. Matthew's mom asked the traditional questions and the answers were as follows: "What is Dyslexia?" Dyslexia is a problem with words and phonics. "Does my child have Dyslexia?" Yes, Matthew has Dyslexia. "What is the best Dyslexia program?" The parents were told they needed an Orton-Gillingham (R) program that stressed phonics. Matthew's reading improved -- but he still struggled with the small words, he still skipped words and lines when reading, and his parents were incredibly frustrated with his inability to pay attention.

Michael's parents were lucky. They had read the works of Dr. Linda Silverman, Jeffrey Freed and Dr. Ned Hallowell. The first question for them was: was their child a right-brained learner, or a visual-spatial learner? The answer was clearly yes. Michael remembered places he had visited, even from years ago, and details from movies he had seen, and learned best when he saw and experienced things. Based on Jeffrey Freed's book "The Right-Brained Child in a Left-Brained World", they realized that ADHD and being a right-brained learner commonly go together. They also had Michael tested for a visual processing issue, and that too was an issue.

They looked for a program that addressed:

- The sight-word vocabulary and pattern recognition issues that often plague even the brightest right-brained learners
- ADHD, with natural solutions
- The visual processing issues, with tools that addressed tracking and worked on cross-midline exercises
- Connectivity -- they realized they needed a program their child could relate to
- Teaching them to be the coach and advocate their child needed

Both approaches can work -- but the integrated approach can often lead to faster, more significant and longer-lasting gains, because it addresses the underlying issues, empowers the student, and provides the tools the parents need to help their right-brained learner be successful in a left-brained world.

Mira Halpert M.Ed. is the developer and director of the 3D Learner Program (R). She has learned a great deal from the likes of Dr. Linda Silverman, Jeffrey Freed and Ned Hallowell. Mira has four successful children -- two of whom are right-brained learners. Mira believes that right-brained learners can be Outrageously Successful with the right tools, the right motivation, and when the parents learn how to be the coach and advocate their right-brained learner needs. For more information go to Parents Make The Difference or call Mira at 561-361-7495.
http://www.3dlearner.com/programs/articles/for-parents-of-bright-brained-learners/
How the Rhinoceros Beetle Got Its Horns

A rhinoceros beetle shows off its antler-like horn. CREDIT: Douglas Emlen

Sporting a horn on your head two-thirds the length of your body might seem like a drag. For the rhinoceros beetle, though, massive head-weapons are no big deal. Turns out, pitchfork-shaped protrusions on the heads of rhinoceros beetles don't slow them down during flight, new research shows. The findings may explain why the beetles' horns are so diverse and elaborate, said study researcher Erin McCullough, a doctoral student at the University of Montana. "Because the horns don't impair the beetles' ability to fly, they might be unconstrained by natural selection," McCullough told LiveScience, referring to the evolutionary process that weeds out weak traits while passing on advantageous ones. The finding would clear up a rhinoceros beetle mystery. Male rhinoceros beetles (there are more than 300 species) are known for their huge horns, some of which can exceed the length of the rest of the beetle's body. The males use these horns, which come in an array of shapes, to battle each other for supremacy of sap-leaking sites on trees. Females are drawn to these sites to feed, and males perched there are more successful at mating with those females. "Rhinoceros beetles are just fantastic creatures," McCullough said. "They have the most elaborate weapons that we find really in almost any animal." McCullough and her colleagues expected those weapons came at a cost. Flashy body parts often do; in fact, scientists theorize that wild feathers or other elaborate mate-attracting devices send a signal that says, "Mate with me! I'm so healthy I can support a totally useless appendage!" To evaluate the cost of the beetles' horns, McCullough tested Asian rhinoceros beetles (Trypoxylus dichotomus), which have horns about two-thirds as long as their bodies. After euthanizing the beetles, she weighed them with and without their horns.
She also determined the center of mass of the beetles with and without their horns. Finally, she tested the beetle bodies in a wind tunnel to see how the horns affected the drag on the beetles' bodies and thus the force they'd need for flight. What she found surprised her. The beetle horns weren't a drag at all. The horns turned out to be very dry and hollow, McCullough said. They comprised only 0.5 percent to 2.5 percent of body weight. Because of their low mass, they hardly affected the beetles' center of mass. Cutting off a male's horn moved his center of mass only about 1.7 percent. And in flight, the horns made no difference at all. The beetles fly slowly with their bodies in a near-vertical position, McCullough found. At this angle, even a huge horn adds almost no drag. The researchers report their findings today (March 12) in the journal Proceedings of the Royal Society B. "This is not what I was expecting, but it's actually a nice simple explanation for my big interest in why we see so much diversity in these horns," McCullough said. Without much of a cost to the beetle's survival, evolution is essentially free to experiment with weird and wild horn shapes. "There's a big benefit to having these horns, but I haven't found any evidence for any cost," she said.
http://www.livescience.com/27851-rhinoceros-beetle-horn-evolution.html
WAGNER, WILLIAM (Wilhelm), land surveyor, immigration agent, author, farmer, jp, politician, and office holder; b. 13 Sept. 1820 in Grabowo (Grabowo nad Prosna, Poland), son of Ernst Wagner; m. 6 Aug. 1859 Adelheid Fenner, and they had at least two children; d. 25 Feb. 1901 in Winnipeg. William Wagner’s father was a judge in the city-province of Posen (Poznan), in Prussian Poland, and his grandfather had been a major-general of engineers in the Prussian army who distinguished himself during the Napoleonic Wars. Wagner studied engineering and architecture at the universities of Breslau (Wroclaw, Poland), Posen, and Berlin, and graduated in 1847. He supported the revolutionaries of 1848 and took part in fighting at Xions (Książ Wielkopolski), a small town south of Posen. He was taken prisoner, but managed to escape and immigrated to the United States. In 1850 he came to the Canadas and settled in Ottawa. Seven years later, on 7 April, he was certified as a land surveyor in Lower Canada. The following year he was certified for Upper Canada. In 1859 Wagner returned to Europe, married a native of Ossowa (Poland), and successfully passed examinations as a land surveyor. Back in Upper Canada, he pointed out to the provincial government the possibility of inducing large numbers of Germans to immigrate to the Canadas. In 1860 he was appointed immigration commissioner and sent to Germany, where he stayed until 1863 promoting immigration to the Canadas. He would remain interested in the cause for many years. Settling in Montreal on his return to North America, he resumed the profession of surveyor. He became a member of the German Society of Montreal, a benevolent organization, and in 1867 was elected its president, an office he held until 1870. The following year he was sent to the new province of Manitoba as a government land surveyor charged with surveying several townships between Winnipeg and Rat Creek. 
His account of the journey from Toronto, the first work written in German on Manitoba, was published in the Berliner Journal of Berlin (Kitchener), Ont., and was issued as a pamphlet, Einwanderung nach Manitoba, by the German Society of Montreal in 1872. It provides an accurate report of travel conditions at the time and is accompanied by advice for travellers and immigrants. It also contains reliable descriptions of Manitoba, its population, settlements, legal and social institutions, history, economy, climate, and natural resources. Above all the account presents a far more positive estimate of Manitoba’s potential than that put forward by certain government officials who considered the prairie province good for hunting and trapping but unsuitable for agriculture and colonization. His report of the journey was also issued as a pamphlet with the financial support of the Canadian government for distribution in Germany. It was dedicated to the German Society of Montreal to further its project of establishing a German settlement in Manitoba, a project which eventually failed since the society could not attract a sufficient number of immigrants to the township chosen by Wagner and set aside by the Department of Agriculture. Wagner was the author of several other booklets on Canada intended for German immigrants. Wagner decided to remain in Manitoba and took up 1,000 acres of land at Ossowa, a village founded by him some miles north of Poplar Point and named after his wife’s birthplace. He was promoter of dairy farming in the west, and as president of the Manitoba Dairy Association was in great measure responsible for improvements made in the manufacture of butter and cheese. He was named a justice of the peace and appointed to the board of examiners for provincial land surveyors. A member of the Conservative party, he represented Woodlands in the Legislative Assembly from 1883 to 1886. 
After his defeat in the elections of 1886 he was appointed swamp lands commissioner, a federal position he would hold until the Liberal government of Wilfrid Laurier* came into power in 1896. Hugh John Macdonald*, premier of Manitoba in 1900, appointed Wagner assistant sergeant-at-arms for his loyalty to the Conservative cause and for the services he had rendered to the province. Wagner died early in 1901 at his residence on College Avenue, Winnipeg. The funeral was held under the auspices of the masonic lodge, of which he had been a prominent member. William Wagner is the author of Anleitung für Diejenigen, welche sich in Canada and besonders am Ottawa-Flusse (Canada-West) niederlassen wollen (Berlin, 1861; repr. 1862); Canada, ein Land für deutsche Auswanderung (Berlin, 1861); Das Petroleum, aus Canada bezogen, in seinem Werthe für Deutschland (Berlin, 1863); “Der Nordwesten von Canada, eine geographisch-historische Skizze des grossen Weizenlandes von Nord-Amerika,” Mitteilungen des Vereins für Erdkunde zu Leipzig (Leipzig, Germany, 1883), 115–44; and Einwanderung nach Manitoba ([Montreal?, 1872]), repr., intro. K. R. Gürttler, as “Das Manitoba-Siedlungsprojekt der Deutschen Gesellschaft zu Montreal,” German-Canadian yearbook (Toronto), 10 (1988): 33–71. Man., Legislative Library (Winnipeg), Vert. file, William Wagner. NA, RG 17, A I 67, 69, 72–73, 75, 79; RG 68, General index, 1841–67. PAM, MG 12, A; B 1; RG 17, C1, 523, 539; RG 18, A2. Manitoba Morning Free Press, 26 Feb. 1901. Winnipeg Telegram, 28 Feb. 1901. Can., House of Commons, Journals, 1875, app.4: 4. 
http://biographi.ca/009004-119.01-e.php?id_nbr=7123
Sometimes when people first learn about Web accessibility they look for quick ways of improving the sites they build. This often leads to misuse or overuse of certain HTML features that are meant to aid accessibility, but when used wrongly have no effect and can actually have the opposite effect by making the page less accessible and less usable.

Many of the commonly misused accessibility features are HTML attributes. It is my feeling that they get misused either by developers who mean well but don't quite understand how the attributes help end users, or by developers who add them simply to be able to tick "accessibility" off their to-do list and shut up their manager, client or whoever is pestering them about making the site more accessible. Here are a few examples of HTML attributes that are often misused or overused:

The accesskey attribute. A potentially useful attribute, the accesskey attribute creates a keyboard shortcut for a link or form control. However, it is so badly implemented in most browsers that it's safest to avoid using it. Very few users are aware of it, and with current implementations it can conflict with shortcut keys used for other functions in the browser. accesskey can be useful to some people if many sites use the same shortcut keys. Many public sector sites use the same shortcut keys since they follow a guideline which states which keys to use for what. It doesn't solve the user agent problem, but at least it makes the use of accesskey more predictable. It is quite common to see accesskey use go completely overboard though, with just about every link and form control having an accesskey attribute, especially in the administrative interfaces of CMSs and other tools that claim to be accessible.

The tabindex attribute. Changing the order in which elements receive keyboard focus from the order they appear in the markup can perhaps be useful in some hypothetical cases. I can't really think of any such cases, but that is not how the tabindex attribute is normally used. Instead it is often used to define the tabbing order of elements that are already in a logical order in the markup. This wouldn't really be noticed or cause any problems if it weren't for the fact that elements with a tabindex attribute take precedence over all other elements when using the keyboard to navigate. A good example is the comment form in a default WordPress installation. The form controls (input fields and submit button) all have tabindex attributes despite already being in a logical order in the source. The effect is that keyboard users will skip straight to the comment form when they start tabbing through the page. Very annoying and completely useless, though probably well-meaning.

The title attribute. The developers of several CMSs that are popular in my part of the world have apparently learned about the title attribute and that it can be used to clarify the target of a link. So they want to use it for all links that their CMS creates, mindlessly repeating exactly what is already in the actual link text, sometimes with "Link to: " prepended. That is completely useless and does nothing to improve accessibility. All it does is increase document size.

The alt attribute. Overly explicit and verbose alt text is a nuisance. One of my favourite examples used to be csn.se, the website for the Swedish National Board of Student Aid. Until a few weeks ago, the site consisted of old-school nested tables and spacer GIFs. Somebody, probably a well-meaning person, added alt text to the many spacer GIFs and other presentational img elements that were used on the site. So far so good. But unfortunately the alt text should have been empty to indicate that the images were purely decorative. Instead, the text "Typografisk luft" ("Typographical space") was used for spacer images and "Webbplatsens hörn" ("The website's corner") for images whose only purpose is to create rounded corners. There wasn't just one or two of them either. On the English About CSN page I could count no less than 185 spacer GIFs with alt="Typografisk luft". Take that, screenreader users! It makes for a superb example when demonstrating what not to do, so in a way it is unfortunate that they have now updated the site to get rid of the spacer GIFs. They do misuse the title attribute on the new site, though.

In early August this year, Patrick H. Lauke held an excellent presentation where he brings up many of these overused accessibility features. The presentation slides can be downloaded in several formats from Too much accessibility. There is also an audio recording of the presentation, which is really great since you can listen to Patrick talk while going through the slides. Patrick also brings up several other features that can improve accessibility if used correctly, so I highly recommend that you take the time to go through the entire presentation. You will come away with a much better understanding of why the HTML attributes I mention here can be problematic when used wrong, and how to use them well.
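The redundant title attribute pattern described above is easy to detect mechanically. Here is a rough sketch using Python's standard html.parser; the RedundantTitleChecker class and its matching rule are my own illustration, not something from the post:

```python
from html.parser import HTMLParser

class RedundantTitleChecker(HTMLParser):
    """Flag links whose title attribute merely repeats the link text,
    optionally with a "Link to: " prefix -- the pattern criticised above."""

    def __init__(self):
        super().__init__()
        self._title = None   # title attribute of the currently open <a>
        self._text = []      # text collected inside the current <a>
        self.redundant = []  # (link text, title) pairs found

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._title = dict(attrs).get("title")
            self._text = []

    def handle_data(self, data):
        if self._title is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._title is not None:
            text = "".join(self._text).strip()
            title = self._title.strip()
            if title.lower() in (text.lower(), "link to: " + text.lower()):
                self.redundant.append((text, title))
            self._title = None

checker = RedundantTitleChecker()
checker.feed('<a href="/about" title="Link to: About">About</a> '
             '<a href="/faq" title="Frequently asked questions">FAQ</a>')
print(checker.redundant)  # [('About', 'Link to: About')]
```

A title that genuinely adds information, as in the second link above, passes the check; only titles that duplicate the link text are flagged.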
http://www.456bereastreet.com/archive/200712/overdoing_accessibility/
Clinical trials show that patients with glioblastoma multiforme, a particularly aggressive brain cancer, survived longer if they were treated with a vaccine followed by chemotherapy than did those patients treated with either the vaccine or chemotherapy alone. The finding continues recent progress in immune treatments for brain tumors (see “Brain Tumor Researchers Let Slip the Immune Cells of War,” May-June 2005 BrainWork). The vaccine appears to kill off chemotherapy-resistant cells, leaving behind a population of cells that can be treated with chemotherapy, report John S. Yu, co-director of the Comprehensive Brain Tumor Program at Cedars-Sinai Medical Center in Los Angeles, and colleagues in the August issue of Oncogene. To make the vaccine, dendritic cells were harvested from each patient’s blood, grown in a dish that contained proteins from glioblastoma tumors, and then injected back into the patient’s bloodstream. The process generates dendritic cells that display proteins to immune cells, instructing them to kill other cells that have those proteins on their surfaces, including tumor cells. “What we show now is that one of the antigens being targeted by the vaccine is TRP-2 [tyrosinase-related protein-2],” Yu says. “When treated with the vaccine, patients had much less antigen in their subsequent tumor than they did before treatment.” Tumors that had less TRP-2 were more sensitive to chemotherapy than tumors with lots of TRP-2. The results suggest that targeting TRP-2 is important in treating glioblastomas.
http://dana.org/news/brainwork/detail.aspx?id=672
The Young Man

Abraham Lincoln was born February 12, 1809 in a log cabin in Hodgenville, Kentucky. His father, Thomas, was a carpenter, while his mother Nancy took care of Abraham and his older sister Sarah. There are some reports that Abraham was in fact an illegitimate child of one Wesley Enloe, a North Carolina farmer, but with no credible accounts beyond conjecture about Abe's parentage, this can be discarded as mere rumor. His parents strongly disagreed with slavery, and instilled this belief in their son at a young age. When young Abraham was seven, the family packed their belongings and moved to Indiana (after Lincoln's death, this area was incorporated as Lincoln City). Abe loved books and spent a lot of time after school reading, much to his layman father's chagrin.

In the early 1800s there was little treatment for most disease: home-baked remedies and poultices were used, but to little effect. When Nancy came down with milk sickness (obtained by drinking milk contaminated by the cow's consumption of white snakeroot), there was little the family could do but pray. Nancy lingered for several days, and passed away on October 5, 1818. Her dying words to Abe and Sarah were, "Be good and kind to your father, to one another, and to the world." Thomas sat Abe down to write a letter to a Kentucky reverend and family friend, asking him to provide the funeral sermon. In William Thayer's biography of the President, the elder Lincoln's eyes welled with pride as he watched nine-year-old Abraham open the letter and read it back to him. No one in his family had done that before. Abraham himself was reluctant to speak about his mother's death, and Thayer records the event as "the loss of a good mother to a bright, obedient, and trusting boy, hid away in the woods, where a mother's presence and love must be doubly precious." Still, the family had to carry on in the relative wilderness of the Indiana frontier.
Seeking companionship, Thomas Lincoln remarried in 1819 to his old sweetheart before Nancy, Sarah Bush, who had three children of her own from a previous marriage. Thomas and Sarah were good people, and they frequently took in relatives' children during times of hardship. Abraham often served as nurse, teacher, and babysitter for various cousins, nephews, nieces, and his younger stepsiblings. He also continued his own self-education, reading voraciously through all manner of literature, law, and politics. He occasionally worked alongside his father building houses and barns, but mostly worked as a handyman around the area to make money. Abraham's father was an enterprising man, but he rarely had a day of luck. Once he built a flatboat to deliver some cheap pork he had purchased in northern Indiana to Kentucky and other parts of the South. However, his boat capsized on the Ohio River, and he lost everything, nearly drowning in the process. He resumed his carpentry to repay the debts he had accrued in the venture. He eventually began to specialize in making wheels - particularly little spinning wheels designed for smaller clothes, such as socks and diapers. Through all of these activities, Abraham was nearby to help with construction, artisanship, and entrepreneurship. Reports vary about Abraham's general character and nature as a boy. Some say he was sickly and gloomy, while others close to him say he was always bright and talkative. One relative who spent time in the Lincoln household claimed Abe liked to show off his athleticism by bending over backwards and putting his head on the floor, doing handsprings, and wrestling. He always had his nose in a book, and was frequently rebuked by Thomas for "butting in" to other people's conversations to correct or amplify the facts.
While Abe was not a particularly religious person in his youth, he spent many hours reading and memorizing the Bible, a skill which came in handy many times during his legislative and political career. He also earned his lifelong nickname "Honest Abe" in his youth, for his excellent skills in rhetoric and debate, and his ability to arbitrate the various arguments of friends and family. Abraham's reading slowly molded him into an admirer of the public servants of America past and present. He set George Washington as his role model, and his fascination with The Life of Henry Clay convinced him at an early age to become a Whig. He would frequently recite recent speeches for his schoolmates, often instead of working in the fields or at the various houses he assisted. His father would lecture him constantly about exhibiting "more aptitude for fun than work," but Abraham's practice eventually paid off. Eventually Lincoln's handiwork led him to William Wood. Wood's subscription to a local paper run by a temperance group awed young Abe, so much so that he wrote his own essay on temperance and had it published in the paper. Wood then asked the boy to write a paper for a local political newsletter. In his essay, Lincoln wrote "[I believe] that the American Government is the best form of government for an intelligent people; that it ought to be sound, and preserved for ever; that general education should be fostered and carried all over the country; that the Constitution should be saved, the Union perpetuated, and the laws revered, respected, and enforced." These words would later be reshaped into his 1861 inaugural address. Abraham continued to contribute articles and essays to both papers, and began developing a knack for the epigram - writing of himself on one occasion, "Abraham Lincoln, his hand and pen/He will be good, but God knows when." In addition to his literary prowess, Abe Lincoln was known as the strongest boy in southern Indiana.
A giant at the age of eighteen (possibly due to the effects of Marfan syndrome, said to have been passed on by his grandfather), he became a full-time log splitter and did all kinds of heavy work: sometimes young Abe would do the lifting of "three ordinary men." His traits of strength, courage, and wit would help him succeed in almost everything he did in later life.

The Working Man

In early 1828, Abraham made his first flatboat trip down the Mississippi River to New Orleans. It was here that Abraham had his first brush with death. While staying overnight at a friend's house just north of Baton Rouge, he and his boatmate were attacked by a group of slaves intent on robbing them. They fought hand to hand for nearly ten minutes, until finally Abe pushed two of the attackers into the river and the others fled. After the battle, Abraham was philosophical, merely commenting that "slavery had robbed them of everything, and they must think it fair play to take what they can get." Despite this travail, Abraham was hooked: flatboating was the life for him. Upon his return to Indiana, Abe took a keener interest in the courts, often making day trips to nearby Boonville. Here he met a future adversary in John C. Breckinridge, who served as a traveling prosecutor and attorney throughout the young western states. He even formed a temporary lyceum in his backyard, encouraging local boys to come give speeches and write compositions for weekly reviews that he helped print. Tragedy struck the family again when his sister Sarah died during childbirth in 1828. Abraham was already 19 at this point, and restless to get out on his own. His love of the courts and speakers led him to move in with his uncle John in Decatur. He liked the area so much that he talked his father into moving out to Decatur himself. They found a nice area and built a new cabin with much of the lumber and nails from the old house. While here, he was offered another trip to the Big Easy on a flatboat, and Abe accepted.
However, the flatboat itself was poorly built, and it took nearly three weeks longer than expected to reach the Mississippi delta. Disgusted, Abraham set about designing a better boat: a noiseless steam-powered flatboat. Although his inventing never got beyond a crude wooden model, he was admired by his boatmates for his ingenuity and innovation. Upon returning to Illinois, he was met by one Daniel Needham, a semi-professional wrestler who had heard of Lincoln's great strength. Needham challenged Lincoln to a fight; after throwing him twice, Abe allowed the humbled fighter to concede before he received a "serious thrashing." Lincoln, magnanimous in victory, never boasted of beating Needham, saying only that Needham was a fine fellow at the end of the day. Lincoln's flatboat piloting experience earned him a steady job as a merchant in Illinois under the tutelage of Denton Offutt. He took charge of Offutt's granary and grocery store while Offutt tended to business in other parts of the United States. Here he continued building his reputation as an honest, considerate, and upright citizen and businessman. Tales of him donating food to recent widows, quoting Scripture, entertaining the local children with his stories, and fighting off a local gang who terrorized the area spread like wildfire. "That Abraham Lincoln is some man!" one woman declared. Before Abraham Lincoln was even born, Native Americans were being pushed out of their native lands, farther and farther west into uncharted territories. One such "arrangement," the 1804 Treaty of St. Louis, had pushed Black Hawk's band of Sauk out of their lands in Missouri and Illinois and into the wilds of Iowa. By 1832, Black Hawk's people were sick of their treatment and decided to reclaim their old lands. Governor John Reynolds called for a volunteer militia to be formed to root out and rout the encroaching Indians. Abraham Lincoln was one of the first to enlist for the Black Hawk War - many of his friends followed suit out of respect for the man.
His popularity was reaffirmed when he was elected captain of his volunteer company. For thirty days, the makeshift army tramped across northern Illinois. They never found the Indians, though, and the group disbanded. Lincoln re-enlisted for thirty more days, and again for thirty more after another unsuccessful campaign. Finally, the Battle of Bad Axe was fought, and the war ended. One particularly resonant story survives from Lincoln's limited military experience. While traveling through southern Wisconsin, still searching for Black Hawk's band, an old Indian entered the camp. At first he was accosted by the regulars, who threatened to kill him. The man handed Lincoln a note which proved to be a voucher of his fidelity from General Lewis Cass. The others claimed it was a forgery, and remained determined to kill the "Injun." In protest, Lincoln stood in the way of the pointed guns, challenging them: "To shoot him, you must shoot ME also!" The group capitulated, and Lincoln earned their respect and trust through his fearless resolve.

The Public Servant

The returning war hero was given a handsome welcome by his friends in New Salem, and it wasn't long before he was being talked into running for the state legislature. He declined at first, pointing out that he had only lived in New Salem a short while - and to make matters worse, the election was only ten days away! But his friends persisted, and in particular one Robert B. Rutledge finally convinced Honest Abe to make a run for the legislature. Although he lost, he garnered nearly 25% of the vote in the county, a surprisingly large share. This had been Rutledge's goal all along: to put Lincoln's name in the voters' minds for 1834, when he would run again. As a Whig, Lincoln's political views were right in line with those of party leader Henry Clay: he supported a national bank, a federal program to improve the roads, railroads, and waterways, and a high protective tariff.
While stumping, Lincoln was constantly berated by his opponents for his appearance. Lincoln, who was not remotely wealthy and could not afford custom tailoring for his unusual frame, often settled for ill-fitting clothes and worn-out shoes. By the time the election was over, Abe was broke and went back to work as a blacksmith. Still, his heart was elsewhere. After smithing for a few months, he bought out a local mercantile shop and partnered with a man named Berry, who turned out to be a drunk and a thief. When Berry skipped town one night, Lincoln was left with tremendous debt, part of which he paid by selling the store. He took a job working at a local tavern, and at night read Shakespeare, Marlowe, and especially books on law. Again his fortunes changed for the better when he met a county surveyor by the name of John Calhoun, who enticed young Abe into the business of surveying. He did this for a year, and then in 1833 was appointed postmaster of New Salem. The town did not receive much mail, and oftentimes Abe would simply place the letters under his hat and go for a walk through town, delivering the post at each stop. From river pilot to merchant to blacksmith to surveyor to postmaster, Lincoln had been a hard worker and made many friends. This finally paid off in 1834, when he routed his opponents to win a seat in the Illinois state legislature. He would be re-elected three more times after that, in 1836, 1838, and 1840. While serving in the legislature, he was steered toward a career in law by John Stuart, an eminent attorney in Springfield. In 1835, the term "circuit court" had literal application: the judges would travel from town to town, and many times lawyers would travel with them to represent various cases along the way. Lincoln was one such lawyer, and quickly grew to be respected as an intelligent orator and a scrupulous man.
He refused to take cases where he knew his client was guilty, and always deferred to the authority of justice, even if it meant losing the trial. He once famously won an acquittal by refuting eyewitness testimony: the witness claimed he had seen the accused murder the victim by the light of the moon, but Lincoln produced an almanac showing that the moon had been low on the horizon the night of the killing, and his client was exonerated. Between serving as a legislator and his new prestige as a barrister, Lincoln soon came into a good deal of money, and began repaying his debts: for his failed store, for his father's mortgage, and for the many friends he had borrowed from in his earlier days. One day at a cotillion, Lincoln spotted a beautiful young lady seated across the room. He walked up to her and said very plainly, "Miss Todd, I want to dance with you the worst way." Such were the humble beginnings of Abe's relationship with Mary Todd, whom he began to court shortly thereafter. They were very sweet on each other, and became engaged in late 1840. However, they had a prominent falling-out at a New Year's party, and the engagement was called off. During the break, Todd was seen around Springfield with another young politico named Stephen Douglas. However, a family acquaintance, Simeon Francis, finally brought the two lovebirds back together, and they were married on November 4, 1842. In 1846, Lincoln took the next step up in his political career by running for and winning a seat in the United States House of Representatives. His Whig politics had endeared him to Henry Clay, who personally sent letters of recommendation for Lincoln to post in newspapers throughout the area. Lincoln's congressional career was marked with excitement, as the annexation of Texas and the Oregon territory came to fruition and the Mexican-American War, which had broken out in 1846, raged on. He spoke out against slavery in the new territories as "unjust and cruel" and became a fervent proponent of the Wilmot Proviso outlawing slavery in these areas.
His efforts went unrewarded, though, and he declined to seek re-election in 1848, choosing instead to remain a lawyer in Illinois with his wife and young sons Robert (born August 1, 1843) and Eddie (born March 10, 1846). His lawyering skills were known throughout Illinois, and he appeared before the United States Circuit Court in Springfield on many occasions.

The Private Man

In late 1849, Eddie took ill with an unknown illness (now believed to be pulmonary tuberculosis). Despite Mary's and the doctors' best efforts, the youngest Lincoln passed away February 1, 1850. It was especially heartbreaking to Abraham, who wrote of his son's demise: "Angel Boy - fare thee well, farewell / Sweet Eddie, We bid thee adieu! / Affection's wail cannot reach thee now / Deep though it be, and true. / Bright is the home to him now given / For 'of such is the Kingdom of Heaven.'" The Lincolns were blessed with a third son, Willie, on December 21, 1850. He was the most affectionate and charming of the Lincoln boys, and he was Abraham's particular favorite. He was particularly studious, memorizing railroad timetables, Scripture, and his multiplication tables at an early age. Abe took him on many visits to Chicago, Washington, D.C., and back to Decatur to visit his stepmother and father, who was in failing health. Abe was particularly distant from his father, who still had the trappings of a poor businessman. Thomas had to sell a third of his land to Abe in order to avoid the poorhouse, and though Abe took care of him, Thomas felt Abe had become "too good" for him, and was envious of his son's position. When Thomas finally passed away on January 17, 1851, Abe did not attend the funeral.

The Great Orator

In 1852, Lincoln also suffered the death of his friend and party mate Henry Clay, and delivered a solemn eulogy to his fallen compatriot. On April 4, 1853, the fourth Lincoln boy, Tad, was born.
Abraham continued on as a lawyer until 1854, when the Kansas-Nebraska Act passed through Congress, giving the territories popular sovereignty to determine whether they would permit slavery. He felt this was an absolute moral wrong, and said as much to anyone who would listen. Eventually, he determined the best recourse was to re-enter politics. In 1858, he campaigned for the new Republican Party for the United States Senate seat from Illinois. His opponent was the incumbent - and Mary's old flame - Stephen Douglas, the "Little Giant." On the eve of his nomination to the candidacy, Lincoln gave one of his most famous speeches, one which contained the most stirring cry for federal unity: "A house divided against itself cannot stand. I believe this government cannot endure permanently, half slave and half free. I do not expect the Union to be dissolved - I do not expect the house to fall; but I do expect it will cease to be divided. It will become all one thing or all the other." The two men held opposing views on slavery: Lincoln wanted its expansion halted, while Douglas supported the Act and popular sovereignty. While campaigning, the two agreed to hold a series of debates in various towns throughout Illinois. These debates became famous for their fiery rhetoric and impassioned reasoning, a hallmark of a bygone era in which substance won out over style. Finally, election day arrived, and the votes were tallied: the Democrat Douglas narrowly edged out Lincoln to retain the seat. Still, the fires within Lincoln remained alight. Lincoln continued to stump for the Party in the western states, so much so that in 1859 the Republican Party added his name to a small set of potential candidates for the United States Presidency. He was up against such party stalwarts as future Chief Justice Salmon P. Chase, William L. Dayton, and Ohio Senator Benjamin F. Wade.
Most prominent of all was William Seward, a New York Senator who had spent many years railing against slavery in Congress. He had many supporters at the convention, but after the first two ballots it was obvious Seward would gain no more ground. Thus, in a compromise, his backers went over to Lincoln, the relative unknown, who seemed a good substitute for Seward's value system. Abraham Lincoln would be their choice. With Hannibal Hamlin as his running mate, Lincoln was once again pitted against his in-state rival Douglas, running as the Democratic choice for President. In addition to these two, the sitting Vice President, John C. Breckinridge, whom Lincoln had observed and admired so many years ago, ran as the Southern Democratic alternative to Douglas, and John Bell, a former Tennessee senator, ran as the Constitutional Union Party candidate. The four men all laid out their positions on states' rights in America, but Lincoln's powerful message resonated throughout the entire North. Douglas made a rare and unprecedented personal campaign tour to try to sway voters, but to no avail. On Election Day 1860, Lincoln captured 40% of the popular vote - and nearly every electoral vote of the North and West - sending him to the White House. Before heading to the District of Columbia, Lincoln took the advice of a little girl and, for the first time in his life, grew the beard which was to be his trademark in the years to come. By the time Lincoln arrived in D.C. and was sworn in, the states of the Deep South had already begun seceding from the Union; a month later, the battle of Fort Sumter opened the war, and more states followed. Lincoln quickly arranged for a standing army to be raised to fight the rebelling Southerners, and soon the Civil War was in full swing. Lincoln took a very active role in the war, selecting generals, assigning them regiments and areas to attack and protect, and approving many of the battle plans.
Lincoln was continually worried that the border states might fall to the Confederacy, and was frequently exasperated by the lack of cohesion and progress among his generals. 1862 marked a troubled year for Honest Abe. In early 1862, Willie mysteriously took ill, and despite all efforts to help him, he passed away February 20. Lincoln was grief-stricken, crying out, "My poor boy. He was too good for this earth!" Still, the war was a pressing matter, and Lincoln's leadership was required. With Southern opposition absent from Congress, the legislature passed the Homestead Act of 1862, which formalized government grants of land to settlers moving into new territories. Lincoln also signed the National Banking Act, which set up a network of national banks to distribute and control currency, and the charter for the first transcontinental railroad. He also (controversially) suspended the writ of habeas corpus, a blatant overstep of executive authority, but one which Lincoln felt was necessary to save the nation from agitators in the North. When Supreme Court Chief Justice Roger Taney issued a writ to free one such person, Lincoln ordered the military to ignore the issuance, causing friction in the capital. While in the White House, Lincoln entertained many guests, including Frederick Douglass, Horace Greeley, Harriet Beecher Stowe (to whom he reportedly said, "So this is the little lady who started this great big war"), and Louisa May Alcott. Many implored him to free the slaves, and by the end of 1862, Lincoln had prepared his most powerful work - the Emancipation Proclamation. On September 22, he delivered the preliminary proclamation, which declared "That on the 1st day of January, A.D. 1863, all persons held as slaves within any State or designated part of a State the people whereof shall then be in rebellion against the United States shall be then, thenceforward, and forever free." It was perhaps Lincoln's finest moment.
The Civil War raged on, and it looked as if no end was in sight. The Republican Party was growing slightly restless with their President, and Lincoln's mind was greatly troubled by the apparent peril of the Union. On November 19, 1863, four months after the bloody Battle of Gettysburg, Lincoln traveled there for the dedication of the Soldiers' National Cemetery. There he gave his renowned Gettysburg Address, printed in full: Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate -- we can not consecrate -- we can not hallow -- this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us -- that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion -- that we here highly resolve that these dead shall not have died in vain -- that this nation, under God, shall have a new birth of freedom -- and that government of the people, by the people, for the people, shall not perish from the earth. A month before his famous address, Lincoln had issued a proclamation establishing a national day of Thanksgiving.
It seemed to him fairly sad that there was little to be thankful for that November. Tired of General Henry Halleck's hesitance in battle, Lincoln placed Ulysses S. Grant, the hero of Vicksburg, in command of the Union armies, and soon results began to appear. Sherman's infernal "March to the Sea" campaign began, and the North struck significant blows in Tennessee, Georgia, and Virginia. The Republican Party renominated Lincoln for President, with the Southerner Andrew Johnson on the ticket to help garner votes in the border states. The ousted General George McClellan ran as the Democratic opposition, but his lack of political savvy and the Union's rally behind Lincoln gave Lincoln his second term as President of the United States. Lincoln was continually worried about security - he had received numerous death threats over the years, but could do little to stop them except take precautions. When the war finally ended on April 9, 1865, at Appomattox Court House, Lincoln was greatly relieved. He decided to celebrate quietly with a night out with his wife. They went to see a promising play, "Our American Cousin," at Ford's Theatre, near the White House. On April 14, as the two sat in a private box overlooking the stage, John Wilkes Booth, an actor and Southern sympathizer, snuck into the box and shot Lincoln in the head at point-blank range. Lincoln was carried to a nearby house, where he held on until 7:22 AM the following day, when he expired. His body was sent to Illinois for burial in his home state, and the bodies of his two dead children, Eddie and Willie, were buried beside their father. The whole nation mourned the loss of its President, the first one assassinated in American history. The legacy of Abraham Lincoln will forever be that of the freer of the slaves. His principles as President rose above the popular attitudes of the time, and reached far into what was simply right.
A great man, gone before his time, as eulogized by Walt Whitman in his famous poem, "O Captain! My Captain!": The ship is anchor’d safe and sound, its voyage closed and done; From fearful trip, the victor ship, comes in with object won. Exult, O shores, and ring, O bells! But I, with mournful tread, Walk the deck my Captain lies, Fallen cold and dead.
Autumn walks in wooded parks bring tree leaves into focus. Even though the leaves are dead, they can still prompt questions, and suggest answers, about the evolutionary processes that brought them forth. Consider, for example, the shapes of these tulip poplar (Liriodendron tulipifera) leaves I picked up during a walk near my house the other evening. I had never noticed before that the shapes of tulip poplar leaves were so variable. I am sure there are other shapes that I missed. These are the ones I noticed and picked up while trying to carry on a conversation on unrelated subjects with my wife. If I had paid any more attention to the leaves, she would have kicked me in the butt. Can all or some of these variants be found on one tree, or does each tree produce only one type of leaf? Does the shape of a leaf change during its growth? What is the functional significance of the leaf shape? Obviously, a leaf with fewer or smaller lobes would have a larger surface area for catching sunlight than an equally long and wide leaf with more or larger lobes. These leaves were collected along a circuitous path no longer than about 2 miles, but the actual distances between the trees the leaves came from were much shorter than that. So the general environment of all the trees was the same. But perhaps the microhabitat of a tulip poplar determines the shapes of its leaves.
An Indian tribe of the Iroquois confederation formerly living in New York state. (ScGbt: t. 507; l. 158'4"; b. 28'0"; dph. 12'0"; dr. 10'6"; s. 11½ k.; cpl. 84; a. 1 11" D. sb., 1 20-pdr. P.r., 2 24-pdr. how.) The first Seneca - a wooden-hulled "ninety day gunboat" built at New York City by J. Simonson - was launched on 27 August 1861 and commissioned at the New York Navy Yard on 14 October 1861, Lt. Daniel Ammen in command. On 5 November 1861, Seneca and three other Union gunboats engaged and dispersed a Confederate squadron near Port Royal, S.C.; two days later, she took part in the capture of Port Royal, which proved to be an invaluable Union naval base throughout the remainder of the Civil War. From the 9th to the 12th, she took part in the expedition which took possession of Beaufort, S.C. On 5 December, she participated in the operations about Tybee Sound to help seal off Savannah, Ga. The next day, she was in sight during the capture of the schooner Cheshire, entitling her crew to share in the prize money. From January 1862 to January 1863, Seneca's area of operations extended from Wilmington, North Carolina, to Florida. On 27 January 1863, she took part in the attack on Fort McAllister, Ga.; and, on 1 February, she participated in a second attack. On 28 February, in the Ogeechee River, she supported Montauk in the destruction of the privateer Rattlesnake, the former Confederate warship Nashville. In July 1863, she was one of the vessels in the attack on Fort Wagner. She later returned via Port Royal to the New York Navy Yard, where she was decommissioned on 15 January 1864. She was recommissioned on 3 October 1864, Comdr. George E. Belknap in command, and was assigned to the North Atlantic Blockading Squadron.
On 24 and 25 December 1864, Seneca took part in the abortive attack on Fort Fisher; and, between 13 and 15 January 1865, she participated in the successful second attack which finally captured that Southern coastal stronghold and doomed Wilmington, closing the Confederacy's last major seaport. On 17 February, she was in the force which attacked Fort Anderson and captured it two days later. At the end of the war, Seneca returned to Norfolk, Va., where she was decommissioned on 24 June 1865. The ship was sold on 10 September 1868 at Norfolk to Purvis and Company.
Working with Keys (Visual Database Tools)

A primary key is a constraint that ensures that a table contains no duplicate rows. A foreign key is a constraint that enforces referential integrity between tables.

Note: If the table is published for replication, you must make schema changes using the Transact-SQL statement ALTER TABLE or SQL Server Management Objects (SMO). When schema changes are made using the Table Designer or the Database Diagram Designer, the designer attempts to drop and re-create the table. Because you cannot drop published objects, the schema change will fail.

For details about working with keys, see the following topics.
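The behavior of both constraint types can be seen in a short, self-contained sketch. The example below uses Python's built-in sqlite3 module rather than SQL Server, purely because it is easy to run anywhere; the table and column names are invented for illustration, but the PRIMARY KEY and REFERENCES declarations shown are standard SQL, and the same constraints behave analogously in a SQL Server CREATE TABLE statement:

```python
import sqlite3

# In-memory database for the demonstration. SQLite requires foreign-key
# enforcement to be switched on explicitly per connection.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# A primary key guarantees that no two rows share the same key value.
conn.execute("""
    CREATE TABLE Customers (
        CustomerID INTEGER PRIMARY KEY,
        Name       TEXT NOT NULL
    )""")

# A foreign key enforces referential integrity: every Orders row must
# reference an existing Customers row.
conn.execute("""
    CREATE TABLE Orders (
        OrderID    INTEGER PRIMARY KEY,
        CustomerID INTEGER NOT NULL REFERENCES Customers(CustomerID)
    )""")

conn.execute("INSERT INTO Customers VALUES (1, 'Alice')")
conn.execute("INSERT INTO Orders VALUES (10, 1)")  # OK: customer 1 exists

# A duplicate primary key value is rejected.
try:
    conn.execute("INSERT INTO Customers VALUES (1, 'Bob')")
except sqlite3.IntegrityError as e:
    print("primary key violation:", e)

# A row referencing a nonexistent parent is rejected.
try:
    conn.execute("INSERT INTO Orders VALUES (11, 99)")
except sqlite3.IntegrityError as e:
    print("foreign key violation:", e)
```

In SQL Server itself you would declare the same constraints through the Table Designer or in CREATE TABLE / ALTER TABLE statements, and violating inserts are likewise rejected with constraint errors.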
Is there, or should there ever be, a point when a state is no longer penalized for its discriminatory past? Not according to the Department of Justice, which recently rejected a South Carolina law that would have required voters to show a valid photo ID before casting their ballots. Justice says the law discriminates against minorities. The Obama administration said, "South Carolina's law didn't meet the burden under the 1965 Voting Rights Act, which outlawed discriminatory practices preventing blacks from voting." Why South Carolina? Because, the Justice Department contends, it is tasked with approving voting changes in states that have failed in the past to protect the rights of blacks. Are they serious? There are two African-Americans representing South Carolina in the U.S. House of Representatives. One is Tim Scott, a freshman Republican. The other is 10-term Rep. James Clyburn, the current assistant Democratic leader. There are numerous minority members of the S.C. state legislature, and Gov. Nikki Haley is Indian-American. This is not your grandfather's South Carolina. This is not the South Carolina of the segregationist and Dixiecrat presidential candidate Strom Thurmond. Yesterday's South Carolina had segregated schools, lunch counters, restrooms, and buses, and a dominant Democratic Party. Today's South Carolina is a modern, integrated, forward-looking, dual-party state. If Justice thinks proving who one is by showing a valid photo ID discriminates against minorities, how does it explain the election of so many minority legislators? Are only whites voting for them? Democrats, especially, should be sensitive to states and people who have demonstrated that they have changed. It was the Democratic Party of the late 19th century that resisted integration throughout the South, passing Jim Crow laws that frustrated blacks who wanted to vote. Those were Southern Democrats who stood in schoolhouse doors, barring blacks from entering.
Today, many members of that same party refuse to allow poor, minority students to leave failing government schools as part of the school voucher system because they, apparently, value political contributions from teachers unions more than they value educational achievement.

The South Carolina law that offends the Justice Department anticipated objections that some poor minorities might not have driver's licenses (and certainly not a passport) because they might not own cars. So the state will provide free voter ID cards with a picture of the voter. All someone has to do is prove they are who they claim to be. A birth certificate will do nicely. A utility bill can be used to prove residency. Not requiring a voter to prove his or her citizenship and residence is a recipe for voter fraud.

Democrats like to accuse Republicans of trying to keep minorities from voting because they know most will vote for Democrats. Even if that were true (and it's debatable) the reverse is probably truer. Some Democrats have allegedly encouraged people to vote who were not eligible, some more than once. Without a valid ID, how can we stop this?

The Brennan Center for Justice at New York University School of Law has compiled a list of new voter identification laws passed this year. In addition to the one in South Carolina, all require some form of photo identification. Will Justice go after all of them, as well?

According to the Brennan Center, a new law in Kansas, effective Jan. 1, 2012, requires a photo ID, with certain exceptions such as a physical disability that makes it impossible for the person to travel to a government office to acquire one, though they must have "qualified for permanent advance voting status ..."

A new Texas law, which took effect on Sept. 1, requires a photo ID in order to vote, or another form of personal ID card issued by the Department of Public Safety.
Even historically liberal Wisconsin passed a new law this year requiring voters to prove who they are, in most cases with a photo ID.

Governor Haley and South Carolina Rep. Joe Wilson vow to fight the Justice Department ruling. They should. Photo IDs are required when flying on commercial aircraft or cashing a check. That discriminates against no one. Neither does requiring people to prove who they are before voting, unless, of course, there's another agenda, like "stuffing" the ballot box.

Cal Thomas is a syndicated columnist. He may be contacted at email@example.com.
<urn:uuid:ef20fa27-f05d-4368-874a-4dba76b14cf1>
CC-MAIN-2013-20
http://www.wsbt.com/topic/bs-ed-thomas-20111231,0,6640675.story
2013-05-25T13:08:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961758
883
Content Area Reading; Literature; Reading Comprehension; Literature Appreciation; Research Skills: Books and resources to help you teach and celebrate World Oceans Day on June 8.

Addition and Subtraction; Counting and Numbers; Money; Number Sense: Flashlight Tag, Subtraction Salute, Speed Racer, and four other fun ways to practice basic math.

Reading Comprehension; Historic Figures; Holocaust; Human Rights; Jewish Experience

Curriculum Development; Classroom Management; Back to School; New Teacher Resources; Teacher Tips and Strategies: A teacher-created book list of professional resources for the end of the school year — the best time to prepare for the next year!

Hobbies, Play, Recreation; Culture and Diversity; World History: Chinese Tangrams, the Big Snake, a game from Ghana, and other fun games from around the world.

Browse more classroom materials:
- By rolling and bouncing a ball, children strengthen motor control. Here are some fun activities to encourage physical development skills.
- Sharpen your students' fine motor skills and hand-eye coordination with this creative — and colorful — summertime activity.
- Students learn about the significance of various American symbols, such as the U.S. flag and the Vietnam Veterans Memorial.
- Teach about the meaning behind the flag by studying and singing the "Star-Spangled Banner", creating paper flags, and more!
- Supplement a lesson on art history with this Van Gogh craft that can be hung on the wall for decoration.
- Creative spins on traditional games, such as musical chairs and tag, that emphasize cooperation and take advantage of the warm weather.
- A fun way to teach about the main idea, character, and other literary elements by having students create their own comic strips.
- Students practice identifying words with long-vowel sounds and gain experience with sound-spellings.
- Choose from graphic organizers, charts, reading response sheets, and other downloadable e-reading resources that can be used with Storia books.
<urn:uuid:d7884568-7ae9-4c06-99f8-7184872538aa>
CC-MAIN-2013-20
http://www.scholastic.com/teachers/lesson-plans/free-lesson-plans?subject[]=380
2013-06-20T08:45:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.89064
446
Milton Friedman on Tides of Political Thought in Modern History

In this 1999 video from an International Society for Individual Liberty conference in Costa Rica, economist and Nobel laureate Milton Friedman delivers a live lecture to the audience through a teleconferencing system. Friedman speaks about various "tides" of economic and political ideas throughout the modern era, beginning with the laissez-faire influence of the Adam Smith tide in the 1700s, progressing through the Fabian tide of big-government authoritarianism during the greater portion of the 20th century, and concluding with the contemporaneous Hayek tide and the resurgence of classical liberal ideas following the collapse of some of the world's largest and most restrictive authoritarian states.

Milton Friedman (1912-2006) was one of the most recognizable and influential proponents of liberty and markets in the 20th century, and leader of the Chicago School of economics. Read more about Milton Friedman's life and watch other videos featuring him at http://www.libertarianism.org/people/milton-friedman.
<urn:uuid:4c817b1e-1cc5-45da-ad49-56e7718db990>
CC-MAIN-2013-20
http://www.libertarianism.org/media/video-collection/milton-friedman-tides-political-thought-modern-history
2013-05-25T06:33:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705575935/warc/CC-MAIN-20130516115935-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.881514
218
What is a "veteran"? One would think that would be an easy question to answer. In the millions of laws passed over two centuries by Congress, you would think that at least one of them would define the term "military veteran." Most dictionaries define "veteran" as (1) a person who has served in the armed forces, or (2) an old soldier who has seen long service. Using the dictionary definition, one would be a military veteran with just one day of military service, even with a dishonorable discharge.

I like the following definition, which was once penned by an unknown author: A veteran is someone who, at one point in his/her life, wrote a blank check made payable to "The United States of America," for an amount of "up to and including my life."

In actuality, there is no standardized legal definition of "military veteran" in the United States. You see, veteran benefits weren't created all at one time. They've been added one by one for over 200 years by Congress. Each time Congress passed a new law authorizing and creating a new veteran benefit, it included eligibility requirements for that particular benefit. Whether or not one is considered a "veteran" by the federal government depends entirely upon which veteran program or benefit one is applying for.

Veteran's Preference for Federal Jobs

Veterans are given preference when it comes to hiring for most federal jobs. However, in order to be considered a "veteran" for hiring purposes, the individual's service must meet certain conditions. Preference is given to those honorably separated veterans (this means an honorable or general discharge) who served on active duty (not active duty for training) in the Armed Forces:
- during any war (this means a war declared by Congress, the last of which was World War II).
- for more than 180 consecutive days, any part of which occurred after 1/31/55 and before 10/15/76.
- during the period April 28, 1952, through July 1, 1955 (Korean War).
- in a campaign or expedition for which a campaign medal has been authorized, such as El Salvador, Lebanon, Grenada, Panama, Southwest Asia, Somalia, and Haiti.

Preference is also given to those honorably separated veterans who 1) qualify as disabled veterans because they have served on active duty in the Armed Forces at any time and have a present service-connected disability or are receiving compensation, disability retirement benefits, or pension from the military or the Department of Veterans Affairs; or 2) are Purple Heart recipients.

Campaign medal holders and Gulf War veterans who originally enlisted after September 7, 1980, or entered on active duty on or after October 14, 1982, without having previously completed 24 months of continuous active duty, must have served continuously for 24 months or the full period called or ordered to active duty. Effective October 1, 1980, military retirees at or above the rank of major or equivalent are not entitled to preference unless they qualify as disabled veterans. For more information about the Veteran's Preference Hiring Program, see the federal government's Veteran's Preference Web page.

Home Loan Guarantee

Military veterans are entitled to a home loan guarantee of up to $359,650 when they purchase a home. While this is commonly referred to as a "VA Home Loan," the money is not actually loaned by the government. Instead, the government acts as a sort of co-signer on the loan and guarantees the lending institution that it will cover the loan if the veteran defaults. This can result in a substantial reduction in interest rates and a lower down payment requirement.

However, whether or not the Department of Veterans Affairs (VA) defines someone as a "veteran" under this program also depends on (1) when they served, (2) how long they served, and (3) what kind of discharge they received. First of all, the law requires that the veteran's discharge be under "other than dishonorable conditions." This is not the same as a "dishonorable discharge."
What this means is that for all discharges other than honorable or general, the VA will make an individual determination as to whether or not the conditions of the discharge are considered to be "dishonorable."

Required periods of service are:
- At least 90 days of active duty service during WWI, WWII, the Korean War, or the Vietnam War (09/16/40 to 07/25/47, 06/27/50 to 01/31/55, and 08/05/64 to 05/07/75). The 90 days do not have to be continuous. If you served less than 90 days, you may be eligible if discharged for a service-connected disability.
- For active duty service prior to 09/07/80 (enlisted) and 10/16/81 (officer) -- other than the dates listed above, you must have served 181 days of continuous active duty to qualify for a home loan guarantee.
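The service-period rules above amount to a small decision procedure. The sketch below is an illustration only, not the VA's actual determination logic: the class and method names are invented, the wartime windows are just the three date ranges quoted above, and real eligibility involves many more factors (discharge character, service era cutoffs, and so on).

```java
import java.time.LocalDate;

// Hypothetical sketch of the length-of-service test described above.
public class HomeLoanEligibility {

    // The three wartime date ranges quoted in the article.
    static final LocalDate[][] WARTIME = {
        { LocalDate.of(1940, 9, 16), LocalDate.of(1947, 7, 25) },
        { LocalDate.of(1950, 6, 27), LocalDate.of(1955, 1, 31) },
        { LocalDate.of(1964, 8, 5),  LocalDate.of(1975, 5, 7)  },
    };

    // True if the service interval overlaps any wartime window.
    static boolean servedDuringWartime(LocalDate start, LocalDate end) {
        for (LocalDate[] w : WARTIME) {
            if (!end.isBefore(w[0]) && !start.isAfter(w[1])) return true;
        }
        return false;
    }

    // Wartime service: 90 days total (need not be continuous).
    // Otherwise: 181 days of continuous active duty.
    static boolean meetsServiceLength(LocalDate start, LocalDate end,
                                      long daysOfActiveDuty) {
        if (servedDuringWartime(start, end)) {
            return daysOfActiveDuty >= 90;
        }
        return daysOfActiveDuty >= 181;
    }
}
```

Note how the two thresholds (90 vs. 181 days) hinge entirely on whether the service dates overlap a wartime window, which is the structure the article describes.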
<urn:uuid:b239df95-cfc6-408f-adfc-2afeba93154d>
CC-MAIN-2013-20
http://usmilitary.about.com/od/benefits/a/vetbenefits.htm
2013-06-19T18:53:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959372
1,036
In this lesson, our instructor Evren Edler gives an introduction to tempo operations. He starts by discussing tempo events, enable conductor, tempo change and option-click. He then moves on to snap to bar and the tempo operations: constant, linear, parabolic, S-curve, scale, stretch, and the pencil tool.

Graphic Tempo Editor: to view the tempo operations, click the little arrow in the tempo ruler. Tempo events are visible when the conductor icon is enabled on the transport window.

Tempo Change: In order to make tempo changes, we need to always enable the "conductor" icon in the Transport Window. Once we type the changes on certain bars, we can drag the "Tempo Change" icon in the Tempo ruler up to speed up or drag it down to slow down.

Option-Click: gets rid of tempo changes in a similar way to how we work with our markers; or we can select the tempo change icons and delete them with the Delete/Backspace key or by clearing them under the Edit menu (shortcut: Command+B).

Tempo Operations: We can also apply tempo changes to a time selection using the Tempo Operations under the Event menu. There are six sub-menus you will see under Tempo Operations.

Pencil Tool: You can draw in new tempo events, replacing existing ones, using the pencil tool. The Freehand tool lets you draw freely by moving the mouse, producing a series of steps that depends on the Tempo Edit Density settings. The Parabolic and S-Curve tools let you draw the best possible curve to fit your freehand drawing, again producing a series of steps that depend on the Tempo Edit Density settings. The Triangle, Square and Random pencil tools cannot be used to create tempo events!

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.
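Pro Tools generates the intermediate tempo events itself, but the idea behind the Linear tempo operation is easy to sketch. The following is an illustration only, not Pro Tools code: the class and method names are invented, and the step count stands in for the Tempo Edit Density setting that controls how many discrete tempo events a ramp produces.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of a linear tempo ramp: interpolate evenly spaced tempo
// events between a starting and ending BPM over a selection.
public class TempoRamp {
    static List<Double> linear(double startBpm, double endBpm, int steps) {
        if (steps < 1) throw new IllegalArgumentException("steps must be >= 1");
        List<Double> events = new ArrayList<>();
        for (int i = 0; i <= steps; i++) {
            double t = (double) i / steps;               // position 0.0 .. 1.0
            events.add(startBpm + t * (endBpm - startBpm));
        }
        return events;
    }
}
```

A parabolic or S-curve operation would replace the straight-line interpolation `t` with a curved function of `t`, which is why those tools can fit a freehand drawing more closely than a linear ramp.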
<urn:uuid:a9738bc3-a626-4d8d-aa64-28d53800ff1c>
CC-MAIN-2013-20
http://www.educator.com/music-theory/pro-tools-music-production/edler/tempo-operations.php
2013-05-22T07:14:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.829093
431
Introduction: This lesson will follow a lesson teaching perspective. The students know how objects appear to go back in space. They also learned how to make objects appear to have volume.

Motivation: I will start the class by introducing the term pointillism. Pointillism is the use of tiny, dot-like brushstrokes of contrasting colors to represent the play of light in a painting.

The students will understand pointillism. The students will learn basic facts about Georges Seurat and his work, "The 'Maria' at Honfleur". The students will look at pictures of ships and use their understanding of perspective, from the previous lesson, to draw ships, docks, marinas, etc. The students will discuss the different parts of ships: stern, mast, ropes, deck, anchor, etc. The students will use markers to apply the technique of pointillism in their work of art. The students will engage in a group critique of their works of art.

1. I will start the class by having half of the students come up and look at a small portion of the print up close. Then, of course, have the other half do the same. The students will discuss what they saw: small dots of color.
2. I will then uncover the rest of the print. The students will then see that the small dots of color create an image. This is when I would explain pointillism.
3. I would then introduce the artist, Georges Seurat, and the title of the print.
4. I will talk about the artist in further detail. For example, how he was the founder of Neo-Impressionism, his influences, background information, etc.
5. If the class has no questions or anything to add to the discussion of Seurat, I will then discuss the print. This will lead into a conversation about ships. I will pass out pictures of ships as a reference. I will make a list on the board of the parts of a ship from the information I gather from the students.
6. I will then tell the children to gather around my desk. I will then explain to the students their assignment.
I will start by showing examples of previous students' work and an example I created.

7. The students will draw a ship, in perspective, with a sea-like environment around it. I will briefly remind them how objects appear to get smaller as they go back into space. They will be told to draw lightly so later the pencil lines can be erased. I will demonstrate at my desk.
8. I will then explain how to use pointillism to add color and volume to their ships. I will remind them how objects cast shadows and have light cast on them from the direction of the sun.
9. The students will then return to their seats as I pass out the paper and markers. The students will then create their works of art as I walk around helping children individually.
10. At the very end of class each student will hang their images on a bulletin board and put the markers back in the boxes and neatly place them on the cart. The students will then sit on the floor in front of the bulletin board.

Closure: Their images will stay hanging on the bulletin board. We will then hold a group critique. I would ask the students if they like the technique of pointillism. I am sure the students will have conflicting opinions, which is good.

Evaluation/Assessment: The evaluation of the work will be a group discussion. We will discuss who did a good job using perspective and pointillism and what was difficult to do using this technique or what might have been easy and enjoyable. This will allow students to reflect on their work of art and voice their opinions of the lesson. This can also act as a learning experience for myself to see how to improve the lesson for future use.
<urn:uuid:a24d5d3c-a39f-409a-9bcc-b33c84c4c475>
CC-MAIN-2013-20
http://teachers.net/lessons/posts/2038.html
2013-05-22T00:42:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.953426
793
Java as a programming language
Java is an object-oriented application programming language developed by Sun Microsystems. Java is a very powerful general-purpose programming language.

Java as an Object Oriented Language
In this section, we will discuss the OOP concepts along with the fundamentals used to develop Java applications and programs.

Applications and Applets
Nowadays, Java is widely used for applications and applets. The code for an application resides in object format on the user's machine and is executed by a run-time interpreter. The concept of write-once-run-anywhere (known as platform independence) is one of the key features that make Java such a powerful language.

To commence with Java programming, we must know the significance of the Java compiler. When we write any program in a text editor like Notepad, we use the Java compiler to compile it. We can run Java on most platforms, provided the platform has a Java interpreter. That is why Java applications are platform independent. The Java interpreter translates Java bytecode into code that can be understood by the operating system.

The Java debugger helps in finding and fixing bugs in Java language programs. The Java debugger is denoted as jdb. It works like a command-line debugger for Java classes.

Header file generator
Firstly, the native methods are in pure C code, not C++. The function prototypes are in an object-oriented form of C, which is provided by javah, but they are still not object methods.

Sun Microsystems has provided a software tool known as Javadoc. This tool is used to generate API documentation in HTML format from Java source code. It is interesting to know that Javadoc is the industry standard for documenting Java classes.

Applet viewer is a command-line program to run Java applets. It is included in the SDK. It helps you to test an applet before you run it in a browser. Before going any further, let's see what an applet is.
Java Empowered Browsers
Java is a powerful language and is widely used in web applications, so the current versions of most web browsers are made Java-enabled. The most commonly used Java-enabled web browsers are:

Installing Java ...
Before getting started developing an application, the developer must ensure that he/she is going to develop the application using the best tools. The combination of two features, platform independence and object orientation, makes Java powerful enough to build flexible applications. Let's start with the Java Development Kit:
JDK Installation on Sun Solaris
JDK Installation on Windows 95 and Windows NT
JDK installation on Apple Macintosh System 7.5
Testing the installation
Exploring the Java Developer's Kit
Distributing the Java Virtual Machine
Other Development Environments

To Commence with Java ... (Learn Java in a day)
To comprehend any programming language, there are several kinds of comments which are used. These comments are advantageous in the sense that they make it convenient for the programmer to grasp the logic of the program.

There are a number of keywords in the Java programming language. Remember, we cannot use these keywords as identifiers in a program. The keywords const and goto are reserved, though they are not currently used.

Java Data Types
A data type defines a set of permitted values on which legal operations can be performed. In Java, all variables need to be declared first, i.e. before using a particular variable, it must be declared in the program for the memory allocation process.

By literal we mean any number, text, or other information that represents a value. This means what you type is what you get. We will use literals in addition to variables in Java statements. While writing source code as a character sequence, we can specify any value as a literal, such as an integer.

Operators and Expressions
Operators are symbols that perform some operation on one or more operands.
Operators are used to manipulate primitive data types. Once we declare and initialize variables, we can use operators to perform certain tasks like assigning a value, adding numbers, etc. In Java, operator precedence is the evaluation order in which the operators within an expression are evaluated on a priority basis. Operators with a higher precedence are applied before operators with a lower precedence.

Java Syntax ...
There are some kinds of errors that might occur during the execution of a program. An exception is an event that occurs and interrupts the normal flow of instructions. That is, exceptions are objects that store information about the occurrence of errors.

Hello world (First java program)
Java is a high-level programming language and is used to develop robust applications. A Java application program is platform independent and can be run on any operating system.

Understanding the HelloWorld Program
The class is the building block in Java; each and every method and variable exists within a class or object (an instance of a class is called an object). The public keyword specifies the accessibility of the class.

The type of value that a variable can hold is called a data type. When we declare a variable, we need to specify the type of value it will hold along with the name of the variable.
Primitive Data Types
Declaring and Assigning values to Variables

In this section you will be introduced to the concept of arrays in the Java programming language. You will learn how arrays in Java help the programmer to organize the same type of data into an easily manageable format.
Introduction to Arrays
Structure of Arrays

Control Statements ...
To start with control statements in Java, let's have a recap of the control statements in C++. You must be familiar with the if-then statement in C++. The if-then statement is the simplest form of control flow statement. In the world of Java programming, the for loop has made life much easier.
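A for loop bundles initialization, a condition, and an iteration expression in its header. A minimal sketch (the class and method names here are just for illustration):

```java
// A minimal for loop: sum the integers 1..n.
public class ForDemo {
    static int sumTo(int n) {
        int sum = 0;
        // initialization; condition; iteration expression
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }
}
```

Calling `ForDemo.sumTo(5)` runs the body five times and returns 15.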
It is used to execute a block of code repeatedly as long as a particular condition holds. A for statement consists of three parts, i.e. initialization, condition, and iteration.

Sometimes it becomes cumbersome to write lengthy programs using if and if-else statements. To avoid this we can use switch statements in Java. The switch statement is used to select among multiple alternative execution paths.

while and do-while
Let's try to find out what a while statement does. In simpler terms, the while statement continually executes a block of statements while a particular condition is true.

Break and Continue Statements
Sometimes we use jumping statements in Java, because for, while and do-while loops alone are not always the right tool and can be cumbersome to read. Using jumping statements like break and continue, it is easier to jump out of loops and to control other areas of program flow.

To know the concept of inheritance clearly you must have an idea of classes and their features like methods, data members, access controls, constructors, and the keywords this and super.
Java does not support multiple Inheritance
Methods and Classes

While going through Java programming you have come across the word abstract many times. In the Java programming language the word abstract is used with methods and classes.

A package is a mechanism for organizing a group of related files in the same directory. In a computer system, we organize files into different directories according to their functionality, usability and category.

In this section we will learn about interfaces and marker interfaces in Java. This tutorial will clarify your questions "What is a marker interface?", "Why use a marker interface?" and "What is the difference between an abstract class and an interface?".

Exceptions are exceptional errors. Actually, exceptions are used for handling errors that occur during program execution. During the program execution, if any error occurs and you want to print your own message.....
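Printing your own message when an error occurs is done with a try/catch block. A minimal sketch (the class and method names are invented for illustration):

```java
// Catch an exception and return our own message instead of letting the
// program terminate with the default error output.
public class ExceptionDemo {
    static String safeDivide(int a, int b) {
        try {
            return "Result: " + (a / b);
        } catch (ArithmeticException e) {
            // Integer division by zero throws ArithmeticException.
            return "You cannot divide by zero!";
        }
    }
}
```

The normal flow of instructions jumps from the failing statement straight into the catch block, which is exactly the interruption described above.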
Java I/O means Java Input/Output and is part of the java.io package. This package has InputStream and OutputStream classes. A Java InputStream is for reading a stream: a byte stream or an array of bytes.
Read Text from Standard IO
Filter Files in Java
Java read file line by line
Create File in Java
Copying one file to another
Serializing an Object in Java
De-serializing an Object in java

An applet is a Java program that can be embedded into HTML pages. Java applets run in Java-enabled web browsers such as Mozilla and Internet Explorer. An applet is designed to run remotely in the client browser, so there are some restrictions on it.
Advantages of Applet
Disadvantages of Java Applet
Life Cycle of Applet
Creating First Applet Example
Passing Parameter in Java Applet
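The "Java read file line by line" topic listed above is typically done with a BufferedReader. Here is a small self-contained sketch; a StringReader stands in for the file so the example runs without one (for a real file, wrap a FileReader the same way):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Read a character stream line by line with BufferedReader.
public class LineReaderDemo {
    static int countLines(String text) {
        int count = 0;
        // try-with-resources closes the reader automatically.
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            while (reader.readLine() != null) {
                count++;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return count;
    }
}
```

`readLine()` returns null at end of stream, which is the standard loop termination test for line-by-line reading.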
<urn:uuid:ff701b9b-9ec7-4ab5-b58b-c5469c3e0840>
CC-MAIN-2013-20
http://www.roseindia.net/java/master-java/
2013-05-26T03:24:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.882939
1,819
Brown v. Board of Education (1954), Making Segregation Illegal

The dramatic civil rights and segregation battles that set the tone for much of the 1960s didn't just happen. Several events preceded those battles, perhaps none more important than the 1954 Supreme Court decision of Brown v. Board of Education. That decision overturned the 1896 Plessy v. Ferguson edict, which set the precedent for legalized segregation.

Segregation precedent prior to Brown v. Board of Education

Before the 1950s, most court cases challenging segregated schooling targeted higher education. Longtime Howard Law School dean Charles Hamilton Houston, the first African American editor of the Harvard Law Review, crafted the brilliant strategy to challenge Jim Crow's separate but equal mandate in graduate education in the 1930s when he led the NAACP Legal Defense Fund. Consequently, the NAACP legal team, which had tried many of the key segregation cases since 1935, gained momentum with these 1950 landmark decisions:

Sweatt v. Painter: Denied admission to the University of Texas School of Law in 1946 despite meeting all requirements but race, Heman Marion Sweatt pursued legal action to force the school to accept him. Because a law school admitting blacks opened in 1947 while his case was still being heard, the Texas courts upheld the University of Texas's denial of admission to Sweatt. The Supreme Court overturned the Texas courts' decision, citing that the University of Texas had substantially more professors and students plus a larger law library than the black law school, marking the first time the Court factored in issues of substantive quality and not just the existence of a separate school.

McLaurin v. Oklahoma State Regents: Although admitted to the University of Oklahoma, doctoral student George W. McLaurin was forced to sit in a designated row in class, at a separate table for lunch, and at a special desk in the library.
Oklahoma courts denied McLaurin's appeal to remove these separate restrictions. The Supreme Court overturned the lower court's decision, ruling that Oklahoma's treatment of McLaurin violated the Fourteenth Amendment, which prevents any separate treatment based on race.

In essence, these decisions undermined the rationale behind the Supreme Court's 1896 ruling in Plessy v. Ferguson: that separate facilities were equal. In June 1950, NAACP lead attorney Thurgood Marshall, who later became the first black Supreme Court Justice, convened the NAACP's board of directors and some of the nation's top lawyers to discuss the next phase of attack. They decided that the NAACP, which had already initiated some lawsuits, would pursue a full-out legal assault on school segregation.

Brown v. Board of Education and the legal strategy behind the verdict

Wanting to form a representative sample of the nation as a whole, the Supreme Court consolidated five cases to form the more popularly known Brown v. Board of Education. To get Plessy overturned, Thurgood Marshall and his team knew they had to show that segregation in and of itself actually harmed black children. To do so, he relied on the research of Dr. Kenneth Clark and his wife Mamie Phipps Clark, the first and second African Americans to receive doctorates in psychology from Columbia University.

To figure out how black children saw themselves, the Clarks placed white and black dolls before black children and asked them to identify the nice and bad doll as well as choose the one most like them. Most children identified the white doll as nice and the black doll as bad even when they identified themselves with the black doll. Based on these findings, the Clarks concluded that black children had impaired self-images.

Few expected the unanimous decision finally delivered on May 17, 1954. "[I]n the field of public education the doctrine of 'separate but equal' has no place," ruled the Supreme Court.
"Separate educational facilities are inherently unequal." A year later, on May 31, 1955, in the case known as Brown II, the Court established guidelines to desegregate all public education in the United States.

As the implementation of the Brown ruling began, the nation discovered that restoring equality and applying its principle were two different battles. Although many whites didn't fully support the Supreme Court's decision, they abided by it. Others simply refused. In parts of the South, resistance reached dramatic heights.
<urn:uuid:6f0bb6fa-0d69-4e4d-84aa-4a25188e39b7>
CC-MAIN-2013-20
http://www.dummies.com/how-to/content/brown-v-board-of-education-1954-making-segregation.navId-323312.html
2013-05-22T14:36:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963659
889
Careful thought and planning may go into a child's Halloween costume, but the excitement of the night can cause children to forget to be careful on the streets. Both children and adults need to give real attention to safety on this annual day of make-believe. And with a little extra thought and planning, we can make sure that all children have fun and safe outings on Halloween.

• Only fire-retardant materials should be used for costumes.
• Costumes should be loose, so warm clothes can be worn underneath.
• Costumes should not be so long that they are a tripping hazard. (Falls are the leading cause of unintentional injuries on Halloween.)
• Outfits should be made with light-colored materials. Strips of reflective tape should be used to make children even more visible.
• For youngsters under the age of 12, attach their names, addresses and telephone numbers (including their area code) to their clothes where it will be easily visible.
• Masks can obstruct a child's vision. Facial makeup is safer and more colorful.
• When buying special Halloween makeup, check for packages containing ingredients that are labeled "Made with U.S. Approved Color Additives," "Laboratory Tested," "Meets Federal Standards for Cosmetics," or "Non-Toxic." Follow instructions for application.
• If masks are worn, they should have nose and mouth openings and large eye holes.
• Knives, swords and other accessories should be made from cardboard or flexible materials. Do not allow children to carry sharp objects.
• Bags or sacks carried by youngsters should be light-colored or trimmed with retroreflective tape if trick-or-treaters are allowed out after dark.
• Carrying flashlights will help children see better and be seen more clearly.

On the way

Children should understand and follow these rules:
• Do not enter homes or apartments without adult supervision.
• Walk from house to house. Do not run. Do not cross yards and lawns where unseen objects or the uneven terrain can present tripping hazards.
• Walk on sidewalks, not in the street.
• Walk on the left side of the road, facing traffic, if there are no sidewalks.

Motorists should be especially alert on Halloween and know the following driving tips:
• Watch for children darting out from between parked cars.
• Watch for children walking on roadways, medians and curbs.
• Enter and exit driveways and alleys carefully.
• If you are driving children, be sure they exit on the curb side, away from traffic.
• Do not wear your mask while driving.
• At twilight or later in the evening, watch for children in dark clothing.

4-H Open House
Make plans now to attend 4-H Open House on Friday, Nov. 7. Open House is a prime opportunity to explore 4-H in general and check out more information concerning specific projects and clubs that will be represented. If you are interested in learning more about 4-H, then this is the place to be. There will be goodies for new members enrolling as well as 4-H merchandise for sale for all members and adults. So again, join us down at the fairgrounds from 4-6 p.m. for a great time. Check out our Web page at www.archuleta.colostate.edu for calendar events and information.
There are various environmental factors that can affect the performance of noise-measuring instruments.

Sound-measuring equipment should perform within design specifications over a temperature range of -20 °F to 140 °F (-29 °C to 60 °C). If the temperature at the measurement site is outside of this range, refer to the manufacturer's specifications to determine if the sound level meter or dosimeter is capable of functioning properly. Sound-measuring instruments should not be stored in automobiles during hot or cold weather, because this may cause warm-up drift, moisture condensation, and weakened batteries.

Most noise instruments will perform accurately as long as moisture does not condense or deposit on the microphone diaphragm. If excessive moisture or rain is a problem in an exposure situation, refer to the manufacturer's instructions or consult other noise professionals.

- Atmospheric Pressure
Atmospheric pressure affects the output of sound level calibrators. When checking an acoustical calibrator, always apply the corrections for atmospheric pressure that are specified in the manufacturer's instruction manual.
- In general, if the altitude of the measurement site is less than 10,000 feet above sea level, no pressure correction is needed. If the measurement site is at an altitude higher than 10,000 feet, or if the site is being maintained at a pressure greater than its surroundings (for example, in underwater tunnel construction), use the following equation to correct the instrument reading:

Air Pressure Correction Equation: C = 10 log (…), where
- C = correction, in decibels, to be added to or subtracted from the measured sound level
- t = temperature in degrees Fahrenheit
- B = barometric pressure in inches of mercury

For high altitude locations, C will be positive; in hyperbaric conditions (above atmospheric pressure), C will be negative.
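The altitude rule and the sign behavior of the correction described above can be sketched as a small helper. The function names are illustrative, and the value of C itself must come from the manufacturer's equation (only its sign behavior is given here), so it is passed in as an argument:

```python
def pressure_correction_needed(altitude_ft: float, hyperbaric: bool = False) -> bool:
    """Per the guidance above: below 10,000 ft at ambient pressure,
    no atmospheric-pressure correction is required."""
    return hyperbaric or altitude_ft > 10_000


def apply_correction(measured_db: float, correction_db: float) -> float:
    """Add the signed correction C to the measured sound level.
    C comes out positive at high altitude and negative under
    hyperbaric conditions, so addition covers both cases."""
    return measured_db + correction_db
```

For example, a reading taken in underwater tunnel construction with a manufacturer-supplied correction of -0.4 dB would be adjusted with `apply_correction(92.0, -0.4)`.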
Wind or Dust
Wind or dust blowing across the microphone of the dosimeter or sound level meter produces turbulence, which may cause a positive error in the measurement. A wind screen should be used for all outdoor measurements and whenever there is significant air movement or dust inside a building (for example, when cooling fans are in use or wind is gusting through open windows).

Certain equipment and operations, such as heat sealers, induction furnaces, generators, transformers, electromagnets, arc welding, and radio transmitters, generate electromagnetic fields that can induce current in the electronic circuitry of sound level meters and noise dosimeters and cause erratic readings. If instruments must be used near such devices or operations, the extent of the field's interference should be determined by consulting the manufacturer's instructions.

For sound level meters and noise dosimeters equipped with omnidirectional microphones, the effects of microphone placement and orientation are negligible in an environment that is typically reverberant. If the measurement site is nonreverberant and the noise source is highly directional, consult the manufacturer's literature to determine proper microphone placement and orientation.

For determining compliance with the impulse noise provision of 29 CFR 1910.95(b)(1) or 29 CFR 1926.52(e), use the unweighted peak mode setting of the sound level meter or an equivalent impulse precision sound level meter.
In 1883, German physiologist Max Rubner proposed that an animal's metabolic rate is proportional to its mass raised to the 2/3 power. This idea was rooted in simple geometry. If one animal is, say, twice as big as another animal in each linear dimension, then its total volume, or mass, is 2³ times as large, but its skin surface is only 2² times as large. Since an animal must dissipate metabolic heat through its skin, Rubner reasoned that its metabolic rate should be proportional to its skin surface, which works out to mass to the 2/3 power. In 1932, however, animal scientist Max Kleiber of the University of California, Davis looked at a broad range of data and concluded that the correct exponent is 3/4, not 2/3. In subsequent decades, biologists have found that the 3/4-power law appears to hold sway from microbes to whales, creatures of sizes ranging over a mind-boggling 21 orders of magnitude. … Rubner was on the right track in comparing surface area with volume, West argues, but an animal's metabolic rate is determined not by how efficiently it dissipates heat through its skin but by how efficiently it delivers fuel to its cells. Rubner should have considered an animal's "effective surface area," which consists of all the inner surfaces across which energy and nutrients pass from blood vessels to cells, says West. These surfaces fill the animal's entire body, like linens stuffed into a laundry machine. The idea, West says, is that a space-filling surface scales as if it were a volume, not an area. If you double each of the dimensions of your laundry machine, he observes, then the amount of linens you can fit into it scales up by 2³, not 2². Thus, an animal's effective surface area scales as if it were a three-dimensional, not a two-dimensional, structure. This creates a challenge for the network of blood vessels that must supply all these surfaces.
In general, a network has one more dimension than the surfaces it supplies, since the network's tubes add one linear dimension. But an animal's circulatory system isn't four dimensional, so its supply can't keep up with the effective surfaces' demands. Consequently, the animal has to compensate by scaling back its metabolism according to a 3/4 exponent. Though the original 1997 model applied only to mammals and birds, researchers have refined it to encompass plants, crustaceans, fish, and other organisms. The key to analyzing many of these organisms was to add a new parameter: temperature. Mammals and birds maintain body temperatures between about 36°C and 40°C, regardless of their environment. By contrast, creatures such as fish, which align their body temperatures with those of their environments, are often considerably colder. Temperature has a direct effect on metabolism—the hotter a cell, the faster its chemical reactions run. In 2001, after James Gillooly, a specialist in body temperature, joined Brown at the University of New Mexico, the researchers and their collaborators presented their master equation, which incorporates the effects of size and temperature. An organism's metabolism, they proposed, is proportional to its mass to the 3/4 power times a function in which body temperature appears in the exponent. The team found that its equation accurately predicted the metabolic rates of more than 250 species of microbes, plants, and animals. These species inhabit many different habitats, including marine, freshwater, temperate, and tropical ecosystems. … A single equation predicts so much, the researchers contend, because metabolism sets the pace for myriad biological processes. An animal with a high metabolic rate processes energy quickly, so it can pump its heart quickly, grow quickly, and reach maturity quickly. 
Unfortunately, that animal also ages and dies quickly, since the biochemical reactions involved in metabolism produce harmful by-products called free radicals, which gradually degrade cells. "Metabolic rate is, in our view, the fundamental biological rate," Gillooly says. There is a universal biological clock, he says, "but it ticks in units of energy, not units of time." … The team's master equation may resolve a longstanding controversy in evolutionary biology: Why do the fossil record and genetic data often give different estimates of when certain species diverged? … The problem is that there is no universal clock that determines the rate of genetic mutations in all organisms, Gillooly and his colleagues say. They propose in the Jan. 4 Proceedings of the National Academy of Sciences that, instead, the mutation clock—like so many other life processes—ticks in proportion to metabolic rate rather than to time. The DNA of small, hot organisms should mutate faster than that of large, cold organisms, the researchers argue. An organism with a revved-up metabolism generates more mutation-causing free radicals, they observe, and it also produces offspring faster, so a mutation becomes lodged in the population more quickly. When the researchers use their master equation to correct for the effects of size and temperature, the genetic estimates of divergence times—including those of rats and mice—line up well with the fossil record.

Friday, February 11, 2005
Animal lifespans and space-filling curves
Science News has a review article on the 3/4 law of animal lifespans and metabolism.
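The master equation described above—mass to the 3/4 power times a temperature factor with body temperature in the exponent—can be sketched numerically. The normalization constant and the activation energy below are assumed values for illustration, not figures from the article:

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant, eV/K
E_ACTIVATION_EV = 0.65         # assumed average activation energy of metabolism, eV

def metabolic_rate(mass_kg: float, body_temp_k: float, b0: float = 1.0) -> float:
    """Master-equation sketch: B = b0 * M**(3/4) * exp(-E / (k*T)).
    b0 is an arbitrary normalization constant for this illustration."""
    return b0 * mass_kg ** 0.75 * math.exp(
        -E_ACTIVATION_EV / (BOLTZMANN_EV_PER_K * body_temp_k)
    )

# At equal body temperature, a 16x heavier animal metabolizes 8x faster,
# since 16 ** 0.75 == 8; at equal mass, a warmer body runs faster.
```

The 3/4 exponent is why doubling mass raises metabolic rate by only about 68% (2^0.75 ≈ 1.68) rather than doubling it.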
Palisades State Park is located in eastern South Dakota and is filled with Sioux quartzite formations, estimated to be 1.2 billion years old, throughout the park, including in Split Rock Creek, which flows through it. The park is popular with rock climbers, who can climb the quartzite cliffs of 50 feet or higher. The picture shows King and Queen Rock, which are popular with rock climbers. Sioux quartzite comes from sandstone that was fused into blocks of solid quartz and is pink in color. Plains Indians used this quartzite for their ceremonial pipes. An American Indian quarry outside of Pipestone, Minnesota, was a trading hub whose goods have been found in archaeological digs as far away as the western side of the Rocky Mountains. Many buildings in eastern South Dakota and Minnesota were built out of the rock, including the Federal Building in Sioux Falls, South Dakota, built in 1892.
Herschel and Planck buckled up and ready
Herschel, ESA’s infrared space observatory, and Planck, ESA’s mission to look back to the dawn of time, are now integrated with the launcher, bringing them a step closer to launch on 14 May from ESA’s Spaceport in Kourou, French Guiana. Engineers began to fill Herschel’s cryostat with liquid helium at –268°C on 12 April, and by 25 April it was at a cool –271.3°C. Both Herschel and Planck have since moved several kilometres closer to the launch pad: after the integration of Planck with the launcher on 23 April, Herschel checked into its seat on their shared Ariane 5 on 30 April.

Defining the new cool
Herschel’s detectors must work at extremely low temperatures, close to absolute zero (–273.15°C, or 0 K), so that they can detect faint light from the furthest reaches of space. The cryostat houses the instruments’ detectors and keeps them cool. At launch, it will contain more than 2300 litres of superfluid helium colder than –271.8°C (2.5 K). After launch, the helium will cool further, below 1.65 K, in the first few weeks of the mission, making the instruments as sensitive as possible. Further cooling, down to 0.3 K, is required for the detectors of the Spectral and Photometric Imaging Receiver and the Photoconductor Array Camera and Spectrometer. Getting to that stage is a lengthy process that began before the launch campaign was underway.

Creating superfluid helium
Helium vacuum flasks were shipped to the launch site in Kourou in advance and decanted into smaller portable flasks. Helium filling started with ‘helium-1’; that is, the helium was liquid and slowly boiling off at ambient pressure at 4.2 K. This was used for final preparation and during hydrazine fuelling. After fuelling was completed, the helium in the cryostat was then topped up. Using a vacuum pump over several days, the temperature of the remaining helium in the cryostat was lowered, converting it to superfluid helium at about 2.17 K.
The temperature was lowered further by continued vacuum pumping until it reached about 1.7 K. About half the helium evaporated during the process, so the cryostat was continuously topped up, even after the satellite was moved to the launcher, and the vacuum pumping continued until four days before launch. The tank was then 95% to 98% full, and at a temperature of about 1.7 K. Once the fairing is installed, Herschel will be accessible only through small doors in the fairing. Engineers will flush helium gas through the cryostat’s three insulating shields. The shield cooling will stop and the doors will be closed early on the day the launcher is rolled out to the pad. This ensures the cryostat is as cold as possible.

Planck and Herschel check in
With fuelling complete, Planck was moved to the final assembly building on 22 April, 2.6 km from the launch pad. There, Planck was integrated onto the upper stage. The lower part of the Ariane 5 was assembled and prepared in parallel. On top of the main stage is the upper stage and the vehicle equipment bay (VEB), attached to a 78 cm-high, 3.9 m-diameter adapter cone composed of a strong carbon structure and two aluminium rings. This cone is the interface between the VEB and Planck. After checks, engineers installed the ‘Sylda’ unit over Planck on 27 April. The Sylda encloses Planck and carries Herschel on its forward end. The Sylda will be jettisoned after the fairing is ejected, the upper stage separates and Herschel is released. Planck will separate after the Sylda is jettisoned. After its installation on Ariane, the electrical connections for Herschel between the Sylda and the launcher were set up and verified. A nitrogen flushing system was also connected to the Sylda, to provide Planck with a dry atmosphere. On 29 April, Herschel was brought into the building for integration with the launcher. The satellite was lowered into place and its adapter ring was bolted onto the Sylda on 30 April.
The electrical connections between the adapter and the Sylda were installed and verified, followed by a check of Herschel. Notes for editors: Herschel, ESA's cutting-edge space observatory, will carry the largest, most powerful infrared telescope ever flown in space. A pioneering mission to study the origin and evolution of stars and galaxies, it will help understand how the Universe came to be what it is today. Named after the German Nobel laureate Max Planck (1858-1947), ESA's Planck mission will be the first European space observatory whose main goal is the study of the Cosmic Microwave Background or CMB – the relic radiation from the Big Bang. Observing at microwave wavelengths, Planck is the third, most advanced space mission of its kind. It will measure tiny fluctuations in the CMB with an accuracy set by fundamental astrophysical limits.
European Space Agency clears SABRE orbital engines
Skylon space plane gets its power plant
A British-built rocket/jet engine designed to enable Mach 6 flight and orbital capability has passed a key milestone, now that the European Space Agency has cleared the revolutionary cooler that fuels it. The Synergetic Air-Breathing Rocket Engine (SABRE) extracts the oxygen it needs to fly from the air itself while in jet mode, using the newly approved cooling system, which consists of 50 kilometers of 1 mm-thick tubing filled with liquid nitrogen that can cool incoming air from 1,000°C to -150°C. It's key to the success of the Skylon spaceplane project run by British engineering firm Reaction Engines.

"The SABRE engine has the potential to revolutionise our lives in the 21st century in the way the jet engine did in the 20th century. This is the proudest moment of my life," said company founder Alan Bond.

Bond has spent the last 30 years of his life working on the project, which envisages the Skylon spaceplane as a jumbo-sized craft powered by cryogenic liquid hydrogen and oxygen. The cooling system will supply oxygen for the jet portion of the flight, a Mach 5.5 climb to around 25 km, before the rocket engines kick in to push the craft into orbit. After delivering an estimated 15 tons of payload for a tenth of the cost of current systems, or taking 30 passengers for a ride, the craft de-orbits, reactivates its jets, and returns to its original runway. It needs a slightly longer than normal runway – and a tougher one, too, given its 275-ton takeoff weight – but the engine is a major step forward in fully reusable orbital delivery systems.

SABRE tests get thumbs up
The newly cleared cooling system is also at the heart of Scimitar, Reaction Engines' commercial aircraft design, which is funded in part by the EU. This eliminates the rocket stage of the design, enabling Mach 5 flights of up to 20,000 kilometers, which would cut a trip from Brussels to Sydney down to four hours.
"One of the major obstacles to developing air-breathing engines for launch vehicles is the development of lightweight high-performance heat exchangers," said Dr Mark Ford, ESA's Head of Propulsion Engineering. "With this now successfully demonstrated by Reaction Engines Ltd, there are currently no technical reasons why the SABRE engine programme cannot move forward into the next stage of development."

With clearance obtained and over 100 test runs completed, the full prototype engine can now be built, albeit with some help from investors. The company needs £250m, around 90 per cent of which is going to have to come from private investors, although the UK government might kick in for a spaceport. ®
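The temperatures quoted above (intake air cooled from 1,000°C to -150°C) let you sanity-check the precooler's job with Q = m_dot * cp * dT. The mass flow rate and the constant specific heat of air (which in reality varies with temperature) are assumed values for this back-of-envelope sketch, not figures from the article:

```python
CP_AIR_J_PER_KG_K = 1005.0  # approx. specific heat of air at constant pressure


def precooler_heat_watts(mass_flow_kg_s: float,
                         t_in_c: float = 1000.0,
                         t_out_c: float = -150.0) -> float:
    """Rough heat-removal rate for the precooler: Q = m_dot * cp * (T_in - T_out)."""
    return mass_flow_kg_s * CP_AIR_J_PER_KG_K * (t_in_c - t_out_c)


# Even at a modest assumed intake of 1 kg/s, the cooler must shed
# on the order of a megawatt of heat across its 1,150 °C temperature drop.
```

The point of the sketch is the scale: the heat load grows linearly with mass flow, so a full-size engine inhaling tens of kilograms of air per second faces tens of megawatts of cooling duty.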
Line 26: “Nothing out of fifty-nine” – 1859 marked the peak of the Colorado Gold Rush.
Line 30: “Wagon-tenting” – Referring to the style of wagon with a canvas-covered tent roof.
Line 31: “jade” – A contemptuous name for a horse; a horse of inferior breed, e.g. a cart- or draught-horse as opposed to a riding horse; a roadster, a hack; a sorry, ill-conditioned, wearied, or worn-out horse; a vicious, worthless, ill-tempered horse. (OED)
Line 32: “Squaw” – A North American Indian woman or wife (OED).
Line 50: “Arrowhead” – Could refer to Arrowhead, California, though it is probably placed for emphasis on the Native American connection (Melville’s home was called Arrowhead?).
Line 59: “Rio” – Referring to the Rio Grande river on the border of Texas and Mexico, or to Rio de Janeiro in Brazil.
Line 61: “Indiana” – “Land of the Indians” in Modern Latin.
Evangelical Synod of North America
The Evangelical Synod of North America, before 1927 the German Evangelical Synod of North America, in German (Deutsche) Evangelische Synode von Nord-Amerika, was a Protestant Christian denomination in the United States existing from the mid-19th century until its 1934 merger with the Reformed Church in the United States to form the Evangelical and Reformed Church. This church merged with the Congregational Christian Churches denomination in 1957 to create the United Church of Christ.

Centered in the Midwest, the denomination was made up of German Protestant congregations of mixed Lutheran and Reformed heritage, reflecting the 1817 union of those traditions in Prussia (and subsequently in other areas of Germany). This union, both in Germany and in the United States, was deeply influenced by pietism. The denomination accepted the Reformed Heidelberg Catechism, Luther's Small Catechism, and the Lutheran Augsburg Confession as its confessional documents; where there was disagreement, the individual believer had freedom to follow either tradition. The church eventually developed its own Evangelical Catechism, reflecting its "united" faith. In keeping with core Protestant convictions, the Bible was considered the ultimate standard of its faith.

The Evangelical Synod of North America was founded on October 15, 1840, at Deutsche Evangelische St. Johannes Gemeinde Zu Gravois Settlement, Missouri. St. Johns Evangelical United Church of Christ (as it is known today) was founded in 1838 by newly arrived German immigrants. They were living in a wilderness farming community a day's journey south of St. Louis. The small congregation built a church out of logs by hand on this hill. A memorial was erected in 1925 commemorating the founding of the Evangelical Synod of North America and still stands today in front of the church.
The denomination established Eden Theological Seminary in St. Louis, Missouri, for the training of its clergy; today, Eden remains a seminary of the United Church of Christ. In the early 20th century, the Evangelical Synod became active in the ecumenical movement, joining the Federal Council of Churches and pursuing church union. In 1934, it joined with another denomination of German background, the Reformed Church in the United States, forming the Evangelical and Reformed Church. This church united, in turn, in 1957 with the General Council of Congregational Christian Churches to form the United Church of Christ.

Notable people and congregations
The oldest Evangelical Synod congregations are believed to be Femme Osage United Church of Christ near Augusta, Missouri; Bethlehem United Church of Christ in Ann Arbor, Michigan; Saint John's-Saint Luke Evangelical and Reformed United Church of Christ in Detroit, Michigan; or The United Church in Washington, DC, each of which was founded in 1833. The oldest Lutheran church in Chicago, Illinois, was an Evangelical Synod congregation. The Deutsche Evangelische Lutherische Sankt Paulus Gemeinde (German Evangelical Lutheran St. Pauls Congregation) was founded in 1843 and is now known as St. Pauls United Church of Christ ("St. Pauls" is properly spelled without the apostrophe, reflecting its German heritage, as there is no apostrophe in the German language). Zion Evangelical and Reformed Church, St. Joseph, Missouri, was founded in 1858 by Rev. Heckmann to serve families who had come from various parts of Germany and were part Lutheran, part United and part Reformed. The new congregation was named The United Evangelical Protestant Congregation of St. Joseph, Missouri. The Zion Evangelical Church in Cleveland, Ohio, was founded in 1867. In this building the merger of the Evangelical Synod and the Reformed Church in the United States took place, 26–27 June 1934.

Reinhold Niebuhr and H. Richard Niebuhr, two siblings who developed strong reputations during the mid-20th century for their theological acumen, were both members of the Evangelical Synod and its successors.
Volatile Substance Abuse
BAMA has two resources, updated in 2007, aimed at educating adults about VSA. Hard copies are available free of charge on request; please email email@example.com indicating how many of each resource you require. Alternatively, download the pdf versions below.

What is VSA?
Volatile Substance Abuse (VSA) is the practice of inhaling common household volatile substances like glues, gases and aerosols in order to get high. It was commonly called glue sniffing in the 70s, when it first emerged as an issue in the UK, and it remains a serious social problem, mostly among young people. In any home there are around 50 products, all with a legitimate purpose, which can be abused in this way. However, VSA, unlike most drug abuse, can kill instantly, often the very first time someone tries it. At its height in the early 90s, three young people were dying each week. For more background information take a look at the Re-Solv web site at http://www.re-solv.org/.

The Department of Health, together with DFES (now DCSF) and the Home Office, set up a stakeholder group in 2005 as part of its strategy 'Out of sight – Not out of mind' (which can be accessed on the DH web site at http://www.dh.gov.uk/assetRoot/04/11/56/05/04115605.pdf). The strategy aims to tackle this complex social problem in a coordinated way, raising awareness and preventing deaths. BAMA sits on the stakeholder group and on the relevant working groups.

What do BAMA and its members do to help prevent VSA?
BAMA has been concerned about VSA since the 1970s and has been actively involved in many initiatives to educate professionals, retailers, young people and consumers in general about the hazards. For many years, Government and BAMA's policy on warning labels was not to apply them, as it might attract the attention of potential abusers to abusable products. Over the years, evidence showed that this is unlikely to be the case, and BAMA voluntarily adopted the warning 'Use only as directed.
Intentional misuse by deliberately concentrating and inhaling the contents can be harmful or fatal'. Most other abusable products remained unlabelled. In the mid-90s, a major research project was undertaken by the Department of Trade and Industry Consumer Safety Unit with the VSA Industry Forum, chaired by BAMA. The research showed a strong response from consumers that a new, clear warning was required. After careful consideration of the issues around labelling, BAMA recommends that all aerosols should be labelled on the back with the warning about the dangers of volatile solvent abuse. Because there is no information on a fatal dose or the effect of mixing products, this should be regarded as a general warning about the risks of solvent abuse. It should be on all aerosol packs, and not just those considered to be potentially abusable. The phrase Solvent Abuse Can Kill Instantly ('SACKI') in the badge format shown here should be applied to the back of all aerosols; the artwork for this logo can be requested from the Guides & Publications section of this website.

If you would like more information or support, you can contact any of the following organisations:
- Drugscope: 020 7928 1211, http://www.drugscope.org.uk/ - provides information on all aspects of drug problems
- Childline: 0800 1111 - the free 24-hour national helpline for children in trouble or danger
- Re-Solv: a national charity dedicated to the prevention of VSA, providing advice and educational materials. Telephone: 01785 817885, http://www.re-solv.org/
- Solve-It: provides support to young people, parents, guardians, carers and all those affected by volatile substance abuse; 24hr helpline: 01536 510010, http://www.solveitonline.co.uk/

Data on deaths from VSA: http://www.vsareport.org/
Technology and Technology changes
What are the types of technology changes?
There are three types of trends and conditions — technology diffusion and disruptive technologies, the information age, and increasing knowledge intensity — through which technology is significantly altering the nature of competition and, in doing so, contributing to unstable competitive environments.

Technology diffusion is the rate at which new products become available and are used. For example, when consumers switched to computers or other types of word processing technology, it caused a decrease in sales of typewriters. Perpetual innovation describes how fast and consistently new information technologies replace their predecessors. Product life cycles are shorter, and as a result these rapid diffusions of new innovations place a competitive premium on being able to quickly introduce new, innovative goods and services into the marketplace. When products become nearly impossible to differentiate because of the extensive and rapid diffusion of technologies, speed to market with innovative products may be the primary source of competitive advantage. Another indicator of rapid innovation diffusion is that today, it may take only 12 to 18 months for firms to gather information about their competitors' research and development and product decisions.

Disruptive technology is an innovation that destroys the value of an existing technology and creates a new market for a particular product or service. New markets are created by the technologies underlying the development of products such as iPods and PDAs, which, based on price or image, appeal to a different target demographic than the pioneer products of those technologies may have initially been marketed to. Products of these types are thought by some to represent breakthrough innovations. A disruptive technology can create what is essentially a new industry or can harm a learning process for a particular industry.
Some industries are able to adapt based on their superior resources, experience, and ability to gain access to the new technology through multiple sources (such as acquisitions, alliances, and ongoing internal basic research). When a disruptive technology creates a new industry, competitors usually follow. One example of disruptive innovation is Amazon.com: Amazon.com's launch created a new industry by making use of the disruptive technology we know as the internet today! In addition to making innovative use of the internet to create Amazon.com, Jeff Bezos also uses a core competence in technology to study information about its customers. These efforts result in opportunities to understand individual consumers' needs and then target goods and services to satisfy those needs. (Amazon.com: using Technology to create change.)
As a father, husband and son, Women’s History Month has special meaning to me. March is an especially appropriate time, as President Obama declared, “. . . to remember those who fought to make our freedom [to succeed] as real for our daughters as for our sons.” In my “Bread and Roses” column during March of last year, I recalled for you the Lawrence Textile Strike, the historic and successful protest movement for fair wages led by working women during January to March 1912. Yet, history informs us that, by March of the following year, the wider struggle for gender equality had shifted to the legal and political realm. This year, as House Democratic Leader Nancy Pelosi has reminded us, “We celebrate a turning point in our history: the moment, 100 years ago, when women marched down Pennsylvania Avenue to demand the vote.” Historians, of course, would also point to the earlier 1848 Seneca Falls Convention – as well as Susan B. Anthony, Elizabeth Cady Stanton, and their 1878 proposal for a Women’s Suffrage Amendment to our federal Constitution. Yet, during the 35 years that followed, the movement to acknowledge every woman’s equal right to vote gained traction only in a few of our newly formed western states – and, in response, American suffragists Alice Paul and Lucy Burns began advocating a national strategy to achieve universal women’s suffrage. Conceived to coincide with newly elected Woodrow Wilson’s upcoming Washington inauguration in March of 1913, the small group that they formed organized a March 3 “Women’s Suffrage Parade to march in a spirit of protest against the present political organization of society, from which women are excluded.” On that historic day, the gathered marchers numbered 8,000-10,000, an extraordinary number for that era. Capturing the nation’s attention, the marchers were supported by ten bands, five mounted female brigades and 26 floats. 
Sadly, according to newspaper accounts of the time, the suffragists encountered an angry, jeering response by mostly-male opponents – and more than 200 of the women marchers were hospitalized. The March 3, 1913 attack upon the women marchers evoked national outrage, becoming an eerie precursor to that later, better known clash for civil rights in March 1965—“Bloody Sunday” on Selma’s Edmund Pettus Bridge. Yet, like their future brothers and sisters in the struggle for equality, the 1913 suffragists persevered. The Suffragist Movement continued unbowed and unbroken - and on August 18, 1920, the Nineteenth Amendment to the United States Constitution became the law of our land. Upon reflection, all Americans should find these events from America’s historical journey compelling – truly American stories that are filled with inspiration, drama, conflict, courage and, ultimately, success. We could undervalue their lessons for our own time, however, if we failed to perceive the striking connections that link our lives today with those of these brave Americans from a century ago. The struggles of Lawrence textile workers reach across the decades to inform us in our own fight to secure better jobs, living wages and expanded economic opportunity. The Suffragists of 1913 created a more inclusive and promising political dynamic for our era – a new “Bread and Roses” coalition of conscience that is challenging the reactionary status quo of today. On Nov. 7, 2012, CNN’s exit polls revealed that women made up 54 percent of our voting electorate. President Obama’s 12-point 2008 gender advantage among this nation’s women had grown to an 18-point advantage last year. In Ohio, for example, the president won by 12 points among women, while losing among men. Pennsylvania revealed a 16-point gender gap for our president – support for his vision of progressive change that tipped the electoral scales and re-elected Barack Obama. 
America’s women voters, who marched to protest the inaction of President Wilson in 1913, had become a major force in supporting President Barack Obama’s mandate for constructive action in our own time. For them, and for us all, however, the struggle for security and opportunity continues. Economic data from the National Women’s Law Center confirms that our national economic recovery is reaching America’s women, but only slowly. Job losses among women serving in government have significantly offset - by 25 percent - the jobs that women have regained in the private sector. These are harsh realities that the additional budget cuts contemplated by my Republican colleagues will aggravate - just as cuts in critically important women’s health, domestic violence, and food stamp programs will hit poor women the hardest. Today, as a century ago, American women continue to make progressive history for us all, even as they struggle toward greater opportunity for themselves and their families. Their struggle for Bread and Roses continues. Yet, the lessons of our past give all of us reason for hope. Congressman Elijah Cummings (D-Md.) represents Maryland’s Seventh Congressional District in the United States House of Representatives.
Paleontologist and painter Neil Clark is dampening the hopes of Nessie believers around the world by suggesting the monster was perhaps a swimming pachyderm. Clark noticed similarities between the hump-and-trunk silhouettes of swimming Indian elephants and the serpentine shapes of 1930s Nessie descriptions and photographs, such as the famous 1934 image shown as an inset above. Why would an elephant be swimming in a chilly Scottish lake? "The reason why we see elephants in Loch Ness is that circuses used to go along the road to Inverness and have a little rest at the side of the loch and allow the animals to go and have a little swim around," Clark told CBS News. And there's one more wrinkle in this elephantine mystery. In 1933 a circus promoter in the area—acting perhaps on inside information that the monster was really a big top beast—offered a rich reward for Nessie's capture, says Clark, a curator at the Hunterian Museum and Art Gallery in Glasgow. Clark's theory is published in the current edition of the journal of the Open University Geological Society. I don’t believe for a minute that Nessie is an elephant. I also am not convinced that there is a monster in Loch Ness. Like the bigfoot stories, I’ll believe it when they produce a carcass of the creature. A giant squid carcass has been found, but so far no sea monsters or bigfoot. Link to the National Geographic paddling pachyderm story here.
Silver, always a symbol of wealth, was at one time actually that. Having no banks, people with extra coins took them to a silversmith to be turned into useful household articles. Thus, the money of their day brought not only wealth but utility and beauty into the home. From a silver coin to a spoon was a common transposition. Silver followed the changes in styles and can be found in many pleasing shapes and forms. Early American silver reflects the lives and times which produced it. Its simplicity of line and graceful forms exhibit a delightful charm. Surface decoration was in good taste, as was the superb workmanship. In making hollow ware the metal was rolled into sheets and beaten with a mallet into the desired shape. The surface decoration, if any, was then added to the article. Ornamentation took several forms: engraving, which consisted of marking with a sharp tool that removed a portion of the surface; chasing, done with tools without a cutting edge by displacing the metal through pressure; repousse, a relief decoration accomplished by hammering instead of by slow pressure, as in chasing; and piercing a pattern. The word "sterling" began appearing on silver around 1865 to denote silver up to standard. Some silver before that time was of equal quality, but it was not until the term sterling came into use that the buyer was assured of the exact quality purchased. Pure silver is considered 1000 parts fine, while coin silver is 900 parts fine; sterling silver is made up of 925 parts pure silver and 75 parts copper for strength. Sheffield plate looked like silver on the surface but actually was a sheet of copper covered on both sides with thin sheets of silver. This was the poor man's silverware. Sheffield was made in nearly all the same styles as the silver popular at the same time.
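The fineness figures above are simple ratios, and converting between parts-per-1000 and percentages is straightforward arithmetic. This short sketch just restates the article's numbers; the function names are my own, introduced for illustration.

```python
def fineness_to_percent(parts_per_1000):
    """Percent pure silver from fineness expressed in parts per 1000."""
    return parts_per_1000 / 10

def base_metal_parts(parts_per_1000):
    """Parts of base metal (copper, for strength) per 1000 parts of alloy."""
    return 1000 - parts_per_1000

# Sterling: 925 parts silver, 75 parts copper (92.5% pure)
print(fineness_to_percent(925), base_metal_parts(925))
# Coin silver: 900 parts fine (90% pure)
print(fineness_to_percent(900), base_metal_parts(900))
```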
The Sheffield that has survived the ravages of time has worn down considerably through constant handling and polishing, often to the point where the copper shows. Once this happens there is nothing that can be done, for it has lost its value and is practically worthless. Some people have had worn Sheffield replated by modern means, but of course once this is done it is no longer Sheffield, but just an old article which even lacks patina. Electroplating was invented about 1840. It became popular immediately because it brought plated silver within the price range of average people. Previous to that time silver was strictly a luxury item. The plating did not have to be thick; .001 of an inch was sufficient as long as it was of uniform thickness all over. Several bases were used for silver-plated wares: nickel silver (an alloy of copper, zinc and nickel), copper, and white metal. Often you will find the letters EPN, EPC, or EPWM on the bottom of plated silver, indicating which metal was used for the base. Unless Sheffield or plated silver is in EXCELLENT shape it is a poor buy, regardless of the price. Sterling silver can never be worn away to expose another metal, for it is all silver and therefore is a lifetime investment. If you cannot afford to buy fine old sterling or authenticated coin silver, it might be wiser to purchase sterling reproductions from one of the several firms which have been continuously producing the same good quality over a century or more, than to buy items in which the silver is already worn or wearing off. In silver, just as in everything else, you should buy the best you can afford, keeping in mind the style, age and condition. Early silver possesses a soft lustrous color and texture which is unmatched in modern pieces and should never be buffed.
The patina acquired through years of usage and polishing is best maintained by cleaning with any good quality silver polish (the author prefers a paste polish), a soft cloth or cellulose sponge, and some good, old-fashioned "elbow grease." Never use the quick-cleaning methods involving chemicals, for they destroy the oxidation (the black coloring in the fine depressions of the design) and give a harsh, tinny color to the silver. The beautiful appearance of fine silver enhances the room in which it is displayed, and improves any meal, no matter how simple. Owning and using silver is well worth the little effort needed to keep it polished.
Fuel-burning appliances typically use gas (both natural and liquefied petroleum), kerosene, oil, coal, or wood. Under certain conditions, these appliances can produce CO. However, with proper installation and maintenance, they are safe to use. CO is a colorless, odorless gas. The initial symptoms of CO poisoning are similar to the flu, and include headache, fatigue, nausea, and dizziness. Exposure to high levels of CO can cause death. CPSC recommends that the yearly professional inspection include checking chimneys, flues, and vents for leakage and for blockage by creosote and debris. Leakage through cracks or holes can cause black stains on the outside of the chimney or flue. These stains can mean that pollutants are leaking into the house. In addition, have all vents to furnaces, water heaters, boilers and other fuel-burning appliances checked to make sure they are not loose or disconnected. The CPSC also advises consumers to have appliances inspected for adequate ventilation. CPSC recommends that every home have at least one CO alarm that meets the requirements of the most recent Underwriters Laboratories (UL) 2034 standard or International Approval Services 6-96 standard. Publication date: 11/06/2000
|Environmental Changes on the Coasts of Indonesia (UNU, 1980, 53 pages)|
Although there has been geomorphological research on several parts of the Indonesian coastline, the coastal features of Indonesia have not yet been well documented. The following account - based on studies of maps and charts, air photographs (including satellite photographs), reviews of the published literature, and our own traverses during recent years - is a necessary basis for dealing with environmental changes on the coasts of Indonesia. Coastal features will be described in a counter-clockwise sequence around Sumatra, Java, Kalimantan, Sulawesi, Bali and the eastern islands, and Irian Jaya. Inevitably, the account is more detailed for the coasts of Java and Sumatra, which are better mapped and have been more thoroughly documented than other parts of Indonesia. In the course of description, reference is made to evidence of changes that have taken place, or are still in progress. Measurements of shoreline advance or retreat have been recorded by various authors, summarized and tabulated by Tjia et al. (1968). Particular attention has been given to changes on deltaic coasts, especially in northern Java (e.g. Hollerwoger 1964), but there is very little information on rates of recession of cliffed coasts. Measurements are generally reported in terms of linear advance or retreat at selected localities, either over stated periods of time or as annual averages, but these can be misleading because of lateral variations along the coast and because of fluctuations in the extent of change from year to year. Our preference is for areal measurements of land gained or lost or, better still, sequential maps showing the patterns of coastal change over specified periods. We have collected and collated sequential maps of selected sites and brought them up-to-date where possible.
Coastal changes can be measured with reference to the alignments of earlier shoreline features, such as beach ridges or old cliff lines stranded inland behind coastal plains. In Sumatra, beach ridges are found up to 150 kilometres inland. The longest time scale of practical value is the past 6,000 years, the period since the Holocene marine transgression brought the sea up to its present level. Radiocarbon dating can establish the age of shoreline features that developed within this period, and changes during the past few centuries can be traced from historical evidence on maps and nautical charts of various dates. These have become increasingly reliable over the past century, and can be supplemented by outlines shown on air photographs taken at various times since 1940. Some sectors have shown a consistent advance, and others a consistent retreat; some have alternated. A shoreline sector should only be termed "advancing" if there is evidence of continuing gains by deposition and/or emergence, and "retreating" if erosion and/or submergence are still demonstrably in progress (Fig. 4). Coastal changes may be natural, or they may be due, at least in part, to the direct or indirect effects of Man's activities in the coastal zone and in the hinterland. Direct effects include the building of sea walls, groynes, and breakwaters, the advancement of the shoreline artificially by land reclamation, and the removal of beach material or coral from the coastline. Indirect effects include changes in water and sediment yield from river systems following the clearance of vegetation or a modification of land use within the catchments, or the construction of dams to impound reservoirs that intercept some of the sediment flow. There are many examples of such man-induced changes on the coasts of Indonesia. 
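Radiocarbon ages of the kind invoked above are conventionally computed from the measured fraction of carbon-14 remaining in a sample, using the Libby mean life of 8,033 years. This is standard textbook arithmetic rather than a method detailed in this report, and the function names are my own.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years (Libby half-life of 5568 yr divided by ln 2)

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured 14C fraction."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def fraction_remaining(age_years):
    """Fraction of 14C left in a sample of the given conventional age."""
    return math.exp(-age_years / LIBBY_MEAN_LIFE)

# Shell from a beach ridge formed when the sea reached its present level
# about 6,000 years ago: roughly 47% of its original 14C remains.
print(round(fraction_remaining(6000), 2))
```

Within the ~6,000-year window mentioned in the text, the fraction remaining stays well above detection limits, which is why radiocarbon dating suits Holocene shoreline features.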
Reference will also be made to ecological changes that accompany gains or losses of coastal terrain, and to some associated features that result from man's responses to changes in the coastal environment. Incidental references to some of the coastal features of Sumatra were included in Verstappen's (1973) geomorphological reconnaissance, but there has been no systematic study of this coastline. Verstappen's geomorphological map (1:2,500,000) gives only a generalized portrayal of coastal features: it does not distinguish cliffed and steep coasts, the extent of modern beaches, fringing reefs, or mangrove areas, but it does indicate several sectors where Holocene beach ridge plains occur. Sumatra is 1,650 kilometres long and up to 350 kilometres wide, with an anticlinal mountain chain and associated volcanoes bordered to the east by a broad depositional lowland with extensive swamp areas along the Straits of Malacca. Off the west coast the Mentawai Islands constitute a "non-volcanic arc," consisting of uplifted and tilted Tertiary formations, their outer shores being generally cliffed -facing the predominant south-westerly swell transmitted across the Indonesian Ocean-while the inner shores are typically lower and more indented, with embayments fringed by mangroves. There are emerged coral reefs and beach ridges, especially on the outer shores, and the possibility of continued tilting is supported by the disappearance of islets off the coast of Simalur even within the present century (according to Craandijk 1908: quoted by Verstappen 1973). There are, however, contrasts between the islands, the relatively high island of Nias (summit 886 metres) being encircled by emerged reef terraces suggestive of uplift with an absence of tilting, while Enggano is tabular, steep-sided, and reef-fringed. Much more detailed work is needed to establish the evolution of these island coasts, and the effects of recurrent earthquakes and tsunami. 
At this stage, no information is available on rates and patterns of shoreline changes taking place here. The south-west coast of mainland Sumatra is partly steep along the fringes of mountainous spurs, and partly low-lying, consisting of depositional coastal plains. Swell from the Indonesian Ocean is interrupted by the Mentawai Islands and arrives on the mainland coast in attenuated form. It is stronger to the north of Calang, where there are surf beaches bordering the blunted delta of the Tuenom River, and south-east of Seblat, where there are steep promontories between gently curving sandy shorelines backed by beach ridges and low dunes, interrupted by such blunted deltas as the Mana, the Seblat, and the Ketuan. Coral reefs are rare along the central part of the south-west coast of Sumatra because of the large sediment yield from rivers draining the high hinterland, but to the south there are reef-fringed rocky promontories. Pleistocene and Holocene raised beaches and emerged coral reefs are also extensive, especially on headlands near Krui and Bengkulu, where reefs raised 30 metres above the present sea level have been truncated by the recession of steep cliffs. Farther south the coast shows the effects of vulcanicity on the slopes of Rajabasa. The Krakatau explosion of 1883 generated a tsunami that swept large coral boulders onshore and produced a fallout of volcanic ash that blanketed coastal features and augmented shore deposits. Near Cape Cina the steep coasts of Semangka Bay and Tabuan Island are related to en echelon fault scarps that run north-west to south-east, and the termination of the coastal plain near Bengkulu may also result from tectonic displacement transverse to this coastline. Farther north, the Indrapura River turns parallel to the coast to follow a swale behind beach ridges before finding an eventual outlet to the sea with the Batang River. 
Padang is built on beach ridges at the southern end of a coastal plain that stretches to beyond Pariaman. The extensive shoreline progradation that occurred here in the past has evidently come to an end, for there are sectors of rapid shoreline erosion in Padang Bay, where groynes and sea walls have been built in an attempt to conserve the dwindling beach. North of Pariaman the cliffed coast intersects the tuffs deposited from the Manindjau volcano, and farther north there is another broad swampy coastal plain, with associated beach ridges built by wave action reworking fluvially supplied sediment derived from the andesite cones, Ophir and Malintang, in the hinterland. Towards Sirbangis this plain is interrupted by reef-fringed headlands of andesite on the margins of a dissected Pleistocene volcano. Beach erosion has become prevalent in the intervening embayments between here and Natal, and Verstappen (1973) suggested that the swampy nature of the coastal plain here could be due to recent subsidence, which might also explain the present recession of the coast. Broader beach ridge plains occur farther north, interrupted by Tapanuli Bay, which runs back to the steep hinterland at Sibolga. Musala Island, offshore, is another dissected volcano. Next comes the broad lowland on either side of the swampy delta of the Simpan Kanang, in the lee of Banyak Island, and beyond this the coast is dominated by sandy surf beaches, backed in some sectors by dune topography, especially in the long, low sector that extends past Meulaboh. At the northern end of Sumatra the mountain ranges break up into steep islands with narrow straits scoured by strong tidal currents. Weh Island is of old volcanic rocks, terraced and tilted, with emerged coral reefs up to 100 metres above sea level. Uplifted reefs are also seen on some of the promontories of the northern Sumatran mainland. 
At Kutaraja the Aceh River has filled an intermontane trough, but the deltaic shoreline has been smoothed by waves from the north-west, coming down the Bengalem Passage between Weh and Peunasu islands, so that the mouths of distributary channels have been deflected behind sand spits and small barrier islands. Beach ridges built of fluvially supplied sediment form intersecting sequences near Cape Intem, where successive depositional plains have been built and then truncated, and there is an eastward drift of beach material along the coast towards Lhokseumawe. Within this sector Verstappen (1964a) examined the coastal plain near the mouth of the Peusangan River. He concluded that a delta had been built out north of Bireuen, only to be eroded after the Peusangan was diverted by river capture 8 kilometres to the south (Fig. 5). Following this capture, the enlarged river has built a new delta to the east. Patterns of truncated beach ridges on the coastal plain commemorate the shorelines of the earlier delta, which also retains traces of abandoned distributary channels and levees on either side of a residual creek, the Djuli. At the point of capture the Peusangan valley has since been incised about 20 metres, but the old delta was clearly built with the sea at its present level, and so piracy must have taken place within the past 6,000 years, after the Holocene marine transgression had brought the sea up to this level. The new delta has developed in two stages (A, B in Fig. 5), the first indicated by converging beach ridges on either side of an abandoned river channel, the second farther east, around the present mouth. Dating of these beach ridges could establish rates of coastal advance and retreat in this area, and show when the river piracy took place. 
South-east from Cape Diamant the low-lying swampy shores of the Straits of Malacca have sectors of narrow sandy beach interspersed with mudflats backed by mangroves, which also fringe the tidal creek systems to the rear. As the Straits narrow the tide ranges increase, and river mouths become larger, funnel-shaped estuaries bordered by extensive swamps instead of true deltas. The widest estuary is that of the Kampar River, where the tide range is sufficient to generate tidal bores that move rapidly upstream. The river channels are fringed by natural levees, and patterns of abandoned levees may be traced throughout the swamps. Locally there has been tectonic subsidence- marked by the formation of lakes amid the swamps-as on either side of the Siak Kecil River and south of the meandering Rokan estuary where lakes which formed along an abandoned river channel as it was enlarged by subsidence are now shrinking as the result of swamp encroachment. In the narrower part of the Straits of Malacca there are elongated shoal and channel systems, and some of the shoals have developed into swampy islands, as on either side of the broad estuary of the Mampar. Verstappen (1973) suggested that the Bagansiapiapi Peninsula and the islands of Rupat, Bengkalis, and Tebingtinggi may be due to recent tectonic uplift, and the Rupat and Pajang Straits to alignments of corridor subsidence. The islands have extensive swamps, but their northern and western coasts are fringed by beach ridges possibly derived from sandy material on the sea floor during the shallowing that accompanied emergence. Farther south the Indragiri and Batanghari estuaries traverse broad swamp lands, in which they have deposited large quantities of sediment derived from the erosion of tuffs from volcanoes in their headwater regions. These very broad swamp areas have developed in Holocene times with the sea at, or close to, its present level. 
The rapidity of their progradation may be related to several factors: an abundance of fluvial sediment derived from the high hinterland by runoff under perennially warm and wet conditions; the luxuriance of swamp vegetation, which has spread rapidly forward to stabilize accreting sediment and has also generated the extensive associated peat deposits; and the presence of a broad, shallow shelf sea, on which progradation may have been aided by tectonic uplift. In eastern Sumatra, progradation appears to have been very rapid within historical times, but there is not yet sufficient information to permit detailed reconstruction and dating of the shoreline sequences. Studies of early maps, the accuracy of which is uncertain, and interpretations of descriptions by Chinese, Arab, and European travellers led Obdeijn (1941) to suggest that there had been progradation of up to 125 kilometres on the Kuantan delta since about 1600 AD. In further papers, Obdeijn (1942a, 1942b, 1943, 1944) found supporting evidence for extensive shoreline progradation along the Straits of Malacca and in southern Sumatra. In the fifteenth century Palembang, Djambi, and Indragiri were ports close to the open sea or a short distance up estuarine inlets (Van Bemmelen 1949). More recently, the shoreline of the Djambi delta prograded up to 7.5 kilometres between 1821 and 1922, while on the east coast the fishing harbour of Bagansiapiapi has silted up, and the old Sri Vijayan ports are now stranded well inland (Verstappen 1960, 1964b). Witkamp (1920) described hillocks up to 4 metres high occupied by kitchen middens containing marine shell debris and now located over 10 kilometres inland near Serdang, but these have not been dated. Tjia et al. (1968) quoted various reports of beach ridges up to 150 kilometres inland at Air Melik and Indragiri, marking former shorelines, but such features are sparser on these swampy lowlands than on the deltaic plains of northern Java.
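Figures like the Djambi delta's 7.5 kilometres of progradation between 1821 and 1922 translate directly into the linear annual averages the introduction cautions about. A quick sketch (the helper function is my own, not from the report):

```python
def mean_shoreline_rate_m_per_yr(distance_km, year_start, year_end):
    """Average linear shoreline advance (+) or retreat (-) in metres per year."""
    return distance_km * 1000 / (year_end - year_start)

# Djambi delta: up to 7.5 km of progradation over 1821-1922,
# i.e. roughly 74 metres per year on average.
print(round(mean_shoreline_rate_m_per_yr(7.5, 1821, 1922), 1))
```

As the report notes, such point rates can mislead where change varies alongshore and from year to year, which is why areal measurements or sequential maps are preferred.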
Commenting on the rarity of beach ridges, Verstappen (1973) suggested that the sandy loads of the rivers are largely deposited upstream, so that only finer sediment reaches the coast to be deposited in the advancing swamp lands. Some beach ridges were derived from sediment eroded from the margins of drier "red soil" areas (talang), particularly around former islands now encircled by swamps, as in the Mesuji district. If progradation has been aided by emergence one would expect beach ridges to be preserved as surface features, for where progradation has been accompanied by subsidence (as on most large deltas) the older beach ridges are found buried as sand lenses within the inner delta stratigraphy. The Holocene evolution of the lowlands of eastern Sumatra still requires more detailed investigation, using stratigraphic as well as geomorphological evidence. Patterns of active erosion and deposition alongside the estuaries north of Palembang have been mapped by Chambers and Sobur (1975). The changes are due partly to estuarine meandering, with undercutting of the outer banks on meander curves as the inner banks are built up. Towards the sea there has been swamp encroachment, for example along the Musi-Banjuasin estuary, which is bordered by low natural levees breached by orthogonal tributary creeks. The shoreline on the peninsula north of Sungsang is advancing seawards, and there is active progradation along much of the southern coast of Bangka Strait. Bangka Island rises to a steep-sided plateau with a granite interior: like the Riau and Lingga islands to the north, it is geologically a part of the Malaysian Peninsula. Pleistocene terraces occur up to 30 metres above present sea level on Bangka, and its northern and eastern shores have coral-fringed promontories and bays backed by sandy beach ridges, but the southern shores, bordering Bangka Strait, are low and swampy, with mangrove-fringed channels opening on to shoaly seas.
Belitung is morphologically similar, but has more exposed coasts, with sandy beach-ridge plains extensive south of Manggar on the east coast, facing the south-easterly waves from the Java Sea. Both islands have tin-bearing alluvial deposits in river valleys and out beneath the sea floor, where such valleys were extended and incised during glacial low sea-level phases and were submerged and infilled as the sea subsequently rose. South of Bangka the east-facing coast of Sumatra consists of beach ridges backed by swamps and traversed by estuaries. Lobate salients such as Cape Menjangan and Cape Serdang are beach-fringed swamps rather than deltas, but beach ridges curve inland behind swamps on either side of the Tulangbawang River, where progradation has filled an estuarine gulf. At Telukbetung the lowlands come to an end as mountain ranges intersect the coast in steep promontories bordering Sunda Strait. In 1883 the explosion of Krakatau, an island volcano in Sunda Strait (Fig. 6), led to the ejection of about 18 cubic kilometres of pumice and ash, leaving behind a collapsed caldera of irregular outline, more than 300 metres deep in places and about 7 kilometres in diameter (Fig. 7). The collapse caused a tsunami up to 30 metres high on the shores of Sunda Strait and surges of lesser amplitude around much of Java and Sumatra (Verbeek 1886). Marine erosion has cut back the cliffs produced by the explosive eruption: at Black Point on Pulau Krakatau-Ketjil, cliffs cut in pumice deposited during the 1883 eruption had receded up to 1.5 kilometres by 1928 (Umbgrove 1947). Since 1927 a new volcanic island, Anak Krakatau, has been growing in the centre of the caldera, with phases of rapid enlargement and outward progradation in the 1940s and early 1960s (Zen 1969).
Sunda Strait is bordered by volcanoes, the coast consisting of high volcanic slopes, with sectors of coral reef, some of which have developed rapidly in the century since the Krakatau explosion destroyed or displaced their predecessors. Panaitan Island consists of strongly folded Tertiary sediments, with associated volcanic rocks, and has a sandy depositional fringe around much of its shoreline. Similar rocks form the higher western part (Mount Payung) of the peninsula of Ujong Kulon, the rest consisting of a plateau of Mio-Pliocene sedimentary rocks. This peninsula is a former island, attached to the mainland of Java by a depositional isthmus (Verstappen 1956). It is cliffed on its south-western shores, but the southern coast has beaches backed by parallel dune ridges up to 10 metres high, covered by dense Pandanus scrub, the beach curving out to attach a coral island at Tereleng as a tombolo. The north-west coast has cliffs up to 20 metres high, passing into bluffs behind a coral reef that lines the shore past Cape Alang and into Welkomst Bay. Volcanic ash and stranded coral boulders on and behind this reef date from the Krakatau explosion, when a tsunami washed over this coast. Verstappen (1956) found that notches up to 35 centimetres deep had been excavated by solution processes and surf swash on the coral boulders thrown up onto this shore in 1883. This is rapid compared with solution notching measured by Hodgkin (1970) at about 1 millimetre per year on tropical limestone coasts. Within Welkomst Bay there are mangrove sectors, prograding rapidly on the coast in the lee of the Handeuleum reef islands. The geomorphological features of Sunda Strait deserve closer investigation, with particular reference to forms that were initiated by catastrophic events almost a century ago (cf. Symons 1888). An island about 1,000 kilometres long and up to 250 kilometres wide, Java is threaded by a mountain range which includes several active volcanoes.
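The rate comparison in the last paragraph can be made explicit. A minimal back-of-envelope sketch, assuming the notches developed over the interval between the 1883 emplacement of the boulders and Verstappen's 1956 observations:

```python
# Back-of-envelope comparison of notch-deepening rates.
# Boulders were emplaced by the 1883 tsunami; notches up to 35 cm
# deep were reported by Verstappen (1956).
notch_depth_mm = 350
elapsed_years = 1956 - 1883          # ~73 years (assumed interval)
ujong_kulon_rate = notch_depth_mm / elapsed_years

# Hodgkin (1970): ~1 mm/year on tropical limestone coasts.
hodgkin_rate = 1.0

print(f"{ujong_kulon_rate:.1f} mm/yr vs {hodgkin_rate:.1f} mm/yr")
```

On these figures the boulders were notched at roughly 4.8 millimetres per year, several times Hodgkin's benchmark rate.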
To the north are broad deltaic plains on the shores of the Java Sea; to the south steeper coasts, interrupted by sectors of depositional lowland, face ocean waters. The west coast of Java is generally steep, except for the Bay of Pulau Liwungan, where the Ciliman River enters by way of a beach-ridge plain. Near Merak the coast is dominated by the steep slopes of the Karang volcano, which descend to beach-fringed shores. Panjang and Tunda islands, offshore, are of Miocene limestone, but the shores of Banten Bay are low-lying and swampy, with some beach ridges, widening to a deltaic plain of the Ciujung River. This marks the beginning of the extensive delta coastline built by the silt-laden rivers of northern Java. There are protruding lobes of deposition around river mouths and intervening sectors of erosion, especially where a natural or artificial diversion of the river has abandoned earlier deltaic lobes, or sediment yield has been reduced by dam construction. A patchy mangrove fringe persists, although mangroves have been widely removed in the course of constructing tambak (brackish-water fishponds), and in places these ponds are being eroded. Some sectors are beach-fringed, and the prevalence of north-easterly wave action generates a westward drifting of shore sediment. Fig. 8 shows the pattern of change on the north coast of West Java detected from comparisons of maps drawn between 1883 and 1885 with 1976 Landsat imagery: there has been seaward growth of land in the vicinity of river mouths, and smoothing and recession of the shoreline in intervening sectors. There was rapid progradation of the Ciujung delta after the diversion of its lower course for irrigation and flood-control purposes. Growth of the new delta led to the joining of Dua, a former island, to the Javanese mainland, and this has raised problems of wildlife management, for the island had been declared a bird sanctuary in 1973, before it became so readily accessible from the land.
Immediately to the west there have been similar changes on the Cidurian delta since 1927, when an irrigation canal was cut and a new outlet established 4.5 kilometres west of the old natural river mouth. Comparison of outlines on air photographs showed that over an 18-year period the new delta built up to 2.5 kilometres seawards at the mouth of the artificial outlet, while the old delta lobe to the east was cut back by wave action, which removed the mangrove fringe and eroded fishponds to the rear (Verstappen 1953a). Changes have also taken place on the large and complex delta built by the Cisadane River. Natural breaching of levees by floodwaters led to the development of a new outlet channel, and when delta growth began at the new outlet the delta previously built around the old river mouth began to erode, the irregular deltaic shoreline being smoothed as it was cut back (Verstappen 1953a). Numerous coral reefs and coralline islands (the Thousand Islands) lie off Jakarta Bay, and many of these have shown changes in configuration during the past century. As a sequel to the studies by Umbgrove (1928, 1929a, 1929b), Zaneveld and Verstappen (1952) traced changes with reference to maps made in 1875, 1927, and 1950. Some islands, such as Haarlem, have grown larger as the result of accretion on sand cays and shingle ramparts, but there are also sectors where there has been erosion or lateral displacement of such features on island shorelines. In general the shingle ramparts have developed around the northern and eastern margins, exposed to relatively strong wave action, while the sand cays lie to the south-west, in more sheltered positions. Verstappen (1954) found changes in the position of shingle ramparts on these islands before and after 1926, which he related to climatic variations.
In the years 1917-1926 easterly winds predominated, with the ITC (Intertropical Convergence Zone) in a relatively northerly position because the Asian anticyclone was weak, and wave action built ramparts on the northern and eastern shores; after 1926 westerly winds became dominant, with the ITC farther south because of stronger Asian anticyclonicity, and waves built new ramparts of shingle on the western shores (Verstappen 1968). There is evidence of subsidence on some of the coral islands, such as Pulau Pugak, where nineteenth-century bench-marks have now sunk beneath the sea, while others have emerged: Alkmaar Island, for example, has a reef above sea level undergoing dissection. Some of the islands have been modified by the quarrying of coral limestone for use in road-making and buildings in Jakarta. This quarrying augmented the supply of gravel to shingle ramparts, but several islands that were quarried have subsequently been reduced by erosion: Umbgrove (1947) quoted the example of Schiedam, a large low-wooded island on a 1753 chart, reduced to a small sand cay by the 1930s. The features of Jakarta Bay were described in a detailed study by Verstappen (1953a). The shores are low-lying, consisting of deltaic plains with a mangrove fringe interrupted by river mouths and some sectors of sandy beach. Between the surveys of 1869-1874 and 1936-1940 as much as 26 square kilometres of land was added to the bay shores by deltaic progradation, mainly on the eastern shores (Fig. 9). Detailed comparisons of maps made between 1625 and 1977 show the pattern of advance at Sunda Kelapa, Jakarta (Fig. 10). Inland, patterns of beach ridges mark earlier alignments of the coast during its irregular progradation, the variability of which has been related to fluctuations in the position of river mouths delivering sediment (Fig. 11). The beach ridges diverge from an old cuspate foreland at Tanjung Priok, across the deltaic plains of the Bekasi-Cikarang and Citarum rivers to the east.
Pardjaman (1977) published a map based on a comparison of nautical charts made in 1951 and 1976 which showed substantial accretion along the eastern shores of the bay, especially alongside the mouths of the Bekasi and Citarum rivers. This was accompanied by shallowing off the mouths of these rivers. Along the southern shores at Jakarta a fringe of new land a kilometre wide has been created artificially for recreational use by reclaiming the mangrove zone and adjacent mudflats. On the other hand, removal of lorry-loads of sand from Cilincing Beach resulted in accelerated shoreline erosion. In the 65 years between 1873 and 1938 the shoreline retreated about 50 metres, but in the 24 years between 1951 and 1975, with sand extraction active, it went back a further 600 metres (Pardjaman 1977). East of Jakarta the Citarum River drains an area of about 5,700 square kilometres, including mountainous uplands, plateau country, foothills, and a wide coastal plain, with beach ridges up to 12 kilometres inland. It has built a large delta (Fig. 12), which in recent decades has grown north-westwards at Tanjung Karawang, with subsidiary growth northwards and southwards at the mouths of the Bungin and Blacan distributaries. At present the river heads north from Karawang and swings to the north-west at Pingsambo, but at an earlier stage it maintained a northward course to build a delta in the Sedari sector. This has since been cut back, leaving only a rounded salient, along the shores of which erosion is continuing (Plate 1). The shores are partly beach-fringed, the beaches showing the effects of westward longshore drifting, which builds spits that deflect creek mouths in that direction. Eroding patches of mangrove persist locally and, north of Sungaibuntu, there is erosion and dissection of fishponds (Plate 2).
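Pardjaman's figures for Cilincing Beach imply a striking acceleration in the average retreat rate, which can be checked with simple arithmetic. A rough sketch using only the distances and intervals quoted above:

```python
# Average shoreline retreat at Cilincing Beach (figures from Pardjaman 1977).
retreat_before_m, years_before = 50, 1938 - 1873    # before sand extraction
retreat_after_m, years_after = 600, 1975 - 1951     # with sand extraction active

rate_before = retreat_before_m / years_before   # metres per year
rate_after = retreat_after_m / years_after      # metres per year

print(f"before: {rate_before:.2f} m/yr, after: {rate_after:.1f} m/yr")
```

On these figures the average retreat rate rose from under a metre per year to 25 metres per year, a roughly thirty-fold acceleration.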
According to Verstappen (1953a) the Citarum delta prograded by up to 3 kilometres between 1873 and 1938, although sectors of the eastern shore of Jakarta Bay retreated by up to 145 metres. After the completion of the Jatiluhur Dam upstream in 1970, a marked slackening of the rate of progradation of the deltaic shoreline was noted at the mouth of the Citarum. By contrast, growth on the neighbouring Bekasi delta accelerated after 1970. It was concluded that dam construction had diminished the rate of sediment flow down the Citarum River because of interception of silt in the impounded reservoir, whereas the sediment yield from the undammed Bekasi River had increased. Such reduction of the rate of progradation has been widely recognized on many deltaic shorelines following dam construction within their catchments, and the onset of delta shoreline erosion is a phenomenon that has also been documented widely around the world's coastlines (Bird 1976). There is little doubt that the rate and extent of delta shoreline progradation will diminish, and that shoreline erosion will accelerate and become more extensive, as further dams are built in the catchments of the rivers of northern Java. This will be accompanied by increasing penetration of brackish water into the river distributaries and the gradual spread of soil salinization into deltaic lands. East of the Citarum delta are the extensive depositional plains built up by the Cipunegara River (Fig. 13). The Cipunegara has a catchment of about 1,450 square kilometres, with mountainous headwater regions, carrying relics of a natural deciduous rain forest and extensive tea plantations; a hilly central catchment with teak forest, rubber plantations, and cultivated land; and a broad coastal plain bearing irrigated rice-fields. The river meanders across this plain, branching near Tegallurung, where the main stream runs northwards and a major distributary, the Pancer, flows to the north-east.
An 1865 map shows the Cipunegara opening through a large lobate delta, the Pancer having a smaller delta to the east, but when topographical maps were made in 1939 the Pancer had developed two large delta lobes extending 3 to 4 kilometres out into the Java Sea, while the Cipunegara delta had been truncated, with shoreline recession of up to 1.5 kilometres. Aerial photographs taken in 1946 showed further advance on the Pancer delta, and continued smoothing of the former delta lobe to the west (Hollerwöger 1964). Tjia et al. (1968) confirmed this sequence with reference to the pattern of beach ridges truncated on the eastern shores of Ciasem Bay, and the 1976 Landsat pictures show that a new delta has been built out to the north-east (Fig. 14). Along the coast the mangrove fringe (mainly Rhizophora) has persisted on advancing sectors but elsewhere has been eroded or displaced by the construction of fishponds. Third in the sequence of major deltas east of Jakarta is that built by the Cimanuk River (Fig. 15). The Cimanuk and its tributaries drain a catchment of about 3,650 square kilometres, the headstreams rising on the slopes of the Priangan mountains and the Careme volcano, which carry rain forest and plantations. There has been extensive soil erosion in hilly areas of the central catchment following clearance of the forest and the introduction of grazing and cultivation, particularly in the area drained by the Cilutung tributary (Van Dijk and Vogelzang 1948). The Cimanuk thus carries massive loads of silty sediment down to the coast: of the order of 5 million tonnes a year (Tjia et al. 1968). The broad coastal plain bears extensive rice-fields, with fishponds and some residual mangrove fringes along the shoreline to the north. The river meanders across this plain, the distributary Rambatan diverging north-westwards near Plumbon.
Hollerwöger (1964) traced changes on the delta shoreline with reference to maps made in 1857, 1917, and 1935, and air photographs taken in 1946. Examination of beach-ridge patterns, marking successive shorelines, shows that before 1857 the Cimanuk took a more northerly course and built a delta lobe (Fig. 16). By 1857 this was in course of truncation, and the Cimanuk mouth had migrated westwards to initiate a new deltaic protrusion. Between 1857 and 1917 large delta lobes were built by the Cimanuk and the Rambatan, but an irrigation channel, the Anyar Canal, had been cut from Losarang to the coast, diminishing the flow to the Rambatan, and a new delta began to grow at the canal mouth, out into the embayment between the Cimanuk and Rambatan deltas. By 1935 this embayment had been filled, the shoreline having advanced about 6 kilometres in 17 years, while erosion had cut back the adjacent Rambatan delta. Continued growth occurred at the mouths of the Anyar Canal and the Cimanuk between 1935 and 1946, by which time the Rambatan delta shoreline had retreated up to 300 metres. During a major flood in 1947 the Cimanuk established a new course north-east of Indramayu, and a complex modern delta has since grown here (Plate 3). Stages in the evolution of this modern delta are shown in Fig. 17. At first there was only a single channel, but three main distributaries (the Pancer Balok, Pancer Payang, and Pancer Song) have developed as the result of levee crevassing, and each of these shows further bifurcations resulting from channel-mouth shoal formation, as well as the cutting of artificial lateral outlet channels (Tjia 1965; Hehanussa et al. 1975; Hehanussa and Hehuwat 1979). Since 1974 the Pancer Balok has replaced the Pancer Payang as the main outlet. Erosion has continued on the northern lobe, where the present coastline shows an enlargement of tidal creeks, probably the result of compaction subsidence.
On the east coast, south of Pancer Song, there has been erosion in recent decades. Sand drifting northwards has been intercepted by the oil terminal jetty at Balongan, and the shoreline north of the jetty is retreating rapidly. According to Purbohadiwidjojo (1964) Cape Ujung, to the south, was an ancient delta lobe, but there is no evidence that any channel led this way. Tjia (1965) suggested that it might be related to a buried reef structure, but there is no evidence of this either. In fact, the cuspate promontory is situated where one of the earlier beach ridges has been truncated by the present shoreline. Patterns on the 1976 Landsat picture suggest that the cape is at the point of convergence of two current systems in the adjacent sea area, but it is not clear whether this pattern is a cause or a consequence of the present coastal configuration. Although it has only a relatively small catchment (250 square kilometres), the Bangkaderes has built a substantial delta (Fig. 18) on the coast south-east of Cirebon. This is because of its large annual sediment load, derived from a hilly catchment where severe soil erosion has followed forest clearance and the introduction of farming. An 1853 map showed a small lobate delta, but by 1922 two distributary lobes had been built, advancing the shoreline by up to 2.7 kilometres. Air photographs taken in 1946 show continued enlargement of the eastern branch, extended by up to 1.8 kilometres seawards, and erosion of the western branch, which no longer carried outflow (Hollerwöger 1964). A few kilometres to the east are the Sanggarung and Bosok deltas (Fig. 19). The Sanggarung has a catchment of 940 square kilometres, and rises on the slopes of the volcanic Mt. Careme. The headwater regions are steep, with forest and partly farmed land, and the coastal plain consists largely of rice-fields, with fishponds to seaward and some mangrove fringes.
An 1857 survey showed a delta built out north-eastwards along the Bosok distributary, and between 1857 and 1946 deposition filled in the embayment to the east, on either side of the Sebrongan estuary, and there was minor growth on the Bosok delta; to the north-west the Sanggarung built out a major deltaic feature, with several distributaries leading to cuspate outlets. The coastal lowland here has thus shown continuing progradation of a confluent delta plain, without the alternations that occur as the result of natural or artificial diversion of river mouths (Hollerwöger 1964). The Pemali delta (Fig. 20) also showed consistent growth between an 1865 survey, 1920 mapping, and 1946 air photography (Hollerwöger 1964). The river drains a catchment of about 1,200 square kilometres, with forested mountainous headwater regions and extensive hilly country behind the swampy coastal plain. The delta grew more rapidly between 1920 and 1946 than it had over the 56 years preceding the 1920 survey, possibly because of accelerated soil erosion in hilly country as the result of more intensive farming. The growth of the Comal delta to the east has shown fluctuations (Fig. 21). When it was mapped in 1870 the Comal (catchment area about 710 square kilometres) was building a lobate delta to the north-west, but by 1920 growth had taken place along a more northerly distributary. The river then developed an outlet towards the north-east, leading to the growth of a new delta in this direction by the time air photographs were taken in 1946. The earlier lobes to the west had by then been truncated. In this, as in the other north Java deltas, growth accelerated after 1920, probably as a result of increasing soil erosion due to intensification of farming within the hilly hinterland (Hollerwöger 1964). The Bodri delta (Fig. 22) is the next in sequence. The Bodri River rises on the slopes of the Prahu volcano, and drains a catchment of 640 square kilometres.
Again the mountainous headwater region backs a hilly area, with a depositional coastal plain, mainly under rice cultivation. An 1864 survey showed the Bodri opening to the sea through a broad lobate delta, which had grown northwards to Tanjung Korowelang at the mouths of two distributaries when it was remapped in 1910. Thereafter a new course developed, probably as the result of canal-cutting to the north-east, and by 1946, when air photographs were taken, a major new delta had formed here, prograding the shoreline by up to 4.2 kilometres. Meanwhile, the earlier delta at Tanjung Korowelang had been truncated and the shoreline smoothed by erosion (Hollerwöger 1964). East of Semarang large-scale progradation is thought to have taken place in recent centuries. Demak, a sixteenth-century coastal port, is now about 12.5 kilometres inland behind a prograded deltaic shoreline. Continuing progradation is indicated by the small delta growing at the mouth of a canal cut from the River Anyar to the sea, but otherwise the coastline north to Jepara is almost straight at the fringe of a broad depositional plain. According to Niermeyer (1913, quoted by Van Bemmelen 1949) the Muria volcano north-east of Demak was still an island in the eighteenth century, when seagoing vessels sailed through the strait that separated it from the Rembang Hills, a strait now occupied by marshy alluvium. This inference, however, needs to be checked by geomorphological and stratigraphical investigations. The shoreline of the Serang delta, south of Jepara, changed after the construction of the Wulan Canal in 1892, which diverted the sediment yield of the Kedung River to a new outlet, around which a substantial new delta has been formed. In 1911 this was of cuspate form, but by 1944 it was elongated, and by 1972 it had extended in a curved outline northwards, branching into three distributaries (Fig. 23).
Between 1911 and 1944 the new delta gained 297 hectares, and from 1944 to 1972 a further 385 hectares, including beach-ridge systems and a seaward margin adapted for brackish-water fishponds. Beyond Jepara the coast steepens on the flanks of Muria, but the shores are beach-fringed rather than cliffed. To the east the Juwana River opens on to the widening deltaic plain behind Rembang Bay, but at Awarawar the coast consists of bluffs cut in Pliocene limestone. Tuban has beaches and low dunes of quartzose sand, supplied by rivers draining sandstones in the hinterland, but otherwise the beaches on northern Java are mainly of sediments derived from volcanic or marine sources. Hilly country continues eastwards until the protrusion of the Solo River delta. The modern Solo delta (Fig. 24) has been built out rapidly from the coast at Pangkah since a new artificial outlet from this river was cut at the beginning of the present century (Verstappen 1977). Comparisons of the outlines of the Solo delta shown on 1:50,000 topographical maps made in 1915 and 1936 and on air photographs taken in 1943 and 1970 indicated seaward growth of 3,600 metres between 1915 and 1936, a further 800 metres between 1936 and 1943, and 3,100 metres between 1943 and 1970; in areal terms the delta increased by 8 square kilometres in the first period, 1 square kilometre in the second, and a further 4 square kilometres in the third (Verstappen 1964a, 1977). The rate of progradation of such a delta depends partly on the configuration of the sea floor, for as the water deepens offshore a greater volume of sediment is required to produce the same increase in surface area.
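The survey figures above reduce to mean annual rates, which show that the linear advance per year was broadly similar in the later two intervals even though the interval totals differ. A small sketch using only the figures quoted above:

```python
# Mean annual growth of the Solo delta (figures from Verstappen 1964a, 1977).
# Each tuple: (start year, end year, seaward growth in m, area gained in km2).
intervals = [
    (1915, 1936, 3600, 8.0),
    (1936, 1943, 800, 1.0),
    (1943, 1970, 3100, 4.0),
]

for start, end, growth_m, area_km2 in intervals:
    years = end - start
    print(f"{start}-{end}: {growth_m / years:.0f} m/yr, "
          f"{area_km2 / years:.2f} km2/yr")
```

On this reading the delta advanced at roughly 171, 114, and 115 metres per year in the three intervals, so the apparently modest totals for 1936-1943 largely reflect the shortness of that interval.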
It also depends on the rate of fluvial sediment yield, which has here increased following deforestation and intensified land use within the catchment, so that larger quantities of silt and clay have been derived from the intensely weathered volcanic and marly outcrops in the hinterland: the average suspended sediment load is 2.75 kilograms per cubic metre. Much of the silt has been deposited to form levees, while the finer sediment accumulates in bordering swamps. The features of this delta include a relatively smooth eastern shoreline backed by parallel beach ridges and fronted by sand bars, the outlines determined by north-easterly wave action during the winter months. As this is also the dry season, there has been a tendency for distributaries and creeks formed on the eastern side of the Solo to be blocked off by wave deposition and silted up, the outcome being that the channels opening north-westwards have persisted to carry the bulk of the discharge and sediment yield from the Solo in the wet season, so that the delta has grown more rapidly in this direction. Mangroves (mainly Rhizophora spp.) are patchy and eroded on the eastern shore, but broad and spreading seawards between the distributary mouths on the more sheltered western shore. The tide range is small (less than 1 metre), but at low tide the mudflats exposed on the western shores are up to 200 metres wide. The rapid growth of such a long, narrow delta, protruding more than 20 kilometres seawards, is related partly to the shallowness of the adjacent sea and the consequent low wave-energy conditions, and partly to the predominance of clay in the deltaic sediment, which is sufficiently cohesive to form persistent natural levees projecting out into the Java Sea. Between 1915 and 1936 there was some lateral migration of the Solo River, marked by undercutting of banks on the outer curves of meanders, and a new outlet channel (3 in Fig.
24) was initiated, probably as the result of flood overflow and levee crevassing on the meander curve. A small delta formed here, but by 1970 it had been largely eroded, leaving only a minor protuberance on an otherwise smoothly prograded eastern coast. The effects of canal construction are well illustrated where a channel, cut between 1936 and 1943 from a distributary (2 in Fig. 24) to irrigate rice-fields, increased drainage into an adjacent creek (5 in Fig. 24), which then developed levees that grew out seawards. However, by 1970 this, too, had been cut back. A similar development farther south (4 in Fig. 24) converted a creek into a minor distributary of the Solo, with its own sub-delta lobe by 1943, but progradation of mangrove swamps (largely replaced by fishponds) has proceeded rapidly on this part of the western coastline, and by 1970 the distributary, although lengthened, protruded only slightly seawards. In the course of its growth, the Solo delta has incorporated the former island of Mangari, which consists of Pliocene limestone (Verstappen 1977). East of the broad funnel-shaped entrance to Surabaya Strait the Bangkalan coast of north-west Madura Island shows several small mangrove-fringed deltas on a muddy shoreline. The north coast of the island of Madura is remarkably straight, with terraces that show intermittent emergence as the result of tectonic uplift. The hinterland is steep, with areas of Pliocene limestone, but the shore is generally beach-fringed, with some minor dunes to the east. The southern coast of the island is depositional, with beaches of grey volcanic sand that culminate in a recurved spit at Padelegan. Coastal waters are muddy, but outlying islands, such as Kambing, have fringing reefs and derived beaches of pale coralline sand. Tide range increases westwards, and the Baliga River enters the sea by way of a broad, mangrove-fringed tidal estuary, bordered by swampy terrain, with a narrow beach to the west.
Surabaya Strait shows tidal mudflats, scoured channels, and estuarine inlets indicative of relatively strong current action, and there has been extensive reclamation for fishponds along the mangrove-fringed coast to the south. In the fourteenth century ships could reach Mojokerto, now 50 kilometres inland on the Brantas delta, which continues to prograde around its distributary mouths. The southern shores of Madura Strait are beach-fringed, the hinterland rising steeply to the volcanoes of Bromo and Argapura. Beach sediments are grey near the mouths of rivers draining the volcanic hinterland, pale or cream near fringing coral reefs, and white in the Jangkar sector, where quartzose sands are found. The east coast of Java is steep, with streams radiating from the Ijen volcano, but to the south a coastal plain develops and broadens. This consists of low beach ridges built mainly of volcanic materials derived from the Ringgit upland. The Sampean delta is fan-shaped, accreting on its western shores as erosion cuts back the eastern margin. The Blambangan Peninsula is of Miocene limestone, and has extensive fringing reefs backed by coralline beaches, with evidence of longshore drifting on the northern side, into the Straits of Bali. The south coast of Java is dominated by wave action from the Indonesian Ocean, receiving a relatively gentle south-westerly swell of distant origin and stronger locally generated south-easterly waves that move shore sediments and deflect river outlets westwards, especially in the dry winter season. It is quite different from the north coast of Java, being dominated by steep and cliffed sectors and long, sandy beaches rather than protruding deltas. There is very little information on the extent of shoreline changes in historical times, and we cannot accept the statement of Tjia et al. (1968, p.
26) that abrasion rates along the south coast must have been much higher than those on the deltaic northern shoreline because of the more powerful wave action from the Indonesian Ocean: changes on this rocky and sandy coast will have been relatively slow. The Bay of Grajagan is backed by a sandy barrier enclosing a river-fed estuarine lagoon system with an outlet to the sea at the western end, alongside the old volcanic promontory of Capil. Farther west the coast becomes indented, with cliffed headlands of Miocene sedimentary rock and irregular embayments, some with beaches and beach ridges around river mouths. Nusa Barung is a large island of Miocene limestone with a karstic topography and a cliffed and isletted southern coast; its outlines are related to joint patterns and southward tilting (Tjia 1962). It modifies oceanic wave patterns on the sandy shores of the broad embayment to the north in such a way as to generate longshore drifting from west to east, so that the Bondoyudo River has been deflected several kilometres eastwards to an outlet behind a barrier spit leading to a cuspate foreland with multiple beach ridges (Fig. 25). The coastal plain then narrows westwards and gives way to a steep indented coast on Miocene sedimentary formations, including the limestones of Kendeng, with bolder promontories of andesite near Tasikmadu. At Puger and Meleman there are beach-ridge systems surmounted by dunes up to 15 metres high, with a thick vegetation cover, in sequence parallel to the shoreline. These interrupt the predominantly karstic limestone coast (Plates 4, 5, and 6), with cliffed sectors and some fringing reefs, that continues westwards to Parangtritis. Near Baron the limestone cliffs are fronted by shore platforms exposed at low tide, and flat-floored notches, cut in the base of cliffs and stacks, testify to the importance of solution processes in the shaping of these features (Plate 4).
Locally, beaches of calcareous sand and gravel occupy coves, and where these occur an abrasion ramp may be seen at the rear of the shore platform. At Baron a river issues from the base of a cliff and meanders across a beach of black sand that has evidently been washed into the valley-mouth inlet by ocean waves (Plate 5), the sand having come from sea-floor deposits supplied by other rivers draining the volcanic hinterland. At Parangtritis the cliffs end, and the broad depositional plain of central south Java begins. The Opak and Progo rivers, draining the southern slopes of the Merapi volcano, are heavily laden with grey sands and gravel derived from pyroclastic materials. During floods these are carried into the sea to be reworked by wave action and built into beaches with a westward drift (Plates 7 and 8). The coastal plain has prograded, with the formation of several beach ridges separated by swampy swales. No measurements of historical changes are available, but our reconnaissance in November 1979 found evidence of sequences of localized progradation at the river mouths followed by westward distribution of part of the prograded material. It appears that the alignment of the shore is being maintained, or even advanced seawards, as the result of successive increments of fluvial sand supply. Finer sediment, silt and clay, is deposited in bordering marshes and swales, or carried into the sea and dispersed by strong wave action. On some sectors, especially near Parangtritis, the beach is backed by dune topography, typically in the form of ridges parallel to the shoreline and bearing a sparse scrub cover (Plate 9). At Parangtritis there are mobile dunes up to 30 metres high, driven inland by the south-easterly winds (Plate 10). The presence of mobile dunes, unusual in this humid tropical environment, may be due to a reduction of their former vegetation cover by sheep and goat grazing, and by the harvesting of firewood (Verstappen 1957). 
Whereas the present beach and dune systems consist of incoherent grey sand, readily mobilized by wind action in unvegetated areas, the older beach-ridge systems farther inland are of more coherent silty sand which can be used for dry-land cultivation. The silt fraction may be derived from airborne (e.g., volcanic dust) or flood-borne accessions of fine sediment, or it may be the outcome of in situ weathering of some of the minerals in the originally incoherent sand deposits. At Karangtawang the depositional lowland is interrupted by a high rocky promontory of andesite and limestone, the Karangboto Peninsula. There are extensive sand shoals off the estuary of the Centang River, which washes the margins of the rocky upland, and there appears to have been rapid progradation of the beach to the east, and also in the bay to the west, where sand has built up in front of a former sea cave which used to be accessible only by means of ropes and ladders when men descended the cliff to collect birds' nests. Rapid accretion may have been stimulated here by the catastrophic discharge of water and sediment that followed the collapse of the Sempor Dam in the hinterland in 1966. The sandy and swampy coastal plain resumes to the west of the Karangboto Peninsula, and extends past the mouth of the Serayu River. In this sector it has been disturbed by the extraction of magnetite and titanium oxide sands; in places, the beach ridges have been changed into irregular drifting dunes, while dredged areas persist as shallow lagoons. On either side of the mouth of the Serayu River the coastal plain has prograded by the addition of successive sandy beach ridges separated by marshy swales. The sediments are of fluvial origin, reworked and emplaced by wave action, and progradation has enclosed a former island as a sandstone hill among the beach ridges. According to Zuidam et al. 
(1977) the coastal plain shows a landward slope at a number of places where the streamlets flow landwards instead of seawards, and this is presumed to be due to very recent differential tectonic movements. The geomorphological contrast between the irregular deltaic coast of northern Java and the smooth outlines of depositional sectors on the south Java coast is largely due to contrasts in wave-energy regimes and sea-floor topography. The sediment loads of rivers flowing northwards and southwards from the mountainous watershed are similar, but the finer silt and clay, deposited to form deltas in the low-energy environments of the north coast, are dispersed by high wave energy on the south coast. The coarser sand fraction seen in beach ridges associated with the north coast deltas is thus concentrated in more substantial beach and dune formations on the south coast. The contrast is emphasized by the shallowness of coastal waters off the north coast, which reduces wave energy, as opposed to the more steeply shelving sea floor off the south coast, which allows larger waves to move in to the shoreline. Nevertheless, silt and clay carried in floodwaters settles in the swales between successively built beach-ridge systems along the southern coast, and in such embayments as Segara Anakan, and, as we have noted, it may also have been added to the sandy deposits of older beach ridges inland. The nature and rate of sediment yield from rivers draining to the south coast vary with the size and steepness of the catchment, with geological features such as catchment lithology, and with vegetation cover. In the Serayu River basin, deforestation has accelerated sediment yield and increased the incidence of flooding in recent years. 
Meijerink (1977) found that the annual sediment yield from catchments dominated by sedimentary rocks was ten times that of catchments with similar vegetation and land use on volcanic formations, the contrast being reflected in the nature and scale of depositional features developed at the river mouth. West of the Serayu River the sandy shoreline, backed by beach ridges, curves southwards to Cilacap, in the lee of Nusakambangan, a high ridge of limestone and conglomerate, with precipitous cliffs along its southern coastline. Extensive mangrove swamps threaded by channels and tidal creeks border the shallow estuarine embayment of Segara Anakan (Fig. 26), which receives large quantities of silty sediment from the Citanduy River. At the eastern end, strong tidal currents maintain a navigable inlet for the port of Cilacap, which stands on the sandy barrier behind a shoaly bay. A meandering channel persists westwards, leading through the mangroves to Segara Anakan, which has a larger outlet through a steep-sided strait to Penandjung Bay. Changes in the configuration of Segara Anakan between 1900 and 1964 were traced by Hadisumarno (1964), who found evidence for rapid advance of mangroves into the accreting intertidal zone. He reported surveys made in 1924, when the average depth (ignoring deeper tidal channels) was 0.5 to 0.6 metres, and 1961, when it had shallowed to 0.1 to 0.2 metres, the tidal channels having deepened. Mangrove advance is exceptionally rapid here, and much of the shallow lagoon is expected to disappear as mangroves encroach further in the next two decades. The features and dynamics of Segara Anakan are being studied in Phase II of the UN University LIPI Indonesian coastal resources management project in 1980-81. West of Segara Anakan the beach ridge plain curves out to the tombolo of Pangandaran, where deposition has tied an island of Miocene limestone (Panenjoan) to the Java mainland (Fig. 
27), and continues on to Cijulang, where the hinterland again becomes hilly. Beaches line the shore, and many of the rivers have deflected and sand-barred mouths. At Genteng a beach-ridge plain develops, curving out to a tombolo that attaches a former coralline island, and beach ridges also thread the depositional lowlands around the mouths of the Ciletuh and Cimandiri rivers flowing into Pelabuhanratu Bay. The beach ridges indicate past progradation, but no information is available on historical trends of shoreline change in this region. West of Pelabuhanratu the coast steepens, but is still fringed by surf beaches, some sectors widening into depositional coastal plains with beach and dune ridges and swampy swales, including the isthmus which ties Ujong Kulon as a peninsula culminating in Java Head.

The Indonesian coasts of Kalimantan have received very little attention from geomorphologists, and there is no information on rates of shoreline change in historical times. The western and southern coasts are extensively swampy, with mangroves along the fringes of estuaries, inlets and sheltered embayments. The hilly hinterland approaches the west coast north of Pontianak, where there are broad tidal inlets, and to the south depositional progradation has attached a number of former volcanic islands as headlands. The Pawan and the Kapuas rivers have both brought down sufficient sediment to build substantial deltas (Tjia 1963), but in general the shoreline consists of narrow intermittent sandy beaches backed by swamps, with cuspate salients in the lee of islands such as Gelam, or reefs as at Tanjung Sambar. South of Kendawagan a ridge of Triassic rocks runs out to form the steep-sided Betujurung promontory and the hills on Bawat and Gelam islands. The south coast is similar, with a number of cuspate and lobate salients, most of which are swampy protrusions rather than deltas. 
Sand of fluvial origin has drifted along the shoreline east and west from the mouth of the Siamok, to form the straight spit of Tanjung Bandaran to the east, partly enclosing mangrove-fringed Sampit Bay, and the recurved spit of Tanjung Puting to the west. Near Banjarmasin, ridges of Cretaceous and Mio-Pliocene rock run through to form the promontory of Selatan, where the swampy shores give place to the more hilly coastal country of eastern Kalimantan. The east coast has many inlets and swamp-fringed embayments, the chief contrast being the large Mahakam delta, formed downstream from Samarinda (Fig. 28). Coarse sandy sediment derived mainly from ridges and valleys in the Samarinda area is prominent in the delta, which has numerous distributaries branching among the swampy islands (Magnier et al. 1975; Allen et al. 1976). Other rivers draining to the east coast open into funnel-shaped tidal estuaries, as at Balikpapan and Sangkulirang, and Berau, Kajau, and Sesayap in the north-east (Tjia 1963); as has been noted, tide ranges are higher on the east coast of Kalimantan than on the south and west coasts. At Balikpapan, shoreline erosion has resulted from the quarrying of a fringing coral reef, but the rate and extent of this erosion have not been documented. The coasts of Sulawesi have also received little attention from geomorphologists, but it is known that this island has been tectonically active. In contrast with the low-lying swampy shores of Kalimantan there are long sectors of steep coast, often with terraced features indicating tectonic uplift or tilting, especially where coral reefs have been raised to various levels up to 600 metres above present sea level, some of them transversely warped and faulted. Rivers are short and steep, with many waterfalls and incised gorges, and there are minor depositional plains around the river mouths. 
Fringing and nearshore coral reefs are extensive, and along the shore there are sectors of beach sand, with spits and cuspate forelands, especially in the lee of offshore islands, as at Bentenan. It is likely that progradation is taking place where rivers drain into the heads of inlets and embayments, especially on the east coast, where mangrove fringes are extensive, but no details are available. Volcanic activity has modified coastal features locally, for example on Menado-tua (the active volcano off Menado in the far north of the island), and erosion has been reported at Bahu, but again there are no detailed studies. South and south-east of Sulawesi there are many uplifted reef patches and atolls, as well as islands fringed by raised reef terraces. Binongko, for example, has a stairway of 14 reef terraces, the highest 200 metres above sea level (Kuenen 1933), and Muna is a westward-tilted island with reef terraces up to 445 metres above sea level (Verstappen 1960).

Bali and Nusatenggara

The north-western coast of Bali consists of Pliocene limestone terrain, the shores having yellow beach sands and some fringing reefs. A lowland behind Gilimanuk becomes a narrowing coastal plain along the northern shore, giving place to a steeper coast on volcanic rocks near Singaraja. Out to the north, the Kangean islands include uplifted reefs and emerged atolls. In the eastern part of Bali the coast is influenced by the active volcanoes, especially Agung, which generate lava and ash deposits that move downslope and provide a source of sediment that is washed into the sea by rivers, particularly during the wet season (December to April). These sediments are then distributed by wave action to be incorporated in grey beaches. Sanur beach is a mixture of fluvially supplied grey volcanic sand and coralline sand derived from the fringing reef (Tsuchiya 1975, 1978). 
At Sengkidu the destruction of a fringing reef by collecting and quarrying of coral has led to erosion of the beach to the rear, so that ruins of a temple now stand about 100 metres offshore, indicating that there has been shoreline erosion of at least this amount in the past few decades following the loss of the protective reef (Praseno and Sukarno 1977). Similar erosion is in progress on Kuta and Sanur beaches. South of Sanur, in the lee of the broad sandy isthmus that links mainland Bali to the Bukit Peninsula (of Miocene limestone) to the south, spits partly enclose a broad tidal embayment with patches of mangrove on extensive mudflats. This peninsula has a cliffed coast with caves and notches, stacks rising from basal rock ledges and extensive fringing reefs; beaches occupy coves and south of Benoa beach deposition has resulted in the attachment of a small island to the coast as a tombolo (Plate 11). West of the isthmus, ocean waves determine the curvature of beach outlines, and there has been erosion in recent decades on either side of the protruding airport runway at Denpasar. The beach here, in the lee of a fringing coral reef, is of pale coralline sand, backed by low dunes. It gives way northwards to grey sand of volcanic origin, with beaches interrupted by low rocky promontories and shore benches. Longshore drifting to the north-west is indicated by spits that deflect stream mouths in that direction (Plate 12), and as wave energy decreases, in the lee of the Semenanjung promontory of south-eastern Java, the beaches become narrower and gentler in transverse gradient. At the north-western end of Bali the Gilimanuk spit shows several stages of growth from the coast at Cejik, to the south, interspersed with episodes of truncation. 
Verstappen (1975b) suggested that growth occurred during phases of dominance of westerly wave action and truncation when south-easterly waves were prevalent, the variation being due to wind regimes associated with long-term migrations of the ITC, but stages in the evolution of this spit have not yet been dated. Many of the features found on Bali are also found on the similar Lesser Sunda islands to the east, but few details are available. Cliffs of limestone and volcanic rock extend along the southern coasts of Lombok, Sumbawa, and Sumba, but elsewhere the coasts are typically steep rather than cliffed and often have fringing coral reefs. There are many volcanoes, some of them active: Inerie and Iya in southern Flores and Lewotori to the east have all erupted in recent times and deposited lava and ash on the coast, as has Gamkonora on Halmahera. Rivers have only small catchments, and depositional lowlands are confined to sheltered embayments, mainly on the northern shores. Terraces and emerged reefs indicative of uplift and tilting are frequently encountered on these eastern islands (Davis 1928). On Sumbawa uplifted coral reefs are up to 700 metres above sea level, attached to the dissected slopes of old volcanoes and, on Timor, reef terraces, much dissected by stream incision, attain 1,200 metres above sea level, the higher ones encircling mountain peaks that were once islands with fringing reefs or almost-atolls with reefs enclosing lagoons that had a central island. Chappell and Veeh (1978) have examined raised coral terraces on the north coast of Timor and on the south coast of the adjacent volcanic island of Atauro, where they extend more than 600 metres above sea level. Dating by Th230-U234 established a sequence of shoreline features and fringing reefs developed during Quaternary oscillations of sea level on steadily rising land margins. 
On Atauro, where the stratigraphy is very well displayed in gorge sections cut through the terrace stairway, the shoreline of 120,000 years BP is 63 metres above present sea level. Correlation with other such studies, notably in Barbados, New Guinea, and Hawaii, suggests that the world ocean level was then only 5 to 8 metres above the present, which indicates a mean uplift rate of about 0.5 metres per 1,000 years on Atauro. At Manatuto, Baucau, and Lautem in north-east Timor, dating of similar terraces indicates a similar uplift rate, but at Hau, just east of Dili, the shoreline of 120,000 years BP is only 10 metres above sea level, indicating a much slower rate of land uplift, only 2 to 4 centimetres per 1,000 years. Another emerged almost-atoll is seen on Roti, south-west of Timor, where the enclosing reefs have been raised up to 200 metres, the highest encircling interior hills of strongly folded sedimentary rocks. Kissar, north-east of Timor, has a stairway of five reef terraces, the highest at 150 metres above sea level. Leti, to the east of Timor, has been uplifted in two stages to form reef terraces 10 metres and 130 metres above sea level, and similar features are seen at 10 to 20 metres and 200 to 240 metres on the nearby island of Moa. Yamdena is an island bordered by high cliffs of coral limestone cut into the outer margins of a reef that has been raised 30 metres out of the sea. North of the Banda Sea, Seram has a coral reef 100 metres above sea level, and Ambon, which consists of two islands linked by a sandy isthmus, has reefs at heights of up to 530 metres. Gorong, south-east of Seram, is an atoll uplifted in several stages to 300 metres, and now encircled by a lagoon and a modern atoll reef. Obi and Halmahera also have upraised reef terraces up to 300 metres above sea level. 
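The uplift rates quoted for Atauro and Hau follow from simple arithmetic on the terrace elevations and the inferred palaeo-sea-level. A quick check, taking the midpoint of the 5 to 8 metre range purely for illustration:

```python
# Back-of-envelope check of the uplift rates quoted in the text.
# terrace_m: present elevation of the 120,000-year-BP shoreline;
# the world ocean then stood about 5-8 m above present (midpoint used).

def uplift_rate_m_per_ka(terrace_m, paleo_sea_level_m=6.5, age_ka=120.0):
    """Mean uplift rate in metres per 1,000 years."""
    return (terrace_m - paleo_sea_level_m) / age_ka

atauro = uplift_rate_m_per_ka(63.0)   # Atauro terrace at 63 m
hau = uplift_rate_m_per_ka(10.0)      # Hau terrace at 10 m

print(f"Atauro: {atauro:.2f} m per 1,000 years")    # ~0.47, i.e. about 0.5
print(f"Hau: {hau * 100:.1f} cm per 1,000 years")   # ~2.9, within the 2-4 cm range
```

Both results agree with the figures in the text, given the uncertainty in the palaeo-sea-level estimate.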
In the Aru Islands, Verstappen (1960) described cliffs fronted by shore platforms that had been submerged as the result of tectonic subsidence, but uplifted atolls also occur in this region. A great deal of research is required to establish the nature of coastal features in eastern Indonesia. Some of the reconnaissance accounts are misleading: cliffs have been taken as evidence of recent uplift, and mangrove-fringed embayments as indications of recent subsidence; and it is possible that too much emphasis has been given to catastrophic events, such as earthquakes, volcanic eruptions, and tsunami, in the interpretation of coastal features. Tectonic movements have undoubtedly influenced coastal changes in parts of Irian Jaya, both on steep sectors, mainly in the north, and in the extensive swampy lowlands to the south. Verstappen (1964a) compared a 1903 map of Frederik Hendrik Island (Yos Sudarso), near the mouth of the Digul River on the south coast, with maps based on air photographs taken in 1945, and found evidence of substantial progradation, which he attributed to recent uplift in a zone passing through Cape Valsch (Fig. 29). Frederik Hendrik Island is mainly low-lying, with extensive reed-swamps, and its bordering channels are scoured by strong tidal currents, but the Digul River opens into a broad estuary, and under present conditions it relinquishes most of its sediment load upstream as it traverses extensive swamps and recently subsided areas between Gantentiri and Yondom. In consequence it is not now building a delta into the Arafura Sea. On the north coast of Irian Jaya the Mamberamo has built a substantial delta, but in recent decades this has shown little growth; indeed, the western shores show creek enlargement and landward migration of mangroves, while the eastern flank is fringed by partly submerged beach ridges with dead trees, all indicative of subsidence (probably due to compaction) and diminished sediment yield from the river. 
Verstappen (1964a) related this diminished yield to an intercepting zone of tectonic subsidence that runs across the southern part of the delta, marked by a chain of lakes and swamps, including an anomalous mangrove area (Fig. 30). The largest of the lakes, Rombabai Lake, is adjacent to the levees of the Mamberamo, and at one point the subsided levee has been breached during floods and a small marginal delta has grown out into the lake. The islands west of Irian Jaya show evidence of tectonic movements, Waigeo being bordered by notched cliffs of recently uplifted reef limestone, while Kafiau is essentially an upraised almost-atoll with hills of coral limestone ringing an interior upland. In September 1979 a major earthquake (magnitude 8 on the Richter scale) disturbed the islands of Yapen and Biak, north of Irian Jaya, initiating massive landslips on steep coastal slopes, especially near Ansus on the south coast of Yapen. According to the United States Geological Survey it was the strongest earthquake in Indonesia since the August 1977 tremor on Sumbawa, which had similar effects. Tsunami generated by these and other earthquakes were transmitted through eastern Indonesia, but there have been no detailed studies of their geomorphological and ecological consequences. This review of Indonesian coastal features has indicated the variety of forms that exist within this archipelago, the best-documented sectors being the north-eastern coast of Sumatra and the north coast of Java, both of which show evidence of substantial changes within historical times. It is hoped that geomorphological studies will soon provide much more information on the other sectors, which at this stage are poorly documented and little understood.
Spelling Match It! Learning Puzzle Game

This colorful set of 3- and 4-letter puzzle cards provides children with an introduction to spelling. They learn to spell by associating the object with the word and correctly assembling the puzzle pieces. The puzzles are self-correcting.
- Spelling matching puzzle game
- 3- and 4-letter puzzle cards
- Self-correcting puzzle cards
- For ages 3 years and up
Nowadays, electron microscopes are an essential tool, especially in the field of materials science. At TU Vienna, electron beams are being created that possess an inner rotation, similarly to a tornado. These "vortex beams" cannot only be used to display objects, but to investigate material-specific properties - with precision on a nanometer scale. A new breakthrough in research now allows scientists to produce much more intense vortex beams than ever before. Quantum Tornado: the Electron as a Wave In a tornado, the individual air particles do not necessarily rotate on their own axis, but the air suction overall creates a powerful rotation. The rotating electron beams that have been generated at TU Vienna behave in a very similar manner. In order to understand them, we should not think of electrons simply as minuscule points or pellets, as in that case they could at most rotate on their own axis. Vortex beams, on the other hand, can only be explained in terms of quantum physics: the electrons behave like a wave, and this quantum wave can rotate like a tornado or a water current behind a ship's propeller. "After the vortex beam gains angular momentum, it can also transfer this angular momentum to the object that it encounters", explained Prof. Peter Schattschneider from the Institute of Solid State Physics at TU Vienna. The angular momentum of the electrons in a solid object is closely linked to its magnetic properties. For materials science it is therefore a huge advantage to be able to make statements regarding angular momentum conditions based on these new electron beams. Beams Rotate - With Masks and Screens Peter Schattschneider and Michael Stöger-Pollach (USTEM, TU Vienna) have been working together with a research group from Antwerp on creating the most intense, clean and controllable vortex beams possible in a transmission electron microscope. 
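The "quantum tornado" picture described above has a compact mathematical form. As general quantum mechanics background (not a formula specific to the TU Vienna experiment), a beam whose wavefunction carries an azimuthal phase winding has quantized orbital angular momentum about the beam axis:

```latex
\psi(r,\varphi,z) \;\propto\; f(r,z)\, e^{\,i m \varphi},
\qquad
\hat{L}_z\,\psi \;=\; -\,i\hbar\,\frac{\partial \psi}{\partial \varphi}
\;=\; m\hbar\,\psi .
```

Each electron in an m-th order vortex beam therefore carries orbital angular momentum mħ about the axis, which is the quantity the beam can transfer to a specimen.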
The first successes were achieved two years ago: at the time, the electron beam was shot through a minuscule grid mask, whereby it split into three partial beams: one turning right, one turning left and one beam that did not rotate. Now, a new, much more powerful method has been developed: researchers use a screen, half of which is covered by a layer of silicon nitride. This layer is so thin that the electrons can penetrate it with hardly any absorption, however they can be suitably phase-shifted. "After focusing using a specially adapted astigmatic lens, an individual vortex beam is obtained", explained Michael Stöger-Pollach. This beam is more intense by one order of magnitude than the vortex beams that we have been able to create to date. "Firstly, we do not split the beam into three parts, as is the case with a grid mask, but rather, the entire electron stream is set into rotation. Secondly, the grid mask had the disadvantage of blocking half of the electrons - the new special screen does not do this", said Stöger-Pollach. Thanks to the new technology, right and left-rotating beams can now be distinguished in a reliable manner - previously this was only possible with difficulty. If we now add a predetermined angular momentum to each right and left-rotating beam, the rotation of one beam is increased, while the rotation of the other beam decreases. Electron microscopes with a twist This new technology was briefly presented by the research team in the "Physical Review Letters" journal. In future, the aim is to apply the method in materials science. Magnetic properties are often the focus of attention, particularly in the case of newly developed designer materials. "A transmission electron microscope with vortex beams would allow us to investigate these properties with nanometric precision", explained Peter Schattschneider. 
More exotic applications of vortex beams are also conceivable: in principle, it is possible to set all kinds of objects in rotation - even individual molecules - using these beams, which possess angular momentum. Vortex beams could therefore also open new doors in nanotechnology.
Andrew Lang's Fairy Books or Andrew Lang's Coloured Fairy Books constitute a twelve-book series of fairy tale collections. Although Andrew Lang did not collect the stories himself from the oral tradition, the extent of his sources, who had collected them originally (with the notable exception of Madame d'Aulnoy), made them an immensely influential collection, especially as he used foreign-language sources, giving many of these tales their first appearance in English. As acknowledged in the prefaces, although Lang himself made most of the selections, his wife and other translators did a large portion of the translating and telling of the actual stories. Lang's urge to collect and publish fairy tales was rooted in his own experience with the folk and fairy tales of his home territory along the English-Scottish border. At the time he worked, English fairy-tale collections were rare: Dinah Maria Mulock Craik's The Fairy Book (1869) was a lonely precedent. When Lang began his efforts, he was fighting against the critics and educationists of the day, who judged the traditional tales' unreality, brutality, and escapism to be harmful for young readers, while holding that such stories were beneath the serious consideration of those of mature age. Over a generation, Lang's books worked a revolution in this public perception. The series was immensely popular, helped by Lang's reputation in folklore, and by the packaging device of the uniform books. The series proved of great influence in children's literature, increasing the popularity of fairy tales over tales of real life. It also inspired a host of imitators, like English Fairy Tales (1890) and More English Fairy Tales (1894) by Joseph Jacobs, and the American series edited by Clifton Johnson—The Oak-Tree Fairy Book (1905), The Elm-Tree Fairy Book (1909), The Fir-Tree Fairy Book (1912)—and the collections of Kate Douglas Wiggin and Nora Archibald Smith, among others. 
The first of his collections was the Blue Fairy Book (1889), for which Lang pulled together tales from the Brothers Grimm, Madame d
Tobacco Mosaic Virus

Host: flue-cured tobacco (Nicotiana tabacum, flue-cured type)

Description: The first symptom of this virus disease is a light green coloration between the veins of young leaves. This is followed quickly by the development of a "mosaic" or mottled pattern of light and dark green areas in the leaves. These symptoms develop quickly and are more pronounced on younger leaves. Mosaic does not result in plant death, but if infection occurs early in the season, plants are stunted. Lower leaves are subject to "mosaic burn," especially during periods of hot and dry weather. In these cases, large dead areas develop in the leaves. This constitutes one of the most destructive phases of tobacco mosaic virus infection. Infected leaves may be crinkled, puckered, or elongated.

Image type: Field
Image location: United States
Name: R.J. Reynolds Tobacco Company Slide Set
Organization: R.J. Reynolds Tobacco Company
Country: United States
This article applies to Domain Time II. Last updated: 1/25/2012

February 29th, occurring only in leap years, does not require any special handling under Windows operating systems. The calendar for that year simply includes the extra day, and all internal calculations take account of it. Since time protocols exchange information using UTC or TAI, both of which are a count of seconds since that protocol's epoch, the date is derived from the time, rather than specified. Domain Time manages the clock using UTC, and Windows is responsible for converting the count of seconds to the date that is displayed.

Domain Time learns about leap seconds from time sources announcing leap seconds (generally a GPS-connected source using either NTP or PTPv2). If a Domain Time Server knows of an upcoming leap second, it will inform clients being served by NTP and, as of version 5.2.b.20110601, the DT2 protocol. If a Domain Time Server is configured to be free-running (that is, serving the time without using any time sources), it will not know about pending or past leap seconds.

NOTE: If Domain Time is configured to obtain its time from multiple sources, all sources must agree on the upcoming leap second. Domain Time will reject leap seconds if its sources do not agree with each other.

Leap second insertions (a minute with 61 seconds) are supposed to generate a time sequence like this:

11:59:59 UTC
11:59:60 UTC  < this is the extra second
12:00:00 UTC

The Windows operating systems are not currently capable of having a minute with 61 seconds, so the 60th second is simply repeated. Domain Time accomplishes this by slewing the clock backward one full second at the designated UTC time. Since slewing is used, it takes approximately two full seconds of wall-clock time for one second of computer time to elapse. At the end of the two-second adjustment, the wall-clock time and computer time will again match (assuming the wall clock has inserted its own leap second). 
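The two-second adjustment just described can be sketched numerically. This is an illustrative model only (the function name and structure are invented for the sketch, not taken from Domain Time): computer time runs at half rate during the slew, never backward, and ends exactly one second behind a non-leap-aware count of wall-clock seconds.

```python
# Hedged sketch of a half-rate leap-second slew (illustrative only, not
# Domain Time's actual implementation). wall_s is a monotonic count of
# wall-clock seconds that does NOT include the inserted leap second.

def slewed_clock(wall_s, slew_start_s, leap_s=1.0):
    """Computer time as a function of wall-clock time, in seconds."""
    slew_end = slew_start_s + 2.0 * leap_s
    if wall_s <= slew_start_s:
        return wall_s                              # normal rate before the slew
    if wall_s <= slew_end:
        # half rate: one second of computer time per two wall-clock seconds
        return slew_start_s + (wall_s - slew_start_s) / 2.0
    return wall_s - leap_s                         # normal rate, leap absorbed

# Midway through the slew, half the leap second has been absorbed:
print(slewed_clock(11.0, slew_start_s=10.0))   # 10.5
# At the end, computer time lags the non-leap count by exactly one second:
print(slewed_clock(12.0, slew_start_s=10.0))   # 11.0
```

Note that the function is monotonically non-decreasing: time slows down during the slew but never runs backward, which is the whole point of slewing instead of stepping.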
Within the computer, time continues to progress (it doesn't go backward), but at a slower than normal rate, until the leap second has been inserted. Effectively, this means that processes will see the 60th second repeated:

11:59:59 UTC
11:59:59 UTC  < this is the extra second
12:00:00 UTC

Due to the nature of how leap seconds are calculated, a leap second deletion (a minute with only 59 seconds) has never occurred. However, NTP and PTPv2 are both capable of signaling a pending deletion if necessary, and Domain Time handles it much the same way as it handles an insertion. Instead of slowing the computer down temporarily, it speeds the computer up. The last minute of the day will end up with only 59 seconds. Processes will see the time slewing forward for approximately half of a second, after which wall clocks and computer time will again match.

Logs and Confirmation

Pending leap seconds are noted in the text log with an entry identifying the announcing source, where "source" is either an IP address or a DNS name. NTP servers may begin showing a pending leap second up to a month before it is due. Since leap seconds are defined as occurring on the last day of the month, and the flag does not include date information, the flag cannot be set more than a month ahead. Some servers will only set the flag on the day the insertion is due. Query your hardware vendor for details on your particular model's behavior. The software implementation of NTP typically uses a file referenced in ntpd.conf to specify the date of upcoming leap seconds. Hardware clocks usually get the information from GPS or radio. PTPv2 servers normally get their leap second information from GPS and require no configuration.

Domain Time notes the leap flag when it queries its time sources. If all sources agree that a leap second (either insertion or deletion) is due, Domain Time calculates the offset between the current time and the last second of the month, and schedules an event for that exact time in UTC. 
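The scheduling arithmetic (the offset from the current time to the last second of the month, in UTC) can be sketched as follows; the function name and structure are hypothetical, for illustration only:

```python
# Hypothetical sketch of the scheduling arithmetic: leap seconds occur at
# the end of the last day of a month (UTC), so the event moment is
# 23:59:59 UTC on that day. Not Domain Time's actual code.
from datetime import datetime, timedelta, timezone

def leap_event_moment(now):
    """Return the UTC datetime of the last second of now's month."""
    if now.month == 12:
        first_of_next = datetime(now.year + 1, 1, 1, tzinfo=timezone.utc)
    else:
        first_of_next = datetime(now.year, now.month + 1, 1, tzinfo=timezone.utc)
    return first_of_next - timedelta(seconds=1)

now = datetime(2012, 6, 15, 12, 0, tzinfo=timezone.utc)
event = leap_event_moment(now)
print(event)         # 2012-06-30 23:59:59+00:00
print(event - now)   # the offset from "now" to the scheduled moment
```

Because the NTP leap flag carries no date, only "this month", the end-of-month convention is what makes the scheduled moment unambiguous.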
Corrections to the clock between the scheduling and the event are automatically accounted for by the operating system, so the event will fire at the desired moment. Domain Time will note the scheduled event in the text log, saying which server informed Domain Time of the upcoming leap, the scheduled time in UTC, and the scheduled time in local time. If subsequent time checks reveal a disagreement with the previously-scheduled event, or all of Domain Time's sources do not agree, the event is cancelled. Domain Time notes this cancellation in its log.

When the event occurs, Domain Time adjusts the clock as needed, and notes the reason for the adjustment in its log. It then records the time the adjustment was made, and ignores the flag (if still set by any sources) until a minimum of 24 hours has passed. This helps prevent duplicated leap seconds caused by servers that fail to clear the flag immediately after the event has passed.

Domain Time does not preserve pending leap second information between restarts of the Domain Time service. If a leap second event begins being announced while Domain Time is not running, Domain Time will discover the information when it checks its sources after startup. If a leap second event occurs while Domain Time is not running, the event has already passed and should not be scheduled. Domain Time will account for the extra second at the startup correction.

You may disable Domain Time's advanced scheduling of leap seconds by unchecking the box on the Advanced tab of the Control Panel applet. If this box is unchecked, then Domain Time will account for the leap second the next time it checks its sources after the leap event has occurred. Correction for the extra second will be handled exactly as if the clock were really off by that amount (i.e., slewing or stepping, based on your configuration).
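The 24-hour ignore window can be sketched as a small debounce helper. The class name and structure here are assumptions for illustration, not Domain Time's implementation.

```python
# Debounce for the leap flag: after an adjustment is applied, ignore a
# still-set flag for at least 24 hours to avoid duplicated leap seconds
# from servers that are slow to clear it. Illustrative only.

class LeapFlagDebounce:
    IGNORE_WINDOW = 24 * 3600  # seconds

    def __init__(self):
        self.last_adjustment = None  # wall-clock time of last adjustment

    def record_adjustment(self, now):
        """Remember when the leap adjustment was applied."""
        self.last_adjustment = now

    def honor_flag(self, now):
        """Honor the leap flag only outside the 24-hour window."""
        if self.last_adjustment is None:
            return True
        return now - self.last_adjustment >= self.IGNORE_WINDOW
```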
If advanced scheduling is disabled, but Domain Time discovers a pending leap second when it queries its sources, it will note in the log that the leap second is pending but not scheduled. Leap second log entries are "Information" level. You do not need to change your log to trace or debug to see leap second activity.

Copyright © 1995-2013 Greyware Automation Products, Inc. All Rights Reserved. All Trademarks mentioned are the properties of their respective owners.
Name: Rough meadow-grass (rough-stalked meadow-grass, roughstalk bluegrass)

Latin name: Poa trivialis L.

Occurrence: Rough meadow-grass occurs as both an annual and perennial grass with procumbent tillers, some of which become leafy stolons. It is an indigenous grassland species that has become increasingly important as a weed of winter cereals and herbage seed crops. It is found throughout the UK in all types of grassland, especially newly established leys. It is native in open woods, marshes, ditches, damp grassland, rough and cultivated ground. The species is found in the hedge bottom and field margins as well as spreading into arable fields. It is common on moist and even on wet soils. In surveys around 1912, rough meadow-grass was found mainly on clay, loam and chalk but only occasionally on sandy soil. In 1975 it was said that 20% of arable crops in the UK were infested with rough meadow-grass. In a survey of weeds in conventional cereals in central southern England in 1982, rough meadow-grass was found in 29% of winter wheat, 14% of winter barley and 7% of spring barley. In winter oilseed rape in 1985, it was found in only 3% of crops. In a study of seedbanks in arable soils in the English midlands sampled in 1972-1973, rough meadow-grass was recorded in 19% of the fields sampled in Oxfordshire and 38% of those in Warwickshire, but never in large numbers. Seed was found in 1.5% of arable soils in a seedbank survey in Scotland in 1972-1978. It was the second most abundant grass weed in a seedbank survey in swede-turnip fields in Scotland in 1982, being found in 85% of fields sampled. Rough meadow-grass is often a colonist following sward deterioration in cultivated grassland. It is palatable to stock and was an important constituent of permanent grassland but is now little sown.

Biology: Rough meadow-grass flowers in June and is wind pollinated. Plants require a period of winter cold in order to flower.
Rough meadow-grass can produce 200 to 1,700 seeds per flower head and 29,000 seeds per plant. In a cereal crop, rough meadow-grass may produce 1,000 to 14,000 seeds per plant. Seeds are shed between June and August. Innate dormancy is short but dormancy can be enforced by seed burial. Proximal seeds are less dormant and germinate more readily than distal seeds. Seeds require light for germination, which ensures that only seed at or near the soil surface will germinate and establish. Germination on the soil surface is largely confined to the autumn after shedding. Wetting and drying of the seed enhances germination. However, germination is reduced when seeds are at a high density in the soil. Seed from grassland populations is less dormant than seed from arable situations. In grassland, peak emergence is in September. In cultivated soil, seedling emergence occurs from March to October but is affected by the depth of burial and timing of cultivations. The optimum depth of emergence is 0-10 mm and the maximum is 30 mm. Rough meadow-grass seedlings that emerge in the autumn become vernalised over the winter in preparation for flowering the following year. Plants are cold tolerant and remain winter green, but there is little growth before April. Spring-emerging seedlings are unlikely to become vernalised and will therefore not flower in the current year.

Persistence and Spread: Rough meadow-grass has relatively persistent seeds for a grass. It has been suggested that, based on seed characters, it should persist longer than 5 years in soil. Seeds have been recorded in large numbers in the soil beneath pastures even when the plant was poorly represented in the vegetation. Reproduction by seed is very important, but rough meadow-grass also has long creeping stolons that ensure vegetative spread. Rough meadow-grass seed has occurred as a contaminant of cereal seed.
Proximal seeds tend to clump together at and after dispersal due to threads at the base of the seeds. It is difficult to separate them out when they contaminate herbage seeds. The seeds are ingested by earthworms and viable seeds have been recovered in wormcast soil. Seed germination increased from 66 to 90% following passage through an earthworm.

Management: Seedlings have fibrous roots and are easily dislodged by harrowing and other cultivations. Studies have shown that cutting seedlings at the soil surface is more effective than partial burial. Complete burial, alone and after uprooting seedlings, is the most consistently effective treatment. There is the potential for recovery if seedlings are left on the soil surface or if just the roots are buried. Shoot fragments can regenerate after cultivation. Shading from the crop canopy improves the level of control. After crop harvest, ploughing buries the freshly shed seeds, but this may lead to a future weed problem. Delaying stubble cultivations allows seeds a period to germinate on the soil surface and reduces the number of viable seeds that may persist after burial. Winter cropping favours Poa species, as does increasing the water-holding capacity of soil by straw incorporation. Rough meadow-grass seedlings that emerge in spring barley do not become vernalised and will not flower and set seed. Spring barley can therefore be used as a cleaning crop, provided the rough meadow-grass plants are destroyed after harvest. A sown boundary strip around the margin of an arable field will delay but not prevent the spread of rough meadow-grass into the field. In permanent grassland, rough meadow-grass is favoured by increased soil fertility; both percentage cover and seed numbers in the soil increase. It does not persist under close mowing and is susceptible to trampling. However, rough meadow-grass will invade permanent grassland and become established under severe grazing.
Rough meadow-grass seeds are consumed by ground beetles.

Updated October 2007.

Further Information / Links: For more information on this weed, see the fully referenced review: Rough meadow-grass (79 Kb).
Ames, Ia. - An Iowa State University professor wants astronauts to go back to the moon. He says the world could be surprised by what's there: 4 billion-year-old remnants of Earth, Mars and Venus that could redefine the history of the solar system and mankind.

"There is a chance to get our hands on new, empirical evidence that isn't available on the Earth anymore," said Guillermo Gonzalez, an assistant professor of physics and astronomy who headed the research team. "That's waiting for us on the moon."

Rocks ejected intact from Earth's gravitational field by asteroid and comet impacts found their way through space to the moon, according to the team's research, in much the same way that lunar rocks have been found on Earth. A published NASA- and National Science Foundation-sponsored research paper discussing this topic speculates that astronauts could discover more samples of Mars or the first samples of early Venus.

For that possibility alone, Gonzalez says astronauts must go back despite the Columbia explosion in early February. The scientific benefits would outweigh the risks. "Finding a rock from Venus would be like finding the Hope diamond," he said. Venus itself can't be explored because of its hostile surface conditions. Information on Mars is based on so few samples that any single find would contribute huge amounts of knowledge to its history, Gonzalez said.

To determine just how much Earth rock has reached the moon, the researchers mathematically simulated the gravitational results of hundreds of particles randomly leaving the Earth. The results show seven parts of Earth per million, more abundant than diamonds on Earth, Gonzalez said.

John Armstrong, a graduate assistant at the University of Washington, is credited by Gonzalez with doing the majority of the paper's research. Armstrong said any voyages to Mars should wait.
"The fact is, the moon is right there, and it can tell us everything we need to know about the solar system," he said. Critics of the research argue that nothing could survive the violence required for a rock to be blown off a planet, exit its thick atmosphere and not be destroyed upon impacting another body. Armstrong and Gonzalez admit it is a legitimate worry. "A lot of the criticism we've been getting is, "Can this material survive impact with the moon, and could it stay there?" " Armstrong said. "Before we found Mars meteorites on the Earth, they said that couldn't File Date: 03.25.03 This data file may be reproduced in its entirety for non-commercial use. A return link to the Access Research Network web site would be appreciated. Documents on this site which have been reproduced from a previous publication are copyrighted through the individual publication. See the body of the above document for specific copyright information.
U.S. Strategic Missile and Armament Systems (1950s-60s)

Intercontinental Ballistic Missile Program Beginnings

The Minuteman program was a Cold War story, but development of the missile system offers its own history. This section explores the evolution of America's ballistic missile program, in which the Minuteman would play a vital role. By the time of the Cuban Missile Crisis in 1962 the United States had succeeded in developing nuclear missiles with intercontinental range. However, America's early forays into strategic missiles suffered from a lack of funding, bureaucratic infighting, and interagency tensions that slowed early research into missile armament systems. Although the progression from piloted weapons systems to missiles seems obvious in retrospect, that conclusion remained uncertain at the onset of the Cold War. Many high-level politicians and military officers began to think more seriously about Intercontinental Ballistic Missile (ICBM) development in response to these tensions, leading the Air Force to initiate a crash program in ICBM development through the newly formed Air Research and Development Command (ARDC). The ARDC and the new crash program built on previous missile research conducted by the Consolidated Vultee Aircraft Corporation (Convair) for Air Force contract MX-774. Convair's contract had been canceled in 1947 as part of the Air Force's post-World War II cuts in military spending. The news in 1949 that the Soviets had tested an atomic bomb sparked revived interest in air defense systems, though of course, in an age of aerial warfare, the potential for long-range Soviet strikes upon American soil had never been far from the minds of Washington strategists. "Attacks can now come across the arctic regions, as well as across oceans, and strike deep...into the heart of the country," General Carl Spaatz, commander of American strategic bombing in World War II, told a Senate Committee in 1945.
"No section will be immune," he warned, "the Pearl Harbor of a future war might well be Chicago, or Detroit, or Pittsburgh, or even Washington." North Korea's 1950 invasion of South Koreaan attack perceived by many Western strategists as part of a concerted global strategy by the Sovietsmade Western fears of attack seem all the more prescient. Air Research and Development Command The Air Force established the ARDC in 1950 specifically for development of the Air Force missile program. Many issues remained to be solved before the ICBM could get off the ground. Development of the ICBM program was hampered by resistance on the part of one branch of the Air Force, the Air Force Air Staff (Air Staff), and inefficient cooperation between different branches of the military. The Air Staff was the planning body within U.S. Air Force Headquarters. As a Major Command, the ARDC (later known as the Air Force Systems Command) was below the Air Staff in the hierarchy of the U.S. Air Force. Initially the Air Force opposed further research and development on the grounds that available technology was not advanced enough for the successful development of missiles with intercontinental range. Members of the Air Staff questioned the reliability and effectiveness of ICBMs. Additionally, the culture within the Air Force at the time favored development of bombers and the integration of missiles with aircraft development. Achievement of high rank in the service required pilot training and command of squadrons or wings, and only officers could be pilots. These flyers were thus naturally hesitant to endorse a new and potentially significant weapons system that carried the potential of diminishing the value of their skills (as pilots) to the Pentagon. Indeed, the Air Force went so far as to designate its missiles "pilotless aircraft," implicitly signifying that any real aircraft carried a human commander. 
The lack of an integrated development plan further hampered missile research and development, and budgetary issues resulting from President Truman's economy drive compounded the problems of developing the ICBM program. Only after the Air Force began to integrate its missile program with its aircraft program did it become apparent that missile development needed a separate, focused effort. The Air Force had competition in missile development from both the Army and the Navy. Missile development programs underway at the beginning of the 1950s included the Army's Redstone project, headed by Wernher von Braun, and the Jupiter Intermediate Range Ballistic Missile, a joint venture between the Army and Navy. The Air Force found itself in a position of losing its defensive capabilities, and therefore its stature in the armed forces, if it did not keep up with missile technology. Rather than allowing itself to fall behind technologically, the Air Force overcame its reticence and approved a contract with Convair in January 1951 for development of a ballistic missile carrying a heavy nuclear payload with a five thousand-mile range and a circular error probable (acceptable radius of target error) of 1,500 feet. This new missile project, known as the MX-1593 or Atlas, was largely based on Convair's earlier Air Force project, the MX-774. Convair now built on earlier engineering efforts to create the Atlas ICBM. In 1952 Trevor Gardner, Special Assistant for Research and Development to Air Force Secretary Harold E. Talbot, asked the Air Force for performance specifications and a justification of the deployment schedule for the Atlas. The response from the ARDC asserted that "the ballistic rocket appears, at present, to be the ultimate means of delivering atomic bombs in the most effective fashion." Funding for the Atlas remained limited, however, and important logistical problems had to be overcome in its development before it could meet the Air Force's requirements.
Bomb weight, maximum range, and nose cone design to withstand reentry were three formidable early problems faced by missile developers. However, scientific advances created thermonuclear devices that were lighter than earlier generations of nuclear weapons while possessing more destructive capability; in 1952 the validity of thermonuclear detonation was proven. During this same period, more powerful liquid-fuel engines became available and it became clear that ICBMs with a range of over five thousand miles could be built. The combination of more powerful engines and lighter bombs solved the problem of limited missile range. The development of a blunt, copper heat-sink in 1952 to absorb the fierce heat of the reentry vehicle solved the third problem. Now the ARDC and Convair needed to transfer these new technologies to its Atlas missile system.

The Air Staff did not agree with the ARDC on Atlas development and funding and refused to commit the necessary funds for full-scale development. The ARDC refused to give up, citing the urgent need for an ICBM in the interest of national security. The ARDC favored full-scale development on an accelerated schedule, whereas the Air Staff preferred additional research before committing more funding to the program. After two years of political maneuvering, the Air Staff and ARDC reached a compromise in 1953. This agreement produced a development plan that called for the research and development phase for the Atlas to be completed by "sometime after 1964" and for an operational missile by 1965.

Teapot Committee and RAND Report

While American leaders worked to develop their own strategic missile force, they also strove to evaluate United States military defense capabilities in relationship to their closest rival. Two committees were formed during this period to study the Soviet Union's potential threat.
The Strategic Missiles Evaluation Committee, code name Teapot Committee, was formed in 1953 by Trevor Gardner and was chaired by famed mathematician Dr. John von Neumann of the Institute for Advanced Studies. The Teapot Committee was developed to evaluate current programs and the level of technology of potential enemies (mainly the Soviet Union), and to recommend solutions for identified problems. A concurrent study focusing on similar questions was conducted by the RAND Corporation, a security studies think-tank with long ties to the Air Force. Both studies produced alarming findings. They each independently determined that Soviet missile technology had advanced significantly in the short period since World War II, and that only a major push in missile development in the United States could overcome this technology gap. Policymakers of this period fervently believed that falling technologically behind the Soviets in the defense arena would be inviting the disaster of a Soviet attack. The reports also concluded that development of an operational ICBM system within six years was an attainable goal if the Air Force would commit the appropriate talent, funds, and management strategies to the project. According to Teapot, the Atlas program in particular, as the most advanced American missile program then under development, had to be accelerated for the sake of national security. President Eisenhower took these findings most seriously, and ordered work on the ICBM program accelerated by assigning it "the highest national priority." The Western Development Division (WDD), an extension of the ARDC, was created and assigned to spearhead the development of ICBMs.

Western Development Division

Trevor Gardner, Air Force Chief of Staff General Nathan F. Twining, and Lieutenant General Donald Putt received approval for a management agency within the Air Force, the WDD, whose primary purpose would be to develop an ICBM.
The WDD was created "solely for the prosecution of research, development, test, and production leading to a successful intercontinental ballistic missile." The WDD facilitated the rapid development of the Atlas system, and its employees worked long hours to get the job done. For example, Lieutenant General Otto Glasser reported that a normal work-week consisted of ten-hour days, six days a week, with extra time often being put in on Sundays. The main function of this working group was not to actually build an ICBM, but to work together with private contractors to design the new weapon as quickly and cheaply as possible. The project became a race against time, with the goal of an operational ICBM by the end of the 1950s, the estimated date for an operational Soviet ICBM. To many of the workers, the very safety and security of the United States seemed to hinge on the success of their program.

To help meet its goals, the WDD contracted with the Ramo-Wooldridge Corporation of Los Angeles, California to provide technical direction. This joining of forces speaks to the increased size and importance of the ICBM program in the Air Force's eyes. The number of Ramo-Wooldridge staff members assigned to assist the WDD on the ICBM project started at 170 at the beginning of 1954 and grew to 5,182 by the end of 1960. The WDD opened its office in a former elementary school in Inglewood, California, in 1954, with General Bernard A. Schriever, a well-respected forty-three-year-old brigadier general, appointed as its head. In an attempt to maintain a low profile for this top-secret project, military staff stationed at the WDD wore civilian clothes. ICBM chronicler and journalist Roy Neal described the WDD headquarters in these words: "No sign identified the white schoolhouse as the Western Development Division... The windows were frosted and heavily barred. All outside doors, except one, were locked. The only entrance was across a chain-link fenced parking lot.
A security guard manned the door... Some of the old-timers recall... the comment of the school boy who was sauntering by the school buildings. Eying the frosted glass and steel-barred windows, he said to a chum, 'Boy am I glad I don't go to school here.'" The WDD staff began their work designing and coordinating the construction of the Atlas ICBM. In 1955, the WDD requested and received Air Force approval to develop a second ICBM, the Titan, concurrently with the Atlas. The WDD initiated the research and development on the Titan in the hope that if Atlas was delayed, Titan, with slightly different engineering, could be made operational by the end of the 1950s and keep the United States from falling behind in the missile race.

Last Updated: 19-Nov-2003
View from Trimble’s Line, Cross Keys Battlefield

This is a view from the position of the 21st Georgia Infantry (to the left) and the 16th Mississippi Infantry. The 15th Alabama Infantry was just to the right of the 16th Mississippi. Though the position is now in a fenceline, during the battle the position was in a wood, but as the site suggests today, a split-rail fence was immediately in front of Gen. Isaac Trimble’s line and aided in concealing it. On the afternoon of June 8, 1862, the 8th New York Infantry, from Gen. Julius Stahel’s Brigade, advanced on this position, but without deploying skirmishers. As skirmishers from the 21st North Carolina Infantry scurried back to tell Trimble of the unsuspecting New Yorkers, the 1,500 Confederates on this line made ready. When the New Yorkers were within 50 yards, the Confederate line erupted in a volley. In a matter of 15 minutes, approximately 250 out of 548 New Yorkers lay dead or wounded on the field. One Confederate wrote that the Federals were “lying in the field as thick as blackbirds.” Massanutten Mountain can be seen in the background.
Despite overwhelming evidence and an admission and apology from Germany decades ago, revisionists continue to claim that nearly 6 million Jews were not killed by Nazis during the Holocaust. Iranian President Mahmoud Ahmadinejad, for one, has called the Holocaust a "myth" and suggested that Germany and other European countries, rather than Palestine, provide land for a Jewish state. Unlike Ahmadinejad, most revisionists do not deny that Jews were interned in prison camps during World War II; rather, they argue that the number of deaths was greatly exaggerated. Gas chambers are a particular sticking point: Holocaust deniers say they were purely a rumor or, if they indeed existed, were not powerful enough to kill, though evidence and history indicate otherwise. And the photographs of emaciated and dying Jews? Attorney Edgar J. Steele, a revisionist, says, "All those pictures of skinny people and bodies stacked like cordwood were actually of Czechs and Poles and Germans [who] died of typhus, which was rampant in the camps."
Sharpen your fire safety smarts this season

How many strands of mini string lights can be safely connected? How many inches should be cut from the base of a fresh-cut Christmas tree before placing it in water? Not really sure? Then it’s time to sharpen your holiday fire safety smarts with information and resources from the National Fire Protection Association (NFPA).

“Many people simply don’t know which activities and practices present hazards,” said Lorraine Carli, NFPA’s vice president of communications. “‘Project Holiday’ points out where holiday fire risks lurk, along with a wealth of tips and guidelines to prevent them.”

Across the board, the majority of holiday fires are the result of human error, and the holiday season is a time when there is an increased risk of home fires. NFPA is offering resources to help increase public awareness about fire risks during the holiday season. “Project Holiday” features a quiz that checks just how prepared you really are for a fire-safe season. The site also includes free, online videos and downloadable materials to help parents protect their families, particularly those with young children, who are at greater risk from fires. In addition, an online toolkit is available with campaign materials that can be used to spread the word about fire safety throughout the community. Sparky the Fire Dog is also pitching in with holiday-themed fire-safety materials for parents and educators on his website, including downloadable coloring and activity sheets and e-cards.

According to NFPA, many holiday traditions and festivities – from candle decorations and cooking to Christmas trees and holiday lighting – significantly contribute to the season’s increased risks:

•December is the peak month for home candle fires. Almost half of all home decoration fires were started by candles.

•Although Christmas tree fires aren’t common, when they do occur, they’re more likely to be serious.
•A heat source too close to the tree causes roughly one in five associated fires, and one of every three home Christmas tree fires is caused by electrical problems.

•Almost half of all holiday lighting fires occur in December. Electrical failures or malfunctions were a factor in two-thirds (69 percent) of these fires.

•Cooking is the leading cause of home fires, and unattended cooking is the leading factor in those fires.

For information regarding Project Holiday and its tips for a fire-safe season, visit www.nfpa.org/holiday.

NFPA is a worldwide leader in fire, electrical, building, and life safety. The mission of the international nonprofit organization, founded in 1896, is to reduce the worldwide burden of fire and other hazards on the quality of life by providing and advocating consensus codes and standards, research, training, and education. NFPA develops more than 300 codes and standards to minimize the possibility and effects of fire and other hazards. All NFPA codes and standards can be viewed at no cost at www.nfpa.org/freeaccess.

Copyright © 2013 - Star Local News
June 17, 2004

Joe H. Gieck, Ed.D., PT, ATC
Professor of Sports Medicine and Athletic Training
University of Virginia

In any sport, one of the primary objectives is to enjoy the activity. Avoiding injury along the way only adds to the satisfaction of your sense of accomplishment. There are many factors involved in injury prevention, and all of them must be considered before beginning activity. In each instance, you are in control of these factors and the role they play in injury prevention. The first step, then, is to assess what the sport requires and what you bring to the table in the way of readiness for the sport.

You should begin by defining what your goals are. Are they recreational or competitive in nature? While both require many of the same principles, the competitive athlete will require a more disciplined and intense approach and effort. It is much easier to remain injury free in recreational sports if you follow the same guidelines as the competitive athlete. As a competitive athlete you often have to push the limits in the physical arena, which can create more risk for injury. Physical and mental preparation can provide the cushion of physical fitness needed to avoid injury.

Proper rest and nutrition are necessary for optimal performance. If you are skimping on sleep and not getting an adequate diet, you only hold yourself back and set yourself up for injury. Hydration is part of the nutritional balance necessary for participation. Sweat rates of elite athletes may exceed 8-10 quarts a day. Dehydration of as little as 2% can affect physical performance, which in turn makes injury more likely.

Exercise should be that which can be comfortably tolerated. There should be a slow build-up in intensity to reach peak performance. An increase of about 10% per week is usually recommended to properly prepare your body for the activity and to prevent injury. Too much too soon is often the cause of overuse injury.
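As a rough illustration of the ~10% per-week rule, the progression can be tabulated; the starting figure of 10 units below is hypothetical, since the article gives no specific load.

```python
# Project weekly training loads under the ~10% per-week progression
# rule. Units are arbitrary (miles, minutes, etc.); the starting value
# is a made-up example, not taken from the article.

def weekly_loads(start, weeks, increase=0.10):
    """Return a list of weekly loads, each ~10% above the last."""
    loads = [start]
    for _ in range(weeks - 1):
        loads.append(round(loads[-1] * (1 + increase), 1))
    return loads

weekly_loads(10.0, 4)   # [10.0, 11.0, 12.1, 13.3]
```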
As exercise becomes more intense it should remain essentially pain free: there may be some soreness, but not musculoskeletal pain the next day. Ideally you progress the exercise in intensity but without pain or soreness. Pain is the body's way of telling you you're doing too much and risking injury. Thus, when beginning a sport, it is important to adjust slowly to the pace, from half to three-quarter to full speed. In this manner you acclimate to the full speed of the sport. Obviously you must allow ample time to prepare for competition. This requires time and hard work, which many are not willing to put in; just as with improper hydration, nutrition, or rest, an injury then becomes more likely. An often overlooked area of injury prevention is a preseason screening process. Areas that should be assessed include equipment (especially shoes), posture, strength, range of motion, proprioception, endurance, power, speed and agility. A worn or cheap pair of shoes is an example of an injury waiting to happen. Lower extremity posture in running sports should be evaluated and corrections made prior to competing. For throwing sports, have your coach assess your technique; this can spot mechanical flaws that can be corrected to reduce the chance of injury. If you have a previous injury, it should be assessed in the above areas to assure that you are ready to return to play. You should have a strength level appropriate for your sport of choice. If the muscles and tendons cannot handle the stress loads of the sport or activity, injury is sure to occur. Strength and flexibility are the cornerstones of physical fitness. If you lack strength or adequate range of motion in your joints, they are at risk of injury; a weak or tight muscle or tendon is at risk. Proprioception, or balance, is required in sport and is a factor in the injury process if you have deficits here.
For the lower extremity, for example, you should be able to balance easily with your eyes closed on one leg; if you cannot, this is an area of concern that needs remediation. In beginning an exercise session you should always warm up to avoid injury. Getting the body ready for the selected activity through a series of selected exercises properly prepares you and reduces your chance of injury. Fatigue is often a cause of injury: the muscles and tendons cannot contract and relax in a sequential manner, they get out of synch, reactions slow, and injury is waiting to happen in the form of a strain, sprain or fracture. When you feel fatigue coming on, it is time for a break to allow the body its necessary recuperation. An aerobic or endurance base is required for almost all sports to prevent fatigue and injury. If you lag behind here you again put yourself at risk, as well as underperforming in the activity and becoming noncompetitive against those who have a good aerobic base. Ideally, you will need a physical activity that raises your pulse to more than 120 beats per minute for a minimum of 20 minutes. Anaerobic capacity is developed through power and speed training in short yet intense sessions. Power is the ability to function rapidly in your sport to attain maximal results, and a good strength base is required to begin a power program. Circuit training is a good example of power training: a 30 second bout of vigorous exercise is performed, followed by a 20 second rest, throughout a cycle of 6 - 8 exercises. This is the system that ultimately develops you for your sport. Speed and agility will keep you out of potentially injury-producing situations. Speed may be developed by improving technique using efforts of no more than 6 seconds at maximal effort. Agility and coordination emphasize neuromuscular control and are the culmination of all physical fitness factors; they represent the ability to react to the demands of sport, and are usually the first to suffer from fatigue.
As you improve these systems you can increase muscle fiber size and bone strength, increase flexibility, decrease fat, improve cardiovascular and respiratory fitness, and reduce the chances that you will sustain an injury in your activity. Those who are physically fit have an injury rate about one half that of those who are not fit. The injury-prone athlete exhibits negative thinking. Sport should be an enjoyable and exhilarating experience. Being positive about injury prevention, without being too much of a risk taker, will add to your pleasure in the activity and help reduce your injury risk. If the activity isn't fun, find another one. The assessment and implementation of a program to prevent injury will allow you to enjoy the benefits of the sport or activity without the consequences of pain, discomfort and frustration as a result of injury.
<urn:uuid:6bde1751-aac7-47a4-91a8-03032f37db7c>
CC-MAIN-2013-20
http://www.theacc.com/genrel/061704aaa.html
2013-05-20T22:07:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956685
1,318
VAERS stands for the Vaccine Adverse Event Reporting System - it is run by the CDC (Centers for Disease Control) as one method of monitoring vaccine safety. This is a fairly basic and standard practice, similar to the less formal reports of drug adverse events to the FDA after drugs are on the market. Such voluntary reporting systems are an early warning system. Their purpose is to indicate a possible new trend that might indicate a previously unrecognized side effect or risk. Such reporting systems are also important because drugs are typically studied in thousands of individuals before going to market, but then they may be used by orders of magnitude more people, perhaps millions. Therefore there may be side effects or risks that are statistically too small to show up in studies of thousands of people, but will show up when the drug is given to millions. Such systems are also used as part of the precautionary principle. The FDA, for example, may place a warning on a drug, or even remove it from the market, based upon a possible association with an adverse outcome, even if we can't be sure of a real association or cause and effect. What voluntary reporting systems are not, however, are scientifically rigorous assessments of true risk. They are useful for generating, but not testing, hypotheses. The primary weakness of voluntary reporting systems like VAERS is that they are voluntary - people take it upon themselves to report what they believe may be a vaccine side effect. Such reporting is therefore subject to reporting bias. A news story warning about the risks of the flu vaccine will result in a spike in VAERS reports of flu vaccine side effects. Any trend in VAERS reporting, therefore, has to be interpreted with caution, and in fact cannot be scientifically interpreted on its own.
A possible signal in VAERS would need to be followed up with more rigorous data - systematically looking at the incidence of the alleged side effect in a population and correlating it with vaccine status, for example. A recent article in Mothering Magazine, however, ignored all this and treated VAERS reporting as if it were scientific data. The author, Stephen Rubin PhD, analyzed recent VAERS data and interpreted the results entirely as if trends represented actual trends in side effects, rather than potentially artifactual trends in reporting. The result was an irresponsible piece of fear-mongering. First of all, it should come as no surprise that reports of vaccine side effects correlate with getting vaccines. Why would someone think that they or their child had a vaccine side effect if they didn't recently get a vaccine? It is also easy to imagine many artifacts in the reporting. Perhaps parents (especially new parents) are more likely to worry about vaccine side effects (and report them) after their child gets their first series of vaccines, but are more mellow with later vaccines. We should also expect that the ages at which other infantile or childhood diseases occur will result in spikes in VAERS reporting, as those diseases are misinterpreted as vaccine side effects. It's important to note that VAERS accepts all reports - reports are not filtered based upon any assessment of how plausible they are or whether or not they are likely to be a real vaccine effect. Anyone can report anything and it goes into the database. Rubin writes, for example: "The graph shows a serious spike in Influenza-related reports during the last two years. Suddenly, there are about three times as many adverse events being reported following Flu shots. Of course, concerns about the H1N1 Flu caused more people to be vaccinated during these years, but were there really three times as many people vaccinated? The numbers do not show that."
"Something is causing an increase in the number of reactions to the Flu shot, and it isn’t just that more people are getting it. We should all wonder why nearly half of recent VAERS reports involve people who have gotten a Flu shot." Or - perhaps all the media hype about the risks of the "swine flu" vaccine resulted in an increase in self-reporting. Rubin does not even consider this possibility. He treats the data as if it were a recording of actual side effects, not voluntary reports of side effects. VAERS remains a very important source of information for the CDC and others to monitor the vaccine program. However, it continues to be abused in a pseudoscientific way by those in the anti-vaccine community and also by those who are simply naive about the nature of VAERS and the limitations of self-reporting.
<urn:uuid:72bd337d-e1e6-45a7-8120-c910f6513d42>
CC-MAIN-2013-20
http://www.randi.org/site/index.php/swift-blog/1642-vaers-pseudoscience.html?widthstyle=w-fluid
2013-06-19T18:53:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972394
907
Osteoporosis is a disease in which bones become fragile and more likely to fracture. Usually the bone loses density, which measures the amount of calcium and minerals in the bone.
Thin bones; Low bone density
Causes, incidence, and risk factors
Osteoporosis is the most common type of bone disease. About half of all women over the age of 50 will have a fracture of the hip, wrist, or vertebra (bone of the spine) during their lifetime. Bone is living tissue. Existing bone is constantly being replaced by new bone. Osteoporosis occurs when the body fails to form enough new bone, when too much existing bone is reabsorbed by the body, or both. Calcium is one of the important minerals needed for bones to form. If you do not get enough calcium and vitamin D, or your body does not absorb enough calcium from your diet, your bones may become brittle and more likely to fracture. Sometimes bone loss occurs without any known cause. White women are more likely to have bone loss, and sometimes the tendency to have bone loss and thin bones is passed down through families. A drop in estrogen in women at the time of menopause and a drop in testosterone in men is a leading cause of bone loss. Other causes of bone loss include:
- Being confined to a bed
- Certain medical conditions
- Taking certain medications
Other risk factors include:
- Absence of menstrual periods (amenorrhea) for long periods of time
- A family history of osteoporosis
- Drinking a large amount of alcohol
- Low body weight
There are no symptoms in the early stages of osteoporosis. Many times, people will have a fracture before learning that they have the disease. Pain almost anywhere in the spine can be caused by fractures of the bones of the spine. These are called compression fractures, and they often occur without an injury. The pain may occur suddenly or slowly over time. There may be a loss of height (as much as 6 inches) over time, and a stooped posture or kyphosis (also called a "dowager's hump") may develop.
Medications to treat osteoporosis can help prevent future fractures, but spine bones that have already collapsed cannot be reversed. Some people with osteoporosis become disabled from weakened bones. Hip fractures are one of the main reasons people are admitted to nursing homes. Calcium is essential for building and maintaining healthy bone. Vitamin D is also needed because it helps your body absorb calcium. Following a healthy, well-balanced diet can help you get these and other important nutrients throughout life. Other tips for prevention:
- Avoid drinking excess alcohol
- Get regular exercise
A number of medications are approved for the prevention of osteoporosis.
David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M. Health Solutions, Ebix, Inc.
<urn:uuid:294ae5a4-52b3-4087-804a-38c5d0235b42>
CC-MAIN-2013-20
http://www.athenshealth.org/body.cfm?id=65&action=articleDetail&AEProductID=Adam2004_117&AEArticleID=000360&crawl=false
2013-05-21T09:59:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.905474
829
The Air Force looks for economy at the pump.
- By Mike Harbour
- Air & Space magazine, September 2006
The Air Force uses half the U.S. Government’s fuel supply every year. Half. In other words, it takes all the tanks in Iraq, all the ships in Norfolk, and all the federal fleet cars driven by bureaucrats on per diem to match the USAF for sheer gas guzzling. And with a jet fuel bill topping $4.7 billion last year, the Pentagon is looking for alternatives. DARPA, the defense department’s research agency, and the USAF Scientific Advisory Board are looking to Nazi Germany for an answer. The same chemical method used to create gasoline for the Axis war machine during World War II may help power the U.S. military in 2008, thanks to a Department of Defense initiative to find a synthetic alternative to petroleum-based fuel. An Air Force B-52 made a flight over California in September with two of its eight engines powered by synthetic kerosene based on a process known as Fischer-Tropsch, developed at the Kaiser Wilhelm Institute in the 1920s. “We demonstrated that we could burn Fischer-Tropsch fuel in that aircraft,” says William Harrison III, fuels branch chief of the Air Force Research Laboratory’s Propulsion Directorate. “The engines performed as we expected, just like they would with the JP-8 fuel.” But unlike JP-8, the equivalent of commercial Jet-A fuel, Fischer-Tropsch fuel can be created from coal, natural gas, or biological matter. All three sources are plentiful in the United States, one reason why the process moved from “off the stove,” Harrison says, to number one on the list of Pentagon alternative fuels efforts. After the basic source, or feedstock, has been turned into an intermediate synthesis gas (syngas), it is then refined into fuel. “That’s what’s nice about Fischer-Tropsch: you can use any hydrocarbon feedstock to make the syngas, and then, once you have the clean syngas, the fuel is all very much the same,” Harrison says.
The initial test flight, made from Edwards Air Force Base in California on September 19, was the first time an Air Force jet flew with synthetic fuel blended 50/50 with JP-8 in its tanks. Ronald Sega, Under Secretary of the Air Force, as well as a former pilot and astronaut, served as a crewmember on the two-hour flight. Although cut short by an unrelated mechanical problem, the test was deemed a success, and a second flight the following week confirmed that the engines operated normally using the blend.
<urn:uuid:f8140d0a-afd9-4652-abfc-ea2f42a46c42>
CC-MAIN-2013-20
http://www.airspacemag.com/military-aviation/FEATURE-gasguzzlers.html?c=y&page=1
2013-05-18T18:23:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947103
566
It can be easy to overlook planning nutritious lunches. "But don't count out lunch," says Elizabeth Ward, registered dietitian, mother of three and author of "MyPlate for Moms, How to Feed Yourself & Your Family Better: Decoding the Dietary Guidelines for Your Real Life." In fact, students who eat a nutritious, balanced diet are better prepared to learn, reports the Association for Supervision and Curriculum Development. "School lunch is more than just a meal - it's an opportunity for good nutrition and teaching during children's prime learning hours," says Ward. "Children are always growing and developing, both physically and mentally, so providing them with great lunch nutrition keeps them healthy in and out of the classroom." Ward offers the following tips to help keep your child eating healthy during school hours:
- Talk to your children. Ask them what they would like to eat for lunch and teach them where food comes from. Involving children in meal planning will make the process more fun, and packing lunch in a kit featuring their favorite character will make the meal even more enjoyable.
- Check with the school to see how close snack time is to lunch. This will help you determine how much food to pack for your children. Portion control is important for a healthy, balanced diet; since children are smaller than adults, they should eat smaller portions, too.
- Lunch can be more than just the traditional milk, sandwich and fruit. Eating the same thing every day may get boring fast. As long as the food is healthy, you don't need to get hung up on serving a traditional lunch. Alternate cold meals and hot to keep your child's interest. If your child craves pizza, make one at home with low-fat cheese and vegetables. Use sunflower seed butter or olive oil instead of regular butter, and make sure milk, cheese and yogurt are low or nonfat. If you want to send a sandwich for lunch, try making it on a whole-wheat bagel, pita pocket or sandwich wrap.
- Give your child an alternative to sugary soda and juice drinks by packing ice water with fruit slices in a bottle. The fruit will add the sweet taste your child craves, without the added sugar. There is no nutritional value in sugary drinks, so cutting them out of your child's diet, and helping them understand early on why you're doing so, will benefit them in the long run.
- Provide a balanced meal. Fiber, dairy foods, and protein-packed fare are essential to keeping kids fueled during and after school for activities, and to reducing the urge to snack. Children eat what is available, so having carrot sticks and hummus on hand keeps kids' minds off of cookies, candy, and chips. Send protein-rich Greek yogurt with fruit and nuts, or whip up a smoothie made with fruit and milk and send it in an insulated straw bottle.
<urn:uuid:8c48e599-4a4e-43bf-8518-345e6107ea90>
CC-MAIN-2013-20
http://www.wellsvilledaily.com/community/blogs/family-time/x887146087/Tips-to-provide-your-child-with-a-fun-nutritious-school-lunch
2013-05-19T18:28:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966893
615
The State of Food Insecurity in the World 2011 highlights the differential impacts that the world food crisis of 2006-08 had on different countries, with the poorest being most affected. While some large countries were able to deal with the worst of the crisis, people in many small import-dependent countries experienced large price increases that, even when only temporary, can have permanent effects on their future earnings capacity and ability to escape poverty. This year’s report focuses on the costs of food price volatility, as well as the dangers and opportunities presented by high food prices. Climate change and an increased frequency of weather shocks, increased linkages between energy and agricultural markets due to growing demand for biofuels, and increased financialization of food and agricultural commodities all suggest that price volatility is here to stay. The report describes the effects of price volatility on food security and presents policy options to reduce volatility in a cost-effective manner and to manage it when it cannot be avoided. Read full article @ www.fao.org
<urn:uuid:ae92d71f-2cf3-42fe-85aa-dce797cf72ac>
CC-MAIN-2013-20
http://hronlineph.com/resources/international/from-the-web-the-state-of-food-insecurity-in-the-world-2011/
2013-05-18T05:54:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.899232
429
One of the joys of working with basic digital electronics – and logic gate ICs in particular – is that it almost works like building with a set of Lego blocks: one output goes here, which connects to the next input here, and so forth until it does what you wanted. If you’ve played with chips like these, you’ve probably also come across chips with “open collector” outputs. And if not, they’re worth knowing about. Open-collector outputs form the basis of a number of clever tricks for level-shifting and interfacing between different types of logic, and from logic to other types of electronic circuits. In what follows, we’ll work with the SN7407N, which is one of the most basic ICs with open-collector outputs. We’ll discuss what it means to have “open collector” outputs, and show some of the different ways that they are used. This is a schematic symbol for the SN7407N, showing the pinout. There’s power (nominal 5 V) and ground, and then six input-output pairs, for a total of 14 pins. The chip is described as a hex buffer (or hex driver), because there are six independent channels, and the logic function is that each output gives a continuous copy of its input. The logic function of “buffer” is normally indicated on a schematic symbol by the forward-triangle “amplifier” symbol on each channel – a buffer is just an amplifier with “unity” (×1) gain – and the symbol is modified by the underlined diamond mark that indicates open-collector outputs. Here’s a simplified model of what is inside each buffer channel. The buffer input goes into a logical NOT gate. The output of that NOT gate goes to the base of an NPN bipolar transistor. The emitter of the transistor is connected to ground and the collector of the transistor is connected to the output. This is the “open collector.” When a logical input to the SN7407N is low, the output of the NOT gate is high, so the base of the transistor is held at a voltage above the emitter.
This “turns on” the transistor, which means that if there is any voltage (above about 1.5 V) connected to the collector – that is, connected to the output of the SN7407N channel – current will flow from the collector, through the transistor, to ground. When a logical input to the SN7407N is high, the output of the NOT gate is low, so the base of the transistor is held low, at the same voltage as the emitter. The transistor is off, and does not conduct current. That is to say, no current flows to or from the output. It’s as though the output is simply not connected to anything. So, where with most digital electronics the output of a buffer (or other logic gate) is a “high” or “low” voltage, an open collector has two different states: output transistor disabled or output transistor enabled. In other words, output (effectively) “not connected” or output connected, through a transistor, to ground. Here is a most basic example of how that can be useful. Suppose that an open collector output is outfitted with a “pull-up” resistor – a moderate value resistor (typically 2.2k – 10k) connected to a positive power supply rail, say 12 V. Then, when the output transistor is disabled (and the output is effectively “not connected”), the output will be pulled up to the power supply rail value, 12 V in this case. When the output transistor is enabled, the output is effectively connected to ground, and the output goes close to 0 V. This, therefore, is a neat way to build a logic-level shifter. What we’ve done is translate a logic-level input (e.g., 0-5 V) into a different level (0-12 V). Note that the output does not have to be pulled up that far. If the pull-up were connected to 3 V, the output range would be 0-3 V, and you could use that as an input to digital electronics that do not tolerate a full 5 V on their inputs. For the SN7407N, the output can go as high as 30 V, so you can also use it to shift higher as well.
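The two output states and the pull-up trick can be captured in a tiny behavioral model. This is a minimal sketch, not a circuit simulation: the function name `oc_output` and the 0.2 V saturation figure are assumptions for illustration (a real SN7407N specifies a low-level output voltage of at most a few tenths of a volt, depending on sink current).

```python
# Behavioral model of one SN7407 open-collector channel with an
# external pull-up resistor. V_SAT is an assumed, illustrative value.

V_SAT = 0.2  # approx. collector-emitter saturation voltage when sinking

def oc_output(logic_in: bool, v_pullup: float) -> float:
    """Voltage seen at the open-collector output node.

    The SN7407 is non-inverting: a high input disables the internal NPN
    (the output floats, so the pull-up wins); a low input enables it
    (the output is pulled near ground).
    """
    if logic_in:          # transistor off -> output "not connected"
        return v_pullup   # pull-up resistor drags the node to the rail
    return V_SAT          # transistor on -> output sits near 0 V

# Level shifting: 0-5 V logic in, 0-12 V swing out
print(oc_output(True, 12.0))   # 12.0
print(oc_output(False, 12.0))  # 0.2
```

Changing `v_pullup` is all it takes to model the 0-3 V or 0-12 V shifting described above: the output swing is set by the rail the pull-up connects to, not by the logic supply.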
Another way that you can use open collector outputs is as a compact substitute for a set of external, discrete transistors. Suppose that you wanted to drive six sets of three white LEDs each, controlled by six outputs from your microcontroller. To do this, you might hook up each output, through a resistor, to the base of a transistor, and use that transistor to switch the current to the LEDs. The SN7407N could be used the same way – allowing you to substitute one chip for six resistors plus six transistors. Here’s one channel of the last circuit, built on a breadboard. The blinking TTL input comes from the microcontroller into the SN7407, and the external clip leads bring the 12 V on board. If you look closely, you’ll see that there’s actually one more component: a vestigial (but harmless) 10k pull-up on the SN7407 output. This circuit is an example of a “low side” driver, where the LEDs are switched on and off from their “low side,” the side closer to ground potential. There are some things that you can’t do with open collector outputs. It’s tempting to think that because your output works to switch 20 mA, it can also be used to source 20 mA, as in the “bad circuit” above. It is true that in conjunction with a pull-up resistor, the open collector output can go up to 12 V, but that’s assuming that there is only minimal current draw from that 12 V output. The issue is that the open-collector output does not source current at all; it can only sink current. So, if any current were to flow through the LEDs, it would not come from the SN7407 output, but from the 12 V rail, through the 10k resistor. And by Ohm’s law, you can’t get 20 mA to pass through 10 kΩ unless you provide at least 200 V. This circuit won’t come close. The circuit above would be an example of a “high side” driver, if it actually worked. High-side drivers switch LEDs (or other electronics) on and off using a switch connected to the side at higher voltage.
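The Ohm's-law argument against the "bad circuit" is easy to check numerically. In this sketch the helper name `max_pullup_current_mA` is made up for illustration; the resistor and rail values are the ones from the circuit above.

```python
# Since an open-collector output only sinks current, any high-side LED
# current in the "bad circuit" would have to come through the pull-up
# resistor alone. Ohm's law (I = V/R) shows why that can't reach 20 mA.

def max_pullup_current_mA(v_rail: float, r_pullup_ohm: float) -> float:
    """Worst-case current the pull-up could source, in mA, assuming the
    output node were dragged all the way down to 0 V."""
    return v_rail / r_pullup_ohm * 1000.0

print(max_pullup_current_mA(12.0, 10_000))  # about 1.2 mA, nowhere near 20 mA

# Conversely, the voltage needed to push 20 mA through 10 kOhm (V = I*R):
print(0.020 * 10_000)  # 200.0 volts, as the text says
```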
There’s actually a very real need for both “high side” and “low side” driver circuits. For example, in a multiplexed LED matrix, each row is switched on one at a time, using a high-side driver connected to each row. Then, low-side drivers connected to each column dictate which LEDs in that row are on and off at a given time. So how might we go about building a high-side driver that works? Since the open collector output with a pull-up won’t source the current that we need, the obvious thing is to go back to regular logic (which can source and sink current), and instead place a real switch – a transistor – on the high side of the LEDs. And so… we end up with another bad circuit. And as far as bad circuits go, this is one of the most common. The problem is that in order to switch the LEDs on or off, the logic output needs to be able to swing the voltage of the transistor base both above and below the voltage at the top of the LED stack, very close to 12 V. As our logic input only ranges between 0 and 5 V, the LEDs will be always on (if a PNP transistor is chosen) or always off (if an NPN transistor is chosen). So here’s a solution that works, and it’s actually a great high-side driver. The SN7407 output is pulled up to 12 V through a 10 kΩ resistor. It is also connected, through 1 kΩ, to the base of a PNP transistor. When the SN7407 input is high, the output is effectively disconnected, and the base of the transistor is pulled up to 12 V, turning off the transistor and ensuring that no current flows through the LEDs. When the SN7407 input is low, the base of the transistor is connected to ground through the 1 kΩ resistor, turning on the transistor and thereby allowing the LEDs to turn on. So, a funny side effect of this is that this is not just a high-side driver, it’s an inverting high-side driver – the LEDs are on when the input signal is low.
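The switching chain of that working high-side driver reduces to a one-line truth function. The sketch below just encodes the logic explained above; `leds_on` is a hypothetical name, and the comments trace the chain from input to LEDs.

```python
# Behavior of the working (inverting) high-side driver:
# SN7407 open-collector output + 10k pull-up to 12 V + PNP switch.

def leds_on(sn7407_input_high: bool) -> bool:
    """True when the LED string conducts."""
    # Input high -> OC output floats -> PNP base pulled to 12 V -> PNP off
    # Input low  -> OC output sinks  -> PNP base pulled low     -> PNP on
    return not sn7407_input_high

print(leds_on(False))  # True: LEDs are on when the input is low
print(leds_on(True))   # False
```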
If you wanted to reverse this, so that the LEDs were on when the input signal was high, you could simply replace the SN7407N with a SN7406N, the inverting equivalent of the SN7407N. There’s yet another unique way that open collector outputs are useful, and that’s in building ad-hoc logic gates by connecting their outputs together. Above is a simple example. Two SN7407 channels, from inputs X and Y, have their outputs connected together and to a pull-up resistor. Then if both X and Y are high, the combined output will also be high. But, if either X or Y is low, the output will be low. This constitutes a logical AND gate, because the output is high if and only if both X and Y are high. This kind of gate is referred to as a Wired-AND gate (because it’s made by wiring it up, rather than using a silicon AND-gate chip) and it’s a very useful type of gate. It’s particularly useful because you can connect dozens of open collector outputs to the same line, to make a very large AND gate that could (for example) monitor dozens of critical systems and trigger a shutdown if any of them fails. Other types of wired logic gates can be built this way as well. For example, the above circuit built with inverting SN7406 channels would be a Wired-NOR gate, which could be inverted again to yield a Wired-OR gate. As a footnote, open collector “wired” logic gates like these have largely fallen out of favor. That’s partly because it’s hard to debug a hardware problem when you have dozens of gates connected together, and partly because of the development (long ago) of tristated logic chips. But they are still, on occasion, the right solution to a problem.
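The Wired-AND and Wired-NOR behavior can be sketched as plain boolean functions: the shared node is high only when every open-collector output on it is floating. Function names here are illustrative.

```python
# Wired logic: several open-collector outputs share one node with a
# single pull-up. If ANY channel sinks, the node is low; the node is
# high only when every channel's output transistor is off (floating).

def wired_and(inputs):
    """Node state for non-inverting SN7407 channels tied together.
    Each channel floats when its input is high, so the node is high
    iff all inputs are high -> AND."""
    return all(inputs)

def wired_nor(inputs):
    """Same wiring with inverting SN7406 channels. A channel sinks when
    its input is HIGH, so the node is high only when no input is high
    -> NOR."""
    return not any(inputs)

print(wired_and([True, True, True]))   # True
print(wired_and([True, False, True]))  # False: one low input sinks the node
print(wired_nor([False, False]))       # True
```

Note how this scales exactly the way the text describes: `inputs` can hold dozens of monitored signals, and a single low (or, for NOR, high) one pulls the shared line down.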
<urn:uuid:1239e486-563a-465d-9373-98642a0fd88c>
CC-MAIN-2013-20
http://www.evilmadscientist.com/2012/basics-open-collector-outputs/
2013-05-18T18:51:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936637
2,296
Cows, camels, sheep, goats, etc., being ruminants, must chew their food repeatedly by regurgitating it from their first stomach compartment and chewing their 'cud'. This more finely chewed material then makes its way through the various stomach compartments to be digested. These animals are eating plant material, the same plant material that animals such as elephants, horses and hippos eat as well. However, those animals have only one stomach compartment.
- Why does one need a multi-compartment stomach while the other does not, if they are all eating the same/similar food?
<urn:uuid:be69a55f-00c2-437a-a0c0-d78a8ec22f35>
CC-MAIN-2013-20
http://biology.stackexchange.com/questions/5621/why-do-ruminants-require-a-multi-compartment-stomach-to-digest-food/5663
2013-05-19T02:00:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.979866
120
Gerald Matisoff, chair of the department of geological sciences, and Peter Whiting, professor of geological sciences, are both presenting research today at the 2008 Joint Meeting of the Geological Society of America, Soil Science Society of America, American Society of Agronomy, Crop Science Society of America, and Gulf Coast Association of Geological Societies in Houston. When a reactor in the Chernobyl nuclear power plant exploded in 1986 in what was then the Soviet republic of Ukraine, radioactive elements were released into the air and dispersed over the Soviet Union, Europe and even eastern portions of North America. More than 20 years later, researchers from Case Western Reserve University traveled to Sweden and Poland to gain insight into the downward migration of Chernobyl-derived radionuclides in the soil. Among the team's findings was the fact that much more plutonium was found in Swedish soil than in Polish soil at a depth corresponding to the time of the nuclear explosion. Radionuclides occur in soil both from natural processes and as fallout from nuclear testing. Gerald Matisoff, chair of the department of geological sciences at Case Western Reserve University, Lauren Vitko, field assistant from Case Western Reserve, and others took soil samples in various locations in the two countries, measuring the presence and location of cesium (137Cs), plutonium (239, 240Pu), and lead (210Pbxs). Matisoff will present his findings today at the 2008 Joint Meeting of the Geological Society of America, Soil Science Society of America, American Society of Agronomy, Crop Science Society of America, and Gulf Coast Association of Geological Societies in Houston. By looking at the magnitude of the radioactive fallout, how fast it moved down into the soil profile and how quickly it erodes and is transported by sediment, Matisoff's research helps shed light on two fronts.
The first is dealing with the public health ramifications, studying such issues as food chain transfer, exposure and cleanup, as well as understanding the geologic aftereffects. These issues include measuring erosion rates, how long the radionuclides are retained in the watershed, and the source of sediment found in rivers, as well as compiling radioactive inventories. The second is developing an understanding of how to differentiate radioactive elements from a one-time event like Chernobyl from those of fallout created by the atmospheric nuclear weapons testing conducted in the 1960s. Soil samples collected by Matisoff's team reveal insights based on several conditions: how the radionuclides were delivered to the soil, whether from a one-time event like the Chernobyl disaster or from atmospheric bomb testing; the half-lives of the radionuclides and whether they were absorbed more heavily onto clay particles (such as 137Cs and 7Be) or organic materials (239, 240Pu and 210Pbxs); and the types of soil, which may keep the particles at the surface or allow them to permeate to levels below the surface. As the team examined a range of soil types from the two countries, they found a spike in 239, 240Pu in Sweden's soil at a depth that coincides with the Chernobyl disaster, yet no similar blip in Poland's soil. Meteorological research showed that it rained in Sweden while the radioactive cloud was over that country. Leached of much of its radionuclides, the cloud deposited much less plutonium on Poland when it later crossed that country's borders. Matisoff says that his team's findings are preliminary, having raised as many questions as they have answered. His goal is to use this research for even bigger projects and greater, more definitive findings. Funding for the projects was provided by the National Science Foundation. Sediment in rivers comes from erosion of the landscape as well as the erosion and collapse of the banks themselves.
Just how much each source contributes to a river—and how it affects the flow and path of that river—is the subject of research by Peter Whiting, professor of geological sciences at Case Western Reserve University. Taking measurements of certain radionuclides found in the soil, including beryllium and lead, at various points along a 423-km-long section of the Yellowstone River, Whiting has determined how much of the sediment in the Yellowstone came from runoff and how much came from the streambanks. For example, streambank erosion contributes approximately 50 percent of the sediment at measurement sites up-river, increasing to 89 percent at Billings, Mont. In river basins where significant portions of the surrounding landscape are used for agriculture or forestry, the percentage of sediment coming from streambank erosion drops below 50 percent. Whiting will present his findings today at the 2008 Joint Meeting of the Geological Society of America, Soil Science Society of America, American Society of Agronomy, Crop Science Society of America, and Gulf Coast Association of Geological Societies in Houston. Radionuclides occur in soil both from natural processes and as fallout from nuclear testing. Beryllium and lead are found in greater concentrations at the surface of the soil. All the beryllium will be found in the top two centimeters of the surface soil, but lead will be found at greater depth. Beryllium and lead have markedly different half-lives: lead has a 20-year half-life, while that of beryllium is only 53 days. Comparing the activities of both elements in the river's suspended sediment to those of the surrounding landscape and streambanks helps provide a detailed profile of where the sediment originates. "We need to understand the sources of the sediment in our rivers if we want to address stewardship of our rivers," said Whiting. For instance, fine sediment carried into rivers can cloud the water and can choke out freshwater bugs and fish that require cleaner water.
Fine sediment deposited on the stream bottom can smother eggs laid by fish, including salmon and walleye. To preserve these populations of fish, we often try to rehabilitate streams by reducing the amount of sediment supplied to the stream. But to reduce the supply, one needs to understand whether it is activities eroding the landscape—urbanization, farming, or timbering—or the streambanks themselves that are the primary cause of the problem. "In using radionuclides as markers in our research, we are helping to develop new tools for the advancement of soil and river stewardship," said Whiting. Case Western Reserve University is committed to the free exchange of ideas, reasoned debate and intellectual dialogue. Speakers and scholars with a diversity of opinions and perspectives are invited to the campus to provide the community with important points of view, some of which may be deemed controversial. The views and opinions of those invited to speak on the campus do not necessarily reflect the views of the university administration or any other segment of the university community.
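The apportionment behind figures like "50 percent up-river, 89 percent at Billings" can be sketched as a two-endmember mixing model: surface soil carries a high fallout-radionuclide activity, streambank material (eroded from depth) carries very little, and the suspended sediment's activity falls between the two in proportion to the mix. The numbers below are illustrative only, not the study's data:

```python
def bank_fraction(a_sample: float, a_surface: float, a_bank: float) -> float:
    """Fraction of suspended sediment derived from streambanks, from a
    two-endmember linear mixing model:
        a_sample = f_bank * a_bank + (1 - f_bank) * a_surface
    where the a_* values are radionuclide activities (illustrative units)."""
    if a_surface == a_bank:
        raise ValueError("endmember activities must differ")
    return (a_surface - a_sample) / (a_surface - a_bank)

# Illustrative endmembers: surface soil rich in fallout radionuclides,
# bank material (exhumed from depth) nearly free of them.
a_surface, a_bank = 40.0, 2.0

print(bank_fraction(21.0, a_surface, a_bank))  # 0.5  -> ~50% from banks
print(bank_fraction(6.2, a_surface, a_bank))   # ~0.89 -> ~89% from banks
```

Measuring two tracers with very different half-lives (7Be and 210Pb) gives two such equations, which helps check that a single mixing model is consistent with the data.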
<urn:uuid:dfe5f1b8-426c-4fa3-858b-05849f717cad>
CC-MAIN-2013-20
http://blog.case.edu/case-news/2008/10/06/geologicalsciences
2013-05-22T00:21:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949664
1,399
While the relationship between poverty and poor health is complex, access to basic needs like adequate housing and nutrition, appropriate education and personal safety is well documented to improve health trajectories. According to Zuckerman, the article by Beck et al. in Pediatrics represents a special example of how a multidisciplinary approach to social determinants of health initiated from a primary care setting can address poor housing conditions and reduce the risk of asthma for individual patients and for a population. "When families do not receive the benefits or protections of certain laws, their health can be undermined. The consequences can be treated medically, but their upstream causes are social and are more effectively addressed using legal strategies," said Zuckerman. A recent report estimates that 50 to 85 percent of health center users – between 10 and 17 million people – experience unmet legal needs, many of which impact their health. Most at-risk individuals may not know that their problems have legal solutions. The Medical-Legal Partnership, founded by Zuckerman at Boston Medical Center in 1993 for children, helps parents navigate the complex government and legal systems that often hold solutions for many social determinants of poor health. "The health care team's role is to identify early unmet legal needs that cause or exacerbate child health problems. Once identified, lawyers bring critical skills to complement the expertise of the health care team," explained Zuckerman. By reducing the impact of legal determinants that affect health, this creative partnership in clinical settings will complement increased access to health care provided by recent health care reforms.
<urn:uuid:d5a94a6d-a511-4617-8119-8346d386ec4b>
CC-MAIN-2013-20
http://www.bmc.org/about/news/3067.htm
2013-06-18T22:38:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94689
318
11 Plus Verbal Reasoning Verbal Reasoning is almost universally used as one of the test papers in the 11+. It is believed to be an effective way of testing a child’s potential, not just learned ability. Of course, learned ability does enter into the equation. While some of the question types simply test a child’s logical deduction skills or their ability to decipher codes, much of an 11+ verbal reasoning test will require a good vocabulary and also strong basic maths skills. Strangely, most verbal reasoning tests also encompass maths questions. You can find more help in both those areas on our English and maths sections. Some children simply have “the knack” when it comes to Verbal Reasoning, even if they have never encountered it before. These children also tend to be keen on puzzles of all types – crosswords, wordsearches, word games, jigsaws, Sudoku, etc. If you can encourage your child to enjoy these activities they make for good informal preparation for Verbal Reasoning tests. If your child is not one of the lucky few it is still possible to become very adept at Verbal Reasoning simply by learning the techniques required to solve the problems. Preparation will not enable a child who is not innately intelligent to qualify in the 11+, but it will assist children who find VR more difficult than curriculum-based learning. An analogy sometimes used is that of doing the Times Crossword: If you do the crossword every day you become familiar with how the compilers think and you can see the solutions more quickly. However, if you do not possess a good vocabulary in the first instance, you will not know the answers to the clues. There is a very wide range of Verbal Reasoning question types and it is essential to research exactly which question types feature in the papers in your area. The most common Verbal Reasoning tests in use for the 11 Plus are those prepared by GL Assessment (formerly NFER) and for that reason we provide specific advice on their question types. 
There are either 15 or 21 question types featured on each paper, although papers using all 21 types are by far the most common. The format of the papers may be either “standard” format (no answer options are provided and the child must work out the answer from scratch) or multiple choice, where five possible answer options are provided on the answer paper. In areas where the tests are not set by GL Assessment and past papers are not available (such as the Durham CEM test used in Birmingham and Warwickshire, or the Moray House papers used by many Hertfordshire schools) it may be necessary to cover a wider variety of VR question types. The publisher that features the biggest range of questions is probably Bond Assessment, but it would be wise to buy a selection of other books that may feature different question types. You can assess the content of different authors’ papers by downloading their sample papers at Free 11 Plus Papers from this site. You can also find more advice on which practice materials parents have used successfully in these areas on the relevant regional section of our 11+ Forum
<urn:uuid:12285629-f0c6-41b3-8ae6-5945213ac682>
CC-MAIN-2013-20
http://www.elevenplusexams.co.uk/advice/verbal-reasoning
2013-06-18T23:04:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707436332/warc/CC-MAIN-20130516123036-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950617
641
View Full Version : help! how do i model a "Dodecahedron" 03-25-2007, 07:19 PM in MAYA 8. I need to create a Dodecahedron (like a soccer ball) but without the hexagon faces, 03-26-2007, 10:07 PM Hmmm... Are you sure you mean a dodecahedron? A soccer ball is a truncated icosahedron. I presume you do, as you mention it needs only pentagonal sides (i.e. a regular dodecahedron, as "dodecahedron" simply means any object with 12 faces). I'm not sure if there are any geometry tricks you could use to make one directly out of primitives (does Maya not allow you to create Platonic solids?), but as long as you don't mind entering a few vertex values you can create one fairly quickly. I can't show you in Maya, but the following method should work fine anyway. First create a 2m polygonal cube centred on the origin. Its vertices should be (1,1,1), (-1,1,1) etc. Slice the faces of this cube as follows: There should be slices on the hidden sides as well, but in the same pattern, so that none of the slices meet on the same edge. Pay attention to the axes I have defined as they may be different from the default Maya ones and I will refer to them later. Now create a single vertex (not a polygon unless there's no other way), with values of (1/a, a, 0) where a is the golden ratio (1+√5)/2 m. This is approximately (61.8cm, 1.618m, 0). Mirror this on the x-axis and the y-axis so you have 4 points above and below the cube. Copy these points, rotate them 90° around the y-axis, then the x-axis. You should now have 4 points in front of and behind the cube. Copy the top 4 points again, and rotate them 90° around the z-axis, then the x-axis to get points left and right of the cube. These 12 vertices will be used to adjust the cube to the right shape. The points should be in the locations shown below (click to get a high-resolution screenshot). Only half the vertices are highlighted for clarity. The points are fairly close to the cuts you made in the cube earlier.
All you have to do now is weld each of the points at the end of these cuts to the nearest of the vertices you've created, like so (only half shown again): When all twelve vertices are welded you should have a completed dodecahedron (click to expand): I hope that makes sense; it's a bit hard to describe the steps. If you want I can make a video tutorial to show it in more detail, but given that I'm not using Maya it may not be very useful when it's the concepts that matter. There's probably a bunch of other ways you could do this too, so let me know if you find a more elegant solution (you could do it by physically scaling the cuts themselves by the golden ratio, for example). Let me know if you have any questions. M 04-28-2007, 04:58 PM You're welcome, I think ;) Yairmann only has 3 posts, so I guess he's long gone... 04-28-2007, 05:20 PM Oops, double post 04-29-2007, 05:36 AM Well his join date is 2004, so I'll assume he's just someone that browses the forums with little to say. 04-30-2007, 05:03 PM In Maya there is one ready to go: Create->Polygon Primitives->Platonic Solids opt. then choose Dodecahedron. 04-30-2007, 05:03 PM This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum. vBulletin v3.0.5, Copyright ©2000-2013, Jelsoft Enterprises Ltd.
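As a numerical footnote to the construction described in the thread: the twenty vertices of a regular dodecahedron are the eight corners of the cube plus the twelve golden-ratio points of the form (1/a, a, 0) and its axis permutations. A quick Python check (illustrative, independent of Maya):

```python
from itertools import product
from math import sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio, ~1.618

# 8 cube corners (±1, ±1, ±1) ...
verts = [tuple(s) for s in product((-1, 1), repeat=3)]

# ... plus 12 points: (0, ±1/phi, ±phi) and its cyclic rotations,
# matching the (1/a, a, 0) points added around the cube in the thread.
for sy, sz in product((-1, 1), repeat=2):
    a, b = sy / phi, sz * phi
    verts += [(0, a, b), (b, 0, a), (a, b, 0)]

print(len(verts))  # 20 vertices

# Sanity check: all vertices lie on one sphere of radius sqrt(3),
# which is why welding the cut endpoints to these points works cleanly.
radii = {round(sqrt(x*x + y*y + z*z), 9) for (x, y, z) in verts}
print(radii)  # a single radius
```

This uses the identity 1/φ² + φ² = 3, so the golden-ratio points sit at exactly the same distance from the origin as the cube corners.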
<urn:uuid:062109a2-88fc-439d-a887-ea7e6bc0a9bc>
CC-MAIN-2013-20
http://forums.cgsociety.org/archive/index.php/t-478413.html
2013-05-26T03:45:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.906906
890
The Qur'an and the sayings of the Prophet Muhammad address the individual rights of children repeatedly. One of the core values of Islam is providing for those who are at a disadvantage or who cannot adequately provide for themselves. Because of this, the individual rights of women, children, minorities, the elderly, orphans, and the handicapped are extensively discussed in Islamic law, the Qur'an, and the sayings of the Prophet Muhammad. The Qur'an also urges the Muslim community to take care of its orphans' needs and to make sure that all children have everything they need. Islam is a religion characterized by its treatment of children; many Muslim gatherings are marked by the presence of children either joining in with the adults or playing among themselves. The foremost right of all children is to have their basic needs provided for until they become adults. This means that all children must receive food, clothes, and shelter. It is important that they be protected from harm and not suffer from hunger or exposure. The Qur'an also specifies that children must be treated with respect. Parents must also love their children and show them affection, a fundamental right of children as stated in the Qur'an. The Qur'an also teaches that all siblings must be treated equally, and that no sibling must be given preference financially or in other respects. On this point, the Prophet Muhammad also stated that it is important to be fair when giving gifts to your children and to treat them equally. The Prophet Muhammad also advised that, if one were to give preferential treatment to one sibling over another, that preferential treatment should be given to girls rather than to boys. In Islam, all children have the right to receive an education. One of the Prophet Muhammad's sayings states that a good education is the best gift that a father can give to his children.
In Islam, parents are also advised to ensure that their children will be properly provided for with an inheritance. The actions and sayings of the Prophet Muhammad show that he was especially kind toward children. There are several recorded instances where the Prophet Muhammad urged parents to treat their children with respect and kindness. For example, there is an anecdote in which a child sitting on the Prophet Muhammad's lap urinated on him. The child, obviously quite young, seemed confused when his father started berating him angrily in front of the gathered crowd. The Prophet Muhammad urged the father to stop, saying that his clothes could be washed, but that the child's self-esteem would be very difficult to restore after he had been yelled at in front of everyone in such a manner. -- Al Arabiya Digital
<urn:uuid:1dcce64a-3b7e-4f43-8bc2-8c69b424f8f5>
CC-MAIN-2013-20
http://www.islamonline.com/news/print.php?newid=543288
2013-05-24T01:38:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.981472
534
Children's Literature: In the fall of 1914 Ernest Shackleton set sail on the good ship Endurance along with a steadfast group of explorers. His goal: to reach and cross Antarctica. In an age of lethal polar expeditions, Shackleton's exploration was fraught with risk. Unbeknownst to him and his crew, an eighteen-year-old lad stowed away on the Endurance in order to be part of this voyage of discovery. This lad, Perce Blackborrow, was willing to risk the wrath of Shackleton in order to illicitly join the expedition. While Blackborrow and the other explorers anticipated a rough trip, they could never have imagined the hardship, pain, and trauma they would encounter. Based upon the true-life story of the Shackleton expedition—and young Perce Blackborrow's role in it—this historical novel takes readers back to one of the most amazing stories of endurance known. Blending a strong narrative style with meticulous research, this tale will be a joy to readers interested in survival stories. In the end, Blackborrow persevered despite the terrible suffering he and his mates were forced to endure. In telling this story as fiction, Victoria McKernan brings history to life. 2005, Alfred A. Knopf, Ages 12 up. —Greg M. Romaneck
<urn:uuid:60723c03-c81f-4747-8796-17861d998420>
CC-MAIN-2013-20
http://www.barnesandnoble.com/w/shackletons-stowaway-victoria-mckernan/1100290043
2013-05-21T10:28:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941064
273
How cooking helped humans evolve Join an online discussion with Richard Wrangham, who argues that apes became human because they learned to cook. Wrangham is author of "Catching Fire," which argues that the development of cooking by our ancestors was a key to unlocking human potential. Share your thoughts on his assertions about cooking, eating and evolution with Dr. Wrangham and other listeners at "The World" Science Forum. A Harvard anthropologist who has spent decades studying chimps in Africa, Wrangham has also studied the human diet. He says cooking gave early hominids access to a much wider range of foods, helped their brains grow, and gave them time to develop tools and technologies. "What we've discovered in the last few years is that, although of course we can eat lots of foods raw, and although every other animal survives well on raw food, humans really do seem to be special because we cannot survive really effectively on raw food. The only exception is modern, urban society where you can have access to very high-class agricultural domesticated foods. "But for people living in a state of nature, as it were, hunters and gatherers living in the wild, there's no evidence at all that we can survive on raw food. And what this means is that humans are fundamentally adapted to cooking as a part of their existence ... and then the fascinating question is, how long has this been going on?" Wrangham says humans began cooking almost two million years ago, and biology offers strong evidence of this, " ... because we have evolved to have a tremendous reduction in various aspects of our intestinal system, and it starts with the mouth -- we've got a tiny little mouth compared to other great apes, we have small teeth, and we have small guts overall -- I mean our guts are the smallest in relation to our body size of any of the primates. "Once we started eating cooked food, then it was a great saving to the body in terms of energy to be able to get rid of the intestines we didn't need.
And we didn't need them because our food was very high-density and very easily digested once we started cooking." The smaller gut is evidence of when cooking started, says Wrangham: "So we now have these characteristic features compared to other non-human primates -- the small mouth, the small teeth, the small gut -- and we can pinpoint when these things evolved because the fossil record tells us when we had small mouths, when we had small teeth, and when our guts became small ... and the answer is this happened with the evolution of Homo erectus, between 1.8 and 1.9 million years ago. "It's a signal that is really quite strong -- it's very difficult to imagine how that species, Homo erectus, could have had these reductions in the gut unless they had such good food processing that, basically, they had to cook." Cooked food increases the amount of energy that humans get out of their food: "If you took a pound of raw steak and you ate that, and you compared that to eating a pound of cooked steak, then you are going to get significantly more calories out of eating the cooked steak." Softening food allows it to be eaten quickly and with less effort, which allowed for more caloric intake and less energy expenditure, and this helped humans evolve. Wrangham also has a theory about how cooking became women's work: "The emergence of cooking, this enormously valuable addition to our skill set, meant that everybody benefited from eating cooked food, men and women. And maybe you can imagine that the males and females were all individually cooking for themselves at first, but it would rapidly have emerged to the males the idea that, 'hey, you know something, if I come back hungry and not having got my food together for the day, I can bum off some woman.' "And the reason they can do that is because when you have cooking, you have for the first time an absolute necessity for the food to remain in the sight of everybody else while it is being prepared.
You cannot simply put all the food into your mouth right away because obviously then it wouldn't be cooked. So then this creates a problem -- it creates an opportunity for theft ... I think what's happened is that this exposure to theft as a consequence of cooking ... has led to the development of a primitive protection racket ... she is vulnerable, so it ends up with her feeding him, and what she gets out of it is security." PRI's "The World" is a one-hour, weekday radio news magazine offering a mix of news, features, interviews, and music from around the globe. "The World" is a co-production of the BBC World Service, PRI and WGBH Boston.
<urn:uuid:c213a448-ed0d-4353-9614-ddba6b6dfadc>
CC-MAIN-2013-20
http://www.pri.org/stories/science/cooked-food-evolution1455.html
2013-05-21T10:22:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.978813
989
A pressure sore (bed sore) is an injury to the skin and/or the tissues under the skin, caused primarily by constant pressure. People confined to a bed or chair and unable to move are at greatest risk for developing pressure sores, which form most often in bony areas such as the hips, heels, or tailbone. Pressure sores develop when constant pressure reduces blood supply to an area of skin and tissue. Oxygen and nutrients carried by the blood cannot reach the cells in the tissue, causing the cells to die. Pressure sores can range from mild reddening of the skin to severe tissue damage that extends into muscle and bone. These sores are difficult to treat and slow to heal. For people who are confined to a bed or chair or are unable to move, changing positions frequently and distributing body weight evenly will relieve pressure on any one area of skin. Eating a balanced diet with adequate protein promotes healthy skin, as does keeping skin clean and free of body fluids or feces. Moisturizing dry skin with good-quality lotions will keep the skin from drying out and cracking, which makes it vulnerable to pressure sores. Healing a pressure sore depends on relieving the pressure on the area. Treatment for pressure sores includes changing positions frequently to restore blood flow to the tissue and washing the sore daily. Unaffected tissue around the sore should be kept clean and dry to prevent further damage. Removing dead tissue and applying medicated ointments or creams will help reduce the risk of infection. eMedicineHealth Medical Reference from Healthwise. To learn more visit Healthwise.org. © 1995-2012 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
<urn:uuid:b2b0e267-71f7-4012-b18f-ef4aa9936062>
CC-MAIN-2013-20
http://www.emedicinehealth.com/script/main/art.asp?articlekey=136802&ref=137697
2013-05-23T12:02:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.924455
405
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to loosen" or "to untie") is the art and science of analyzing information systems in order to study the hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis also includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. Amount of information available to the attacker Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system"—in its turn, equivalent to Kerckhoffs' principle. This is a reasonable assumption in practice — throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. 
(And on occasion, ciphers have been reconstructed through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes.) - Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts. - Known-plaintext: the attacker has a set of ciphertexts to which he knows the corresponding plaintext. - Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of his own choosing. - Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions; similarly, adaptive chosen-ciphertext attack. - Related-key attack: like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit. Computational resources required Attacks can also be characterised by the resources they require. Those resources include: - Time — the number of computation steps (like encryptions) which must be performed. - Memory — the amount of storage required to perform the attack. - Data — the quantity of plaintexts and ciphertexts required. It's sometimes difficult to predict these quantities precisely, especially when the attack isn't practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52." Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force.
Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break... simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."

Partial breaks

The results of cryptanalysis can also vary in usefulness. For example, cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:

- Total break — the attacker deduces the secret key.
- Global deduction — the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key.
- Instance (local) deduction — the attacker discovers additional plaintexts (or ciphertexts) not previously known.
- Information deduction — the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known.
- Distinguishing algorithm — the attacker can distinguish the cipher from a random permutation.

Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions. In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts.
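Schneier's "certificational" point is easy to put into numbers. The sketch below assumes an arbitrary rate of 10^12 encryptions per second (an invented figure for illustration only; real hardware varies enormously):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_perform(bits, rate=1e12):
    """Expected years to perform 2**bits encryptions at `rate` per second."""
    return 2**bits / rate / SECONDS_PER_YEAR

brute_force = years_to_perform(128)  # exhaustive search of a 128-bit key
attack = years_to_perform(110)       # the hypothetical improved attack

# Both figures dwarf the age of the universe, yet the 2^110 attack still
# beats brute force by a factor of 2^18 -- a "break" in the
# certificational sense only.
print(f"2^128 brute force: {brute_force:.2e} years")
print(f"2^110 attack:      {attack:.2e} years")
print(f"speed-up:          {brute_force / attack:.0f}x")
```

The absolute numbers are meaningless outside this illustration; only the ratio between the two work factors matters for the argument.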
It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.

History of cryptanalysis

Cryptanalysis has coevolved with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: in order to create secure cryptography, you have to design against possible cryptanalysis. Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes. In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in cryptanalysis of the German ciphers — including the Enigma machine and the Lorenz cipher — and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the European war by up to two years to determining its eventual result.
The war in the Pacific was similarly helped by 'Magic' intelligence. Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today. In 2004, it was reported that the United States had broken Iranian ciphers. (It is unknown, however, whether this was pure cryptanalysis, or whether other factors were involved.)

Classical ciphers

Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. The first known recorded explanation of cryptanalysis was given by the 9th-century Arabian polymath Al-Kindi (also known as "Alkindus" in Europe) in A Manuscript on Deciphering Cryptographic Messages. This treatise includes a description of the method of frequency analysis (Ibrahim Al-Kadi, 1992). The Italian scholar Giambattista della Porta was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis. Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more frequently than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.
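The frequency-analysis procedure just described can be sketched in a few lines (the Caesar-shifted sample text is invented for illustration; real attacks need much longer ciphertexts to get representative counts):

```python
from collections import Counter

def frequency_order(text):
    """Letters of a text sorted from most to least common."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

# Toy substitution cipher: a Caesar shift of 3 (every letter replaced by
# the letter three places later in the alphabet).
plain = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG THE THE THE"
cipher = "".join(chr((ord(c) - 65 + 3) % 26 + 65) if c.isalpha() else c
                 for c in plain)

# 'E' is the most common letter in this plaintext, so its substitute 'H'
# tops the ciphertext frequency count -- a likely candidate for 'E'.
print(frequency_order(cipher)[0])   # → H
```

Mapping the full frequency ordering of the ciphertext onto the expected English ordering (E, T, A, O, ...) gives a starting guess at the whole substitution table, which is then refined by inspection.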
In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.

Ciphers from World War I and World War II

Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended. In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, when efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers — the first electronic digital computers to be controlled by a program.
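The repeating-key weakness that Babbage and Kasiski exploited can also be sketched. A minimal version of the Kasiski examination looks for repeated ciphertext fragments: their spacings are multiples of the key length (the cipher, key, and message here are invented for illustration):

```python
from functools import reduce
from math import gcd

def vigenere(text, key):
    """Encrypt an A-Z string with a repeating key (the Vigenere cipher)."""
    return "".join(chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, c in enumerate(text))

def kasiski(ciphertext, seq_len=3):
    """GCD of distances between repeated fragments; the key length divides it."""
    distances = []
    for i in range(len(ciphertext) - seq_len):
        nxt = ciphertext.find(ciphertext[i:i + seq_len], i + 1)
        if nxt != -1:
            distances.append(nxt - i)
    return reduce(gcd, distances) if distances else None

cipher = vigenere("ATTACKATDAWN" * 3, "KEY")
print(kasiski(cipher))   # → 12; every repeat distance is a multiple of the
                         # key length, so the true length (3) divides the result
```

Once a candidate key length is known, the ciphertext splits into that many interleaved simple-substitution streams, each of which falls to the frequency analysis described earlier.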
With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message. Poorly designed and implemented indicator systems allowed first the Poles and then the British at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine. Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth". This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message. Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕):

- Plaintext ⊕ Key = Ciphertext

Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext:

- Ciphertext ⊕ Key = Plaintext

(In modulo-2 arithmetic, addition is the same as subtraction.)
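These XOR relationships, and what goes wrong when the same key is reused so that messages stand in depth, can be sketched directly (the key and messages are invented; real depth-reading was a linguistic process, not a script):

```python
def xor_bytes(a, b):
    """Bitwise XOR ("modulo-2 addition") of equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = bytes(range(1, 30))                 # stand-in for a long running key
p1  = b"REINFORCEMENTS ARRIVE AT DAWN"
c1  = xor_bytes(p1, key)                  # Plaintext XOR Key = Ciphertext
assert xor_bytes(c1, key) == p1           # Ciphertext XOR Key = Plaintext

# Reusing the key puts a second message "in depth":
p2 = b"SUPPLIES EXHAUSTED SEND FUEL "
c2 = xor_bytes(p2, key)

# XORing the two ciphertexts cancels the shared key entirely,
# leaving only the combination of the two plaintexts:
merged = xor_bytes(c1, c2)
assert merged == xor_bytes(p1, p2)

# A correct guess ("crib") at one plaintext exposes the other:
crib = b"REINFORCEMENTS"
print(xor_bytes(merged[:len(crib)], crib))   # → b'SUPPLIES EXHAU'

# And a fully recovered plaintext gives back the key itself:
assert xor_bytes(p1, c1) == key
```

The same cancellation is what the depth equations below express symbolically; the code merely makes the bookkeeping concrete.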
When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts:

- Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2

The individual plaintexts can then be worked out linguistically by trying probable words (or phrases) at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component:

- (Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2

The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed:

- Plaintext1 ⊕ Ciphertext1 = Key

Knowledge of a key of course allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.

The development of modern cryptography

Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. The historian David Kahn notes: "Many are the cryptosystems offered by the hundreds of commercial vendors today that cannot be broken by any known methods of cryptanalysis.
Indeed, in such systems even a chosen plaintext attack, in which a selected plaintext is matched against its ciphertext, cannot yield the key that unlock[s] other messages. In a sense, then, cryptanalysis is dead. But that is not the end of the story. Cryptanalysis may be dead, but there is - to mix my metaphors - more than one way to skin a cat." Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field." However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:

- The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998.
- FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical.
- The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours, minutes or even in real time using widely available computing equipment.
- Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System.
- In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical.
WEP was later replaced by Wi-Fi Protected Access.

- In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated.

Cryptanalysis of symmetric ciphers

- Boomerang attack
- Brute force attack
- Davies' attack
- Differential cryptanalysis
- Impossible differential cryptanalysis
- Improbable differential cryptanalysis
- Integral cryptanalysis
- Linear cryptanalysis
- Meet-in-the-middle attack
- Mod-n cryptanalysis
- Related-key attack
- Sandwich attack
- Slide attack
- XSL attack

Cryptanalysis of asymmetric ciphers

Asymmetric cryptography (or public key cryptography) is cryptography that relies on using two keys: one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way. Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie-Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization — a breakthrough in factoring would impact the security of RSA.
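The "hard problem" dependence can be illustrated with a toy factoring routine. Trial division is vastly weaker than the algorithms real attacks use (quadratic sieve, number field sieve), and the primes below are toy-sized; the point is only that the defender's security rests on the attacker's cost of solving the problem:

```python
def trial_factor(n):
    """Factor n by trial division. The work grows roughly with sqrt(n),
    so each extra digit in n multiplies the effort -- a crude analogue
    of why RSA key sizes must outpace factoring progress."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# A toy "RSA-style" modulus: the product of two primes.
n = 10007 * 10009
print(trial_factor(n))   # → [10007, 10009]
```

Against a real RSA modulus of several hundred digits this loop would never terminate in practice; an algorithmic breakthrough that shrank the search is exactly the kind of event the surrounding text describes.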
In 1980, one could factor a difficult 50-digit number at an expense of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to do so as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key size to keep pace or other methods such as elliptic curve cryptography to be used.

Attacking cryptographic hash systems

Side-channel attacks

- Black-bag cryptanalysis
- Man-in-the-middle attack
- Power analysis
- Replay attack
- Rubber-hose cryptanalysis
- Timing analysis

Quantum computing applications for cryptanalysis

Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption.
See also

- Economics of security
- Information assurance, a term for information security often used in government
- Information security, the overarching goal of most cryptography
- National Cipher Challenge
- Security engineering, the design of applications and protocols
- Security vulnerability; vulnerabilities can include cryptographic or other flaws
- Topics in cryptography
- Zendian Problem

Historic cryptanalysts

- Conel Hugh O'Donel Alexander
- Charles Babbage
- Lambros D. Callimahos
- Alastair Denniston
- Agnes Meyer Driscoll
- Elizebeth Friedman
- William F. Friedman, the father of modern cryptology
- Meredith Gardner
- Friedrich Kasiski
- Dilly Knox
- Solomon Kullback
- Marian Rejewski
- Joseph Rochefort, whose contributions affected the outcome of the Battle of Midway
- Frank Rowlett
- Abraham Sinkov
- Giovanni Soro, the Renaissance's first outstanding cryptanalyst
- John Tiltman
- Alan Turing
- William T. Tutte
- John Wallis, 17th-century English mathematician
- Herbert Yardley

- "Cryptanalysis/Signals Analysis". Nsa.gov. 2009-01-15. Retrieved 2013-04-15.
- Schmeh, Klaus (2003). Cryptography and public key infrastructure on the Internet. John Wiley & Sons. p. 45. ISBN 978-0-470-84745-9.
- McDonald, Cameron; Hawkes, Philip; Pieprzyk, Josef, SHA-1 collisions now 2^52, retrieved 4 April 2012
- Schneier 2000
- For an example of an attack that cannot be prevented by additional rounds, see slide attack.
- Smith 2000, p. 4
- "Breaking codes: An impossible task?". BBC News. June 21, 2004.
- Crypto History
- Singh 1999, p. 17
- Singh 1999, pp. 45–51
- Singh 1999, pp. 63–78
- Singh 1999, p. 116
- Winterbotham 2000, p. 229.
- Hinsley 1993.
- Copeland 2006, p. 1
- Singh 1999, p. 244
- Churchhouse 2002, pp. 33, 34
- Budiansky 2000, pp. 97–99
- Calvocoressi 2001, p. 66
- Tutte 1998
- Churchhouse 2002, p. 34
- Churchhouse 2002, pp.
33, 86

- David Kahn Remarks on the 50th Anniversary of the National Security Agency, November 1, 2002.
- Tim Greene, Network World, Former NSA tech chief: I don't trust the cloud. Retrieved March 14, 2010.
- Stallings, William (2010). Cryptography and Network Security: Principles and Practice. Prentice Hall. ISBN 0136097049.
- Ibrahim A. Al-Kadi, "The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126.
- Friedrich L. Bauer: "Decrypted Secrets". Springer 2002. ISBN 3-540-42674-4
- Budiansky, Stephen (2000), Battle of wits: The Complete Story of Codebreaking in World War II, Free Press, ISBN 978-0-684-85932-3
- Calvocoressi, Peter (2001), Top Secret Ultra, Cleobury Mortimer, Shropshire: M & M Baldwin, ISBN 0-947712-41-0
- Churchhouse, Robert (2002), Codes and Ciphers: Julius Caesar, the Enigma and the Internet, Cambridge: Cambridge University Press, ISBN 978-0-521-00890-7
- Copeland, B. Jack, ed. (2006), Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4
- Helen Fouché Gaines, "Cryptanalysis", 1939, Dover. ISBN 0-486-20097-3
- David Kahn, "The Codebreakers - The Story of Secret Writing", 1967. ISBN 0-684-83130-9
- Lars R. Knudsen: Contemporary Block Ciphers. Lectures on Data Security 1998: 105-126
- Schneier, Bruce (January 2000). "A Self-Study Course in Block-Cipher Cryptanalysis". Cryptologia 24 (1): 18–34. doi:10.1080/0161-110091888754
- Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966.
ISBN 0-88385-622-0

- Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code Breaking, ISBN 978-0-470-13593-8
- Friedman, William F., Military Cryptanalysis, Part I, ISBN 0-89412-044-1
- Friedman, William F., Military Cryptanalysis, Part II, ISBN 0-89412-064-6
- Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of Aperiodic Substitution Systems, ISBN 0-89412-196-0
- Friedman, William F., Military Cryptanalysis, Part IV, Transposition and Fractionating Systems, ISBN 0-89412-198-7
- Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 1, ISBN 0-89412-073-5
- Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 2, ISBN 0-89412-074-3
- Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 1, ISBN 0-89412-075-1
- Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 2, ISBN 0-89412-076-X
- Hinsley, F.H. (1993), "Introduction: The influence of Ultra in the Second World War", in Hinsley & Stripp 1993, pp. 1–13
- Singh, Simon (1999). The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography. London: Fourth Estate. pp. 143–189. ISBN 1-85702-879-1.
- Smith, Michael (2000), The Emperor's Codes: Bletchley Park and the breaking of Japan's secret ciphers, London: Random House, ISBN 0-593-04641-2
- Tutte, W. T. (19 June 1998), Fish and I, retrieved 7 October 2010. Transcript of a lecture given by Prof. Tutte at the University of Waterloo
- Winterbotham, F.W. (2000), The Ultra secret: the inside story of Operation Ultra, Bletchley Park and Enigma, London: Orion Books Ltd, ISBN 978-0-7528-3751-2, OCLC 222735270

Further reading

- Bard, Gregory V. (2009). Algebraic Cryptanalysis. Springer. ISBN 978-1-4419-1019-6.
- Hinek, M. Jason (2009). Cryptanalysis of RSA and Its Variants. CRC Press. ISBN 978-1-4200-7518-2.
- Joux, Antoine (2009). Algorithmic Cryptanalysis. CRC Press.
ISBN 978-1-4200-7002-6.

- Junod, Pascal & Canteaut, Anne (2011). Advanced Linear Cryptanalysis of Block and Stream Ciphers. IOS Press. ISBN 978-1-60750-844-1.
- Stamp, Mark & Low, Richard (2007). Applied Cryptanalysis: Breaking Ciphers in the Real World. John Wiley & Sons. ISBN 978-0-470-11486-5.
- Sweigart, Al (2013). Hacking Secret Ciphers with Python. Al Sweigart. ISBN 978-1482614374.
- Swenson, Christopher (2008). Modern cryptanalysis: techniques for advanced code breaking. John Wiley & Sons. ISBN 978-0-470-13593-8.
- Wagstaff, Samuel S. (2003). Cryptanalysis of number-theoretic ciphers. CRC Press. ISBN 978-1-58488-153-7.

- Basic Cryptanalysis (files contain 5 line header, that has to be removed first)
- Distributed Computing Projects
- Simon Singh's crypto corner
- The National Museum of Computing
- UltraAnvil tool for attacking simple substitution ciphers
CHANGING AGE STRUCTURES IN POPULATIONS OF ZEBRA MUSSELS IN THE ST. CROIX NATIONAL SCENIC RIVERWAY

Byron Karns, National Park Service, St. Croix National Scenic Riverway

Zebra mussels have been a threat to the St. Croix watershed since the early 1990s. In 1992, the first mussels were discovered in the Mississippi above the confluence with the St. Croix River. The first boat with attached zebra mussels was discovered in 1994, and reproduction was pinpointed by 2000. There is a critical need to understand the implications of an ever expanding and increasing number of zebra mussels in the river. This is a high priority for the NPS, ACoE, and other natural resource management agencies. The NPS will gather information about the age structure of these populations to determine recruitment, growth rates, and mortality. This will aid in determining the effects of this animal on native fauna, including freshwater mussels. Anecdotal accounts of periodic but substantial zebra mussel die-offs in large river systems in the Midwestern U.S. have been noted in the last several years. Details from the Illinois and Upper Mississippi rivers suggest an early season recruitment followed by a late season population crash. However, these observations have been casual and not systematic or well documented. In order to predict impacts to river biota, an organized assessment of seasonal population dynamics of zebra mussels in a large river system is necessary. The St. Croix River is a 6th order system with moderate zebra mussel infestations within the downstream-most 22 miles, and especially in the lower 6 miles below the Kinnickinnic Narrows. In 2006, densities of this invasive animal reached over 700/m² within this last pool. The effects of large numbers of zebra mussels in freshwater systems in North America have been well documented. Particularly, native mussels have been severely impacted by direct food and oxygen competition and indirectly by shell colonization.
If, however, conditions in certain river systems allow for veliger settlement and establishment, but limit growth through maturity, the implications for management are numerous.

Keywords: Zebra Mussels, Population Dynamics, Lake St. Croix, Water Quality
Psychoactive substances can be defined as drugs that alter your mind and thought process. The abuse of psychoactive substances is not so easy to define. The American Psychiatric Association, or APA, published the fourth edition of the “Diagnostic and Statistical Manual of Mental Disorders,” commonly referred to as the DSM-IV. The APA uses this manual to define standards related to mental disorders, including drug abuse. The manual uses technical language that is mainly used for research purposes; however, many health care professionals try to simplify its definitions for everyday use in their practice. Many drug rehab facilities refer to the DSM-IV when treating their patients. Addicts who suffer from a dual diagnosis (a mental disorder in addition to their addiction problem) often require addiction treatment that can only be offered at a specialized drug rehab facility. When it comes to defining the abuse of psychoactive substances, a distinction must first be made between substance use and substance abuse. The following definitions have been suggested:

- Prescription drug use – The use of a medication in a socially accepted way. It is often recommended by a doctor or healthcare professional to control mood or state of mind. An example of prescription drug use would be a patient taking a medication prescribed to them by their physician in order to treat their anxiety. The medication does affect their mind and body, but it is being used in a recommended way.
- Prescription drug abuse – Problematic substance abuse. The use of a medication to alter or control state of mind in an illegal manner or a way that induces harm to one’s self. An example of this could be an individual stealing a family member’s medication and using it to get high. Substance abuse is dangerous. Even medications that are considered safe can be harmful when used inappropriately.
When discussing prescription drug use, it is also important to define the difference between drug addiction and physical dependence. Such a distinction helps prevent confusion with appropriate drug use (such as pain medication after a surgery or accident), which can potentially lead to a physical dependence.

- Drug addiction – The repeated, compulsive seeking or use of a drug despite the negative physical, social, or psychological effects that are caused by its use. Individuals with drug addictions will continue using a drug even if they don’t require it to treat a medical condition. An example of drug addiction can be seen in an individual who continues using a substance to get high despite the fact that they have missed days of school or have had an automobile accident while using.
- Physical dependence – Physical dependence results when an individual’s body becomes used to a medication and cannot properly function without it. Not all medications cause a physical dependence, though. People who are physically dependent on a medication cannot abruptly stop using the substance. Instead, they have to slowly wean themselves off of the medication, usually with the help of addiction treatment at a drug rehab center. If the medication is stopped suddenly, patients will suffer from physical withdrawal symptoms and will become very sick.

Just because someone is physically dependent on a certain substance or medication does not make them an addict. There is a distinction because people who are just physically dependent will want to stop taking a medication once they no longer require it to treat their medical condition. Addicts, on the other hand, will continue taking the medication to get high and have no goal of stopping. An example of physical dependence is a cancer patient who needs large amounts of pain medication and would experience a physical withdrawal if the use of that particular medication were to stop.
This type of patient wouldn’t seek out such a medication if they did not require it, so their dependency doesn’t necessarily make them a drug addict. Sometimes though, drug rehab is required to help individuals terminate their physical dependence. Individuals suffering from drug addiction can become physically dependent on the drugs that they use, but this isn’t always the case. Sometimes drugs won’t cause a physical dependence. However, it is always possible to develop a psychological dependence on a drug. At any rate, an addict will always seek a medication for the purposes of getting high, rather than to treat a medical condition.
GONE FOR A BURTON

Q From Nick Carrington: What's the origin of the phrase gone for a Burton, please?

A We wish we knew. In informal British English, something or someone who has gone for a Burton is missing; a thing so described might be permanently broken, missing, ruined or destroyed. The original sense was to meet one’s death, a slang term in the RAF in World War Two for pilots who were killed in action (its first recorded appearance in print was in the New Statesman on 30 August 1941). The list of supposed origins is extremely long, but the stories are so inventive and wide-ranging that you may find them intriguing:

- Spanish Burton was the Royal Navy name for a pulley arrangement that was so complex and rarely used that hardly anyone could remember what it was or what to do with it. Someone in authority who asked about a member of a working party might be told that he’d gone for a burton.
- The name of burton was given to a method of stowing wooden barrels across the ship’s hold rather than fore and aft. Though they took up less space this way, it was dangerous because the entire stowage might collapse and kill somebody.
- The term burnt ’un referred to an aircraft going down in flames.
- It refers to the inflatable Brethon life jacket at one time issued by the RAF.
- It was a figurative reference to getting a suit made at the tailors Montague Burton, as one might say a person who had died had been fitted for a wooden overcoat, a coffin (compare the full Monty).
- The RAF was said to have used a number of billiard halls, always over Burton shops, for various purposes, such as medical centres or Morse aptitude tests (one in Blackpool is especially mentioned in the latter context). To go for a Burton was then to have gone for a test of some sort, but to have failed.
- It was rhyming slang: Burton-on-Trent (a famous British brewing town in the Midlands), meaning “went”, as in went West.
- A pilot who crashed in the sea was said to have ended up in the drink; to go for a Burton was to get a drink of beer, in reference to Burton-on-Trent. So the phrase was an allusive reference to crashing in the sea, later extended to all crashes. - It is said that there was a series of advertisements for beer in the inter-war years, each of which featured a group of people with one obviously missing (a football team with a gap in the line-up, a dinner party with one chair empty). The tagline suggested the missing person had just popped out for a beer — had gone for a Burton. The slogan was then taken up by RAF pilots for one of their number missing in action as a typical example of wartime sick humour. There’s little we can do to choose one of these over the others. If the advertisements really did run before the War they would be the obvious source, though none have been traced and the most probable candidate, the Burton Brewery Co Ltd, closed in 1935 and was hardly well-known even before then. Whatever the truth, knowing a little about wartime pilots, my bet would be on some association with beer. [A version of this piece appears in my book Port Out, Starboard Home, which is available in a paperback edition from Penguin Books.] Page created 29 Oct. 2005
Expansion poses no geophysical problems--the planet just keeps on growing and expanding, wherever and in whatever form it occurs, but the annual increase in diameter (~5-10 cm/yr or ~2-4 in/yr) is very small and difficult to measure. Subduction, on the other hand, is purely hypothetical because it is based on a fundamental assumption that the planet has always been the same size since it was formed 4.5-4.6 billion years ago; something almost impossible to prove. This philosophical assumption requires that any addition of surface area to one part of the planet would require an equal compensatory loss in some other region of the planet. Maintaining a constant diameter, however, raises a number of troubling questions about the mechanics of subduction: a. Not generally realized is that subduction, at a minimum, would require the Pacific basin to decrease in width by at least the ~2-4 cm/yr increase in width of the Atlantic basin in order to maintain Earth at a constant diameter and permit the entire Pacific Ocean basin to be swallowed! But, for subduction to be valid, another ~8-16 cm/yr of East Pacific Rise (EPR) growth (the greatest rate of new seafloor growth on the planet [Fig. 2]) also must be swallowed, for a total minimum subduction rate of ~10-20 cm/yr (~4-8 in/yr). b. And to the above totals one must add an amount equal to additional seafloor growth along thousands of kilometers of midocean ridges in the Indian Ocean and around Antarctica. The Indian Ocean, which has opened even wider than the Atlantic, also has no evidence of subduction within its confines. How can worldwide seafloor growth in oceans outside the Pacific be vectored smoothly into the Pacific basin where the EPR is generating a prodigious volume of new seafloor in the middle of the Pacific subduction area? c. A major flaw in subduction dogma is the very young age of the oldest Pacific Ocean sediments ever found in the Pacific basin. 
These sediments were cored on Ocean Drilling Program (ODP) Leg 129 at Site 801B (18° 38.52´N, 156° 21.582´E, Central Pigafetta Basin, just east of the Mariana Trench) and were found to be only ~169 Ma (Middle Jurassic) in age; roughly equal to the oldest sediments found in the Atlantic Ocean. d. Using these ODP data and extrapolations from magnetic anomaly lineations (isochrons) in the same area, Nakanishi et al. arrived at a slightly older age of ~195 Ma, postulating “the shape of the early Pacific plate was a rough triangle” covering an area of 0.04×10⁶ km² at ~190 Ma, 0.6×10⁶ km² at ~180 Ma, and 3×10⁶ km² at ~170 Ma. The Pacific plate is now estimated to cover an area of 108×10⁶ km²—which means that the entire Pacific plate has been generated within the last ~195 Ma, thereby constraining the age of the Pacific basin to be no more than ~200-205 Ma. e. Proponents of subduction may argue that sediment ages less than ~200 Ma support their contention that all the older Pacific seafloor has been subducted since the Atlantic basin first opened ~160-175 Ma, and therefore none of the original Panthalassan seafloor can be found today. But this is only an inferred assumption and valid only if subduction has really existed. This is now a moot point because Heezen and Tharp’s map shows that Panthalassa (Wegener's eo-Pacific Ocean) never existed. f. If subduction were actually occurring to offset worldwide seafloor growth, there should be constant and sustained seismic activity reflecting disappearance of older seafloor at the same rate new seafloor is being generated. There is indeed a great deal of earthquake activity throughout the Ring of Fire, but it is not equally distributed around the Pacific Ocean perimeter commensurate with the constancy of new seafloor growth that must be vectored in from oceanic areas outside the Pacific basin. g.
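The rate bookkeeping in points (a) and (b) is simple enough to tally in a few lines. The sketch below merely restates the article's own cm/yr estimates; the figures are taken from the text, not from independent measurements:

```python
# Rate estimates quoted in points (a) and (b) above, in cm/yr.
# These ranges are the article's own figures, not independent data.
atlantic_widening = (2, 4)   # Atlantic basin widening (low, high)
epr_growth = (8, 16)         # East Pacific Rise seafloor generation (low, high)

# If Earth's diameter stays constant, the Pacific perimeter must swallow
# both the Atlantic's widening and the EPR's output each year.
min_subduction = tuple(a + e for a, e in zip(atlantic_widening, epr_growth))
print(min_subduction)  # (10, 20) -> the ~10-20 cm/yr minimum cited in the text
```

Seafloor growth along the Indian Ocean and Antarctic ridges (point b) would push the required rate higher still.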
There is no empirical proof that Pacific perimeter earthquakes are caused by subduction; this is inferred and purely hypothetical. There are more logical explanations such as crustal adjustments due to relaxed curvature and flattening of the Earth's crust as a consequence of expansion in diameter. Earthquakes, though powerful, are merely secondary effects of planetary expansion, not primary geophysical actions with independent motive power. h. Subduction offers no satisfactory causative mechanism able to force thin ocean floors only 10 km thick to dive beneath thick continental shields 25-40 km thick without leaving behind some physical evidence. There is no evidence of ocean floors and seamounts diving into the deep ocean trenches (the trenches show little or no sedimentation, and no toppled seamounts). As noted by Roger Revelle in 1955, material recovered from even the deepest trenches “resemble in many ways deposits laid down in shallow water.” i. This exposes a related problem--the missing soft sediments that should have been scraped off the ocean floor when descending beneath a rigid continental shield over a period of two hundred million years. These soft sediments are an unconsolidated top layer of ocean floor ~10 meters thick. Massive amounts of sediments should be piled up against continental shores, or in the deep ocean trenches off the eastern coasts of Asia and Australia, the western coasts of North and South America, or in the Aleutian Trench. The sediments just aren't there; the ocean trenches are relatively free of sediments and there are no mountains of soft sediments piled up against any Pacific shore. As noted above, subduction fails on several grounds. The current dogma of "subduction" is a theoretical concept with no physical evidence to verify it, nor a plausible causative mechanism to support the claim that one tectonic plate dives, or is driven, beneath an opposing plate.
Everything about subduction, including its origin, is based on pure hypothesis and speculation, beginning with an erroneous basic assumption that Earth’s diameter was fixed at the time of its creation. As explained in the simple hand demonstration showing subduction's fatal flaw, if subduction did exist, the Pacific Ocean basin must eventually be swallowed in its entirety if the Earth’s diameter is to remain constant. In fact, studies of Pacific plate movements that were intended to prove subduction, unwittingly included several measurements that show the Pacific Ocean basin to be increasing in width--not decreasing in width as required by subduction. The scientific literature contains countless papers purporting to prove subduction, but if examined closely, estimates of subduction velocities are usually inferred from midocean ridge growth rates, or are based on suggestive geophysical data without empirical measurements to prove the direction and velocity of motion. Benioff zones and deep-focus earthquakes, without directional evidence, are just as easily interpreted as obduction from beneath the continents—or, better, just a sudden shift of two crustal masses readjusting positions in response to expansion of the core and sheer gravitational weight. The epicenter depth of an earthquake bears no relationship to the direction of relative movements of the opposing masses that shifted and caused the earthquake, or the primary mechanism that caused the masses to shift. Such lack of plausible evidence forces one to question the dogma of subduction and plate tectonics. Furthermore, there is now ample geological evidence to validate the expansion theory, so subduction is no longer viable. With this evidence one may confidently postulate that Pangaea began to break up ~200 (~195-205?) 
Ma, at the end of the Triassic, when Asia and Australia broke away from North and South America to form the Pacific Ocean, followed by opening of the Atlantic, and all of today's oceans have been created since that moment in time. ©1999, St. Clair Enterprises (Page last updated 15 May 2001)
Royal palm (Roystonea regia).—E.R. Degginger/EB Inc. Any of about 2,800 species of flowering, subtropical trees, shrubs, and vines that make up the family Arecaceae (or Palmae). Many are economically important. Palms furnish food, shelter, clothing, timber, fuel, building materials, fibres, starch, oils, waxes, and wines for local populations in the tropics. Many species have very limited ranges; some grow only on single islands. The fast growth and many by-products of palms make exploitation of the rainforest appealing to agribusiness. The usually tall, unbranched, columnar trunk is crowned by a tuft of large, pleated, fan- or feather-shaped leaves, with often prickly petioles (leafstalks), the bases of which remain after leaves drop, often covering the trunk. Trunk height and diameter, leaf length, and seed size vary greatly. Small flowers are produced in large clusters. Among the most important palms are the sugar palm (Arenga pinnata, or A. saccharifera), coconut palm, date palm, and cabbage palmetto. This entry comes from Encyclopædia Britannica Concise. For the full entry on palm, visit Britannica.com.
9:30 am - 12:30 pm Chicago sits more than 600 miles from Gettysburg and more than 700 miles from Manassas and Atlanta, yet the city’s residents were intimately connected to the Civil War. Although the city was less than 30 years old when the war began, Chicago provided troops, supplies, and relief to the Union Army that proved critical to the war effort. In 1863 and 1865, the city hosted two important sanitary fairs, which raised money and supplies for wounded soldiers. Chicago also was home to Camp Douglas, which held Confederate prisoners of war. This seminar will reveal the interconnectedness between the Chicago home front and the Civil War battlefront, drawing largely on Newberry collection items. Newberry Teachers’ Consortium is a subscription program open to Chicago-area teachers.
Our eyes change as we grow older, starting around age 50. For example, our ability to see color decreases as we age, particularly blues and pale colors. Although getting older doesn’t result in poor eyesight for everyone, declining vision is common. These changes make it harder for mature eyes to read print and electronic communications without some accommodations. If people will have difficulty reading what you write, why bother? Communicating electronically or in print with older readers requires accommodations for this reality. Here are some tips for making information easier to read for persons living with vision challenges. 1) Use sans serif fonts. That’s French for text fonts that are “without serifs.” Serifs are those frills at the end of letters. Times New Roman is an example of a serif font. Those flourishes at the ends of the letters have the effect of making the letters run together, making it harder to distinguish individual letters. Sans serif font examples: Arial, Calibri, Tahoma, Verdana, Segoe UI. 2) Use a larger font size. The recommended minimum font size is 12 point for text in letters and print material. For PowerPoint presentations, use a minimum of 22 point fonts. Use 14 point for persons who have mild vision issues. Use 18 point or larger for persons with low vision. 3) Use dark colors for text (black, navy blue, etc.) on a light background, and especially avoid using blues, purples, and pale colors for your text. Pale blues are the worst, despite their popularity among some email users. 4) Backgrounds behind text should be light colored and uncluttered. It is much harder to read text that is overlaid on a photo or other busy background. Also, light colored text on black or other dark backgrounds is extremely hard to read for many persons with low vision. 5) Don’t use italics or underlining because they make letters run together (similar to the problem with serifs). Italics make it harder to distinguish between individual letters.
Underlining also makes it harder to distinguish between individual letters and words. Italics with underlining is worse yet. Because hyperlinks typically are light blue and underlined, using bold is a way to compensate. 6) Don’t use text effects, such as shadows, outlining, and 3-D. They make the text blurrier for older and tired eyes to read. 7) Increase white space between text by using short paragraphs, headings and subheadings, bullets, and left-justified text. Don’t clutter a document, PowerPoint slide, or webpage with a lot of clip art, animations, graphics, and photos. Keep it simple. 8) Use captions for photos, clip art, and graphics, especially if they are busy or small sized. If you have additional suggestions, please leave a comment for this posting in order to share it with others.
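Tips 3 and 4 both come down to contrast between text and background. One objective way to check a color pairing is the relative-luminance math from the WCAG accessibility guidelines; the sketch below is an illustration under that assumption (the WCAG formula and its 4.5:1 minimum come from those guidelines, not from this article):

```python
def _linear(channel):
    """Convert one 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(fg, bg):
    """Return the WCAG contrast ratio between two (r, g, b) colors."""
    def luminance(rgb):
        r, g, b = (_linear(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background: the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# A pale blue like (173, 216, 230) on white -- the pairing tip 3 warns against.
print(round(contrast_ratio((173, 216, 230), (255, 255, 255)), 1))
```

Black on white scores the maximum 21:1, while pale blue on white falls far below the 4.5:1 minimum that WCAG recommends for body text.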
Arteriosclerosis due to cholesterol One of the most common complications stemming from high, sustained cholesterol levels is arteriosclerosis. But what, really, is arteriosclerosis? To understand why it is so important to control blood cholesterol levels, consider the definition of arteriosclerosis, and take care to preserve your health. One of the most prevalent cardiovascular diseases is arteriosclerosis, which can pose a significant risk: it not only reduces quality of life, but can also lead to death. Definition of arteriosclerosis Arteriosclerosis is the fibro-proliferative thickening of the muscular or endothelial walls of small arteries and arterioles. It can also be considered a group of diseases characterized by damage to small- or medium-sized arteries. What is arteriosclerosis? Within arteriosclerosis, three different entities can be distinguished: * Atherosclerosis, which is characterized by the formation of atheromas (lipid accumulations) in the artery walls. These reduce the diameter of the arteries, so less oxygen reaches the tissues and there is a risk of ischemia and acute myocardial infarction. Atherosclerosis can be linked to arterial hypertension, blood vessel inflammation, and autoimmune processes, and can affect the arteries of different organs. * Mönckeberg sclerosis, which is characterized by fibrosis and calcification of the media (middle layer) of the arteries of the lower limbs; in this type of sclerosis there is no reduction in the caliber of the artery. * Arteriosclerosis itself. Knowing the definition of arteriosclerosis is helpful for understanding the consequences these diseases can bring. Information and prevention go hand in hand: the more you know about this and other topics, the better your chances of a good quality of life. Arteriosclerosis, along with other cholesterol problems, can be dangerous to your health.
Saturday, July 24, 2010 Stephen Jones: Walking With the Wind This is a post by contributing writer, Stephen Jones, who is a progressive political activist and a resident of Las Cruces, New Mexico. On August 15, 1906, a small group of women and men, all United States citizens, gathered in the town of Harpers Ferry, West Virginia for a conference on the grounds of nearby Storer College. In the misty dawn light the following morning this band of Americans met at the center of the old town, near a small brick structure, an old fire engine house known colloquially as “John Brown’s Fort.” The so-called fort was the site of the fiery abolitionist's last stand against the Virginia militia, which had marched down on him, under the command of Robert E. Lee, in Brown’s abortive raid on the U.S. arsenal in 1859. In homage to the location's abolitionist past, the 1906 gathering at Harpers Ferry walked silently single-file toward the old building, still intact. To pay honor to the ground beneath them, they removed their shoes before crossing the green field to the “fort.” 1906 was the second gathering of the group, but only the first on American soil. A year earlier these same Americans had been denied a public meeting place in Buffalo, New York, and in desperation instead assembled across the Niagara River at Erie Beach in Canada. The original assembly was led by W.E.B. DuBois, Frederick L. McGhee, Mary Burnett Talbert and William Monroe Trotter, among others. They included lawyers and educators, clergymen and U.S. military veterans. All of them were descendants of former slaves. Because they had been forced to meet on Canadian soil in 1905 they called themselves the “Niagara Movement.” What they sought for themselves and others was an equal place at the American table.
In an era dominated by vicious racial segregation, enforced by acts of terror, lynching, physical assault and jailings against any who dared to speak out for equality and justice, they were launching a historic campaign for civil and human rights in the United States. By 1909 the “Niagara Movement” took on its permanent organizational name when it became the NAACP, a multi-racial organization dedicated to the proposition that all persons are created equal. As the NAACP it led a national campaign for civil rights that, among other achievements, culminated in 1954, under the legal team led by Thurgood Marshall, with the landmark Brown v. Board of Education decision of the U.S. Supreme Court that struck down segregation in public schools and marked the beginning of the end of Jim Crow laws in the United States. Shirley Sherrod Fiasco: Fight for Equality Continues As we learned, all too clearly, from events earlier this week, no campaign for justice and dignity is ever really over in the United States. The same old demons, the same dark places in the American soul are never very far away. As we are all aware by now, on Monday a right-wing activist named Andrew Breitbart, acting in support of the racist Tea Party Express operation, distributed a doctored videotape of Shirley Sherrod, a U.S. Department of Agriculture employee, and daughter of a victim of racist violence. She was speaking to a rural southwest Georgia meeting of the NAACP and recalling events in her own life 24 years ago. Airing the faked tape over and over again, FOX and other right-wing media operations blanketed cable and the airwaves with screams of “black racism.” Chasing advertising dollars and ratings, and failing to engage in standard journalistic due diligence, CNN and other so-called “legitimate news” outlets immediately joined FOX in repeatedly airing the libelous videotape to their audiences. 
A day later, when the phony story unraveled, FOX, Breitbart and the rest of the right-wing media machine, in an incredible twist of Orwellian newspeak, claimed that somehow they, rather than Sherrod, had been wronged. There were no apologies. Not from FOX. Not even from CNN, though CNN did lead the way in airing the full 35-minute taped speech that not only exonerated Shirley Sherrod, but revealed her to be a gentle spokesperson for reconciliation and redemption. Rocked back on its heels for part of a day, FOX then tried to counter-attack by having its employees Sean Hannity, Bill O’Reilly and FOX’s popular talking-head Ann Coulter pronounce on-screen that Breitbart, the perpetrator of this fraud, had somehow been “set-up.” The FOX-Breitbart-Tea Party attack was, of course, only one in a long, long line of racist assaults emanating from the FOX and the rest of the right-wing media machine, and it was hardly the first originating with Breitbart. Sadly, the Agricultural Secretary, Tom Vilsack, and the Obama Administration unceremoniously ousted Sherrod without ever bothering to check the facts. Vilsack and the Administration were more concerned with damage control in a toxic cable-media cycle than standing up for the truth. Even more tragically, the venerable century-old NAACP joined in the wrongful repudiation of Sherrod, before finally righting themselves a day later. Freedom Ain't Free “Freedom ain’t free,” the famous civil rights activist Fannie Lou Hamer often said. The assault on Shirley Sherrod is not the first racist attack on a decent American citizen in our history, and it won’t be the last. Hopefully, the rest of us will be ready next time. None of us must ever again make the mistakes that were made by some of those from whom we should have expected leadership earlier this week. Fresh attacks on African Americans are invariably right around the corner. Similar attacks on Hispanics have been ongoing. 
As we all know, there is presently a human rights crisis in nearby Arizona. The 2010 Republican Party platform of Texas demands a fresh assault on the rights of women, and the searching out and “jailing” of lesbians and gay men. Freedom ain’t free, to repeat a phrase. The price of liberty, justice and freedom is eternal vigilance. We shall never be turned back. We had all best realize that we are all going to be in this for the long haul and get back to work. We must stand firm and walk with the wind. We, as Americans, have come far since that misty morning at Harpers Ferry in the summer of 1906 when the members of the Niagara Movement began blazing a trail across a green field, but we haven’t reached the end of that journey yet. “The battle for humanity is not lost or losing. All across the skies sit signs of promise,” W.E.B. DuBois proclaimed at the first meeting of the Niagara Movement. “The morning breaks over blood-stained hills. We must not falter, we may not shrink. Above are the everlasting stars.” To read more posts by Stephen Jones, visit our archive. Not that long ago, probably around 1997 maybe, I was in a MLK day parade which started out from the State Capitol in Austin, Tx and was headed a few miles away to the grounds of Huston-Tillotson College. About a block into it, I was on the center line, and a car came up the other way every now and then. I remember a black Town Car or something similar with dark windows and a window rolling down with a middle finger salute. In the same city, a few years earlier, in 1981, I looked around inside a pickup truck that I had gotten into, owned by a volunteer with the Urdy for City Council campaign. I had heard that there had been a firebombing of a truck, but had not heard anything more about it until I was sitting in it. The seat didn't fit and couldn't be bolted down, so you had to hang on to the sooty dashboard. These things still happen in supposedly urban, even progressive places. 
It is necessary to remember and to not allow the bigots to paint a different picture than is actually real. Posted by: Stuart Heady | Jul 24, 2010 3:02:25 PM The latest twist is that if you mention or acknowledge racial disparity either historically or in present times at all, that makes you a racist. You know the way Stephen Colbert does not see color? That. I run into that all the time with White people. They think that if they don't see or acknowledge that issue of race, then it just goes away and doesn't exist. God help you if you bring it up in daily conversation because people will walk away, even amongst the most educated. Posted by: qofdisks | Jul 25, 2010 8:07:54 AM I wouldn't be surprised. Americans are so good at ignoring history and living in a consumer dream world. Posted by: Pogue | Jul 25, 2010 12:47:42 PM This story points out the sad truth about the way people jump to conclusions about others without the whole story. It is especially damaging when it crosses the race barrier. Let us not forget those who are not pointed out in this piece. They are also the people of color who have been taught the ignorance of hate and racism. Those that point out the faults of the white man without pointing out the faults of everyone. We shall all continue to work on these issues and may God bless all of us as we continue to struggle through them. Posted by: Sid | Jul 25, 2010 7:20:16 PM Sounds like Sid likes to blame the victim. He probably thinks women who are raped "asked for it." Blaming the obviously oppressed by claiming they are the oppressors is a favorite ploy of oppressors or people who don't believe oppression is real. In the case of African-Americans, it's hard to say that white Americans have ever faced anything even remotely similar.
Posted by: Old Dem | Jul 25, 2010 9:11:35 PM I couldn't agree more, Old Dem, and while our friend Sid wishes to point out that minority of African Americans who have given up hope of national redemption and unity as somehow exemplar of a whole segment of our Americans family, the 104 year history of the NAACP, as outlined briefly in this piece proves otherwise. Were it not for the belief of the vast majority of African Americans, and the leadership they have shown over the decades to the rest this great and diverse nation, we might have devolved into the tribal hatreds we have seen in Europe or the Middle East or along the India/Pakistan divide. Thankfully we, as Americans have not. We are walking a different path. Let us continue to do so! If last week was a "teachable moment," as some have professed, the lesson Shirley Sherrod has taught us is that we can, as a people come together over a troubled past. She is right. As a Congressman from Georgia, John Lewis, a leader of the African American communities, and also of all Americans; a man who was a key leader of the Civil Rights movement long before he took a seat in Congress has repeatedly told us, we are, all of us, in our diversity, a beloved community. Diversity is what makes Americans a great and exceptional nation. In 2001 John Lewis said, "As we begin a new century, we must move our feet, our hands, our hearts, our resources to build and not to tear down, to reconcile and not to divide, to love and not to hate, to heal and not to kill. I hope and pray that we continue our daring drive to work toward the Beloved Community. It is still within our reach. Keep your eyes on the Prize." Posted by: Stephen Jones | Jul 26, 2010 1:45:26 AM I am amazed at the ignorance in the last two comments. Did you not learn anything about what happened. You say a rape victim should have blame. How sick are you? Sherrod in her own statements pointed out her own prejudices and her turn in time and how we can learn from them. 
This is not about a white oppressor; it is about ignorance. Stephen Jones and Old Dem prove the point how those of us who are of color have also been taught hate of race by our own people. How quick we are to make it a one-sided topic to promote racial hate. I am disgusted by the last two comments and have not the words to explain how you are trying to perpetuate the separation of race to promote a blog. There is so much to learn from each other and we still cannot have a discussion without trying to give excuses for hate. There is none. There is no reason for racial hate to be acceptable or excusable. Where do you get your hate? It is sad to know that you can freely express it. Posted by: Sid | Jul 26, 2010 9:42:06 PM Whenever Sid is cornered he changes the subject. He talks about the ignorance of others but shows only ignorance himself by refusing to understand the realities of both recent events and history. It is not racism to understand that racism is alive and well in America and how we got here. Read some history, Sid. The "faults" of brown and black people are nowhere near equatable with what has been perpetrated in this nation for hundreds of years. Posted by: Old Dem | Jul 27, 2010 9:27:44 AM That still does not make racism acceptable, EVER! Period. End of Story. Ms. Sherrod told a great story of how to overcome that and Old Dem just cannot understand that. Posted by: Sid | Jul 27, 2010 10:45:06 PM
Rafinesque’s family moved to France the year following his birth, and at age nineteen Rafinesque became an apprentice in the mercantile house of the Clifford Brothers in Philadelphia. He returned to Europe in 1805 and spent the next decade in Sicily, where he was secretary to the U. S. consul. During this time his first scientific books were published. He returned to the United States in 1815 and remained in America the rest of his life, becoming a naturalized citizen in 1832. He was professor of botany and natural science at Transylvania University in Lexington, Kentucky from 1819 to 1826. The early conclusion by Rafinesque that the taxonomic categories called species and genera are man-made generalizations which have no physical existence led to his deep appreciation of variation in plants. He understood that such variation, through time, will lead to the development of what we call new species. But he had no explanation for the cause of variation, though he did consider hybridity a possible mechanism and, without calling it that, he had what appears to be some perception of mutation. Hence, he never developed a theory of evolution earlier than Darwin, as sometimes has been claimed, because Rafinesque had no inkling of natural selection and his understanding of geological time was far too shallow.
Entrance to Fort McHenry in 1925 (Baltimore Sun ) The quickly constructed, earthen star-shaped Fort Whetstone, built in 1776, was the first fortification to occupy the site where Fort McHenry now stands. The city's vulnerability to a waterborne attack was exposed in the spring of 1776 when the British sloop Otter sailed unchallenged up the Chesapeake Bay. Baltimoreans quickly sprang into action, and the Maryland Council of Safety began to supervise construction of the fort on Whetstone Point, which was to include an 18-gun battery. In order to keep out and frustrate any enemy ship trying to enter the harbor, three iron chains suspended by floating blocks of wood were stretched from the fort across the Patapsco River to Lazaretto Point. A small passage was left open for friendly shipping to pass through (though to ensure their friendliness, they were required to sail directly under the battery's guns). The feared British attack during the Revolutionary War never came, and by 1780, with the fort in decline, everything except the cannons and furniture was ordered sold. When England declared war on France in 1793, the fort, which had remained under Maryland control, was turned over to the federal government. The next year, Congress authorized and embarked on a massive program of constructing coastal forts along the Atlantic Coast to protect shipping and cities. Maj. John Jacob Ulrich Rivardi had been appointed by Secretary of War Henry Knox to oversee construction of the present Fort McHenry, which began in 1794. The fort was named for Marylander James McHenry, a signer of the Constitution and member of the Continental Congress who served as the nation's third secretary of war from 1796 to 1800. The design of the pentagonal structure, which was completed by 1803, was the concept of French engineer Jean Foncin. 
Secretary of War William Eustis wrote in 1811 that the fort was "a regular pentagon of masonry, calculated for thirty guns, a water battery, with ten heavy guns mounted, a brick magazine that will contain three hundred barrels of powder, with brick barracks for two companies of men and officers; without the fort, a wooden barracks for one company, also a brick store and gun house." The fort's only trial by fire wasn't long in coming. In 1812, the United States declared war on England, and two years later, after marching on and burning Washington, British forces turned their attention to Baltimore, where they attempted an invasion by land and sea. A British flotilla of 50 warships rode at anchor off North Point, 12 miles from the city, on Sept. 11, 1814. As tensions mounted, Gen. Sam Smith, who was in charge of the defense of the city, made sure his force of 15,000 men was ready for the inevitable attack. It was raining early Sept. 13 when a furious bombardment commenced. It was over by dawn the next day. The fort had held by virtue of its plucky band of defenders, and the British fleet was in retreat. Its bravery and steadfastness inspired Francis Scott Key, who once described the War of 1812 as a "lump of wickedness," to pen what became "The Star-Spangled Banner." The city had been spared, and it was the first and last time the fort's batteries would fire on an enemy. The fort was abandoned by the federal government in 1860, and revived again with the outbreak of the Civil War. During the Civil War, members of the state legislature and other Marylanders sympathetic to the Confederacy were imprisoned at the fort. After the Battle of Gettysburg, more than 7,000 Confederate prisoners were incarcerated there. As Fort McHenry's strategic usefulness began to fade, the decision was made to close it in 1912, with its land being redeveloped into a cattle quarantine station. 
An editorial in The Baltimore Sun decried this plan as a "desecration," and suggested that "prominent Maryland men and women bring this to the attention of Congress." A July 4, 1911, Sun editorial said, "Fort McHenry should be converted into a national park by act of Congress, reserved forever as a possession of the nation." On July 20, 1912, the 141st Coastal Artillery Co., the last unit at the fort, departed for Fort Strong in Boston harbor, bringing down the curtain on 110 years of military usefulness. "The flag, which the British could not shoot away, was hauled down by a lone soldier. No bugle sounded retreat; no soldiers stood with bared heads. Fort McHenry was dead," reported The Sun. It didn't take long for the neglected fort to become overrun with weeds. A headline in The Sun lamented, "Poor Old Fort McHenry — Can't Somebody Do Something?" On May 20, 1914, The Sun reported that by a unanimous vote, the city's Board of Estimates agreed to accept the federal government's offer of Fort McHenry for use as a public park. Several months later, The Sun reported, Baltimoreans climbed aboard the No. 2 streetcar that took riders to the fort's gate where they roamed the grounds. The Sun said it had become a "popular destination for families who spend the day on the reservation." It was another war that once again brought Fort McHenry back to military usefulness. In 1917, the Army established General Hospital No. 2, a 3,000-bed facility that was used to care for wounded soldiers from Europe. The fort was closed in 1920, and two years later, the temporary hospital building that had been erected by the Army during the war was torn down. Fort McHenry was declared a national park in 1925 and placed under the administration of the War Department. During the 1930s, the Works Progress Administration began to restore Fort McHenry to its present 19th-century appearance. It came under the authority of the National Park Service in 1933. 
History of sorts was made again June 14, 1922, when President Warren G. Harding arrived at the fort to dedicate a memorial to Francis Scott Key. It was the first time an American president's voice had been broadcast on the radio.
Open front unrounded vowel

The open front unrounded vowel, or low front unrounded vowel, is a type of vowel sound used in many spoken languages. According to the official standards of the International Phonetic Association, the symbol in the International Phonetic Alphabet that represents this sound is ⟨a⟩. In practice, however, it is very common to approximate this sound with ⟨æ⟩ (officially a near-open (near-low) front unrounded vowel) and to use ⟨a⟩ for an open (low) central unrounded vowel. This is the normal practice, for example, in the historical study of the English language. The loss of separate symbols for the open and near-open front vowels is usually considered unproblematic, since the perceptual difference between the two is quite small, and very few, if any, languages contrast the two. See open central unrounded vowel for more information. To make it explicit that the vowel in question is front, symbols such as [a̟] ([a] with the "advanced" diacritic) or [æ̞] (lowered [æ]) may be used, the latter being more common. The IPA prefers the terms "close" and "open" for vowels, and the name of this article follows that practice. However, a large number of linguists, perhaps a majority, prefer the terms "high" and "low", and these are the only terms found in introductory textbooks on phonetics such as those by Peter Ladefoged.

Features:
- Its vowel height is open, also known as low, which means the tongue is positioned as far as possible from the roof of the mouth – that is, as low as possible in the mouth.
- Its vowel backness is front, which means the tongue is positioned as far forward as possible in the mouth without creating a constriction that would be classified as a consonant.
This subsumes central open (central low) vowels because the tongue does not have as much flexibility in positioning as it does in the mid and close (high) vowels; the difference between an open front vowel and an open back vowel is similar to the difference between a close front and a close central vowel, or a close central and a close back vowel.
- It is unrounded, which means that the lips are not rounded.

Most, if not all, languages have some form of an unrounded open vowel. For languages that have only a single low vowel, the symbol for this vowel, ⟨a⟩, may be used because it is the only low vowel whose symbol is part of the basic Latin alphabet. Whenever marked as such, the vowel is closer to a central [ä] than to a front [a].

Occurrence:

| Language | Word | IPA | Meaning | Notes |
|---|---|---|---|---|
| Arabic (Levantine)[1] | بان | [baːn] | 'he/it appeared' | See Arabic phonology |
| Bengali | পা pa | [pa] | 'leg' | See Bengali phonology |
| Catalan (Majorcan) | sac | [sac] | 'sack' | Corresponds to ä in other varieties. See Catalan phonology |
| Chinese (Cantonese) | 沙 saa1 | [saː˥] | 'sand' | See Cantonese phonology |
| Chinese (Mandarin) | 他 tā | [tʰa˥] | 'he' | See Mandarin phonology |
| English (Canadian)[3] | hat | [hat] | 'hat' | Depending on the region, the quality may vary from front [a] to central [ä] or even further back [ɑ] (in some Scottish and Ulster accents, for example); the length may also vary. Many speakers may have [æ] (or even [ɛ], in the case of older RP speakers and some Southern English dialects) instead. For the Canadian vowel, see Canadian Shift. See also English phonology |
| English (Cockney)[5][6] | stuck | | 'stuck' | Can also be [ɐ̟]. |
| English (Inland Northern American)[7] | stock | | 'stock' | See Northern cities vowel shift |
| German (Bernese) | drääje | [ˈtræ̞ːjə] | 'turn' | See Bernese German phonology |
| Gujarati | શાંતિ shanti | [ʃant̪i] | 'peace' | See Gujarati phonology |
| North Frisian | braan | [braːn] | 'to burn' | |
| Polish[8] | jajo | [ˈjajɔ] | 'egg' | Fronted allophone of /a/ [ä] between palatal or palatalized consonants. See Polish phonology |
| Spanish (Eastern Andalusian)[9] | las madres | [læ̞(h) ˈmæ̞ːð̞ɾɛ(h)] | 'the mothers' | Corresponds to ä in other dialects, but in these dialects they're distinct. See Spanish phonology |
| Vietnamese | xa | [saː] | 'far' | See Vietnamese phonology |
| Welsh | mam | [mam] | 'mother' | See Welsh phonology |

Footnotes:
- Thelwall & Sa'Adeddin (1990:38)
- Ternes & Vladimirova-Buhtz (1999)
- Boberg (2005:133–154)
- Wells (1982:305)
- Hughes & Trudgill (1979:35)
- W. Labov, S. Ash and C. Boberg (1997). "A national map of the regional dialects of American English". Department of Linguistics, University of Pennsylvania. Retrieved March 7, 2013.
- Jassem (2003:106)
- Zamora Vicente (1967:?)
- Merrill (2008:109)

References:
- Boberg, C. (2005), "The Canadian shift in Montreal", Language Variation and Change 17: 133–154
- Hughes, Arthur; Trudgill, Peter (1979), English Accents and Dialects: An Introduction to Social and Regional Varieties of British English, Baltimore: University Park Press
- Jassem, Wiktor (2003), "Polish", Journal of the International Phonetic Association 33 (1): 103–107, doi:10.1017/S0025100303001191
- Ladefoged, Peter (1999), "American English", Handbook of the International Phonetic Association (Cambridge Univ. Press): 41–44
- Merrill, Elizabeth (2008), "Tilquiapan Zapotec", Journal of the International Phonetic Association 38 (1): 107–114, doi:10.1017/S0025100308003344
- Ternes, Elmer; Vladimirova-Buhtz, Tatjana (1999), "Bulgarian", Handbook of the International Phonetic Association, Cambridge University Press, pp. 55–57, ISBN 0-521-63751-1
- Thelwall, Robin; Sa'Adeddin, M. Akram (1990), "Arabic", Journal of the International Phonetic Association 20 (2): 37–41, doi:10.1017/S0025100300004266
- Wells, J.C. (1982), Accents of English, 2: The British Isles, Cambridge: Cambridge University Press
- Zamora Vicente, Alonso (1967), Dialectología española (2nd ed.), Biblioteca Romanica Hispanica, Editorial Gredos
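The "advanced" and "lowered" diacritics discussed above are, in modern text, ordinary Unicode combining characters attached to a base letter. A minimal Python sketch (standard library only; the code points are general Unicode facts rather than anything stated in the article) showing how [a̟] and [æ̞] are composed:

```python
import unicodedata

# [a̟]: 'a' plus the IPA "advanced" diacritic (COMBINING PLUS SIGN BELOW)
advanced_a = "a" + "\u031f"
# [æ̞]: 'æ' plus the IPA "lowered" diacritic (COMBINING DOWN TACK BELOW)
lowered_ae = "\u00e6" + "\u031e"

for symbol in (advanced_a, lowered_ae):
    # Each symbol renders as a single glyph but consists of two code points
    names = [unicodedata.name(ch) for ch in symbol]
    print(symbol, names)
```

This is why such symbols "may not display correctly in some browsers": correct rendering depends on the font stacking the combining mark under the base letter.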
An appetite for food addiction? Many of us enjoy foods that are high in sugar, fat, salt, or a combination of the three; take savoury biscuits for example. Dr. David Kessler’s The End of Overeating explores in detail the art and science behind the creation of highly palatable foods. Despite their appeal, most of us are able to exhibit adequate control when consuming or over consuming these foods. However, there is a subset of the population for whom control over these foods becomes problematic and can result in unhealthy weight gain or obesity. For these individuals, consumption can become life threatening. Why is it that some who wish to reduce their intake of these foods are not able to do so? While some may point to weak willpower or misplaced motivation, prominent neuroscientists have suggested that the failure to regulate eating behaviours is symptomatic of a food addiction. Dr. Nora Volkow, the effervescent director of the National Institute on Drug Abuse and one of the most vocal advocates of the brain disease model of addiction, has argued that some forms of obesity should be included as a mental disorder in the latest iteration of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The case for food addiction is based on several lines of evidence. Animal and human neuroimaging studies have shown that some foods, particularly those high in sugar, fat and/or salt, can produce changes in the brain similar to those produced by addictive drugs, such as cocaine and heroin. This is unsurprising given that drugs of addiction were originally shown to act on the neural pathways that mediated everyday rewarding activities, such as eating (see here for a thoughtful exposition). There also appears to be a common genetic risk for vulnerability to drug addiction and obesity. Patterns of eating in obese individuals, in particular those with binge eating disorder, also closely resemble key behaviours exhibited by drug abusers. 
Leaving aside concerns regarding the strength of the evidence presented in favour of food addiction and an addiction model of obesity (see Ziauddeen et al. and Epstein et al. for informative reviews), what would be the social and clinical implications of labelling overeating and certain subsets of obesity as an addiction? It has been argued that food addiction could improve our understanding of obesity and the development of more effective treatments and policy measures to reduce over consumption. Its effect on stigma is less certain. Obese individuals may come to view their weight and eating as something outside of their control; they are not just suffering from urges that encourage weight gain but are suffering from a ‘brain disease’ that causes them to overeat. Research is needed to understand what implications an addiction model of obesity will have upon individuals struggling to reduce their weight and to control their eating. Dr. Robert Lustig and colleagues, among others, have argued that neuroscientific research on the addictive properties of certain foods provides compelling evidence for policies that reduce their consumption across the population, such as taxes and regulations on the sale and promotion of sugar. There is already strong epidemiological evidence for the efficacy of population-based policies, as well as practical evidence from those used to regulate tobacco. Neuroscience may in fact be used to promote a high-risk approach focussing primarily on individuals with or at risk of food addiction. This would be counterproductive as excess weight is a global public health concern whereby a shift away from processed energy-dense foods to those that promote health would be beneficial to most, not simply those deemed to be suffering from a food addiction. Historical behaviour of the alcohol, tobacco and gambling industries suggests that the food industry is likely to exploit this view. 
Our research (forthcoming) suggests that the public, while supporting the view that certain foods can be addictive, does not support commonly advocated population-based policies, such as advertising restrictions and food taxes, to improve levels of obesity. The reasons behind this warrant further investigation. While the current enthusiasm for animal and human neuroimaging studies of overeating warrants thoughtful consideration, scientists and researchers should also consider the potential social and policy implications of their findings. The clinical impact of food addiction diagnoses on the ability to reduce overeating and weight also needs to be examined further. Contrary to popular accounts, an addiction model of obesity may actually reduce obese individuals’ ability to control their eating and weight.
TV time: Why children watch multi-screensAugust 3rd, 2011 in Health New research published in BioMed Central's open access journal, International Journal of Behavioral Nutrition and Physical Activity, examines the relationship children have with electronic viewing devices and their habits of interacting with more than one at a time. A sedentary lifestyle, linked to spending lots of time watching TV and playing computer games, is thought to lead to obesity, lower mental well-being, and cause health problems in later life, including diabetes. It is now possible to watch TV 'on demand' via the internet, play computer games on laptops, on hand-held devices or mobile phones, to keep in contact with friends using text, Facebook, Skype, and MSN, and to do all this concurrently. However previous studies have not examined if children take part in multi-screen viewing or children's reasons for doing so. Questioning 10-11 year olds, researchers at the University of Bristol and Loughborough University found that the children enjoyed looking at more than one screen at a time. They used a second device to fill in breaks during their entertainment, often talking or texting their friends during adverts or while they were waiting for computer games to load. TV was also used to provide background entertainment while they were doing something else especially if the program chosen by their family was 'boring'. Dr Jago from the University of Bristol explained, "Health campaigns recommend reducing the amount of time children spend watching TV. However the children in this study often had access to at least five different devices at any one time, and many of these devices were portable. This meant that children were able to move the equipment between their bedrooms and family rooms, depending on whether they wanted privacy or company. 
So simply removing the TV from a child's room may not be enough to address the health concerns and we need to work with families to develop strategies to limit the overall time spent multi-screen viewing wherever it occurs within the home." Provided by BioMed Central "TV time: Why children watch multi-screens." August 3rd, 2011. http://medicalxpress.com/news/2011-08-tv-children-multi-screens.html
As you enter a building on the northern rim of the campus at the University of Leicester in central England, the colorful signboard that catches your eye suggests science at the cutting edge. Whether or not the Space Research Center deserves this tag, cutting-edge would be absolutely the right description for the work being carried out by one of the building’s other inhabitants. Sarah Hainsworth is an engineer who, for the past few years, has been investigating the sharpness of knives — and not just knives, but screwdrivers, scissors and even ballpoint pens. Any implement, in fact, that has featured as a murder weapon. “If you give somebody a knife and ask them if it’s sharp, how do you measure that? How do you quantify sharpness?” Hainsworth said. “I suppose that’s what we’ve been trying to do.” The cutlery industry has measured the effectiveness of the slicing edges of blades designed for chopping up vegetables or carving meat, and how best to maintain that sharpness, she says. “But what nobody has really done is to investigate the sharpness of points,” she said. They certainly have now. Hainsworth sometimes spends whole afternoons dropping knives from various heights on to foam or legs of pork — a close substitute for human skin — and recording the results in finest detail. The ease with which a blade penetrates the human body has become an important consideration in court cases. “When there’s been a stabbing, one of the common defenses is: I didn’t mean to kill; the knife was extremely sharp,” Hainsworth said. The implication is that the accused is less guilty because they had not used extreme force. Hainsworth began looking at knife sharpness in 1999 when she was leading an undergraduate project about materials that might improve their ability to retain a keen edge. Because of this work she was approached five years ago by a solicitor. “A knife had been used in a murder,” she said. “It had made a wound of significant depth through the chest and into the spine. 
The accused had said he always kept his knives extremely sharp. I was asked whether I could verify this so they could use it as a defense.” After testing the knife, she established that it was not sharp. “I imagine the solicitor rejected that line of defense,” she said. It was while working on that case that Hainsworth realized there was a gap in knowledge. The existing research had been short-term and was out of date. “What we really needed to do was look at this with some of the modern tools that are available to engineers,” she said. This lack of convincing research was posing a problem for the criminal justice system, she said. “The courts in the first instance will ask the forensic pathologist for an indication of the degree of force required in a stabbing offense and the pathologist might assess the sharpness by testing the blade on their fingers. When you’re trying to communicate to a jury how sharp something is, I don’t think that is a particularly convincing assessment. And it’s open to criticism from the barrister on the opposing side that the method is unscientific,” she said. Is Hainsworth’s work likely to help more people mount successful defenses against the most serious charges of violence involving knives? She cites two cases in response. “In one, the weapon was relatively blunt and the prosecution barrister argued that because it was blunt, considerable force was used, and that influenced the sentence. I had another case where I found the weapon to be sharp, and the barrister argued that the accused had knowingly kept the knife very sharp and had taken it to use as a weapon in that knowledge,” she said.
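The drop tests described above have simple free-fall physics behind them; a minimal sketch (the knife mass and drop height are illustrative assumptions, not figures from Hainsworth's work):

```python
G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg: float, drop_height_m: float) -> float:
    """Kinetic energy (J) of a freely dropped knife at impact: E = m * g * h."""
    return mass_kg * G * drop_height_m

def impact_speed(drop_height_m: float) -> float:
    """Impact speed (m/s) from free fall: v = sqrt(2 * g * h)."""
    return (2 * G * drop_height_m) ** 0.5

# e.g. a 0.2 kg kitchen knife dropped from 1.5 m
print(round(impact_energy(0.2, 1.5), 2), "J")    # 2.94 J
print(round(impact_speed(1.5), 2), "m/s")        # 5.42 m/s
```

Controlling drop height fixes the impact energy, which is what lets penetration depth in a consistent target material serve as a comparative measure of point sharpness.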
Blast From the Transit Solution Past: New York's Moving Sidewalks Ever wonder what transportation innovation looked like in 1903? Here's a fun one, courtesy the New York Public Library's Digital Gallery: This design for a series of "underground moving sidewalks" ran in the February 28, 1903 edition of Harper's Weekly, alongside an article about how the city ought to be tackling congestion issues on the eve of new bridge connections bringing commuters and travelers from Brooklyn into New York City. The key issue for the early 20th century transportation planners behind these "moving sidewalks" was to reduce congestion caused by massive crowds of people crossing the East River and to make it easier for them to get to the subway, elevated train and surface transit terminals in Manhattan. The newest proposition to solve this problem is now before the Board of Estimate, which has referred it to the Rapid Transit Commission. It is popularly known by the misnomer, "Moving Sidewalks." It is really a system of moving platforms or continuous trains. Men like [railroad magnate] Cornelius Vanderbilt, Stuyvesant Fish [president of the Illinois Central Railroad], E.P. Ripley [president of the Atchison, Topeka and Santa Fe Railway], and others are interested in the new plan, and the engineers not only pronounce it feasible, but extremely economical. The moving platform is simply the improvement of the continuous trains that were in operation at the Chicago and Paris Expositions, and that carried millions of people along at a good rate of speed and in absolute comfort without accident. The plan involved a loop of moving platforms running from Bowling Green at the bottom tip of Manhattan and up the east side of the island, connecting with the Brooklyn, Manhattan, and Williamsburg bridges. The system would run in subway-like tunnels about 30 feet wide, with stations every two blocks, on a six-mile loop. 
About 10,600 platforms would be needed for such a system, and they would be arranged with three separate tracks: two stepping platforms, one running at 3 mph and the second at 6 mph, and the main platform, which would have seating and run at 9 mph. According to the May 8, 1903 edition of The New York Times, concerns that the moving sidewalks would be prohibitively expensive, requiring unheard-of 5-cent fares, would likely doom the project. An October 9, 1903 edition of the Times reports that the rapid transit commission recommended "immediate adoption" of the plan with a $3 million outlay, but that obviously never happened. Gothamist suggests that "Brooklyn Rapid Transit had a hand in burying the idea, as they had a monopoly on the borough's public transit at the time." The idea was originally proposed by a New Jersey merchant named Alfred Speer in 1871, and was eventually put into action during the World's Columbian Exposition in Chicago in 1893, with a moving sidewalk designed by Joseph Lyman Silsbee. Breakdowns were reportedly common. The design would be improved upon and installed again at the Paris Exposition Universelle in 1900. That moving sidewalk can be seen in action in this film, shot by one of Thomas Edison's associates at the exposition. After that, the idea seems to have petered out. Smithsonian's Paleofuture blog has a nice rundown of the history of moving sidewalks – from Speer's original idea in 1871 all the way up to The Jetsons. But outside of failed urban projects from the 1900s and futuristic cartoons, the closest thing we've got to this potentially amazing public transit idea is the highly watered-down moving walkways so common in airports around the world.
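The logic of the three-track arrangement is easier to see with a little arithmetic; a quick sketch using the speeds and loop length quoted above:

```python
# The three moving tracks described above, plus the stationary station floor (mph)
platform_speeds_mph = [0, 3, 6, 9]

# Riders step only between adjacent tracks, so every transfer involves a
# relative speed of just 3 mph -- which is what made boarding a 9 mph
# seated platform practical.
steps = [fast - slow for slow, fast in zip(platform_speeds_mph, platform_speeds_mph[1:])]
print(steps)  # [3, 3, 3]

# Time to ride the full 6-mile loop on the 9 mph main platform:
loop_minutes = 6 * 60 / 9
print(loop_minutes)  # 40.0
```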
Channel Tunnel Geology
C. S. Harris

A summary of the geology of the Channel Tunnel, compiled by a professional geologist who was consultant geologist during Channel Tunnel construction. From Geology Shop. This page is an attempt to provide a summary, based on existing published data, of the general geology of the Channel Tunnel. It has been published with the kind permission of Eurotunnel. In 1833 the French engineer Thome de Gamond began the first systematic geological and hydrographic survey of the Channel. The data obtained allowed him to propose a number of designs for a crossing. His work is the first example of an engineer recognising the importance of the regional geological framework to the design of a Channel crossing. Many more were to follow in the footsteps of de Gamond culminating in the work for the site investigation for the completed project. Much of this work concentrated quite sensibly on the continuity of the potential tunnelling horizons across the Channel based on surface mapping, shallow borehole data and geophysical information. Thus the overwhelming majority of work concentrated on rocks of Late Cretaceous age, with some work on the Gault Clay and Folkestone Beds. Apart from a study for the potential influx of hazardous gases, mainly from rocks of Carboniferous age, very little work on pre-Cretaceous data was undertaken, even in the later site investigations.
Map showing the regional geology of SE England and NW France.

Structural data were considered to be important both for the stability of the excavations and for the long-term stability of the tunnels (earthquakes). The structural control exercised on the Mesozoic and Palaeogene rocks by the largely concealed Palaeozoic rocks of the London-Brabant massif was not fully documented (Shephard-Thorn et al., 1972) even by the time of the later site investigations. Results obtained by the geotechnical department of TML (the contractor), together with a number of key papers by leading academics (Mortimore and Pomerol, 1991) since construction began, suggest that lateral changes in the stratigraphy and structure can be related back to observable structures in the underlying Palaeozoic rocks.

Figure showing the configuration of the three tunnels.

The region includes southeast Kent, the Strait of Dover and adjacent parts of northwest France. It is located at the eastern end of the major Wealden-Boulonnais Anticlinorium (Anticline), which straddles the region. This major structure separates the Palaeogene basins of the southern North Sea and the eastern English Channel. The sea-bed outcrops of the Jurassic, Cretaceous and Palaeogene rocks are adequately defined by recent research involving bottom sampling, seismic profiling and some shallow boreholes, but in general the stratigraphy is less well known than for the onshore successions. In Kent and Sussex, the Weald Anticline is bounded to the north and south by the North and South Downs respectively; its continuation across the Channel is seen in the Boulonnais, where it is delimited by the horseshoe-shaped Chalk uplands of the Haut Boulonnais. East Kent forms the northeastern edge of the post-Mesozoic anticline, with the Mesozoic rocks exposed at the surface dipping away from the axis beneath the North Downs.
To the north of the main Chalk outcrop, Tertiary strata are preserved in the Richborough Syncline, beyond which the Chalk again rises to the surface to form the anticlinal Isle of Thanet. In both the Weald and the Boulonnais, the anticline has been deeply eroded to expose the core of earlier Cretaceous and Jurassic rocks. In the Boulonnais, not only does the axial inclination of the major structure bring the Palaeozoic floor much nearer the surface than in east Kent, but normal faulting during the Tertiary, along the lines of reactivated N110° faults initiated at the close of the Variscan Orogeny, has raised the Palaeozoic core to an even higher level in a horst-like structure (Auffret and Colbeaux, 1977), so that it is actually exposed in an inlier centred on Ferques.

The first proposals for a fixed link between the two countries were made in the early nineteenth century and included various highly imaginative combinations of bored tunnels, immersed tubes, bridges and artificial islands. Most schemes, however, were too far ahead of the engineering techniques and geological knowledge of the time to be considered seriously. One scheme even proposed the boring of a short direct route between Folkestone and Cap Gris Nez in Jurassic rocks of mixed lithologies, not ideally suited to tunnelling. The more reasonable proposals did at least recognise the importance of gaining a sound knowledge of both seabed topography and geology along the route, and it was these factors that would ultimately determine the success of any scheme.

Geological section along the length of the Channel Tunnel

Despite the early history and attempts made in the late nineteenth century, the first modern investigation directly associated with the building of the Channel Tunnel was undertaken in 1958/59. This comprised both geophysical and borehole surveys, with further work carried out during the periods 1962-65, 1972-74 and, more recently, 1986-88 as part of the present scheme.
The initial two campaigns formed part of feasibility studies and were not intended to provide the complete information necessary for design purposes; they therefore covered very large areas and had fairly broad specifications. They were, however, of considerable significance at the time and have been very useful in terms of their general contribution to the total database. The 1972-74 survey was the first to investigate a specific tunnel route as part of a tunnel design and was restricted in its area of coverage, as were the recent 1986-88 surveys. In all, a total of 116 marine and 68 land boreholes have been drilled along the alignment and over 4000 line kilometres of geophysical survey completed. Continuous seismic profiling was, and still is, considered to be the most acceptable means of providing geological information relatively quickly for the large areas to be covered, but it provides only indirect information about the structure and nature of geological strata. Direct information may only be obtained from boreholes, which also provide representative samples of the ground for testing and one-dimensional information to control the geophysical survey. The marine site investigations differ from the land investigations in that:

(a) Geophysical surveys are a very cost-effective method of obtaining geological data offshore. Consequently, these methods formed a major part of the marine investigations carried out for the Channel Tunnel.

(b) Information derived from land-sited boreholes is much cheaper to acquire than from marine boreholes (e.g. 1987 comparative costs of £20,000 as against £0.5 million for a typical Channel Tunnel borehole). Thus boreholes tend to dominate in the land investigations.

The seabed of the Strait, which reaches a maximum depth of 60 m along the tunnel alignment, is essentially the result of Quaternary erosion after several periods of emergence.
It intersects (approximately at a right angle) the large Weald-Boulonnais Anticline, which trends WNW-ESE (parallel to the tunnel). The core of this anticlinal structure is occupied by rocks of Jurassic age, which are themselves bounded by Cretaceous strata (chalks, clays and sandstones). The whole assemblage rests on the Palaeozoic basement, which outcrops in the Marquise region but was only encountered during the site investigations at a depth of 114 m below natural ground level, adjacent to the Sangatte access shaft.

Figure showing the seafloor geology of the Dover Straits

The Weald-Boulonnais Anticline is in fact composed of several main flexures, which have resulted in numerous small secondary anticlines and synclines, in particular the Sangatte-Quenocs Anticline, which directly affects the French side of the project. The Strait may be divided into two basic structural units, with continuity between them:

(a) the UK part, comprising a relatively undisturbed structure representing the offshore extension of the Kent basin

(b) the more deformed French part, featuring the Sangatte-Quenocs Anticline and some relatively substantial faults

The whole of the Channel Tunnel route is within the Cretaceous layers that form the north slope of the Weald-Boulonnais Anticline. In France all three tunnels start within the flinty chalks of the Lower Senonian at the portal at Fréthun. Proceeding generally down sequence, the tunnels pass through Turonian chalks and into the clayey chalks of the Cenomanian, which then form the main tunnelling horizon right up to the UK portal. The UK portal at Castle Hill was constructed in a mixed face of the lowest Chalk Marl, Glauconitic Marl and topmost Gault Clay. The UK portal posed specific difficulties during construction, as the tunnel passed through a major landslip at the base of Castle Hill.
This was continually monitored by TML during construction, while major remedial works, including toe-weighting and drainage channels, were undertaken in order to stabilise it sufficiently. The general dip of these Cretaceous beds is very slight (1 to 4 degrees maximum, eastwards) in the Sangatte area, but from the beginning of the undersea section the NNE dip (i.e. the beds strike roughly parallel to the tunnels) increases rapidly over the space of 1 km to reach 20 degrees in the French half. Towards the UK section it then decreases just as quickly to 2-5 degrees, dips characteristic of the great majority of the UK section. The sea floor of the Channel, which is generally very flat and mostly slopes gently southwestwards, is often notched by deep fossil valleys filled with sandy-gravelly alluvia or flints. These valleys are the result of glacial erosion, and their position and form are imposed by the main structural features of the geology. In this area the tunnel route carefully avoided the 80 m deep Fosse Dangeard trench, situated 500 m to the south in the middle of the Strait, at the dividing line between the French and UK sectors.

Tunnelling hazards in the Straits of Dover

The fracturing of the Chalk formations through which the tunnels were driven is related both to the reactivation of faults that were present in the Jurassic and basement rocks beneath, and to the development of the Weald-Boulonnais Anticline. Tectonic control of sedimentation also occurred as the Weald-Boulonnais Anticline began to form, during the deposition of the Cretaceous formations. The specific consequence of this was that the total thickness of the various Cretaceous layers is less on the French side, where sedimentation took place in a more marginal, shallow shelf environment.
From the end of the Palaeozoic era the Channel successively underwent two extensional tectonic phases, a compressive phase and a further extensional phase:

(a) The first extensional episode gave rise, from the beginning of the Mesozoic era, to the Western Channel semi-graben.

(b) The second extensional phase lasted from the end of the Jurassic to the Middle Cretaceous (Albian) and may be related to the beginning of the opening of the Gulf of Biscay. Relatively minor tectonic events have been recorded in the Late Cretaceous; these have been shown to be controlled by major structures in the Palaeozoic basement (Mortimore & Pomerol, 1991).

(c) Just after the end of the Cretaceous (beginning of the Tertiary era) the cover was subjected to slight compression, resulting in deformation with a large radius of curvature (the broad anticlines and synclines of the Weald-Boulonnais Anticlinorium), contemporaneous with the Pyrenean folding.

(d) Finally, the majority of the faults now affecting the cover, and of concern to the tunnel, appeared at the end of the Tertiary/beginning of the Quaternary during a third extensional phase, at the time of formation of the tectonic trenches of Alsace, the Bresse and the Massif Central. This third phase reactivated many of the old faults in the basement.

The intensity of these movements, particularly the last extensional phase, is not the same everywhere, and it should be noted that the tunnel route is situated to the north of, and just outside, the most highly tectonised zone. This explains why the route actually intersects only a few major faults (with throws of a few metres) on the French side, while on the UK side no fault with a throw greater than 1 m was recorded during tunnelling. In the cliffs between Dover and Folkestone the dominant discontinuities (joints) dip steeply, between 60 and 70 degrees (to subvertical), with a WNW-ESE strike.
These, according to Bevan & Hancock (1986), correspond with the fracture type 'conjugate steeply inclined hybrid joint' and are attributed to gentle flexuring of the strata during geologically recent times. A second set of WNW-ESE trending discontinuities strikes the coast at intervals of 1.0 km to 1.5 km and extends inland for 10 km or more. Each discontinuity or lineation comprises a group of closely spaced subvertical joints and corresponds to the fracture type 'vertical extension joints', also described by Bevan and Hancock. A third set of major joints running NE-SW, associated with the formation of the dry valleys, is not so evident in the section of cliff studied because of their proximity to the predominant orientation of the cliffs. Further discontinuities occur subparallel to the cliffs. However, these are not related to regional structural trends but to a continuous process of stress relief taking place within metres of the cliff face. Furthermore, the initiation of discontinuities subparallel to the cliffs may be attributed to creep movements within the underlying, more malleable Gault Clay, particularly where the cover to the clay is limited, i.e. towards the western end of Abbot's Cliff. In the Lower Chalk, exposed towards the base of the cliffs, joints are locally seen to 'sole out' as they penetrate the more clayey strata. During construction of the Channel Tunnel only two main conjugate sets were recognised, high angle to vertical, trending WNW-ESE and NE-SW. These, together with a major subhorizontal set of joints, were of major significance for the stability of the tunnels during excavation. This was particularly evident where the tunnel trend was parallel to one of the major joint sets, increasing the possibility of wedge failure in the sidewalls of the tunnel.
Channel Tunnel data did not allow the separation of the joint sets as proposed by Bevan & Hancock (1986), nor was there any proof within the tunnels of significant changes in joint orientation or joint abundance directly under the dry valleys. Folding, faulting and fracturing of the Palaeozoic rocks occurred during the formation of the Variscan Armorican massif. The Jurassic rocks and Cretaceous formations were subsequently deposited on these structurally complex basement rocks. While the Cretaceous strata, within which the whole tunnel route is situated, include sequences that differ greatly both in their characteristics and in their properties (overconsolidated clays, glauconitic marls, flinty chalks etc.) and are therefore easy to distinguish from each other, they also include sequences whose boundaries are difficult to establish, as their changes in nature or properties are very gradual. Such is the case with the boundary between the Chalk Marl (Craie Bleue) and the overlying Grey Chalk, and the boundaries between the flinty chalks of the Senonian and the flintless Turonian. From the top downwards (i.e. from the most recent to the oldest) the layers that concern the tunnel route are described below. This is also the order in which the tunnel route encountered them from the French portal at Fréthun.

Figure showing the major geological units plotted against their calcimetry

Senonian chalks with continuous layers of flint

The flints are decimetric in size and form almost continuous layers at spacings of 0.50 to 1 m. Only the underland part of the tunnel route on the French side passed through these White Chalks (for approximately 1.5 km from the Fréthun portal).

The upper part of the Turonian comprises White Chalks 10 to 15 m thick containing layers of flint, similar to the facies in the Senonian chalks. After these came White Chalks with few flints: a 12 m thick sequence consisting of a granulose White Chalk with greenish clayey streaks.
After passing through this formation, the tunnels encountered no more flint. Finally came 24 m of marly chalks and a nodular chalk, which is 19 m thick on the French side and 15 m on the UK side, consisting of nodules of hardened yellowish chalk in a chalky matrix containing greenish marly streaks. The total thickness of the Turonian is 60-70 m. The tunnel route encountered few chalks of this age before the Sangatte shaft, after which the tunnels were excavated entirely within rocks of Cenomanian age all the way to the UK portal (minor exceptions being the UK service tunnel, which beneath the UK Crossover is within Albian strata, and the pump stations and all three tunnels towards the UK portal, which are all partly within Albian strata). One characteristic of the series of Cenomanian chalks is the overall decrease in their calcium carbonate content (and corresponding increase in clay content) with increasing depth towards the Gault Clay. This variation is not entirely progressive, as the sedimentation of the chalks is cyclic, from a clayey phase at the base to a calcareous phase at the top. Each cycle is 0.2 to 2 m thick. The progressive increase in clay content takes the form of clayey intermediate beds, which increase in thickness and become increasingly clayey towards the base of the series. The Chalk Marl is specifically characterised by its extensive marly intermediate beds, which have made it less sensitive to fracturing and alteration, and hence less permeable than the overlying chalks.

Upper and Middle Cenomanian Chalks

These are the equivalent of what were previously known as the 'White and Grey' Chalks. They can be divided into four units with quite distinct lithologies:

(a) At the top, beneath the Turonian, occurs a unit, 1 to 2 m thick, which comprises a succession of thin marly layers.

(b) Below this, a 10 to 15 m thick unit of solid greyish White Chalk with some thin marly intermediate layers is present.
(c) Next, a unit 15 to 18 m thick comprising a finely rhythmic assemblage containing chalk beds, several decimetres thick, with thin bluish to greenish marly layers at the base.

(d) Finally, a 6 to 8 m thick unit of granular chalk with frequent small hardgrounds, together with metre-thick beds of more calcareous chalk, separated by very thin marly layers. It is often in the middle of this unit that the bluish colour associated with the Chalk Marl (Craie Bleue) first appears. Frequently there is indurated chalk at the base of the unit.

The tunnel route encountered these chalks between the Sangatte shaft and the portal and at some points in the undersea section.

Lower Cenomanian Chalks

Essentially equivalent to the Chalk Marl, the Lower Cenomanian comprises three distinct units:

(a) An upper unit, 7 to 10 m thick, comprising metre-thick beds of marly chalk, often bluish in colour, separated by extensive darker blue intermediate marly layers, identifiable in borehole samples by the occurrence of two microfossil species characteristic of the top of the unit (Rotalipora reicheli and C. formosus).

(b) A middle unit, 6 to 9 m thick, often lighter in colour, comprising solid chalk beds separated by very marly intermediate layers.

(c) A basal unit, 6 to 9 m thick, each bed with a base of marls which change progressively upwards to marly chalks. Beds of sponges are frequently present at the chalky tops of the sedimentation cycles and form small, very indurated, decimetre-thick levels.

Variation of bulk density and calcimetry in a typical Chalk Marl sedimentary cycle

The thickness of the Chalk Marl is least towards the Quenocs Anticline, 18-20 m at the beginning of the undersea tunnel, with a subsequent steady increase to 28-30 m on the UK side.

Basal Cenomanian Chalks

All of the previously described layers occurred in both the French and UK sectors alike, although in the UK sector only the Lower and Basal Cenomanian chalks were encountered during tunnelling.
In contrast, the basal units of the Cenomanian exhibit marked lateral changes:

(a) An upper unit of very clayey, homogeneous chalk, typically 5 to 7 m thick but up to 12 m thick in the vicinity of the Shakespeare Underground Development, is always present on the UK side, but is represented on the French side only by a thickness of 1-2 m adjacent to the UK sector.

(b) The Glauconitic Marl (Tourtia) is 1 to 12 m thick. The facies of the Glauconitic Marl are highly irregular, ranging from compact and indurated sandstone to clayey-calcareous sands, and the transition to the formation above is often very gradual. In contrast, the base is consistently indurated and very dense, with phosphate nodules that reflect seismic waves. A major facies change within this unit occurred at the UK Crossover, where directly under the Crossover a hard, indurated calcareous sandstone with a sharp top and base was recorded on an Early Cenomanian structural high, whilst eastward and off this high the Glauconitic Marl was observed to thicken rapidly to around 12 m, with a sharp base and very transitional top.

The total thickness of the Lower and Basal Cenomanian chalks frequently reaches 32-35 m. Due to the optimisation of the tunnel route, these comprise over 90% of the formations through which the undersea section of all three tunnels passes.

Gault Clay (Albian)

Zone 6a, a greyish clayey chalk with facies intermediate between the Chalk Marl and the underlying Gault Clays, is unevenly distributed on the UK side, present in traces on the French side (1-2 m), and attains a maximum thickness of 6-7 m in the vicinity of the UK Crossover. Although some authors have included this intermediate stratigraphic unit in the overlying Cenomanian, a major unconformity, which is associated regionally with minor folding, occurs at the top of this unit. This unconformity is the most distinctive event between the base of Bed XII and the Mid-Cenomanian unconformity.
As Zone 6a occurs below this event, it is considered to be part of the Albian. This topic was extensively researched during the construction of the Channel Tunnel, mainly because of the need to estimate the relative height of the tunnel above the base of the Glauconitic Marl, a major seismic reflector and one which was used geostatistically to contour (isopachyte maps) the subsurface geology across the Dover Straits. It was proven that the greatest thickness of Zone 6a occurred in a NNW-SSE trending channel or 'graben-like' structure 5-6 km in width. This structural/depositional low is parallel to one of the major structural trends of the region. As, by the time of Glauconitic Marl deposition, the Crossover area had become a structural/depositional high, a minor inversion is implied at the boundary. This structural event is consistent with regional information indicative of an angular unconformity at the Zone 6a/Glauconitic Marl boundary.

Below Zone 6a occurs the more typical Gault Clay. Dark grey overconsolidated clays at the base pass into lighter grey clays towards the top (due to their higher calcium carbonate content), containing numerous phosphatic horizons which represent periods of non-deposition. The minimum thickness on the French side is 10-12 m, with a steady increase to 40 m or more at some locations on the UK side. Towards the French sector the Late Albian above Bed XII reduces to several metres or less, with sedimentary sequences as thin as a few centimetres separated by marked unconformities. The base of the formation is characterised by a thin layer of cemented glauconitic sands which reflect seismic waves.

Below the Gault Clay lie the Greensands: glauconitic clayey sands with cross-bedded stratification (15 m thick on the French side and 25 m on the UK side). Large areas of Greensand outcrop on the sea floor a few kilometres to the southwest of the tunnel route.
As the Greensands are permeable, and because they were potentially in direct hydraulic contact with the sea floor, with the Gault Clay forming a cap rock over them, it was possible that, if encountered in the tunnels, they could be under significant hydrostatic pressure (equivalent to their depth below sea level). While this was not anticipated to be a problem in any of the tunnels, it did pose a potential threat during the probing undertaken to verify the safety of the tunnels.

The present position of the cliffs between Folkestone and Dover was established during Late Glacial times, probably during the later stages of the Flandrian transgression, when weathering processes were at their most aggressive. Since then the cliffs have been subjected to marine erosion, subaerial weathering and, more recently, the works of man. Attack by the sea tends to have a destabilising effect, whilst the process of subaerial weathering would, given no further marine erosion, ultimately result in a stable cliff whose shape would resemble the escarpment of the North Downs to the west of Folkestone. The rate of erosion of the toe of the cliffs between Folkestone and Dover has been measured at up to 0.75 m per year (May, 1966). However, the rate and amount of erosion at any specific locality is dependent on the protection of the cliff foot by shingle or debris from recurrent cliff falls. Clearly the process of toe erosion is halted, albeit temporarily, by sea defences with sufficient height to provide protection from wave splash and spray, or with sufficient width to keep the shoreline remote from the cliff base. Studies of the effects of the spoil reclamation platform on beach processes and cliff erosion have established that the net west-to-east longshore drift will create a build-up of beach deposits at the western end of the reclamation platform.
In the course of time shingle will bypass the reclamation platform to replenish the foreshore to the east of the existing platform, where the cliffs are undergoing severe attack at the toe. The mean rate of cliff-top retreat between Folkestone and Dover has been estimated by May (1966) to be 0.09 m/yr. More detailed estimates, made by comparing the position of the cliff top shown on the first edition of the Ordnance Survey of 1872 with that shown on the coastal mapping prepared for TML in 1986, indicate that the mean rate of cliff-top retreat was greater above the protected section of coast (0.13 m/yr) than above Abbot's Cliff (0.08 m/yr) and Shakespeare Cliff (0.06 m/yr). This seemingly anomalous result may be attributed to the gradual regression of the drift deposits, which form the upper few metres of the cliff face, back to a more stable angle than that at which they were left by the slope-trimming operations of the railway company in 1843.

Figure showing the geomorphology, landslips and tunnel construction at the UK terminal

Subaerial weathering, in the form of water erosion, wetting and drying, wind attack and frost action, causes the gradual frittering away of the cliff face, both the chalk and the overlying drift deposits. Aided by the process of stress relief, the more gradual processes of material removal can be accompanied by the occasional toppling of loosened blocks, particularly where support has been lost above the more readily degraded marl bands. This process is particularly prevalent at the horizon of the Plenus Marl, where the overlying and more brittle Melbourn Rock becomes undermined by erosion of the softer marl. These processes are at their most active throughout the winter months, but particularly following periods of heavy rain and/or ground freezing, i.e. January, February and March.
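The cliff-top retreat rates quoted above are simple distance/time ratios between the two surveys. A minimal sketch follows; only the 1872 and 1986 survey dates come from the text, while the retreat distances are illustrative assumptions chosen to reproduce the quoted rates:

```python
# Mean cliff-top retreat rate = horizontal retreat / elapsed time.
# The 1872 OS survey and 1986 TML mapping dates are from the text;
# the retreat distances below are assumed, not measured values.

SURVEY_INTERVAL_YRS = 1986 - 1872  # 114 years between the two mappings

def retreat_rate(retreat_m: float, years: float = SURVEY_INTERVAL_YRS) -> float:
    """Mean cliff-top retreat rate in m/yr."""
    return retreat_m / years

# Hypothetical retreat distances consistent with the quoted rates:
sections = {
    "protected coast": 14.8,   # ~0.13 m/yr
    "Abbot's Cliff": 9.1,      # ~0.08 m/yr
    "Shakespeare Cliff": 6.8,  # ~0.06 m/yr
}

for name, d in sections.items():
    print(f"{name}: {retreat_rate(d):.2f} m/yr")
```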
The chemical processes of chalk solution by naturally slightly acidic rainfall, and the growth of crystalline salt in cracks on the lower slopes, may also contribute, albeit to a minor extent, to the gradual degradation of the chalk cliffs. The original lithological and geotechnical characteristics of the chalks have been modified by weathering processes, which were most intense during the interglacial times when the Strait was above sea level. Classification of weathering grades, and the identification of these grades in boreholes during the various site investigation campaigns, allowed a weathering profile to be defined across the Strait. As a broad generalisation, it was possible to show that the sound chalk thickness from the top of the Gault is typically a linear function of the ground thickness above the Gault. Weathering was also found to penetrate more deeply into the chalk rock mass in the vicinity of faults and of highly fractured zones.

The permeability of the chalk mass results from the network of fractures running through it. The chalk matrix itself is almost impermeable; laboratory measurements on samples produced values of less than 10⁻⁹ m/s for both the Grey Chalk and the Chalk Marl. The permeability of the fractured rock mass, which is at least 1000 times greater, depends on two factors:

(a) the geometry of the fracture network (density, orientation and especially the degree of interconnection between the various discontinuity sets)

(b) the hydraulic conductivity of each individual fracture (aperture, filling, roughness, continuity etc.)

Greater depths of weathering associated with such zones are readily visible on the Sussex coast to the east of Brighton, where increased fracturing and deep weathering can be seen associated with the dry valley systems where they intersect the cliffs. These are in the much purer and highly permeable White Chalk, where the permeability of the rock mass can easily be of the order of 10⁻³ m/s.
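The two controls on fracture permeability listed above (network geometry and single-fracture conductivity) can be illustrated with the textbook parallel-plate or "cubic law" idealisation. This is not a model from the source, and the aperture and spacing values are assumptions, but it shows how a sparse network of narrow fractures dominates an almost impermeable matrix:

```python
# Parallel-plate ("cubic law") idealisation of fracture flow.
# Not the model used on the project; a standard first approximation:
# a smooth planar fracture of aperture a conducts with
#   K_fracture = rho*g*a**2 / (12*mu)
# and one such fracture per spacing s gives the rock mass
#   K_mass = rho*g*a**3 / (12*mu*s)

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravity, m/s^2
MU = 1.0e-3    # water viscosity, Pa.s

def fracture_conductivity(aperture_m: float) -> float:
    """Hydraulic conductivity of a single parallel-plate fracture (m/s)."""
    return RHO * G * aperture_m**2 / (12 * MU)

def mass_conductivity(aperture_m: float, spacing_m: float) -> float:
    """Equivalent conductivity of a rock mass with one fracture per spacing (m/s)."""
    return RHO * G * aperture_m**3 / (12 * MU * spacing_m)

# An assumed 50 micron aperture at 1 m spacing gives roughly 1e-7 m/s,
# comparable to the Chalk Marl mass values quoted in the text and far
# above the ~1e-9 m/s matrix permeability.
k = mass_conductivity(5.0e-5, 1.0)
print(f"equivalent mass conductivity: {k:.1e} m/s")
```

The cubic dependence on aperture is why opening of fractures by weathering or stress relief raises the mass permeability so sharply.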
The main reason for the choice of the Chalk Marl as the main tunnelling horizon was its relatively low permeability, typically in the range 10⁻⁷ to 10⁻⁸ m/s. The lower permeability also results in a reduced susceptibility to penetration by water, and therefore to deep weathering. It was never anticipated that deep weathering of the kind seen in the White Chalk would occur at the Channel Tunnel. The fracture spacing recorded in the Grade II chalk (99%+ of the UK sector) during tunnelling varied from around one fracture per metre to as few as one per 3.0 m. Typically, in the Chalk, fracture abundance greatly reduces with depth. On the UK side every attempt was made to record weathering at tunnel horizon. No clear evidence was found, apart from where it was expected, close to the portal and the entrance to Adit A2 at Shakespeare Cliff. Prof. R. Mortimore undertook SEM analyses on behalf of TML on a limited number of fracture-surface samples recovered from the marine service tunnel in the area of the worst tunnelling conditions encountered. The results proved the presence of limited mineralisation of the fracture surfaces, which might be attributable to the early stages of weathering. Even with this slight mineralisation, the fracture spacing clearly indicated Grade II chalk. Despite the relatively high porosity of the Lower, Middle and Upper Chalk, the primary permeability remains very low due to the absence of continuity between pore spaces. Reynolds (1947) recognised the impermeable nature of the Lower Chalk in the Folkestone-Dover chalk block, stating that the movement of water is prevented in any direction other than along fissures. This secondary permeability is governed by lithology, structure and topography. Zones of high secondary permeability are associated with the more brittle, and consequently more jointed, strata above the Grey Chalk, notably in the Melbourn Rock. Water movement is related to the density and aperture of the fissures.
Both decrease with increasing clay content towards the base of the Lower Chalk, and there is no measurable activity within the lowest 30 m of the Chalk, i.e. within the Chalk Marl. The top of the Chalk Marl corresponds to a spring line which is observed widely in the Chalk of both SE England and NW France. In the cliffs between Folkestone and Dover the spring line falls gradually eastwards, emerging at about +20 m OD at the western end of Abbot's Cliff and falling to about -45 m OD at the eastern end of Shakespeare Cliff. More recent research has indicated that water movement in the Chalk is very strongly influenced by even relatively minor aquicludes and by major subhorizontal joints. These typically lead to water migration in a dip direction. Where the structure of the chalk comprises a series of domes, the intervening structural low may become a conduit for the runoff from the dome structures themselves, facilitating the formation of valley systems. This theory does not require the valley systems to be the result of preferential weathering of more jointed and faulted chalk. The direction of water movement is preferentially along those fissures which originated as extension joints in response to gentle flexuring of the strata. There is some correlation between the points at which water issues from the cliff and the emergence of the WNW-ESE fissure systems, notably Steady Hole spring in the Folkestone Warren, Lydden Spout spring at Abbot's Cliff, and a former submarine spring now buried beneath the reclamation. As previously described, this latter spring is understood to have issued at a higher elevation behind the railway following periods of heavy rainfall, at a point which corresponds to the spring noted by Reynolds (1947, 1970, 1972).
The topographic control on groundwater levels behind the cliffs is significant in that the Aycliff dry valley, which approaches the back of the cliff from the northeast, has the effect of lowering the potential for high groundwater or perched conditions in the cliffs behind Round Down and Shakespeare Cliff. Furthermore, the steep landward-facing slopes behind these sections of cliff have the effect of directing run-off inland and away from the cliff. The levels of the main groundwater table in the Folkestone-Dover chalk block have been determined by observations over many years of the standing water levels in wells and boreholes. Minimum groundwater contours indicate levels behind the cliffs of +15 m OD at Abbot's Cliff, falling to 0 m OD at Shakespeare Cliff. The nearest point for which long-term observations are available is a well at Church Hougham, 1.1 km inland from Abbot's Cliff, where a fluctuation of 15 m has been recorded over a period of 13 years. Records from Dover Castle, 3.5 km east of the site, show a variation of only 2 m between maximum and minimum levels over a period of 8 years. Extrapolating to the area of interest, one can infer a fluctuation in groundwater of between 5 m and 10 m behind Abbot's Cliff and between 0 m and 5 m at Shakespeare Cliff. These general water levels are consistent with a standing water level of +25 m OD encountered in a borehole sunk by British Rail in the cliff behind Lydden Spout. Of significance to cliff-fall processes is the occasional development of high transient water pressures, associated with infiltration, perched above marl bands (aquicludes) within the Middle Chalk and the upper part of the Lower Chalk following periods of intensive and/or prolonged rainfall. It is considered that high transient pressures above the Chalk Marl and the Plenus Marl following heavy rainfall were the final trigger for the substantial fall at the western end of Abbot's Cliff in January 1988.
Forewarning of collapse in this part of Abbot's Cliff was provided by signs of distress at the cliff top and by the shearing of ventilation shafts detected as early as the 1950s. The long-term influence of the Channel Tunnel service and running tunnels on regional ground water levels has been demonstrated by long-term borehole monitoring to be insignificant, as the tunnels are located within the relatively impermeable Chalk Marl horizon and the tunnel linings were in any event back-grouted during construction, rendering the tunnels effectively watertight. One surprise encountered during construction was a minor aquiclude which occurred above the crown of the UK Crossover. This marl seam was only a few centimetres thick, yet it effectively prevented downward migration of groundwater into the cavern. The relatively high water pressure above this marl seam triggered a minor slab failure in the crown during construction. Once recognised, the problem was simply dealt with by installing drainage. The original and very practical approach adopted for optimising the route of the 150 km of tunnels that now link France and the UK, some 900,000 years after the Strait came into existence, was to ascertain at every point within a 1 km corridor the accuracy of prediction of each of the main parameters of the project (top of Gault, top of Craie Bleue, permeability of Craie Bleue, etc.), with a view to placing the tunnel route in the zone of least risk. If this was not possible, then either the risk would be met by adapting the tunnel location or the work procedures to the degree of confidence, or the accuracy would be improved by conducting further exploration, an approach that was adopted for the location of the two crossovers. The accuracy depended on that of the actual data, their distribution, the complexity of the parameters to be represented and, above all, the inevitable interpolation between data.
Geostatistical methods were adopted for contouring the main stratigraphic boundaries and made it possible to optimise the results from the exploratory work. This resulted in, for example, the position of the top of the Gault being defined with a standard deviation of only ±2 to 3 m in the tunnels and ±1 to 1.5 m at the crossovers. On the French side this parameter was checked during the tunnelling approximately every 250 m: in 44% of cases the deviation from the prediction was within ±1 m, and in 82% of cases within ±2 m, i.e. substantially better results than the deviations predicted statistically that were used as a basis for design of the project. Similar results were obtained for the UK side apart from a section past the UK Crossover where the error was up to 6 m. This was one of the few geological surprises encountered and it may be no coincidence that it occurred in an area of much expanded Glauconitic Marl, which thickened rapidly away from the UK Crossover 'high'. The successful completion of the tunnel while encountering few geological surprises validates not only the whole of the chain of exploratory operations, each of the phases of which had been rigorously optimised and pushed to its technological limits, but also the geostatistical methods that resulted in tunnelling in the best possible geological conditions.
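The tolerance checks described above are straightforward to reproduce. The sketch below uses synthetic deviations (generated data standing in for the project's survey records, which are not available here) to show how the standard deviation and the within-tolerance fractions would be computed from a series of predicted-versus-observed checks.

```python
import random
import statistics

random.seed(42)

# Hypothetical check data: deviation (m) between the geostatistically
# predicted and the observed top-of-Gault elevation at ~250 m intervals.
# Values are synthetic, drawn to mimic a prediction good to roughly ±1-2 m.
deviations = [random.gauss(0.0, 1.2) for _ in range(100)]

sd = statistics.pstdev(deviations)
within_1m = sum(abs(d) <= 1.0 for d in deviations) / len(deviations)
within_2m = sum(abs(d) <= 2.0 for d in deviations) / len(deviations)

print(f"standard deviation of checks: {sd:.2f} m")
print(f"within ±1 m: {within_1m:.0%}, within ±2 m: {within_2m:.0%}")
```

Comparing such empirical fractions against the design-stage standard deviation is exactly the kind of validation the text reports: observed performance beating the statistically predicted bounds.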
<urn:uuid:c20238df-166d-4669-a22c-aad50cc5af37>
CC-MAIN-2013-20
http://www.geologyshop.co.uk/chtung.htm
2013-05-24T22:36:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950486
8,396
What Is It? Retinoblastoma is a form of cancer that develops on the retina. The retina is the structure at the back of the eye that senses light. It sends images to the brain which interprets them. In short, the retina allows us to see. Although rare, retinoblastoma is the most common eye tumor in children. In most cases, it affects youngsters before age 5. It causes 5% of childhood blindness. But with treatment, the vast majority of patients maintain their sight. About 40% of retinoblastoma cases are hereditary. This form of the disease usually affects children under age 2. It can affect 1 eye (unilateral) or both (bilateral). All cases of bilateral retinoblastoma are hereditary. These cases can be associated with a tumor in the brain's pineal gland. Unilateral retinoblastoma is usually not hereditary. It generally occurs in older children. Children with retinoblastoma are more likely to develop other types of cancer later in life. The risk is higher in children with the hereditary type. Children treated with radiation therapy or certain types of chemotherapy also have a higher risk. Children who develop retinoblastoma in one eye have an increased risk of developing it in the other eye. They need frequent eye exams -- even after treatment. Doctors recommend that children with retinoblastoma get checked regularly for other cancers throughout their lives. Many of the second cancers that develop in long-term survivors of childhood retinoblastoma are caused by the radiation therapy used to treat the original cancer. A specific gene leads to the development of retinoblastoma. In the hereditary form of the disease, all of the patient's cells have a mutation, or change, in this gene. On its own, this single mutation doesn't cause the disease. But if the patient develops a second mutation in a retina cell, the cancer can develop. If it does, both eyes are usually affected. In the nonhereditary, or sporadic, form, both mutations occur by chance. It usually affects one eye. 
The hereditary form of the disease -- and the gene that causes it -- can be associated with other types of cancer. These include cancers of the soft tissues or bone and an aggressive form of skin cancer. The most common sign of retinoblastoma is a whitish-looking pupil. However, it does not always mean the child has the disease. Children with retinoblastoma also may have a crossed eye that turns out toward the ear or in toward the nose. But again, this is a common condition, one that's likely to be noncancerous (benign). Less common symptoms of retinoblastoma include: - Redness and eye irritation that doesn't go away - Differences in iris color and pupil size - Bulging of the eyes Newborns with a family history of retinoblastoma should be checked by an ophthalmologist (eye specialist) before leaving the hospital. But in most cases, a doctor diagnoses the condition after parents notice an abnormality and have the child's eyes examined. An ophthalmologist diagnoses retinoblastoma by doing a dilated-pupil examination. This involves viewing the retina with an indirect ophthalmoscope to see if a tumor exists. The indirect ophthalmoscope is different than the hand-held direct ophthalmoscope most doctors use to look inside the eye. It has more magnifying lenses and gives the ophthalmologist a clearer view of the entire retina. This exam is usually done under general anesthesia. That way, the doctor can look carefully at the child's retina. Sketches or photographs of the retina help "map" the tumor's location. Ultrasound, which uses sound waves to create images, often is done to measure larger tumors that make it difficult to see inside the eye. Next, either computed tomography (CT) scans or magnetic resonance imaging (MRI) scans may be done. By looking at the images they generate, doctors can determine whether the cancer has spread outside of the eye, into the brain or to other parts of the body. If cancer has spread, additional tests may be needed. 
Retinoblastoma will continue to grow until it is treated. There is no known way to prevent retinoblastoma. Because retinoblastoma may be hereditary, genetic testing is critical. Patients who carry the gene for the disease have an 80% chance of developing it. They also have a 50% chance of passing on the gene to a child. Siblings and children of retinoblastoma patients should be examined every 2 to 4 months during the first years of life. Treatment for retinoblastoma will depend on: - Whether the disease affects one or both eyes - The extent of the disease in the eye(s) - Whether vision can be saved - Whether the cancer has spread beyond the eye If the tumor is large, in one eye and vision cannot be saved, the eye may be removed. This is a simple operation. About three to six weeks later, the child usually can be fitted with an artificial eye. When tumors occur in one or both eyes and vision might be saved in one or both eyes, more conservative treatments may be considered. Radiation or chemotherapy may be used to shrink the tumors. Local treatments may then be used to eliminate the tumor and preserve vision. These may include brachytherapy, photocoagulation and cryotherapy. - Radiation can be an effective treatment for some patients. Retinoblastoma is very sensitive to radiation. However, it can damage the retina or other tissues in the eye. Radiation can also affect the growth of bone and other tissues near the eye. It may increase the risk of developing other cancers, too. Two types of radiation may be used. External beam radiation involves focusing beams of radiation on the cancer from a source outside the body. Brachytherapy involves putting radioactive material into or near the tumor. - Photocoagulation uses lasers to destroy the tumor. - Cryotherapy uses extreme cold to freeze and destroy cancer cells. Doctors may choose cryotherapy for small tumors. To be effective, it usually has to be done several times. It is not used if the patient has several tumors. 
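Taken together, the transmission and penetrance figures quoted above imply a simple risk estimate for the child of someone who carries the gene. This is a back-of-the-envelope illustration of how the two probabilities combine, not a substitute for genetic counselling.

```python
p_inherit = 0.5   # chance a carrier passes the gene to a child (from the text)
p_develop = 0.8   # chance a gene carrier develops the disease (from the text)

# A child is affected only if they both inherit the gene and develop the disease.
p_child_affected = p_inherit * p_develop
print(f"Chance a carrier's child develops retinoblastoma: {p_child_affected:.0%}")  # 40%
```

This multiplication assumes the two events are independent, which is the standard simplification for a single-gene model.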
- Chemotherapy involves receiving one or more anticancer drugs through an injection into a blood vessel. Retinoblastoma tends to resist chemotherapy, but it may be effective when combined with other treatments. For example, it may be used to shrink tumors to increase the chances of success with photocoagulation, cryotherapy or brachytherapy. It is used commonly to treat a child whose tumor has spread beyond the eye. Chemotherapy also may be given to a child when the cancer has not spread beyond the eye, but when it has grown extensively within the eye, making it more likely to spread. Retinoblastoma is a rare disease that requires specialized care. Seek treatment for your child at a center with staff experienced in treating it.

When To Call a Professional
If you see any abnormalities in your child's eyes, take him or her to the doctor right away. You may be referred to a doctor who specializes in childhood eye diseases. Early diagnosis and treatment are crucial to saving vision -- and life. The outlook depends on how much the cancer has grown in and beyond the eye. Nearly all children treated for retinoblastoma live at least five years. Children who are cancer-free after five years are generally considered cured. However, if left untreated, retinoblastoma is almost always fatal. Survivors have an increased risk of developing a second, unrelated cancer. With regular follow-up, it may be caught and treated early.

American Cancer Society (ACS)

National Cancer Institute (NCI)
NCI Office of Communications and Education
Public Inquiries Office
6116 Executive Blvd.
Bethesda, MD 20892-8322
<urn:uuid:fed3a9ca-d6ea-4fb2-9cee-c8452af4f351>
CC-MAIN-2013-20
http://www.intelihealth.com/IH/ihtIH/WSRNM000/8096/24537/211015.html?d=dmtHealthAZ
2013-05-22T21:47:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925281
1,666
Making Sense of Pattern Grading
by Terry Horlamus
Excerpted from Threads #101, pp. 66-70

The term pattern grading may initially conjure up visions of complicated measurements and fancy rulers, but once the basic concept is understood, the actual process of grading is easy, especially using the method I outline here. This means that you—the home sewer, custom dressmaker, or independent designer—can do just as good a job as Vogue, Burda, Calvin, or Donna.

Why grade? The purpose of grading is to proportionally increase or decrease the size of a pattern while maintaining the shape, fit, balance, and scale of the original design's style details.

The basic concept
Historically, the science of grading went hand-in-hand with the advent of commercial patterns and the mass-production of pattern-built clothing some 150 years ago. To properly fit a pattern to a range of sizes, each pattern piece needed to be graded, or systematically increased or decreased. Today, pattern companies and apparel manufacturers take a middle-sized pattern (typically a size 12) and grade it up for larger sizes and grade it down for smaller sizes (see One pattern, three sizes).

One pattern, three sizes: A base size 12 pattern (left) can be graded up to a size 16 (center) using the cut-and-spread method, and similarly graded down to a size 6 (right) by cutting and overlapping along specified cut lines.

Methods of grading
There are three basic methods of grading: cut and spread, pattern shifting, and computer grading. No one method is technically superior and all are equally capable of producing a correct grade.

Cut-and-spread method: The easiest method, which is the basis of the other two methods, is to cut the pattern and spread the pieces by a specific amount to grade up, or overlap them to grade down. No special training or tools are required—just scissors, a pencil, tape, and a ruler that breaks 1 in. down to 1/64 in.

Pattern shifting: Pattern shifting is the process of increasing the overall dimensions of a pattern by moving it a measured distance up and down and left and right (using a specially designed ruler) and redrawing the outline, to produce the same results as the cut-and-spread method.
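The cut-and-spread arithmetic can be sketched in code. Dividing the total grade evenly among cut lines is a simplifying assumption — published grade rules typically weight some cut lines more than others — but it shows how a total grade breaks down into the 1/64-in. increments the ruler mentioned above measures.

```python
from fractions import Fraction

def spread_amounts(total_grade_in, n_cuts):
    """Divide a total grade evenly among cut lines, in 1/64-in. steps.

    Even division is an illustrative simplification; real grade rules
    assign different amounts to different cut lines.
    """
    per_cut = Fraction(total_grade_in).limit_denominator(64) / n_cuts
    # Snap each spread to the nearest 1/64 in., matching the ruler.
    return [per_cut.limit_denominator(64)] * n_cuts

# e.g. a 1 in. total grade distributed across 4 vertical cut lines
print(spread_amounts(1, 4))  # four spreads of 1/4 in. each
```

Using exact fractions rather than floats keeps the spreads summing precisely back to the total grade, just as they must on the paper pattern.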
<urn:uuid:514b943b-fabc-4ed9-abd6-df5b981e8ee8>
CC-MAIN-2013-20
http://www.threadsmagazine.com/item/4368/making-sense-of-pattern-grading
2013-05-19T18:27:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923972
495
The 75-million-year-old fossil specimens, uncovered in the badlands of Alberta, Canada, include remains of a juvenile and two adult ostrichlike creatures known as ornithomimids. Until now feathered dinosaurs have been found mostly in China and in Germany. "This is a really exciting discovery, as it represents the first feathered dinosaur specimens found in the Western Hemisphere," said Darla Zelenitsky, an assistant professor at the University of Calgary and lead author of the study. "These specimens are also the first to reveal that ornithomimids were covered in feathers, like several other groups of theropod dinosaurs," Zelenitsky said. She said the find "suggests that all ornithomimid dinosaurs would have had feathers." The creatures had a cameo appearance in the original Jurassic Park movie, in which they were shown being chased by a Tyrannosaurus rex. In the movie, however, they were portrayed as having scales rather than plumage – which researchers say they now know was not the case. Francois Therrien, curator at the Royal Tyrrell Museum in Drumheller, Alberta and the co-author of the study, said the discovery revealed another fascinating fact – the existence of early wings in dinosaurs that were too big to fly. "The fact that wing-like forelimbs developed in more mature individuals suggests they were used only later in life, perhaps associated with reproductive behaviours like display or egg brooding," he said.
<urn:uuid:ad42521e-3c7c-4981-9fc4-6e80c1627836>
CC-MAIN-2013-20
http://www.telegraph.co.uk/science/dinosaurs/9634930/Fossils-of-feathered-dinosaur-species-discovered-in-the-Americas.html
2013-05-22T07:35:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.984174
307
What the Lines on World Globes Mean

A geographic coordinate system allows us to determine the location of any place on earth using numbers and letters. In fact, a location can be specified within a few inches, even if it’s half a world away. This ancient system has been refined over the centuries as exploration has advanced understanding and technology has allowed for more accurate measurements. You don’t need any high-tech gadgetry to use coordinates, though – just grab a map or globe.

The Long and Short of It
The roots of our modern coordinate system are over 2,000 years old: a system based on intersecting lines of longitude and latitude. Latitude lines are also called parallels because they run parallel to the equator and to each other. A degree of latitude is just a little over 69 miles wide and there are 180 total, from pole to pole. Because the equator divides the earth into two equal halves or hemispheres, it’s the perfect reference point from which to establish all other latitudes. The equator is thus demarcated as zero degrees (0°) latitude. Each pole is at 90°, so the latitude increases as you move away from the equator. Longitude lines, or meridians, run perpendicular to latitude lines, forming a grid across the planet. If you look at a globe you’ll see that meridians get closer to each other further from the equator, meeting at the poles. That is why, unlike degrees of latitude, degrees of longitude vary in width. There is no natural, obvious spot for 0° longitude, called the prime meridian, so multiple prime meridians existed until the 19th century. In 1884, the International Meridian Conference voted to adopt the British prime meridian location of Greenwich, England. Now degrees of longitude radiate out from that line until they reach 180° on the opposite side of the world.

Crisscrossing the Globe
The easiest way to understand the relationship between lines of longitude and latitude and the earth is to use a globe as a visual aid.
Most globes show parallels and meridians at 15° intervals – wide enough apart not to obscure other important features but close enough to be useful. When coordinates of a location are given, the latitude is always first followed by an N for north of the equator or an S for south. The longitude is next with an E for east or W for west of the prime meridian. For example, Chicago, IL is 41°N 87°W.

Space and Time
Not only do meridians mark distance but time, too. You’ll count 24 meridian or longitude lines on your globe. It takes 24 hours for the earth to make a full rotation on its axis, so every meridian shown, each 15°, marks an hour in time. The meridian east of a chosen longitude line is an hour later, the meridian west will be an hour earlier. You can also calculate how many miles per hour the earth is rotating (it varies by place!) when you know the latitude of a location.
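Both relationships above reduce to one-line formulas: surface speed falls off with the cosine of latitude, and solar time shifts one hour per 15° of longitude. A short sketch, taking the equatorial circumference as roughly 24,901 miles:

```python
import math

EARTH_CIRCUMFERENCE_MI = 24901  # equatorial circumference, approximate

def rotation_speed_mph(latitude_deg):
    """Linear speed of the earth's surface at a given latitude (mph)."""
    return EARTH_CIRCUMFERENCE_MI * math.cos(math.radians(latitude_deg)) / 24

def hours_offset(longitude_a_deg, longitude_b_deg):
    """Approximate solar-time difference between two longitudes (15° per hour)."""
    return (longitude_b_deg - longitude_a_deg) / 15

print(round(rotation_speed_mph(0)))   # ~1038 mph at the equator
print(round(rotation_speed_mph(41)))  # slower at Chicago's latitude
print(hours_offset(0, 30))            # 30° east of Greenwich: 2 hours later
```

The cosine factor is why the speed "varies by place": parallels shrink toward the poles, so a point at high latitude traces a smaller circle in the same 24 hours.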
<urn:uuid:5ef09777-d94a-4842-8c39-634fac83256d>
CC-MAIN-2013-20
http://www.worldglobes.com/usinglatitudeandlongitudelinesarticle.cfm
2013-06-20T08:52:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.871651
650
Geologists and Geophysicists Career Information

Significant Points
· Work at remote field sites is common.
· A bachelor’s degree in geology or geophysics is adequate for entry-level jobs; better jobs with good advancement potential usually require at least a master’s degree.
· A Ph.D. degree is required for most research positions in colleges and universities and in government.

Nature of the Work
Environmental scientists and geoscientists use their knowledge of the physical makeup and history of the Earth to locate water, mineral, and energy resources; protect the environment; predict future geologic hazards; and offer advice on construction and land use projects. Environmental scientists conduct research to identify and abate or eliminate sources of pollutants that affect people, wildlife, and their environments. They analyze and report measurements and observations of air, water, soil, and other sources to make recommendations on how best to clean and preserve the environment. They often use their skills and knowledge to design and monitor waste disposal sites, preserve water supplies, and reclaim contaminated land and water to comply with Federal environmental regulations. Geoscientists study the composition, structure, and other physical aspects of the Earth. By using sophisticated instruments and analyses of the earth and water, geoscientists study the Earth’s geologic past and present in order to make predictions about its future.
For example, they may study the Earth’s movements to try to predict when and where the next earthquake or volcano will occur and the probable impact on surrounding areas to minimize the damage. Many geoscientists are involved in the search for oil and gas, while others work closely with environmental scientists in preserving and cleaning up the environment. Geoscientists usually study, and are subsequently classified in, one of several closely related fields of geoscience, including geology, geophysics, and oceanography. Geologists study the composition, processes, and history of the Earth. They try to find out how rocks were formed and what has happened to them since formation. They also study the evolution of life by analyzing plant and animal fossils. Geophysicists use the principles of physics, mathematics, and chemistry to study not only the Earth’s surface, but also its internal composition; ground and surface waters; atmosphere; oceans; and its magnetic, electrical, and gravitational forces. Oceanographers use their knowledge of geology and geophysics, in addition to biology and chemistry, to study the world’s oceans and coastal waters. They study the motion and circulation of the ocean waters and their physical and chemical properties, and how these properties affect coastal areas, climate, and weather. Geoscientists can spend a large part of their time in the field identifying and examining rocks, studying information collected by remote sensing instruments in satellites, conducting geological surveys, constructing field maps, and using instruments to measure the Earth’s gravity and magnetic field. For example, they often perform seismic studies, which involve bouncing energy waves off buried rock layers, to search for oil and gas or understand the structure of subsurface rock layers. Seismic signals generated by earthquakes are used to determine the earthquake’s location and intensity. 
In laboratories, geologists and geophysicists examine the chemical and physical properties of specimens. They study fossil remains of animal and plant life or experiment with the flow of water and oil through rocks. Some geoscientists use two- or three-dimensional computer modeling to portray water layers and the flow of water or other fluids through rock cracks and porous materials. They use a variety of sophisticated laboratory instruments, including x ray diffractometers, which determine the crystal structure of minerals, and petrographic microscopes, for the study of rock and sediment samples. Geoscientists working in mining or the oil and gas industry sometimes process and interpret data produced by remote sensing satellites to help identify potential new mineral, oil, or gas deposits. Seismic technology also is an important exploration tool. Seismic waves are used to develop a three-dimensional picture of underground or underwater rock formations. Seismic reflection technology may also reveal unusual underground features that sometimes indicate accumulations of natural gas or petroleum, facilitating exploration and reducing the risks associated with drilling in previously unexplored areas. Numerous subdisciplines or specialties fall under the two major disciplines of geology and geophysics that further differentiate the type of work geoscientists do. For example, petroleum geologists explore for oil and gas deposits by studying and mapping the subsurface of the ocean or land. They use sophisticated geophysical instrumentation, well log data, and computers to interpret geological information. Engineering geologists apply geologic principles to the fields of civil and environmental engineering, offering advice on major construction projects and assisting in environmental remediation and natural hazard reduction projects. 
Mineralogists analyze and classify minerals and precious stones according to composition and structure and study their environment in order to find new mineral resources. Paleontologists study fossils found in geological formations to trace the evolution of plant and animal life and the geologic history of the Earth. Stratigraphers study the formation and layering of rocks to understand the environment in which they were formed. Volcanologists investigate volcanoes and volcanic phenomena to try to predict the potential for future eruptions and possible hazards to human health and welfare. Geophysicists may specialize in areas such as geodesy, seismology, or magnetic geophysics. Geodesists study the size and shape of the Earth, its gravitational field, tides, polar motion, and rotation. Seismologists interpret data from seismographs and other geophysical instruments to detect earthquakes and locate earthquake-related faults. Geochemists study the nature and distribution of chemical elements in ground water and Earth materials. Geomagnetists measure the Earth’s magnetic field and use measurements taken over the past few centuries to devise theoretical models to explain the Earth’s origin. Paleomagnetists interpret fossil magnetization in rocks and sediments from the continents and oceans, to record the spreading of the sea floor, the wandering of the continents, and the many reversals of polarity that the Earth’s magnetic field has undergone through time. Other geophysicists study atmospheric sciences and space physics. Hydrology is closely related to the disciplines of geology and geophysics. Hydrologists study the quantity, distribution, circulation, and physical properties of underground and surface waters. They study the form and intensity of precipitation, its rate of infiltration into the soil, its movement through the Earth, and its return to the ocean and atmosphere. 
The work they do is particularly important in environmental preservation, remediation, and flood control. Oceanography also has several subdisciplines. Physical oceanographers study the ocean tides, waves, currents, temperatures, density, and salinity. They study the interaction of various forms of energy, such as light, radar, sound, heat, and wind with the sea, in addition to investigating the relationship between the sea, weather, and climate. Their studies provide the Maritime Fleet with up-to-date oceanic conditions. Chemical oceanographers study the distribution of chemical compounds and chemical interactions that occur in the ocean and sea floor. They may investigate how pollution affects the chemistry of the ocean. Geological and geophysical oceanographers study the topographic features and the physical makeup of the ocean floor. Their knowledge can help oil and gas producers find these minerals on the bottom of the ocean. Biological oceanographers, often called marine biologists, study the distribution and migration patterns of the many diverse forms of sea life in the ocean.

Working Conditions
Some geoscientists spend the majority of their time in an office, but many others divide their time between fieldwork and office or laboratory work. Geologists often travel to remote field sites by helicopter or four-wheel drive vehicles and cover large areas on foot. An increasing number of exploration geologists and geophysicists work in foreign countries, sometimes in remote areas and under difficult conditions. Oceanographers may spend considerable time at sea on academic research ships. Fieldwork often requires working long hours, but workers are usually rewarded by longer than normal vacations. Environmental scientists and geoscientists in research positions with the Federal Government or in colleges and universities often are required to design programs and write grant proposals in order to continue their data collection and research.
Environmental scientists and geoscientists in consulting jobs face similar pressures to market their skills and write proposals to maintain steady work. Travel often is required to meet with prospective clients or investors. Geoscientists held about 28,000 jobs in 2006. Many more individuals held geoscience faculty positions in colleges and universities, but they are classified as college and university faculty. About 25 percent of geoscientists were employed in architectural, engineering, and related services, and 20 percent worked for oil and gas extraction companies. In 2006, State agencies such as State geological surveys and State departments of conservation employed about 3,600 geoscientists. Another 2,900 worked for the Federal Government, including geologists, geophysicists, and oceanographers, mostly within the U.S. Department of the Interior for the U.S. Geological Survey (USGS) and within the U.S. Department of Defense. About 5 percent of geoscientists were self-employed, most as consultants to industry or government.

Training, Qualifications, Adv.
A bachelor’s degree in geology or geophysics is adequate for some entry-level geoscientist jobs, but more job opportunities and better jobs with good advancement potential usually require at least a master’s degree in geology or geophysics. Environmental scientists require at least a bachelor’s degree in hydrogeology; environmental, civil, or geological engineering; or geochemistry or geology, but employers usually prefer candidates with master’s degrees. A master’s degree is required for most entry-level research positions in colleges and universities, Federal agencies, and State geological surveys. A Ph.D. is necessary for most high-level research positions. Hundreds of colleges and universities offer a bachelor’s degree in geology; fewer schools offer programs in geophysics, hydrogeology, or other geosciences.
Other programs offering related training for beginning geological scientists include geophysical technology, geophysical engineering, geophysical prospecting, engineering geology, petroleum geology, geohydrology, and geochemistry. In addition, several hundred universities award advanced degrees in geology or geophysics. Traditional geoscience courses emphasizing classical geologic methods and topics (such as mineralogy, petrology, paleontology, stratigraphy, and structural geology) are important for all geoscientists and make up the majority of college training. Persons studying physics, chemistry, biology, mathematics, engineering, or computer science may also qualify for some environmental science and geoscience positions if their coursework includes study in geology. Those students interested in working in the environmental or regulatory fields, either in environmental consulting firms or for Federal or State governments, should take courses in hydrology, hazardous waste management, environmental legislation, chemistry, fluid mechanics, and geologic logging. An understanding of environmental regulations and government permit issues is also valuable for those planning to work in mining and oil and gas extraction. Hydrologists and environmental scientists should have some knowledge of the potential liabilities associated with some environmental work. Computer skills are essential for prospective environmental scientists and geoscientists; students who have some experience with computer modeling, data analysis and integration, digital mapping, remote sensing, and geographic information systems (GIS) will be the most prepared entering the job market. A knowledge of the Global Positioning System (GPS)—a locator system that uses satellites—also is very helpful. Some employers seek applicants with field experience, so a summer internship may be beneficial to prospective geoscientists. 
Environmental scientists and geoscientists must have excellent interpersonal skills, because they usually work as part of a team with other scientists, engineers, and technicians. Strong oral and written communication skills also are important, because writing technical reports and research proposals, as well as communicating research results to others, are important aspects of the work. Because many jobs require foreign travel, knowledge of a second language is becoming an important attribute to employers. Geoscientists must be inquisitive, able to think logically, and open-minded. Those involved in fieldwork must have physical stamina. Environmental scientists and geoscientists often begin their careers in field exploration or as research assistants or technicians in laboratories or offices. They are given more difficult assignments as they gain experience. Eventually, they may be promoted to project leader, program manager, or another management and research position. Job Outlook. Employment of environmental scientists and hydrologists is expected to grow. In the past, employment of geologists and some other geoscientists has been cyclical and largely affected by the price of oil and gas. When prices were low, oil and gas producers curtailed exploration activities and laid off geologists. When prices were up, companies had the funds and incentive to renew exploration efforts and hire geoscientists in large numbers. In recent years, a growing worldwide demand for oil and gas and new exploration and recovery techniques—particularly in deep water and previously inaccessible sites—have returned some stability to the petroleum industry, with a few companies increasing their hiring of geoscientists. Growth in this area, though, will be limited due to increasing efficiencies in finding oil and gas. Geoscientists who speak a foreign language and who are willing to work abroad should enjoy the best opportunities. 
The need for companies to comply with environmental laws and regulations is expected to contribute to the demand for environmental scientists and some geoscientists, especially hydrologists and engineering geologists. Issues of water conservation, deteriorating coastal environments, and rising sea levels also will stimulate employment growth of these workers. As the population increases and moves to more environmentally sensitive locations, environmental scientists and hydrologists will be needed to assess building sites for potential geologic hazards and to address issues of pollution control and waste disposal. Hydrologists and environmental scientists also will be needed to conduct research on hazardous waste sites to determine the impact of hazardous pollutants on soil and groundwater so engineers can design remediation systems. The need for environmental scientists and geoscientists who understand both the science and engineering aspects of waste remediation is growing. An expected increase in highway building and other infrastructure projects will be an additional source of jobs for engineering geologists. Employment of environmental scientists and geoscientists is more sensitive to changes in governmental energy or environmental policy than employment of other scientists. If environmental regulations are rescinded or loosened, job opportunities will shrink. On the other hand, increased exploration for energy sources will result in improved job opportunities for geoscientists. Jobs with the Federal and State governments and with organizations dependent on Federal funds for support will experience little growth over the next decade, unless budgets increase significantly. The Federal Government is expected to increasingly outsource environmental services to private consulting firms. This lack of funding will affect mostly geoscientists performing basic research. Median annual earnings of geoscientists were $68,730 in May 2009. 
The middle 50 percent earned between $49,260 and $98,380; the lowest 10 percent earned less than $37,700, and the highest 10 percent earned more than $130,750. According to the National Association of Colleges and Employers, beginning salary offers in July 2009 for graduates with bachelor's degrees in geology and related sciences averaged $39,365 a year. In 2009, the Federal Government's average salary for managerial, supervisory, and nonsupervisory positions was $83,178 for geologists, $94,836 for geophysicists, and $87,007 for oceanographers. The petroleum, mineral, and mining industries are vulnerable to recessions and to changes in oil and gas prices, among other factors, and usually release workers when exploration and drilling slow down. Consequently, they offer higher salaries, but less job security, than other industries. Related Occupations. Many geoscientists work in the petroleum and natural gas industry. This industry also employs many other workers in the scientific and technical aspects of petroleum and natural gas exploration and extraction. 
Sources of Additional Information. Information on training and career opportunities for geologists is available from either of the following organizations: Information on oceanography and related fields is available from: Information on obtaining a position as a geologist, geophysicist, or oceanographer with the Federal Government is available from the Office of Personnel Management through USAJOBS, the Federal Government’s official employment information system. This resource for locating and applying for job opportunities can be accessed through the Internet at http://www.usajobs.opm.gov or through an interactive voice response telephone system at (703) 724-1850 or TDD (978) 461-8404. These numbers are not toll-free, and charges may result.
Humans have walked the Earth for 190,000 years, a mere blip in Earth's 4.5-billion-year history. A lot has happened in that time. Earth formed and oxygen levels rose in the foundational years of the Precambrian. The productive Paleozoic era gave rise to hard-shelled organisms, vertebrates, amphibians, and reptiles. Dinosaurs ruled the Earth in the mighty Mesozoic. And 64 million years after dinosaurs went extinct, modern humans emerged in the Cenozoic era. The planet has seen an incredible series of changes—discover them for yourself.
In the first half of the 20th century, almost anything seemed possible. The design and architecture of the time suggested that objects would grow more and more rounded, so obviously it made sense to assume that our homes and vehicles would eventually take on the shape of spheres. In the above illustration from a 1934 issue of Everyday Science and Mechanics, one of those futuristic spherical homes is being transported to its new lot. According to the illustration and the accompanying article, homeowners would just need to attach special protective tires and hitch their homes up to tractors to move to a new location. The sphere trend was still going strong a decade later when this 1946 illustration from Amazing Stories magazine imagined a giant gyroscopic leisure ball. The ball was envisioned as a kind of next-generation vacation vehicle, taking travelers wherever they wanted to go and making them feel like super-sized hamsters at the same time. It looks a lot like a land-locked version of a cruise ship, and the transparent plastic bubble that surrounds the interior core would let travelers enjoy the passing scenery from the deck without worrying about silly things like weather.
Learning styles in the Texas Holdem learning process are not limited to auditory and visual learning; they extend to another, very interesting, way of learning called kinaesthetic learning. The kinaesthetic or tactile learner learns through touch, with the skin as the sensory organ. Kinaesthetic learning works through actions and movements: a learner of this sort learns by manipulating and handling things, with emotion as a driving force. It is important that one determines one’s own style of learning. Following is a list of some traits that are characteristic of a kinaesthetic learner.
- A kinaesthetic learner would trust his gut and his instincts to make decisions at the poker table.
- A kinaesthetic learner often keeps fiddling with his chips at the poker table and can also have a sleight of hand with the chips.
- On taking a big decision, say going all-in, this type of learner gets the urge to walk around.
- While playing online poker, this player may be seen engaged in other activities like squeezing a ball or fiddling with a pencil, and can even be noticed changing positions frequently.
- This sort of learner recalls the emotions he went through when recounting a past experience.
- A kinaesthetic learner has an exceptionally good sense of direction.
- This sort of learner likes to spend his/her leisure time doing something creative.
- Assembling a product or gadget without much help from the manual is another talent that this learner boasts of.
- A kinaesthetic learner is a careful shopper and often tests appliances and products when he/she buys them.
- This learner expresses emotions through various touches like hugs and kisses, and even expresses anger in a tangible sort of way, like by slamming doors or stomping his/her feet.
- This learner has a soft spot for sports and likes to think of himself/herself as an athlete. 
- Long sittings in one position are tough for this type of learner.
- A kinaesthetic learner can work things to his/her advantage and is often hyperactive. Also, a kinaesthetic learner always keeps learning and never stops.
- Kinaesthetic learners should learn on the move by pairing the learning process with a physical activity, and should also take breaks between learning sessions.
- It is important that you get the big picture of the learning content before the learning process actually starts for you. Also, the use of coloured transparent sheets has helped people with focusing problems to focus better.
- It is important for the player to feel comfortable at the poker table, as this helps him concentrate more on the game.
- Classical baroque music can help a learner of this sort with his/her concentration. One should, however, avoid listening to heavy music with rhythmic beats when learning this way.
- Other games like chess and backgammon, which require a specific strategy and the ability to gauge your opponent's moves, can also help you develop better poker skills.
- Don't let your emotions take over at the poker table; make decisions with a calm head.
A kinaesthetic learner has a plethora of manipulative poker resources available to help him on his way to poker greatness; the challenging task is using these resources to their full benefit.
After you've entered data, you may find that you need another column to hold additional information. For example, your worksheet might need a column after the date column, for order IDs. Or maybe you need another row, or rows. You might learn that Buchanan, Suyama, or Peacock made more sales than you knew. That's great, but do you have to start over? Of course not. To insert a single column, click any cell in the column immediately to the right of where you want the new column to go. So if you want an order-ID column between columns B and C, you'd click a cell in column C, to the right of the new location. Then, on the Home tab, in the Cells group, click the arrow on Insert. On the drop-down menu, click Insert Sheet Columns. A new blank column is inserted. To insert a single row, click any cell in the row immediately below where you want the new row to go. For example, to insert a new row between row 4 and row 5, click a cell in row 5. Then in the Cells group, click the arrow on Insert. On the drop-down menu, click Insert Sheet Rows. A new blank row is inserted. Excel gives a new column or row the heading its place requires, and changes the headings of later columns and rows. Click Play to watch the process of inserting a column and a row in a worksheet. In the practice you'll learn how to delete columns and rows if you no longer need them.
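The insert-before rule described above (click a cell in the column to the right, or the row below, of where the new one should go) can be sketched outside Excel as well. The following is a minimal Python model, not Excel's own API: it treats a worksheet as a list of rows, and the helper names and sample data are hypothetical.

```python
def insert_column(rows, col_index, fill=None):
    """Insert a blank column before col_index (0-based), shifting later
    columns right — like clicking a cell in that column and choosing
    Insert Sheet Columns."""
    for row in rows:
        row.insert(col_index, fill)
    return rows


def insert_row(rows, row_index, width, fill=None):
    """Insert a blank row before row_index (0-based), shifting later rows down."""
    rows.insert(row_index, [fill] * width)
    return rows


# Hypothetical sheet: column A holds dates, column B holds amounts.
sheet = [["2024-01-01", 250], ["2024-01-02", 120]]

# Want an order-ID column between A and B: insert before index 1.
insert_column(sheet, 1)
# Want a blank row between the two data rows: insert before index 1.
insert_row(sheet, 1, width=3)
```

As in Excel, everything at or past the insertion point shifts; nothing is overwritten.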
Cancer rates on the rise “Rising cancer rates mean four in 10 people in the UK get the disease at some point in their lives”, reports BBC News. The news story is based on a press release by the health charity Macmillan Cancer Support. The press release reports on a new analysis of cancer statistics in the UK over the past decade. The figure of ‘4 in 10’ comes from this data. Researchers also examined the ‘journeys’ of people who were diagnosed with one of three types of cancer to see how their health changed over the next 10 years. The full results are not available and have only been presented at a conference, but the press release has some details on survival from colorectal cancer, which can be found below. The headlines that 4 in 10 people will now get cancer at some point in their lives may sound worrying. However, it does not mean that the number is increasing because more people are being exposed to cancer risks. Some of these changes may be due to lifestyle factors linked to cancer, such as obesity, alcohol and smoking, but many will be due to an ageing population, and improved disease detection, treatment and survival. There are ways to cut your risk of cancer, including eating well, exercising and avoiding smoking and heavy alcohol consumption. Read our Live Well article for more information on ways to cut your risk. Where did the story come from? The news reports are based on a press release by the charity, Macmillan Cancer Support, which reported the findings from one of its studies. The study’s findings were presented at a conference on cancer in June called “Liberating Information, Improving Outcomes”, which was hosted by the National Cancer Intelligence Network. This appraisal is partly based on an abstract (a short summary of the study and its findings) from the conference. The research was carried out by researchers from the Monitor Group, Europe, the University of Nottingham, the University of Leeds and Macmillan Cancer Support. 
The abstract does not include information on funding sources. The current study used data from multiple sources, including cancer registries that collect data on all reported cancer cases, and a previous study that was published in 2008 in the British Journal of Cancer. The press release also refers to the statistics that emerged from that data on the prevalence of cancer in the UK (i.e. the 4-in-10 cancer figure). It is these statistics that the media has tended to focus on. This work has not been published in a peer-reviewed journal. This research was reported accurately by BBC News. What kind of research was this? This study involved an analysis of clinical data, which sought to map the different experiences of patients with colorectal cancer, multiple myeloma and Hodgkin’s disease. The overall aim of the study was to describe the cancer patients’ journeys from the time of their cancer diagnosis to eight years post-diagnosis, based on data of their hospital activities. The researchers plan to use this information to assess the likely path that a patient with one of these cancers would take based on certain factors before and at the time of their diagnosis. Further information released by Macmillan Cancer Support was derived from cancer prevalence figures previously published by the British Journal of Cancer, and various government registries, including the Office for National Statistics. Such data analysis is useful for describing trends in disease, but does not provide sufficient information to conclusively define the underlying causes of these trends. What did the research involve? The researchers used clinical data from cancer registries and Hospital Episode Statistics to trace the experiences of individual cancer patients in the healthcare system. 
Hospital inpatient records spanning approximately 10 years, both before and after their cancer diagnosis, were used to describe patients’ patterns of healthcare use, burden of disease and other clinical outcomes, such as duration of survival after diagnosis and development of associated diseases. The researchers also carried out a further analysis of cancer registries and national statistics to estimate the prevalence of cancer in the UK. Information on the number of people admitted to hospital and their treatments was used to provisionally describe the types of health problems cancer patients seek hospital care for, the severity of illness, and the impact of their cancer on other aspects of their health. The study has not yet been published in a peer-reviewed journal. The researchers note that the statistics released on the Macmillan Cancer Support website relating to associated health problems should be considered provisional and require further clinical validation. What were the basic results? The Macmillan Cancer Support website gives the following cancer prevalence figures that had previously been published by the British Journal of Cancer and various government registries, including the Office for National Statistics. These data indicated that: - 42% of people who die in the UK will have had a cancer diagnosis at some point in their lives - the number of people in the UK living with cancer has increased by approximately one third in the last decade Provisional data released by Macmillan from the study presented at the conference indicate that out of a group of colorectal cancer patients who survive five to seven years post-diagnosis: - 22% will have advanced cancer - 42% will have ongoing health problems such as cardiovascular or intestinal diseases - 36% will have no ongoing health problems related to their treatment How did the researchers interpret the results? 
Macmillan Cancer Support attributes this increase in the proportion of the population living with cancer to several factors, including: - improvement in cancer diagnosis and treatments, which prolongs survival - an ageing population, as older people are more likely to develop cancer The charity recommends an increase in services to improve cancer outcomes. Macmillan urges the NHS to adapt to shifting trends in cancer prevalence and survivability through improved service planning and personalised care. The headlines that 4 in 10 people will now get cancer at some point in their lives may sound worrying. However, it does not mean that the number is increasing because more people are being exposed to cancer risks. Some of these cancer cases may be due to avoidable lifestyle factors, such as obesity, alcohol and smoking. However, many others will be due to improved screening, diagnostic and treatment options, which allow physicians to detect cancer earlier than previously possible, and to treat the cancer more successfully once it has been detected. The number of people living with cancer has increased as cancer detection and survival has improved and people are generally living longer. The treatment of other diseases that used to account for many deaths has also improved. Where once they would have died from a disease such as heart disease, they are now living longer and dying from cancer. A longer average life expectancy, combined with improved diagnostic and treatment technologies, is likely to account for much of the increase in the number of people in the UK living with cancer. Many cancers are now treated as chronic conditions rather than a terminal illness. It is important to note that the risk of being diagnosed with cancer is not constant and may vary significantly over the course of one’s life. Age and lifestyle are significant risk factors for cancer diagnosis. There are ways to reduce the risks of developing cancer. 
These include eating well, exercising and avoiding smoking and heavy alcohol consumption. Read our Live Well article on how to cut your risk of cancer.
President Barack Obama, in a speech, called on Israel to return to some variation of its pre-1967 borders. The practical significance of these and other diplomatic evolutions in relation to Israel is questionable. Historically, U.N. declarations have had variable meanings, depending on the willingness of great powers to enforce them. Obama’s speech on Israel, and his subsequent statements, created enough ambiguity to make exactly what he was saying unclear. Nevertheless, it is clear that the diplomatic atmosphere on Israel is shifting. There are many questions concerning this shift, ranging from the competing moral and historical claims of the Israelis and Palestinians to the internal politics of each side to whether the Palestinians would be satisfied with a return to the pre-1967 borders. All of these must be addressed, but this analysis is confined to a single issue: whether a return to the 1967 borders would increase the danger to Israel’s national security. Later analyses will focus on Palestinian national security issues and those of others. It is important to begin by understanding that the pre-1967 borders are actually the borders established by the armistice agreements of 1949. The 1948 U.N. resolution creating the state of Israel created a much smaller Israel. The Arab rejection of what was called “partition” resulted in a war that created the borders that placed the West Bank (named after the west bank of the Jordan River) in Jordanian hands, along with substantial parts of Jerusalem, and placed Gaza in the hands of the Egyptians. The 1949 borders substantially improved Israel’s position by widening the corridors between the areas granted to Israel under the partition, giving it control of part of Jerusalem and, perhaps most important, control over the Negev. The latter provided Israel with room for maneuver in the event of an Egyptian attack — and Egypt was always Israel’s main adversary. 
At the same time, the 1949 borders did not eliminate a major strategic threat. The Israel-Jordan border placed Jordanian forces on three sides of Israeli Jerusalem, and threatened the Tel Aviv-Jerusalem corridor. Much of the Israeli heartland, the Tel Aviv-Haifa-Jerusalem triangle, was within Jordanian artillery range, and a Jordanian attack toward the Mediterranean would have to be stopped cold at the border, since there was no room to retreat, regroup and counterattack. For Israel, the main danger did not come from Jordan attacking by itself. Jordanian forces were limited, and tensions with Egypt and Syria created a de facto alliance between Israel and Jordan. In addition, the Jordanian Hashemite regime lived in deep tension with the Palestinians, since the former were British transplants from the Arabian Peninsula, and the Palestinians saw them as well as the Israelis as interlopers. Thus the danger on the map was mitigated both by politics and by the limited force the Jordanians could bring to bear. Nevertheless, politics shift, and the 1949 borders posed a strategic problem for Israel. If Egypt, Jordan and Syria were to launch a simultaneous attack (possibly joined by other forces along the Jordan River line) all along Israel’s frontiers, the ability of Israel to defeat the attackers was questionable. The attacks would have to be coordinated — as the 1948 attacks were not — but simultaneous pressure along all frontiers would leave the Israelis with insufficient forces to hold and therefore no framework for a counterattack. From 1948 to 1967, this was Israel’s existential challenge, mitigated by the disharmony among the Arabs and the fact that any attack would be detected in the deployment phase. Israel’s strategy in this situation had to be the pre-emptive strike. Unable to absorb a coordinated blow, the Israelis had to strike first to disorganize their enemies and to engage them sequentially and in detail. 
The 1967 war represented Israeli strategy in its first generation. First, it could not allow the enemy to commence hostilities. Whatever the political cost of being labeled the aggressor, Israel had to strike first. Second, it could not be assumed that the political intentions of each neighbor at any one time would determine their behavior. In the event Israel was collapsing, for example, Jordan’s calculations of its own interests would shift, and it would move from being a covert ally to Israel to a nation both repositioning itself in the Arab world and taking advantage of geographical opportunities. Third, the center of gravity of the Arab threat was always Egypt, the neighbor able to field the largest army. Any pre-emptive war would have to begin with Egypt and then move to other neighbors. Fourth, in order to control the sequence and outcome of the war, Israel would have to maintain superior organization and technology at all levels. Finally, and most important, the Israelis would have to move for rapid war termination. They could not afford a war of attrition against forces of superior size. An extended war could drain Israeli combat capability at an astonishing rate. Therefore the pre-emptive strike had to be decisive. The 1949 borders actually gave Israel a strategic advantage. The Arabs were fighting on external lines. This means their forces could not easily shift between Egypt and Syria, for example, making it difficult to exploit emergent weaknesses along the fronts. The Israelis, on the other hand, fought from interior lines, and in relatively compact terrain. They could carry out a centrifugal offense, beginning with Egypt, shifting to Jordan and finishing with Syria, moving forces from one front to another in a matter of days. Put differently, the Arabs were inherently uncoordinated, unable to support each other. 
The pre-1967 borders allowed the Israelis to be superbly coordinated, choosing the timing and intensity of combat to suit their capabilities. Israel lacked strategic depth, but it made up for it with compact space and interior lines. If it could choose the time, place and tempo of engagements, it could defeat numerically superior forces. The Arabs could not do this.
Help For Preemies "The prognosis for NEC worsens once bowel perforation occurs," said the study's lead author, Ricardo Faingold, M.D., currently an assistant professor of radiology at McGill University in Montreal. "Earlier detection of necrotic or dead bowel in NEC will improve an infant's chance for survival." From 2000 to 2002, Dr. Faingold and colleagues at the University of Toronto used color Doppler sonography to examine 30 premature and full-term infants with suspected or proven NEC. Researchers then compared the CDS findings with those from abdominal x-rays. CDS uses high-frequency sound waves to detect and quantify blood flow. When x-rays are used to diagnose dead bowel in NEC, doctors are looking for perforations in the intestine or gas in the abdomen that escapes from these holes. The study results indicated that CDS was more sensitive and specific than x-ray for determining NEC in newborns. "It's a very simple idea," Dr. Faingold said. "If there is blood flow to the wall of the intestine, that's a good sign. If there is no blood flow, that's bad. It means that particular area of the intestine is dying or is dead. When you see free gas in the x-ray, it may be too late. The babies are very sick by then." Dr. Faingold said CDS can also be used to measure intestinal blood flow in adults, a procedure that could benefit patients with a variety of bowel disorders, including Crohn disease, diverticulitis and ischemic bowel. To determine what constituted abnormal blood flow in the bowels of infants, researchers first compared the CDS data from the 30 premature and full-term newborns having suspected or proven NEC with a control group of 30 premature and full-term newborns without evidence of intestinal or cardiovascular disease. The researchers used CDS over other ultrasound procedures because color Doppler shows the presence or absence of blood flow in the intestines and whether that flow is normal, increased or absent. 
CDS is also noninvasive and free of ionizing radiation. Unlike x-ray, CDS was also able to detect various stages of NEC based on the type of blood flow to the intestine. This is important because the range in treatment options--from antibiotics to surgery--is based on the severity and progression of the disease. "This procedure is not intended as a substitute for the x-ray," Dr. Faingold said. "But in the near future, color Doppler sonography will become part of the overall assessment of premature babies."
<urn:uuid:e4df64b7-1904-4193-96dd-785fb8cf6968>
CC-MAIN-2013-20
http://www.pregnancyandbaby.com/baby/articles/943617/color-doppler-sonography-speeds-detection-of-serious-illness-in-premature-infants
2013-06-19T14:20:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936666
543
KNOW YOUR SOIL TYPE Soil is the result of different sizes and quantities of rock particles. The amounts of air and moisture within the soil influence soil conditions. The amount of organic matter, also known as humus, is extremely important: organic material comes from decayed plants or animals and provides growing plants with nutrients. Soil has a pH value, which measures acidity and alkalinity on a scale from 0 (acidic) to 14 (alkaline), with 7 being neutral. Some plants prefer a more acidic or more alkaline soil, but generally most plants thrive in slightly acidic to neutral soil. Knowing the pH level of your garden soil tells you whether adjustments are needed to create better growing conditions, and it also helps you choose plants that will thrive at that particular pH level. To determine your soil's pH value you need to do a soil test; some nurseries sell test kits. Improve a highly acidic or highly alkaline result by following the soil test's recommendations. Here are some soil types. CLAY SOIL- (Heavy Soil) Similar to the clay used for pottery. Can remain too damp after watering and can dry until it cracks in the sun. Adding organic matter to a clay soil improves drainage and makes the soil more fertile. SANDY SOIL- (Light Soil) Has a grainy texture similar to beach sand. Dries and drains very fast, washing the soil's nutrients away. Adding organic matter often is necessary to keep the soil rich in nutrients. LIMY SOIL- (Alkaline) Quite stony and drains quickly. Adding organic matter can improve a limy soil. CHALKY SOIL- (Very Alkaline) Contains clumps similar to children's chalk. Rain water drains very quickly, flushing needed nutrients away. Try adding organic matter for some improvement. 
PEATY SOIL- (Acidic) Organic material is plentiful, which helps hold moisture in longer. Too much moisture can be corrected by adding sand for better drainage. Know your soil types, pH levels, and drainage habits. This can help determine where plants grow best in your next home garden. TIP- Learn from others. Take a look around your neighborhood and see what grows well. You will probably notice similar plantings in different home gardens. The Simple Facts- Sometimes it's good to get back to basics. Many factors come into play when trying to achieve a successful flowering garden. For starters, knowing your soil type and being aware of the sunlight's effects on your garden can make all the difference. BASIC GARDENING Read simple facts about soil, sunlight, and water. SUNLIGHT Plants need sunlight to grow and maintain their health. Generally flower gardens are located in full sun, partial shade, or heavy shade. Observe how the shaded areas react to moisture, or how fast the soil dries in full sun. Does the soil need improved drainage, or does more organic material need to be added to create a better growing condition for flowers and plants? Be aware of how the sun's effects change with each season. Watching shadows shift as the day passes can help determine planting locations for individual plant needs. Keep in mind that some plants will grow successfully in altered sunlight conditions. TIP- I have planted many flowers and seeds and discovered that they will thrive not only in the recommended conditions but in other sunlight exposures as well. Don't always limit yourself. You might be pleasantly surprised. WATER Plants cannot survive on soil and sun alone. Water plants effectively. Know your climate, and choose plants that will thrive in those conditions. 
Knowing the water needs of the plants you've put in your garden is valuable. Water only plants that need it. Watch for wilting leaves or cracking soil, signs that watering is required immediately. Soil enriched with organic matter holds moisture longer, reducing the need for watering. Knowing the soil and sun conditions can guide you to better watering habits. TIP- Watering in the evenings is beneficial because of less evaporation. Saving rain water provides an excellent water source.
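The soil-test advice above can be sketched as a tiny classifier. This is an illustrative sketch only: the 6.5/7.5 thresholds are assumed cut-offs for this example, not horticultural standards, and `classify_soil_ph` is a hypothetical helper name.

```python
def classify_soil_ph(ph: float) -> str:
    """Classify a soil-test pH reading on the 0-14 scale.

    Thresholds of 6.5 and 7.5 are illustrative assumptions; many plants
    tolerate a wider band around neutral.
    """
    if not 0 <= ph <= 14:
        raise ValueError("pH must be between 0 and 14")
    if ph < 6.5:
        return "acidic"
    if ph > 7.5:
        return "alkaline"
    return "roughly neutral"


# Example soil-test readings for three garden beds.
for reading in (5.2, 7.0, 8.1):
    print(reading, classify_soil_ph(reading))
```

A kit or nursery test gives you the reading; a rule like this just turns it into a plant-selection hint (acid-lovers for the first bed, lime-tolerant plants for the third).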
<urn:uuid:12de02ce-4f3a-43bd-b85d-a5e24d3b8b0d>
CC-MAIN-2013-20
http://www.flowergardennews.com/Gardening.html
2013-05-24T22:36:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931444
912
Art Attack: Some great ideas for practical art with nicely presented step-by-step instructions in colour photos. You will have to scroll down the list as pupils work through the lesson, but the clear graphics are definitely an asset. The Artist's Toolkit: Various artistic concepts explored with videos and animations. Also the chance to watch artists work and talk about what they are doing. Hercules: This is a website from an organisation called Access Art (part of The Arts Education Exchange). This website (in Flash) looks at statues of Hercules and the stories associated with the mythological character. The Case of Grandpa's Painting: Pupils have to identify the artist behind a stolen work of art. They try to match the work of art to six famous artists, and once they think they have a match, they will need to look at colour, composition, style and subject to decide whether it is a definite match. This task would need a lot of good questioning to support it, but the ideas are interesting and there is a nice sting in the tail at the end of the story. A school's website with examples of secondary pupils' artwork to inspire budding artists – excellent! A useful website for Art Week ideas and artists. Art Masterclasses online! An excellent site with teacher resources, pictures of different artwork, a glossary, etc.
<urn:uuid:659e903a-7c36-452f-abd4-20c95869c794>
CC-MAIN-2013-20
http://www.harrietsham.kent.sch.uk/art_efacts.aspx
2013-05-22T01:11:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.919959
282
July 2012 was the hottest month on record for the contiguous (lower 48) United States, according to the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA). It turns out that the month was pretty warm globally as well, lining up as the fourth warmest July since modern record-keeping began in 1880. The map above shows temperature anomalies for July 2012, as analyzed by the NASA Goddard Institute for Space Studies (GISS). That is, the map shows how much warmer or cooler each area was in July 2012 compared with the average for the month from 1951–1980. To build their map, scientists at GISS use publicly available data from 6,300 meteorological stations around the world; ship-based and satellite observations of sea surface temperature; and Antarctic research station measurements. For more explanation of how the analysis works, read World of Change: Global Temperatures. Note that the map does not depict absolute temperatures; it shows changes from the long-term average. The darkest reds are as much as 4° Celsius (7° Fahrenheit) above the norm for the month; white is normal; the darkest blues are 4°C below normal. In addition to extreme warming over the United States, the Antarctic Peninsula and much of eastern Europe and North Africa were especially hot in July 2012. According to NCDC: “The average combined global land and ocean surface temperature for July 2012 was 0.62°C (1.12°F) above the 20th century average of 15.8°C (60.4°F)...The Northern Hemisphere land surface temperature for July 2012 was the all-time warmest July on record, at 1.19°C (2.14°F) above average...the fourth month in a row that the Northern Hemisphere has set a new monthly land temperature record.” In a recent analysis, NASA GISS director James Hansen and colleagues presented statistics showing that extreme summer heat waves have become much more common in the temperature record as a result of global warming. 
During the 1951 to 1980 base period used in the analysis, 33 percent of Earth’s land surface experienced statistically hot summers. In the past decade, the number of hot summers has risen to 75 percent of land area. Moreover, extreme heat events—in statistical terms, three standard deviations from the norm—that used to affect 1 percent of the land area in the past have been affecting as much as 10 percent of land area in the years since 2006. “‘Climate dice,’ describing the chance of unusually warm or cool seasons, have become more and more ‘loaded’ in the past 30 years, coincident with rapid global warming,” Hansen and colleagues wrote. “The distribution of seasonal mean temperature anomalies has shifted toward higher temperatures and the range of anomalies has increased....We can state, with a high degree of confidence, that extreme anomalies such as those in Texas and Oklahoma in 2011 and Moscow in 2010 were a consequence of global warming because their likelihood in the absence of global warming was exceedingly small.” - Hansen, J., Sato, M., Ruedy, R. (2012, August 6) Perception of Climate Change. Proceedings of the National Academy of Sciences. - NOAA National Climatic Data Center (August 2012) State of the Climate: Global Analysis - July 2012. Accessed August 16, 2012. - NASA (2012, August 6) Research Links Extreme Summer Heat Events to Global Warming. Accessed August 16, 2012. - NASA Earth Observatory (n.d.) World of Change: Global Temperatures. - NASA Goddard Institute for Space Studies (n.d.) GISS Surface Temperature Analysis. Accessed August 16, 2012. - NOAA ClimateWatch (2012, August) Hottest.Month.Ever...Recorded. Accessed August 16, 2012. NASA image by Robert Simmon, based on data from the Goddard Institute for Space Studies. Caption by Mike Carlowicz.
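The three-standard-deviation threshold Hansen and colleagues use can be illustrated with a short Python sketch. The base-period temperatures below are made-up sample data for a single hypothetical location, not GISS measurements; the method (anomaly relative to a 1951–1980 mean, expressed in standard deviations) is what the article describes.

```python
import statistics

# Hypothetical June-August mean temperatures (deg C) at one location
# during the 1951-1980 base period, plus one recent summer to evaluate.
base_period = [21.8, 22.1, 21.5, 22.4, 21.9, 22.0, 21.7, 22.3, 21.6, 22.2]
recent_summer = 24.1

mean = statistics.mean(base_period)          # base-period climatology
sigma = statistics.stdev(base_period)        # base-period variability
anomaly = recent_summer - mean               # departure from the norm

# Events beyond three standard deviations are the "extremely hot"
# category in the Hansen et al. analysis.
is_extreme = anomaly > 3 * sigma
print(f"anomaly: {anomaly:+.2f} deg C ({anomaly / sigma:.1f} sigma); extreme: {is_extreme}")
```

With these toy numbers the recent summer sits far outside the base-period spread, which is the statistical sense in which the paper calls such events "exceedingly small" in likelihood without warming.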
<urn:uuid:0dbe7851-0f1a-4ec5-80d5-eab28d4499d2>
CC-MAIN-2013-20
http://www.earthobservatory.nasa.gov/IOTD/view.php?id=78869
2013-05-24T09:40:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704433753/warc/CC-MAIN-20130516114033-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.930339
823
This image shows the nucleus of comet Tempel 1 and the nucleus of comet Hartley 2. (The comets are placed next to each other for comparison; they are nowhere near each other in space.) As you can see, comets come in all shapes and sizes. Tempel 1 is five times larger than Hartley 2. Jets are easily seen coming off Hartley 2 but extensive processing was required to see jets on Tempel 1. Tempel 1 is 7.6 kilometers (4.7 miles) in the longest dimension. Hartley 2 is 2.2 kilometers (1.4 miles) long. NASA's Deep Impact spacecraft took both images. When the spacecraft took the Hartley 2 image, it was called the EPOXI mission. Image credit: NASA/JPL-Caltech/UMD
<urn:uuid:4b7a70e9-a5be-47ac-a68e-114ecd6c49f4>
CC-MAIN-2013-20
http://www.jpl.nasa.gov/education/spaceimages/index.cfm?category=12&image=32
2013-06-19T19:06:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.921156
178
Newly-launched missions extend ESA's radiation map of space As the Herschel and Planck observatories head towards their final orbits 1.5 million kilometres from Earth each spacecraft has a small but significant passenger aboard – a device no bigger than a shoebox, the latest in a family of monitors piggybacking on ESA missions to chart variations in radiation across different regions of space. The instrument is known as the Standard Radiation Environment Monitor (SREM), and it has been designed to detect highly charged particles expelled from the Sun, surrounding Earth in radiation belts, or originating from interstellar space – known as 'cosmic rays'. The SREM's main purpose is to identify radiation hazards threatening its host spacecraft, but it also yields a detailed picture of the space radiation environment. Herschel and Planck are transporting their SREMs into orbit around the distant second Lagrangian point (L2), a point in space behind Earth where combined solar and terrestrial gravity keeps the spacecraft orbiting the Sun at the same rate as the Earth. These monitors are joining identical SREMs already operational in a variety of other orbits: - in low-Earth orbit on the Proba-1 technology demonstrator - in medium-Earth orbit aboard the GIOVE-B test satellite for ESA's Galileo satellite navigation system - on the Integral gamma ray observatory whose highly eccentric orbit takes it a maximum 153 000 km from Earth - and aboard the Rosetta comet rendezvous mission, in deep space beyond Mars. "For the first time we have been able to observe the same solar energetic particle events from different locations in the Solar System at the same time while using basically the same instrument," says Petteri Nieminen of ESA's Space Environment and Effects section. "That is unique." Earth's magnetic field guards against interplanetary radiation, but its protection diminishes with distance. 
The lowest-altitude SREM aboard Proba-1 orbits within this 'magnetosphere', although its path passes through a zone of heightened particle radiation known as the South Atlantic Anomaly. Higher-orbiting SREMs pass out of the magnetosphere altogether, crossing through the bands of trapped radiation particles known as the Van Allen Belts, while the SREMs aboard Rosetta and now Herschel and Planck sample radiation away from Earth orbit in interplanetary space. The devices can be thought of as the satellite equivalents of the radiation dosimeters worn by astronauts in-orbit. High levels of particle radiation can disrupt spacecraft electronics as well as degrade crucial onboard materials such as sensor lenses or solar cells. But its effects on unshielded human biology would be even worse. "Radiation is going to be a crucial issue when it comes to planning the future human exploration of the lunar surface and Mars," explains Mr Nieminen. "Exposure to the most energetic protons and electrons detected by SREM could cause serious radiation sickness in unprotected astronauts." The SREM design incorporates diodes that generate a measurable electric charge when they come into contact with energetic charged particles. Placed behind conical entrances, these diodes are sensitive to the direction as well as the charge and energy of incoming particles. A batch of ten SREM units was constructed in 2000 by Swiss firm Oerlikon Space (then known as Contraves) working with Switzerland's Paul Scherrer Institute under ESA contract. The design was developed from an earlier Radiation Environment Monitor (REM) flown on the UK's STRV 1B satellite and the Mir Space Station during the 1990s. The first SREM was flown on the STRV-1c satellite but its operation was cut short by spacecraft failure. With six further units now in space three more SREMs remain available for future flight opportunities. SREM results to date are feeding back into future spacecraft designs. 
GIOVE-B's orbit for example takes it through the highly radioactive outer Van Allen Belt, and its findings have helped to assess the shielding required for the operational Galileo satellites set to follow it. "The previous models we have been working with were based on old NASA data from the 1960-70s," says Mr Nieminen. "However, with a European instrument we have been able to actually quantify the radiation and we do see some divergence between the old models and what we observed for ourselves." The latest SREMs will probe radiation conditions prevailing around L2, likely to be valuable data for the many next-decade missions headed for this region, including ESA's Gaia and the ESA-NASA James Webb Space Telescope. Future missions will probably be carrying their own radiation detectors: ESA's Space Environment and Effects section is planning the development of next-generation units that will be much more compact than the 2.5 kg SREM while bettering their performance. The current SREMs have exhibited very high sensitivity indeed, Mr Nieminen recounts: "On 27 December 2004, the unit aboard Integral even managed to detect an X-ray flare from a neutron star at the same time as its host satellite, something it was never designed to do." Petteri.Nieminen@esa.int
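The 1.5-million-kilometre figure for L2 quoted above can be reproduced with the standard first-order (Hill-radius) approximation for the collinear Lagrangian points, r ≈ R·(m/3M)^(1/3). This is a back-of-envelope sketch with textbook constants, not how mission planners compute the actual orbit.

```python
# First-order distance of the Sun-Earth L2 point from Earth:
#   r ~= R * (m / (3 * M)) ** (1/3)
# where R is the Earth-Sun distance, m the Earth mass, M the Sun mass.
R = 1.496e8   # mean Earth-Sun distance, km
m = 5.972e24  # Earth mass, kg
M = 1.989e30  # Sun mass, kg

r_l2 = R * (m / (3 * M)) ** (1 / 3)
print(f"L2 is roughly {r_l2 / 1e6:.2f} million km from Earth")
```

The result comes out to about 1.5 million km, matching the distance given for Herschel and Planck; the same formula (with a sign change in the exact equation) locates L1 on the sunward side.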
<urn:uuid:d195afbf-1325-413c-8cb9-ec6493f728ac>
CC-MAIN-2013-20
http://www.esa.int/Our_Activities/Space_Engineering/Newly-launched_missions_extend_ESA_s_radiation_map_of_space
2013-05-23T19:41:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703728865/warc/CC-MAIN-20130516112848-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94592
1,051
Dust Storm in the Taklimakan Desert Dust storms continued in the Taklimakan Desert in western China through early April 2012. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite captured this natural-color image on April 5. Dust was thickest along the desert’s southern margin. Dust storms are common in the Taklimakan Desert—the largest, warmest, and driest desert in China. Marching sand dunes, some reaching a height of 200 meters, cover most of the desert floor. The dunes are virtually devoid of vegetation, but plants survive along the desert perimeter, and experience distinct seasonal variations. - World Wildlife Fund, McGinley, M. (2007) Taklimakan Desert. Encyclopedia of Earth. Accessed April 5, 2012. This image originally appeared on the Earth Observatory.
<urn:uuid:c883c1ba-3c34-4eab-b74a-d56fc607ce2a>
CC-MAIN-2013-20
http://visibleearth.nasa.gov/view.php?id=77597
2013-05-21T10:13:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.865289
191
The Preschool years are optimal years for learning. Brain research demonstrates that 85% of a child's intellect and skills are developed in the first five years of life. However, when it comes to selecting a preschool for your child, keep in mind that most learning in young children happens through play. When you are looking at preschools, you want to see a lot of play happening in the classrooms. At the Children's Museum Preschool, the students visit galleries and exhibits in the Museum on a daily basis. However, to an observer, it may look like play. And that's not only okay, that's the way it should be. We understand that play is where children discover ideas, experiences and concepts; but in preschool, their play is often guided by the teachers to be purposeful. The end result is creative play which is a catalyst for social, emotional, moral, motor, perceptual, intellectual, linguistic and neurological development. Whew! And we thought they were "just playing!" Here's an at-home literacy idea to do with your preschooler: Make an "environmental print" book with your preschool child. Creating a book that your child can "read" allows him/her to feel empowered as a reader. You can make this book with a phone or camera and a few minutes of your time. With your phone or camera, take pictures of words that are easily recognizable in your environment. Some examples might be: the signs on the front of businesses (Target, Walmart, McDonalds, Kroger, etc.); the names of products that are familiar (Cheerios, Kleenex, Coke, M & M's, etc.). Download these photos and print one on each page of your book. You may want to add the caption: "I Can Read (followed by the word/photo)."
<urn:uuid:36bff77a-1115-4dfb-b0a3-72072251b5c0>
CC-MAIN-2013-20
http://www.indyschild.com/Articles-Columns-i-2013-03-01-257478.114134-p17798.112112-Play-and-Preschool.html
2013-05-25T12:35:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966525
372