Recent Releases In Global Health
Several blogs, publications examine the U.N. Millennium Development Goals (MDGs) summit:
- The MDGs serve a dual purpose “helping the poor countries to fight poverty and the rich countries to preserve a sense of social solidarity,” writes Jeffrey Sachs, director of Columbia University’s Earth Institute in a post on the Guardian’s “Poverty Matters Blog.” The post examines why “gains are being achieved despite recession in the U.S. and Europe”: China’s growth, technology, business, and governance (9/23).
- Council on Foreign Relations fellow Gayle Tzemach Lemmon writes that achieving the MDGs related to women “remain[s] a distant hope … Women have become a hot topic among the development crowd, and women’s health and economic issues have received a great deal of attention recently, but attention does not change women’s lives. Investment does.” The publication examines “some progress” that has been made (9/23).
- Raymond Offenheiser, president of Oxfam America, writes on The Hill’s “Congress Blog” what he hopes are Pres. Obama’s priorities for “America’s strategy for development leadership in the 21st century,” which includes urging other leaders to create national plans to meet the MDGs on time (9/22).
- In a Lancet Comment, Ethiopian Health Minister Tedros Adhanom Ghebreyesus presents four steps to achieving country ownership. “Countries simply must own all these stages for the effect of development aid to be maximised. What seems to be missing is partners’ full commitment to country ownership. Partners have a wide range of interests that hinder them from fully embracing country-led processes. But a decisive shift has to happen now if the MDG targets are to be reached,” he writes (9/22).
- Sanitation still “goes unmentioned,” but “is an issue that touches on the lives and health of millions of individuals, and one which world leaders have promised to address,” writes former U.N. official Jan Eliasson in the Huffington Post. “Sanitation must be brought into the mainstream – out of the shadows, and onto the public agenda” (9/21).
- An “important milestone” in the “dramatic intensification of a global effort to turn the tide against AIDS” was the creation of the Global Fund to Fight AIDS, Tuberculosis and Malaria, writes French First Lady and Global Fund Ambassador Carla Bruni-Sarkozy in a Guardian “Comment is free” blog post. She hopes “that other donor countries will follow” France and the U.S. with “generous pledges” (9/21).
- In a Guardian “Comment is free” blog post, Mark Malloch-Brown, former head of the United Nations Development Programme, writes of the MDGs: “Over the next five years, we must learn the lessons of those countries that are succeeding, and replicate them. But if the gains of the past decade are not to unravel, we must also ensure the funds are there to support those successes” (9/20).
- In a Huffington Post blog, EngenderHealth President and CEO Pamela Barnes details five steps to “saving mothers’ lives,” and discusses the link between reducing maternal mortality and providing universal access to reproductive health care (9/20).
- According to a Huffington Post blog post, MDG “health goals are competing with each other for money, people, and other scarce resources.” The authors propose the following: “Pursue targeted (‘vertical’) priority health areas for goal-setting, advocacy and monitoring, as the MDGs have done. At the same time, deliver prevention and treatment programs that are integrated (‘horizontal’) across these health priorities” (Frenk et al., 9/16).
- “We are on the verge of a truly momentous development in the realm of global public health – the emergence of the first generation in almost 30 years where no child is born with HIV. It is possible – by 2015 – to achieve this,” writes Susan Ellis, CEO of RED, in a Huffington Post blog post (9/20).
- A Daily Caller opinion piece calls Bono’s New York Times opinion piece on the MDGs “naive and misguided.” The author writes, “policies based on science and data enjoy a short half-life at the United Nations,” citing the WHO’s endorsement of “less effective methods for preventing malaria” than insecticide DDT (Miller, 9/20).
- A post on the Guardian’s “Global Development” blog compiles 27 recently published MDG progress reports (Provost, 9/20).
- A Lancet Article assesses “official development assistance to maternal, newborn, and child health, 2003–08,” finding: “In 2007 and 2008, US$4.7 billion and $5.4 billion (constant 2008 US$), respectively, were disbursed in support of maternal, newborn, and child health activities in all developing countries. These amounts reflect a 105% increase between 2003 and 2008, but no change relative to overall ODA for health, which also increased by 105%.” A related Lancet Comment highlights “three findings [of the paper] that require further consideration”: whether there is more money targeting morbidity and mortality, the ability of governments to address mortality, and the long-term exit strategy by donors (Sridhar, 9/17).
- Global Health Magazine’s blog examines a variety of MDG-related issues around the world including: family planning in Nepal, diarrheal death in Pakistan and global indicators for maternal health (September 2010).
- USAID’s “Impact” blog features posts on the U.N. Summit written by Rear Adm. (ret) Tim Ziemer, U.S. Global Malaria Coordinator, Rajiv Shah, USAID Administrator, and Scott Schirmer, Senior Coordinator, Private Sector Alliances Division, Office of Development Partners, who were in attendance at this week’s event (September 2010).
Blog: Obama’s Commitment To Global Fund Will Reveal Value Of New Global Development Policy
“As the premier global health organization on the cutting edge of bottom-up, accountable, results-focused development aid, the Global Fund [to Fight AIDS, Tuberculosis and Malaria] is the perfect fit for the president’s new strategy. The U.S.’s commitment to the Fund will be our first indication of whether the president’s new development policy is worth the paper it’s printed on,” according to a Huffington Post blog post that outlines why the Global Fund is a good investment for the U.S. and how it fits in with Obama’s global development strategy (Carter, 9/23).
NEJM Perspective Proposes Creation Of International Health Service Corps
A New England Journal of Medicine Perspective proposes the creation of an International Health Service Corps (IHSC), “a program that would train and fund both local providers and U.S. health care professionals to work, teach, learn, and enhance the health care workforce and infrastructure in low-income countries.” The effort, the authors write, “should be targeted to health care providers in the United States and partner countries who are committed to serving the poor” and “go beyond that of filling a human-resource void to focus on infrastructure development, knowledge transfer, and capacity building.” The perspective also looks at U.S. medical school partnerships with low-income countries and the role of the proposed IHSC in disaster relief (Kerry et al., 9/23).
Blog: Three Questions About Obama’s Global Development Policy
“As with most ambitious policy pronouncements like this, the devil will be in the details of implementation,” according to a blog post on the Modernizing Foreign Assistance Network’s “ModernizeAid” blog about President Barack Obama’s new global development policy. The post highlights three questions that “remain unanswered.” They are: “If U.S. Ambassadors have oversight responsibility for foreign assistance in the field, how can we make sure our development programs work towards long-term, sustainable outcomes and not short-term political goals? More broadly, how will USAID and the State Department work together to implement the new policy? Will the Administration work proactively with Congress to overhaul the Foreign Assistance Act of 1961 to make sure the new development policy endures as one of President Obama’s key legacies?” (Beckmann/Ingram, 9/22).
Study: Community Health Workers Effective In Managing Malaria, Pneumonia
Allowing community health workers to use rapid tests and medication to “manage both malaria and pneumonia at the community level is promising and might reduce overuse of [anti-malaria drug] AL, as well as provide early and appropriate treatment to children with nonsevere pneumonia,” according to research published in PLoS Medicine. “A total of 3,125 children with fever and/or difficult/fast breathing were managed over a 12-month period,” in the study by community health workers (Yeboah-Antwi et al., 9/21).
Blog: More People On HIV Treatment Thanks To PEPFAR, U.S. Leadership
“Thanks in large part to ongoing U.S. leadership, the number of people receiving HIV prevention, treatment and care is growing and will continue to grow with the significant support of the Obama Administration … The progress we have seen in PEPFAR provides us with a roadmap on how to achieve success – and is the centerpiece of the GHI,” writes the author of a post on The Hill’s “Congress Blog.” The author, who is principal deputy U.S. global AIDS coordinator, cites increasing numbers of people receiving treatment and care in middle- and low-income countries and writes “this Administration is working every day to continue providing American leadership in the global effort to defeat HIV/AIDS and keep our eyes on the ultimate prize – saving lives” (von Zinkernagel, 9/21).
Blog: Investing In Women Spurs Economy, Protects U.S. Interests, Rep. Says
“No woman should die giving life – and the good news is that most pregnancy-related deaths, as well as sexually transmitted infections (STIs) and HIV, are preventable with a package of basic, proven health interventions. But despite recent progress, far too many women in poor countries still face terrible risks. We as Americans should make it a priority to save women’s lives. It’s not only the right thing to do, but these investments also reduce poverty, spur the global economy and protect U.S. national interests,” Representative Yvette Clarke (D-N.Y.) writes in a post on The Hill’s “Congress Blog.” Clarke suggests “three crucial contributions the United States can make” to improve women’s health: “sound policies, sufficient funding, and true leadership” (9/21).
Blog Details CUGH Annual Meeting
The Consortium of Universities for Global Health’s second annual meeting took place this week in Seattle, Washington. A blog documents the major sessions, which addressed topics like global health policy and diplomacy, the GHI and global health reporting. It also includes some video interviews (September 2010).
Blog: To Lead On AIDS, Obama Must Pledge $6B To Global Fund
“There are five years left to accomplish [the MDGs] and there is still time to make a difference for the world’s poorest and most vulnerable people. It is both a political and moral imperative now that President Obama fulfill his campaign pledge to be a global leader on AIDS and end extreme poverty. To do so, he must pledge $6 billion dollars over the next 3 years to the Global Fund,” states a post on CNN’s “Belief Blog.” The post concludes: “For those of us in this country, it’s a matter of Obama fulfilling a campaign promise. For the world’s poorest, it’s a matter of life and death” (Wallis, 9/21).
Keylogging is the action of recording the keys struck on a keyboard, typically in a covert manner so that the person using the keyboard is unaware that their actions are being monitored. Beyond keystroke logging, techniques such as screen logging and clipboard monitoring can also be used to track user activity on a PC. While there are legitimate uses, such as parents monitoring their children or companies monitoring employees, keylogging can be used by criminals to steal personal information like passwords and credit card numbers. In fact, a majority of today's malware includes keylogging functionality. A good introduction to what keyloggers are capable of can be found here.
Anti-keyloggers are a special kind of software designed to protect the end user from keylogger software. Traditional anti-keyloggers use heuristic or behavior-based techniques to detect keyloggers, but these days some anti-keyloggers also use whitelist signatures or a cloud response system to minimize false detections. A few anti-malware products have incorporated anti-keylogger features to protect users from malicious keylogging, while most others still depend on blacklist signatures and so can easily miss zero-day keyloggers. It is therefore wise to add an additional layer of security: an anti-keylogger.
As mentioned, most anti-keyloggers use behavior-based techniques to detect keyloggers, which is why most of them prompt the user on any such detection, leaving the decision to the user. That is not always a good approach, because to a relatively inexperienced computer user such a prompt means nothing; it is only a nag. Moreover, most anti-keyloggers can only block an executable that shows keylogging behavior, leaving it inaccessible. This can have serious consequences in a false-positive situation.
DataGuard AntiKeylogger from MaxSecurity Lab is ahead of the competition in two very important respects, both related to the issues I've just described.
- It is extremely user-friendly. Smooth interface, no prompt, no difficult learning curve, nothing….
- It detects keylogging by heuristics and, without asking the user, removes only the dangerous part from the executable, leaving it otherwise accessible.
- Actually there is a third and very important point: DataGuard AntiKeylogger is extremely light on resources, lighter than any other competitive product I have tried yet. It consumes virtually no CPU and less than 10 MB of memory at all times.
DataGuard AntiKeylogger comes in four flavors: Free, Lite, Pro and Ultimate, each varying in the number of features. All three paid products come with a 15-day trial of all features, which become disabled after the trial ends.
Max Tiganovschii from MaxSecurity Lab arranged a license of the Ultimate version for me and answered all my questions and feedback.
Download and Installation
- The latest version (v188.8.131.52) installer was downloaded from the company website.
- Downloaded file (5.788Mb) is a zip archive (DataGuardAklUltimateSetup.zip ; MD5: b8e9ef7c1b4f80c619f35c10b97d8027) containing the setup file (DataGuardAklUltimateSetup.exe ; MD5: 6ca480c2d1355ba1d0d3e0074bf9aafc).
- Unfortunately the setup file is not yet digitally signed. I was informed that it will be signed in the near future.
- The installation was smooth, without any confusing options. The only customization a user can make is to create desktop and quick launch shortcuts.
- Installation took about 2-3 minutes and requires a reboot to finish installing the driver (dataguard.sys ; MD5: ae55e070b4e09b064aa7a041e67a2bf4).
- Overall the installation consumed less than 10Mb hard drive space.
Screenshots of the installation process can be viewed here.
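If you want to confirm that your download matches the checksums above, one quick way (assuming Python is available on your system) is to hash the file yourself:

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 hex digest of a file, reading in chunks
    so a large download doesn't have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example (adjust the path to wherever you saved the download):
# if md5_of_file("DataGuardAklUltimateSetup.zip") == "b8e9ef7c1b4f80c619f35c10b97d8027":
#     print("Checksum OK")
```

If the digest you compute differs from the published one, the download is corrupt or has been tampered with and should not be installed.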
Usability and Effectiveness
After the reboot the software becomes completely ready.
- It displays a system tray icon that looks like a combination of a keyboard and a lock. A single left click on it opens up the interface; a very neat one.
- It also has 75 alternative skins to fit any taste. A few can be seen below.
- Initially I was unable to find any version info for the installed software in the user interface. I was guided to hover the mouse over the note “This computer is protected by DataGuard AntiKeylogger”, which displayed the version information. Max told me that in the future, version information will be easier to find.
A bunch of useful settings can be found under Advanced Options.
- By default the software uses expert level of protection verbosity that can be lowered to Standard.
- The software is set to check for updates automatically, and the update check interval (in days) can be customized (default 5 days). Minor updates were installed automatically after user approval, and for major upgrades the user is notified and redirected to download the installer from the homepage. The software lacks support for proxies with authentication, so users with that type of internet connection will not be able to use the auto-update feature. I was informed that this setting will be added soon.
- To prevent unauthorized changes, the software can be password protected. Under password protection, no major settings can be changed, other than cosmetic ones like switching skins or turning off notifications and sound alerts about auto-detected modules.
- The software is set to check an executable for a valid digital signature and, if one is found, to allow it. That can be turned off, and I think everyone should turn this feature off because there are reports of digitally signed malware.
DataGuard AntiKeylogger checks all executable files in realtime for possible keylogging activity and adds those to its ‘Auto Detected Modules’ list. In addition to its behavior-based analysis, the software also has a whitelist that gets updated with each version. If a detected module is found in the whitelist or has a digital signature, its keylogging activity is allowed and a green tick appears to the left of the executable name; otherwise the software filters its keylogging ability and lists it with a red cross. If the user needs to allow the keylogging ability of a detected module, he just left-clicks that red cross; conversely, a left click on the green tick enables filtering of that executable's keylogging activity. The software also has a whitelist tab where the user can add executables that he needs to run unhindered.
- The software has a flaw: a user can access and change the status of the auto-detected modules even when password protection is enabled. I informed them about this and got the assurance of a possible fix.
- Under the About tab, a link to the homepage and the registration information can be found. It is a matter of concern that the license key is displayed in plain text instead of being masked with asterisks. An upcoming release might fix this too.
- The help file is pretty thorough and easy to understand, but it might create confusion about the update policy of the software. The license for DataGuard AntiKeylogger is lifetime, so the note “All updates of the program (within the same major version) are free” is ambiguous. I was advised to ignore it and was informed that the help file will be updated soon.
- The software has a powerful self protection that protects it from unwanted termination through malicious activity. There is no way to turn off the self protection.
- DataGuard AntiKeylogger can run on 32-bit Windows, e.g. Microsoft Windows 2000 SP4, 2003, Vista and 7. A 64-bit version is under development.
- I have tested the software with some anti-keylogger test suites available on the internet. It completely blocks the Zemana and Comodo tests and partially blocks the SpyShelter test. I informed them about this and got an affirmative reply.
- While talking with their support team I learned that they will soon introduce a cloud response system to make the protection more effective while reducing false detections.
- While the software's automatic response is its most useful feature, power users need more control, and an optional ask mode would be handy. That feature might appear in DataGuard AntiKeylogger in the near future, as it exists in most other competitive products.
- Additionally, it is worth mentioning that if the user needs to uninstall the software, it can be removed by going to Add/Remove Programs. During the uninstallation process you will be asked for the reason for uninstalling, but you can skip that. I found that DataGuard AntiKeylogger leaves very few traces on the computer: 4 harmless registry entries and about 6 files in temporary folders.
In cooperation with MaxSecurity Lab, I have arranged a giveaway of DataGuard AntiKeylogger Ultimate. Each license is worth $59 and can be used for a lifetime on one PC. The giveaway will run for 7 days and will close on 24th Nov, 2011, 11:59pm GMT.
- To grab a license you need to go to the giveaway page. Enter your name and email address, then click “Submit Query”. You will see the license information once the page finishes loading. You will not receive any mail regarding the license, so please note down your serial for future use.
- After registering the software with the license it can be used lifetime. All the minor updates and major upgrades can be installed.
- The license is for 1 PC but it can be used even if the user needs to format his PC.
- If anyone needs to use the software in a multiboot environment or on other PCs, it is advisable to grab multiple licenses.
- You will be able to get free support from MaxSecurity Lab. Just write to them at email@example.com. In case you ever experience any bugs, please submit them to firstname.lastname@example.org. For general questions please mail email@example.com. In my experience the support team is very eager to receive feedback. You can find the online F.A.Q here.
Please don’t copy the giveaway page and post it elsewhere. In case you need to share about this giveaway in any other blogs, forums or social networking sites, please link to this blog post.
I would encourage everyone to subscribe to get prompt information on coming exciting giveaways, reviews and fresh news about technology.
Have a nice day.
The giveaway, organized by MaxSecurity Lab and Techno360, ended on Nov 24.
However, you can use the discount code 50PERCENT to save 50% when buying DataGuard AntiKeylogger Ultimate or NextGen AntiKeylogger Ultimate! The offer ends soon!
PV System Selection and Sizing
The first step in designing a PV system is to decide whether to install a PV system that is connected to the local utility grid or a remote system that functions without a utility connection. For either PV system type, the amount of shade-free roof area available from roughly 9 a.m. to 3 p.m. for mounting the PV array must be determined. If a ground- or pole-mount is considered rather than a roof-mount, these optional sites also need to be shade-free during the same time period. A remote PV system's array is sized after determining the building's average daily electrical demand and sizing the other components in the total system. For a grid-connected PV system, the array can simply be sized to fit within the available mounting area and the project budget. Most of the discussion in this lesson will focus on grid-connected PV systems, with some tidbits on remote site installations.
At this point, an economic decision must be made. Using a cost of $8 to $10/Watt of the array’s peak output to install a PV system, a preliminary assessment between available funds and array size can be made. Only install modules that have a UL 1703 listing. The Underwriters Laboratory (UL) uses safety and performance standards specific to the equipment type and issues a listing for models that pass the safety and performance tests.
For most grid-connected PV system installations, the estimated peak array output is used as the basis for specifying the inverter needed for the PV system. In some cases, the building's electrical demand may be determined and used to specify the inverter. In a remote or stand-alone PV system installation, the average daily electric load of the building needs to be calculated first. The building's electric demand should include the wattage of all ac loads running at the same time, plus the wattage from the surge of starting motors, plus all dc loads operating at the same time; this demand is then multiplied by 1.2 to account for inverter losses. In both the grid-connected and remote site situations, the initial estimate of the inverter's capacity may change if you decide that at some point in the future you plan to increase the size of the PV array.
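The stand-alone sizing arithmetic just described can be sketched as below. All load numbers are hypothetical, purely for illustration:

```python
def required_inverter_watts(ac_loads_w, motor_surge_w, dc_loads_w, loss_factor=1.2):
    """Stand-alone system demand per the rule of thumb above:
    simultaneous AC loads + motor starting surge + simultaneous DC loads,
    with the total multiplied by 1.2 to cover inverter losses."""
    return (sum(ac_loads_w) + motor_surge_w + sum(dc_loads_w)) * loss_factor

# Hypothetical building: 500 W and 300 W of simultaneous AC loads,
# a pump motor with a 600 W starting surge, and 150 W of DC loads.
demand = required_inverter_watts([500, 300], 600, [150])  # about 1860 W
```

An inverter rated at or above this figure would be the starting point; as the text notes, you may round up further if you plan to enlarge the array later.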
There are two basic types of inverters to consider for this course – those that produce a modified sine wave and those that produce a true sine wave. Although modified sine wave inverters are less expensive than true sine wave models, they can not produce the quality waveform required by some equipment. Utility companies produce electricity that is a true sine wave. A modified sine wave inverter produces a slightly squared off electrical waveform, but computers, power tools, refrigerators and most all equipment can use this generated electricity. Pure sine wave inverters produce a true sine wave that is the same as utility generated waveforms and is needed by high-end audio equipment and other specialized equipment that are electrically sensitive such as life support equipment. All inverters should be UL 1741 listed. In grid-connected installations the inverter must shut down rapidly in situations where the utility goes down – this is called anti-islanding and is a safety function for utility personal and electricians who may be working in the area.
Figure: Sine wave and modified sine wave electric power forms.
Inverters designed for remote site and grid-connected installations are available that are designed to use nominal 12-, 24-, or 48-volt DC electricity from the PV array and some grid-connected inverters are designed to operate with input voltages ranging from 139 up to nearly 600 volts DC. The higher-voltage strings carry low current levels; this allows the use of smaller diameter wire in the circuit between the inverter and the modules.
Once you have identified the type and the output capacity of the inverter for a grid-connected system, you can determine the PV modules you'll need. Using module maximum working voltage and amperage values, you can use series and parallel calculations to match the inverter's input electrical requirement with the proposed array's output electrical characteristics. You will have to watch the open circuit voltage and amperage of the module strings so as not to exceed the normal input range for the inverter chosen. Matching modules with the inverter will most likely be an iterative process. Inverters using high voltage inputs are best matched with PV modules using a computer program provided by each inverter manufacturer. These programs use a database of specific module electrical characteristics to identify the appropriate number of modules in each string and the number of strings feeding the specified inverter. Programs for several brands can be found at the links below:
- Xantrex: www.xantrex.com/support/gtsizing/disclaimer.asp?lang=eng
- Sunnyboy: www.sma-america.com/stringsizing.html
- PV Powered: www.pvpowered.com/string_sizing.php
- Beacon Power: www.beaconpower.com/StringCalc/
- Fronius: www.fronius.com/solar.electronics/downloads/configurator.htm
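Before running a vendor's tool, a rough first pass at the series-string calculation can be sketched as below. This is a simplification: real string-sizing programs also apply temperature corrections to the module voltages, and all module and inverter numbers here are hypothetical.

```python
import math

def modules_per_string(inv_min_operating_v, inv_max_input_v, module_vmp, module_voc):
    """Allowable modules per series string: enough that the string's
    maximum-power voltage (n * Vmp) stays above the inverter's minimum
    operating voltage, but few enough that the string's open-circuit
    voltage (n * Voc) stays below the inverter's maximum input voltage."""
    n_min = math.ceil(inv_min_operating_v / module_vmp)
    n_max = math.floor(inv_max_input_v / module_voc)
    return n_min, n_max

# Hypothetical module (Vmp 26.3 V, Voc 32.9 V) on an inverter with a
# 250 V minimum operating voltage and 600 V maximum input:
lo, hi = modules_per_string(250, 600, 26.3, 32.9)  # strings of 10 to 18 modules
```

Strings are then wired in parallel until the array reaches the desired wattage, keeping the combined current within the inverter's input rating.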
Installing a PV system parallel to a sloped roof would have the following equivalent tilt angle:
Table: Roof slope and the equivalent slope or tilt angle (degrees).
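The tilt angle for a given roof slope follows directly from geometry: a pitch expressed as rise over run corresponds to a tilt of arctan(rise/run). The table's values can be reproduced like this:

```python
import math

def tilt_angle_degrees(rise, run=12):
    """Equivalent tilt angle for a roof pitch given as rise-in-run,
    e.g. a 4/12 roof rises 4 units for every 12 units of horizontal run."""
    return math.degrees(math.atan2(rise, run))

# A 4/12 roof works out to about 18.4 degrees of tilt,
# a 6/12 roof to about 26.6 degrees, and a 12/12 roof to exactly 45 degrees.
```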
The National Electric Code (NEC) has a significant impact on the design of and the components used in a PV system. Sandia National Laboratories' Photovoltaic Center has posted the following wire coding and sizing information from The Stand-Alone PV System Handbook on its website.
Wire Types Commonly Used in the U.S.
Note: The use of NMB (Romex) is not recommended except for ac circuits as in typical residential wiring. Although commonly available, it will not withstand moisture or sunlight.
In the United States, the size of wire is categorized by the American Wire Gage (AWG) scale. The AWG scale rates wires from No. 18 (40-mil diameter) to No. 0000 (460 mil diameter). Multiple conductors are commonly enclosed in an insulated sheath for wires smaller than No. 8. The conductor may be solid or stranded. Stranded wire is easier to work with particularly for sizes larger than No. 8. Copper conductors are recommended. Aluminum wire is less expensive, but can cause problems if used incorrectly. Many different materials are used to make the sheath that covers the conductors. You must select a wire with a covering that will withstand the worst-case conditions. It is mandatory that sunlight resistant wire be specified if the wire is to be exposed to the sun. If the wire is to be buried without conduit it must be rated for direct burial. For applications such as wiring to a submersible pump or for battery inter-connections, ask the component dealer for recommendations. Often the dealer or manufacturer will supply appropriate wire and connectors.
More useful information is contained in NEC. It is recommended that any designer/installer review Article 300 before proceeding. This article contains a discussion of wiring methods and Table 310-13 gives the characteristics and recommended usage of different wire types. Table 310-16 gives temperature derate factors. Another useful reference available from the PVSAC at Sandia National Laboratories is Photovoltaic Power Systems and the National Electrical Code, Suggested Practices.
Selecting the correct size and type of wire for the system will optimize performance and increase reliability. The size of the wire must be capable of carrying the current at the operating temperature without excessive losses. It is important to derate the current carrying capacity of the wire if high temperature operation is expected. A wire may be rated for high temperature installations (60-90°C), but this only means that the insulation of the wire can withstand the rated temperature — it does not mean that ampacity is unaffected.
The current-carrying capability (ampacity) depends on the highest temperature to which the wires will be exposed while carrying the current. According to Table 310-16 in the NEC, a UF-type wire operating at 55°C can safely carry only 40 percent of the current it could carry at 30°C, a significant derate. If the ampacity of the wire is exceeded, it could result in overheating, insulation breakdown, and fires. Properly sized fuses are used to protect the conductors and prevent this kind of damage.
Loss in a DC circuit is equal to I2R, where I is the current and R is the resistance of the wire. For 100 ampere current, this means 10,000 times the loss in the circuit compared to a one amp load. It is easy to see why resistance must be kept small. Also, the voltage drop in the circuit is equal to IR. Voltage drop can cause problems, particularly in low-voltage systems. For a 12-volt system, a one-volt drop amounts to more than 8 percent of the source voltage. Avoid long wire runs or use larger wire to keep resistance and voltage drop low. For most applications, AWG No. 8, No. 10, and No. 12 are used.
An abbreviated wire sizing table for a 12-Volt DC system is shown below. The table indicates the minimum wire size that should be used if the voltage drop is to be limited to 3 percent for any branch circuit. (This table can be adjusted to reflect different voltage drop percentages or different system voltages by using simple ratios. For example, a 2-percent loss can be calculated by multiplying the values in the table by 2/3. For a 24-Volt DC system, the values can be multiplied by two. For a 120-volt system multiply by 10.) The calculations show one-way distance, taking into account that two wires, positive and negative, are used in an electrical circuit.
As an example, assume the array is 30 feet from the controller and the maximum current is 10 amperes. The table shows that No. 8-size wire can be used up to a one-way distance of 30 feet (no temperature derate included). While the general rule is to limit the voltage drop for any branch circuit to 3 percent, there may be some applications, particularly those operating at or below 12 Volts, where the loss should be limited to 1 percent or less. For the total wire run on any path from source to load, the loss should be no greater than 5 percent.
|One-way Wire Distance (feet) for 3% voltage drop - 12 volt system - copper wire|
|AWG Wire Size|
The NEC requires certain conventions for color of conductors and specifies requirements for disconnecting the power source (code reference for each condition is given in brackets). Specifically:
- The grounded conductor is to be white. [200-6]. Convention is for the first ungrounded conductor of a PV system to be red, and the second ungrounded conductor black (negative in a center tapped PV system).
- Single-conductor cable is allowed for module connections only. Sunlight resistant cable should be used if the cable is exposed. [690-31b]
- Modules should be wired so they can be removed without interrupting the grounded conductor of another source circuit. [690-4c]
- Any wiring junction boxes should be accessible. [690-34]
- Connectors should be polarized and guarded to prevent shock. [690-33]
- Means to disconnect and isolate all PV source circuits will be provided. [690-13]
- All ungrounded conductors should be able to be disconnected from the inverter. [690-15]
- If fuses are used, you must be able to disconnect the power from both ends. [690-16]
- Switches should be accessible and clearly labeled. [690-17]
The purpose of grounding any electrical system is to prevent unwanted currents from flowing (especially through people) and possibly causing equipment damage, personal injury, or death. Lightning, natural and man-made ground faults, and line surges can cause high voltages to exist in an otherwise low-voltage system. Proper grounding, along with over-current protection, limits the possible damage that a ground fault can cause. Consider the following and recognize the difference between the equipment grounding conductor and the grounded system conductor:
- One conductor of a PV system (>50 volts) must be grounded, and the neutral wire of a center tapped three wire system must also be grounded. [690-41]. If these provisions are met, this is considered sufficient for the battery ground (if batteries are included in the system). [690-73]. A ground is achieved by making a solid low resistance connection to a permanent earth ground. This is often done by driving a metallic rod into the earth, preferably in a moist location. [250-83].
- A single ground point should be made. [690-42]. This provision will prevent the possibility of potentially dangerous fault current flowing between separate grounds. In some PV systems where the PV array is located far from the load, a separate ground can be used at each location. This will provide better protection for the PV array from lightning surges. If multiple ground points are used, they should be bonded together with a grounding conductor.
- All exposed metal parts shall be grounded (equipment ground). [690-44]
- The equipment grounding conductor should be bare wire or green wire. [210-5b]
- The equipment grounding conductor must be large enough to handle the highest current that could flow in the circuit. [690-43]
Because the module frames are usually aluminum and bare copper wire is used for the ground conductor, you must use the module grounding location and the manufacturers specified hardware to assure a low-resistance connection to provide long-term protection from shocks and fire hazards. The grounding conductor must be sized to safely carry the current of the over-current device protecting the circuit.
It is important that the installation crew includes a certified electrician knowledgeable about applicable codes and a person knowledgeable about the equipment used in the PV system installation. Article 690 of NEC addresses electrical requirements and the equipment for installing a PV system. The electrician must know the codes and be present to answer questions during the electrical inspection.
- During the initial site visit to check a single story building’s acceptability for a PV system, you note that the asphalt-shingled roof has a 4/12 slope and is oriented 10 degrees to the west of true south. The south-facing roof is a rectangle that is 30 feet wide and 20 feet from the eaves to the roof top. Is this building a good candidate for a PV installation? If it is and given that the roof can support the PV system and a 3-person installation crew, what would you suggest to the building owner as the largest, safe array (peak output) to install?
- For the same building described in question 1, what conditions might you encounter that would make you reject the site for a system installation?
- What estimated cost would you tell the building owner for an installed PV system with a peak output of 3000 Watts?
- Why is an inverter needed in a grid-connected PV installation?
- Why is an inverter needed in a remote or stand-alone PV system?
- How would you size the inverter for a grid-connected PV system?
- What is the color of the grounded conductor in a PV installation and how is it sized?
- What is the color of the equipment/frame ground wire in a PV installation and how is it sized?
- What function does the equipment/frame ground perform?
- Given that a PV system uses modules outputting a nominal 12 volts at 5 amperes, the modules are 30 feet from a combiner box, and you can only tolerate a 2% voltage drop, what gauge of wire should be used to connect the modules with the combiner box? What gauge of wire if the modules strings were 24 volt at 5 amps? | <urn:uuid:f12a571c-b3d3-4acd-a254-21f0c492f2ea> | CC-MAIN-2015-35 | http://pasolar.ncat.org/lesson06.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064951.43/warc/CC-MAIN-20150827025424-00164-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.893663 | 3,328 | 2.78125 | 3 |
Today (April 22nd) is Earth Day (see previous post), and this cartoon by Kal from The Economist provides a timely reminder, if one was needed, about the various threats to the global environment. In the cartoon shows what can best be described as a juggernaut destroying everything in its path, as an assortment of wild animals flee for their lives.
COMMENTS The various global threats are fairly clear, but there are one or two points worth commenting on. The comment made by the two birds features a play on the verb 'advance' and the adjective 'advanced'. The 'most advanced species' is, of course, humans. The remark is meant to be ironic, as is the 'Happy Earth Day' flag. The driver has his head in a bag of sand, which is a reference to the idiom 'to have/hide/stick/bury one's head in the sand'. This means to refuse to think about an unpleasant situation. • Teachers can't just hide their heads in the sand and not try to find out why students aren't doing better.
Today is Earth Day, an annual event, celebrated on April 22, on which day events worldwide are held to demonstrate support for environmental protection. It was first celebrated in 1970, and is now coordinated globally by the Earth Day Network, and celebrated in more than 192 countries each year. You can find out more about the history of this movement here. And TIME has an article about the Google Doodle you can see above which features a video telling you how to make your own Google Doodle.
TRANSCRIPT For World Water Day an astonishing discovery has been announced — there is water on Twitter. UNICEF suggests we recuperate this water for the right cause. The H2O challenge. It's a simple idea. Filter your tweets to extract the characters H, 2, and O in order to create water molecules. Then all you have to do is make a donation to UNICEF to transform these virtual molecules into drinking water for children in Togo. So, how much water do you think is hidden in your tweets? To find out, visit the campaign website: http://h2o-challenge.com.
This beautiful ad is part of a campaign warning about the threat of overfishing, specifically in relation to tuna.
"Because of its high market value, tuna are among the “most wanted” fish for those fishing illegally. A single tuna was sold for a record $ 1.76 million at a Tokyo auction. Over exploitation and illegal fishing drove this species to brink of collapse. In the near future, Bluefin tuna might be a successful history of recovery, but the rest of oceans giants remain forgotten while they disappear from our seas."
Visit the superb Sea Legend website to find out more. I've textivated the voiceover below, and you can find the original text here. Click on textivate.com and then 'textivate now' for lots more activities based on the same text.
If the timetable for exactly when countries need to reduce their greenhouse gas emissions has always seemed a little vague to you, well, we just got a deadline: 2100. That's according to a new report by the United Nations' Intergovernmental Panel on Climate Change, an international organization of scientists dedicated to studying climate change. Using data from previous reports, the 116-page "Synthesis Report" warns global greenhouse emissions need to drop to zero by 2100 to avoid irreversible damage to our planet's atmosphere. Full transcript >>
Britain will struggle to “keep the lights on” unless the Government changes its green energy policies, the former environment secretary will warn this week. Owen Paterson will say that the Government’s plan to slash carbon emissions and rely more heavily on wind farms and other renewable energy sources is fatally flawed. He will argue that the 2008 Climate Change Act, which ties Britain into stringent targets to reduce the use of fossil fuels, should be suspended until other countries agree to take similar measures. If they refuse, the legislation should be scrapped altogether, he will say. Full story >>
VOCABULARY If you rip up something such as a plan or an agreement, you decide that it is useless and stop using it. • Mr. Sapin stressed that France isn't trying to rip up the rules or embark on a massive fiscal expansion.
Thirty thousand people turned out for a climate change rally in Melbourne, Australia. It's part of a global day of protests ahead of the United Nations' summit in New York Tuesday. The protests are part of a movement called People's Climate Change March, which aims to push leaders at the summit to make a meaningful agreement on capping emissions. A number of world leaders are expected to attend the U.N. summit, including President Obama, who has often urged action on climate change. Full transcript >>
California's legislature has passed a ban on single-use plastic bags. It could soon be the first ban of its kind implemented at a state level. State lawmakers passed the bill along with a host of other measures during a late-night session Friday, after initially failing to clear the legislature in an eariler vote. It now heads to Gov. Jerry Brown's desk for approval. If signed, the proposal would ban grocery and convenience stores from providing plastic bags for their customers. Shoppers would have to either bring their own bags, or pay 10 cents at checkout for a paper or reusable plastic bag. Full transcript >>
Climate scientists have argued human activity is responsible for a significant portion of glacier melting but haven't been able to pinpoint just how much of an affect we've had until now. A panel of researchers put together by the United Nations found human-made greenhouse gas emissions account for about two-thirds of glacier melting specifically between 1991 and 2010. In the 140 years prior to 1991, they said humans only contributed about a quarter of the total amount. This marks the first time scientists have been able to attach a specific number to how drastic an effect the human carbon footprint is having on ice melting — especially recently. Full transcript >>
BACKGROUND New research shows a major section of west Antarctica's ice sheet will completely melt in coming centuries and probably raise sea levels higher than previously predicted, revealing another impact from the world's changing climate. According to a study released Monday, warm ocean currents and geographic peculiarities have helped kick off a chain reaction at the Amundsen Sea-area glaciers, melting them faster than previously realized and pushing them "past the point of no return," NASA glaciologist Eric Rignot told reporters. Read more and watch video >>
THE CARTOON The cartoon by Banx shows a pair of penguins sitting on an ice floe somewhere in west Antarctica. One of them, who is reading the report about the ice melting tells the other, "I've half a mind to learn to fly".
EXPLANATION If the penguin could fly, it would be able to escape the ice floe (of course, it could always swim).
IDIOM The expression have half a mind to do something is used for threatening to do something, when you probably will not do it. • I've half a mind to tell your parents what you've done!
BACKGROUND Global greenhouse gas emissions over the past decade were the "highest in human history", according to the world's leading scientific body for the assessment of climate change. Without further action, temperatures will increase by about 4 to 5C, compared with pre-industrial levels, it warns, a level that could reap devastating effects on the planet.The stark findings are to be revealed in the latest report by the Intergovernmental Panel on Climate Change (IPCC) today, the last in a trilogy written by hundreds of scientists on what is considered the definitive take on climate change. Full story >>
THE CARTOON The cartoon by Brian Adcock from The Independent shows the planet Earth as a cartoon character, sweating profusely and covered with bandages and plasters. The Earth says, "Maybe people will notice if I tweet a selfie."
COMMENTARY The sweat is a clear reference to global warming, and the message seems to be that these days people are more interested in posting and viewing selfies on Twitter than worrying about climate change.
VOCABULARY 1. A selfie is a photo that you take of yourself, usually for use in social media. “Selfie” was the Oxford English Dictionary's word of the year for 2013. 2. To tweet is to send a message using the microblogging and social networking service Twitter. • The "Wall Street Journal" says 44 percent of Twitter's almost one billion registered users have never Tweeted.
David Cameron's commitment to the green agenda will come under the fiercest scrutiny yet this week when top climate-change experts will warn that only greater use of renewable energy – including windfarms – can prevent a global catastrophe. Full story >>
VOCABULARY To avert is to prevent something bad or harmful from happening. • Violence may have been averted with a greater police presence.
BACKGROUND David Cameron has been accused of trying to hide the role of pollution in causing the severe smog affecting parts of the UK after he blamed the problem solely on Saharan dust, while the London mayor, Boris Johnson, said the air seemed "perfectly fine" to him. Some ministers are claiming there is little they can do about the poor air quality, with Cameron insisting the smog is just "a naturally occurring weather phenomenon". Read more >>
THE CARTOON The cartoon by Ben Jennings from The Guardian shows David Cameron with his head in a bucket of Sahara Dust. In the background, we can see the London traffic through a haze of smog.
EXPLANATION The cartoonist plays on a common English idiom. If you bury (or hide) your head in the sand, you refuse to admit that a problem exists or refuse to deal with it. • Parents said bullying was being ignored, and accused the headmaster of burying his head in the sand. The expression alludes to an ostrich, which is believed incorrectly to hide its head in a hole in the ground when it sees danger.
Emergency services were hit by a surge in 999 calls yesterday as the deadly smog tightened its grip on Britain. Complaints about breathing problems rocketed and people were urged to avoid strenuous outdoor activity as the fumes enveloped many parts of the country. David Cameron, who abandoned his regular morning jog, was criticised for dismissing the potent cocktail of toxic particles, factory pollution and dust from the Sahara as a 'naturally occurring weather phenomenon'. Full story >>
VOCABULARY If you choke, or if something chokes you, you stop breathing because something is blocking your throat. • Children can choke on peanuts. | <urn:uuid:011918f4-6b24-4cb0-ae94-84139df22a06> | CC-MAIN-2015-35 | http://www.englishblog.com/environment/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645235537.60/warc/CC-MAIN-20150827031355-00222-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.950216 | 2,204 | 3.375 | 3 |
has a strong North-South divide (rich North, poor South) and
a quick changing politics with changing goverments. This has,
of course influences on the health service.
The medical service
Italy has a governmental medical
service. In 1970 Italy still had approx. 100 different health
insurances, which was abolished in 1978. In that time the
governmental medical service SSN (Servicio Sanitaris Nazionale)
was introduced, which encompasses all citizens. Non-cash benefits
and services are available free of charge. The original idea
was to introduce the SSN in order to create a unique service
offer, to get rid of the North-South divide and to lower the
costs. However, this plan did not work out. The SSN was already
subject of 4 reforms since its founding.
A basic principle is that health should be the fundamental
right of everyone. It is not only the fundamental right of
all people, it is also regarded as public interest of the
state and it is also protected with the help of the SSN. All
citizens (registered at the SSN) are entitled to receive health
care. Everyone has to be treated with the same dignity, regardless
of his social position. Everyone in need has to receive medical
The general rules of the health service:
· Creation of a modern health consciousness with all
· Prevention of illness and industrial accidents; security
· Diagnosis and curing of illnesses; rehabilitation
· Food hygiene
· Protection against illnesses concerning stockbreeding
· Job-related further education for people employed
within the sector of health service
· Protection of mother and child
In November 30, 1998, important
changes within the medical service came into force. The health
system became federalist. Regions became responsible for management
and organisation and municipalities began to play a more powerful
health insurance links:
There are 3 levels:
The organisation is heavily decentralised. The regional government
is responsible for planning, financing, control and supervision.
The parliament, however, is deciding on the framework regarding
the SSN. (very important is law no. 833 from 1978).
The levels in detail:
a) the national (central)
The institution of this level is the ministry of health. It
is responsible for the SSN objectives (health plan). The ministry
of health, however, has been using the financial resources
from the Sanita Nazionale fond (FSN) for other levels until
the year 2004. In the beginning of 2004 this fond was abolished.
Now the regions will receive full autonomy step by step until
2013. A solidarity fond for compensation of weaker regions
is new on national level.
Another task of the ministry of health is the control of the
drug market and research. It also coordinates the Instituti
di Rivovero e Cura a Carattere Scientifico (IRCCS) –
a cooperation of 16 public and private hospitals dedicated
Another area of responsibility are national prevention programs
(e.g. information on vaccination). The responsible authority
is the supreme insitution for prevention and work security.
It is also head of 10 zoological intitutes and therefore responsible
for veterinary medicine.
b) the regional level:
The regional government draws up a regional health plan every
3 years. It includes the way how to distribute financial resources
to local health units or hospitals. They adapt themselves
to the health plans of the USL (see c) local level). The regional
government also performs a controlling function regarding
efficiency, quality and of provision of service through public
and private health organisations. They also have legislative
competence for certain areas.
c) the local level:
On the local level there are the various USL (Unita Sanitarie
Lokale). (You also find often the abbreviation ASL –
Aziende Sanitarie Lokale.) There exist more than 200 USL,
approx. 50000 – 200000 inhabitants are assigned to each
of them. The various USL are self-governing. The parliaments
of the municipalities take according decisions and elect the
president of the USL.
On this level health benefits are still provided by trusts,
hospitals of the IRCCS and private hospitals.
Every citizen has to register at the USL, which is responsible
for his domicile. It is necessary to come in person. When
registering you receive a “health card” that makes
it possible for the insured party to choose a general practioner
(“medico die base“ – a certain doctor as
primary contact person) freely, as well as the procuring of
various licences, necessary for health care. This card also
makes it possible to receive the granted exemption, dependend
on the illness.
Usually a governmental health
service is financed through public resources but this is not
completely the case in Italy. Only 37.5% comes from public
resources, 40.8% comes from health insurance contributions
paid by the employer (2.88% of the gross earnings) and the
rest from additional private payments. As the FSN will only
be abolished step by step (see organisation, national level),
the regions are responsible for financial matters.
The cash flow look as follows:
SSN Government Population
Ministry of health, national budget, income tax
National solidarity fond, sales tax **)
The various regional ministries, regional budget, IRAP *),
10-35% national, 65-90% regional
Additional regional income tax ***) and other regional taxes
*) IRAP: Imposta Regionale
sulla Attività Produttive; this is a regional tax on
employers’ profit and on salaries of member of the ÖD:
**) 74.3% are added up to the natioal budget and 25.7% to
the regional budget
***) regional income tax IPREF; the regions decide on the
amount, but the maximum is 1.4%: The national income tax will
be reduced with the same % rate as a countermove, in order
to avoid an additional burden for the citizen.
Is only received by workers (employees receive a compulsory
continuing payment of wages of a minimum of 3 months.) In
order to receive sickpay, an inability to work is required
– confirmed by a doctor. The maximum payment period
is 6 months. The patient receives 50% of the gross earnings
between day 1 and day 20, and from day 21 he receives 66.66%
of the gross earnings.
All citizens are entitled to receive non-cash benefits. The
insured party is able to choose a general practioner (medico
di base). Regular services in governmental health centres
or at statutory health insurance physician are carried out
without additional payments. Additional payments, however,
are required for many kinds of medicine as well as for specialist
services; in fact for:
· services regarding pharmaceutical care (go
to german Zahnversicherung)
· services through an established specialist
· services for partly stationary hospital treatments
· services regarding health cures
· services, which include rehabilitations outside of
The regions are also able to adopt emergency treatments in
their catalogue (without a continuing treatment in the hospital)
The costs for dental treatments will not be covered. The citizen
has to pay the entire amount by himself.
Doctors and hospitals:
a) medico die base / family
Family doctors are responsible for the primary care and refer
the patient to the specialist, if necessary. Family doctors
work either in the out-patient department of the USL or they
are self-employed and are bound by contract to the USL. The
densitiy of doctors is relatively high.
Density of doctors per 100000 inhabitants:
Italy and Ireland 2001; Germany
2000; Switzerland 2002;
A family doctor gets paid with a per capita premium per patient,
which is the main part of his payment. This can be increased
by additional fees through special treatments (e.g. treatment
of a chronic ill person).
The payment of specialists is carried out partly by the hour,
partly after single service. If you visit a specialist, you
generally have to pay a fee. The amount of this fee (between
13 and 36€) is laid down by the various regions. The
specialists work either in public hospitals or they are self-employed.
Some specialists have private offices but the health insurance
doesn’t cover the costs in this case.
In case of a visit at the gynaecologist or eye specialist
the patient does not need a referral through the family doctor.
The patient can go directly to the specialist.
In Italy the study of medicine lasts for 6 years and it ends
with a state examination. With your successful final examination
you receive a professionalism authorization. In order to register
for the state examination, you have to successfully accomplish
a 6 month internship at a university clinic (or at the National
Health Service). You can already do your internship during
your 6-year study period. If you need further education in
order to become a specialist, you can use the technical college
for further education of the university. If you want to become
a specialist and work within the National Health Service,
you have to complete a postdoctoral education after your studies,
which has to last at least 2 for years.
representation of interests:
Every doctor is compulsory member of the General Medical Council
(regional) and listed in the index of doctors. The General
Medical Councils supervise the ethics of the profession and
they also have disciplinary powers. They give advices to doctors
regarding fee negotiations.
Furthermore there are various medical unions for the medical
representation of interests.
Most of the hospitals are operated by the USL. The biggest
ones, however, possess so-called “trusts” as financial
autonomy and conclude contracts with the USL.
This was introduced in 1994 with the aim to create a healthy
competition within the hospital sector. There are also private
hospitals, which are bound to the national health service
USL hospitals are financed with the money of the USL.
Contracted hospitals receive their payments after annual negotiations
about daily rates.
The current project (partly
financed through the European Commission) is a virtual hospital.
The aim is to improve the services and create a higher quality.
Offers on the internet are:
· Clinical services of various special areas (e.g.
· Non-clinical activities (e.g. e-commerce)
· Arranging of appointments
· Online advice/ online diagnosis
· Medical chat-line
· Overview of bed availability (Italy has a relatively
small amount of hospital beds)
· Information for doctors and for the public
Drug supply belongs to the area of competence of the regions.
Those are controlling the steps and decide about necessary
additional payments and their amount.
There are 4 groups of different drugs:
· Group A: strong drugs for the treatment of chronic
· Group B: drugs with therapeutical meaning
· Group C: drugs, which do not belong to group A or
· Group H: drugs offered by hospitals
The amount of additional payments is connected with the group
People older than 65 and children up to 6 years are exempt
from additional payments as well as the disabled and unemployed.
Expenses for drugs are relatively
high when comparing with other European countries. Italy and
France take the first place in this matter.
Figures (from 2003)
Italy has approx. 301300
inhabitants. The life expectancy is 76.9 years for men and
82.9 years for women. The infant mortality (per 1000 life-births)
amounts to over 4%, which is relatively high. In 2002 public
expenses for health services amounted to 8.5% of the GDP. | <urn:uuid:408d47ee-31b1-49ae-8ba2-bdbd6e3e51f3> | CC-MAIN-2015-35 | http://www.ess-europe.de/en/italy.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00103-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.93502 | 2,521 | 2.796875 | 3 |
Accusations of deicide
In the Middle Ages, religion played a major role in driving antisemitism. Though not part of Roman Catholic dogma, many Christians, including members of the clergy, have held the Jewish people collectively responsible for killing Jesus, invoking, among other things, the so-called blood curse pronounced by the crowd at the trial before Pontius Pilate in the Gospel of Matthew.
As stated in the Boston College Guide to Passion Plays, "Over the course of time, Christians began to accept... that the Jewish people as a whole were responsible for killing Jesus. According to this interpretation, both the Jews present at Jesus' death and the Jewish people collectively and for all time have committed the sin of deicide, or God-killing. For 1900 years of Christian-Jewish history, the charge of deicide (first made by Melito of Sardis) has led to hatred, violence against and murder of Jews in Europe and America."
Restrictions to marginal occupations (tax collecting, moneylending, etc.)
Among the socio-economic factors were restrictions imposed by the authorities. Local rulers and church officials closed many professions to Jews, pushing them into marginal occupations considered socially inferior, such as tax and rent collecting and moneylending, tolerated then as a "necessary evil". Catholic doctrine of the time held that lending money for interest was a sin and forbade it to Christians. Not being subject to this restriction, Jews came to dominate the business. The Torah and later sections of the Hebrew Bible also criticise usury, but interpretations of the Biblical prohibition vary (the Gospels record Jesus using force only once, against the money changers in the Temple). Since few other occupations were open to them, Jews were motivated to take up moneylending. This was said to show that Jews were insolent and greedy usurers, and it subsequently fed many negative stereotypes and much propaganda. Natural tensions between creditors (typically Jews) and debtors (typically Christians) were added to social, political, religious, and economic strains. Peasants who were forced to pay their taxes to Jews could personify them as the people taking their earnings while remaining loyal to the lords on whose behalf the Jews worked.
The Black Death
As the Black Death epidemics devastated Europe in the mid-14th century, annihilating more than half of the population, Jews were taken as scapegoats. Rumors spread that they caused the disease by deliberately poisoning wells. Hundreds of Jewish communities were destroyed by violence, in particular in the Iberian peninsula and in the Germanic Empire. In Provence, 40 Jews were burnt in Toulon as early as April 1348. "Never mind that Jews were not immune from the ravages of the plague; they were tortured until they "confessed" to crimes that they could not possibly have committed. In one such case, a man named Agimet was ... coerced to say that Rabbi Peyret of Chambéry (near Geneva) had ordered him to poison the wells in Venice, Toulouse, and elsewhere. In the aftermath of Agimet's "confession", the Jews of Strasbourg were burned alive on February 14, 1349."
Although Pope Clement VI tried to protect them with a papal bull of July 6, 1348, and another later that year, several months afterward 900 Jews were burnt in Strasbourg, a city the plague had not yet reached. Clement VI condemned the violence and said those who blamed the plague on the Jews (among whom were the flagellants) had been "seduced by that liar, the Devil."
Demonizing of the Jews
From around the 12th century through the 19th there were Christians who believed that some (or all) Jews possessed magical powers; some believed that they had gained these magical powers from making a deal with the devil. See also Judensau, Judeophobia.
On many occasions, Jews were accused of a blood libel, the supposed drinking of blood of Christian children in mockery of the Christian Eucharist. According to the authors of these blood libels, the 'procedure' for the alleged sacrifice was something like this: a child who had not yet reached puberty was kidnapped and taken to a hidden place. The child would be tortured by Jews, and a crowd would gather at the place of execution (in some accounts the synagogue itself) and engage in a mock tribunal to try the child. The child would be presented to the tribunal naked and tied and eventually be condemned to death. In the end, the child would be crowned with thorns and tied or nailed to a wooden cross. The cross would be raised, and the blood dripping from the child's wounds would be caught in bowls or glasses and then drunk. Finally, the child would be killed with a thrust through the heart from a spear, sword, or dagger. Its dead body would be removed from the cross and concealed or disposed of, but in some instances rituals of black magic would be performed on it.
The story of William of Norwich (d. 1144) is often cited as the first known accusation of ritual murder against Jews. The Jews of Norwich, England were accused of murder after a Christian boy, William, was found dead. It was claimed that the Jews had tortured and crucified their victim. The legend of William of Norwich became a cult, and the child acquired the status of a holy martyr. Recent analysis has cast doubt on whether this was the first of the series of blood libel accusations but not on the contrived and antisemitic nature of the tale.
During the Middle Ages blood libels were directed against Jews in many parts of Europe. The believers of these accusations reasoned that the Jews, having crucified Jesus, continued to thirst for pure and innocent blood and satisfied their thirst at the expense of innocent Christian children. Following this logic, such charges were typically made in Spring around the time of Passover, which approximately coincides with the time of Jesus' death.
The story of Little Saint Hugh of Lincoln (d. 1255) said that after the boy was dead, his body was removed from the cross and laid on a table. His belly was cut open and his entrails were removed for some occult purpose, such as a divination ritual. The story of Simon of Trent (d. 1475) emphasized how the boy was held over a large bowl so all his blood could be collected.
Disabilities and restrictions
Jews were subject to a wide range of legal disabilities and restrictions throughout the Middle Ages, some of which lasted until the end of the 19th century. Jews were excluded from many trades, the occupations varying with place and time, and determined by the influence of various non-Jewish competing interests. Often Jews were barred from all occupations but money-lending and peddling, with even these at times forbidden. The number of Jews permitted to reside in different places was limited; they were concentrated in ghettos, and were not allowed to own land; they were subject to discriminatory taxes on entering cities or districts other than their own, were forced to swear special Jewish Oaths, and suffered a variety of other measures, including restrictions on dress.
The Fourth Lateran Council in 1215 was the first to proclaim the requirement for Jews to wear something that distinguished them as Jews (and Muslims the same). It could be a coloured piece of cloth in the shape of a star or circle or square, a Jewish hat (already a distinctive style), or a robe. In many localities, members of medieval society wore badges to distinguish their social status. Some badges (such as guild members) were prestigious, while others ostracised outcasts such as lepers, reformed heretics and prostitutes. The local introduction and enforcement of these rules varied greatly. Jews sought to evade the badges by paying what amounted to bribes in the form of temporary "exemptions" to kings, which were revoked and re-paid whenever the king needed to raise funds.
The Crusades were a series of several military campaigns sanctioned by the Papacy that took place during the 11th through 13th centuries. They began as Christian endeavors to recapture Jerusalem and then to maintain the small Christian kingdoms established in the Levant against the Muslim reconquest which eventually overran them.
The mobs accompanying the First Crusade, and particularly the People's Crusade, attacked the Jewish communities in Germany, France, and England, and put many Jews to death. Entire communities, like those of Treves, Speyer, Worms, Mainz, and Cologne, were slain during the First Crusade by a mob army. About 12,000 Jews are said to have perished in the Rhenish cities alone between May and July 1096. Before the Crusades the Jews had practically a monopoly of trade in Eastern products, but the closer connection between Europe and the East brought about by the Crusades raised up a class of merchant traders among the Christians, and from this time onward restrictions on the sale of goods by Jews became frequent. The religious zeal fomented by the Crusades at times burned as fiercely against the Jews as against the Muslims, though attempts were made by bishops during the First Crusade and by the papacy during the Second Crusade to stop Jews from being attacked. Both economically and socially the Crusades were disastrous for European Jews. They prepared the way for the anti-Jewish legislation of Pope Innocent III, and formed the turning point in the medieval history of the Jews.
In the County of Toulouse (now part of southern France) Jews were received on good terms until the Albigensian Crusade. Toleration and favour shown to the Jews was one of the main complaints of the Roman Church against the Counts of Toulouse. Following the Crusaders' successful wars against Raymond VI and Raymond VII, the counts were required to discriminate against Jews like other Christian rulers. In 1209, stripped to the waist and barefoot, Raymond VI was obliged to swear that he would no longer allow Jews to hold public office. In 1229 his son Raymond VII underwent a similar ceremony, this time at Notre Dame in Paris, where he was obliged to prohibit the public employment of Jews. Explicit provisions on the subject were included in the Treaty of Meaux (1229). By the next generation a new, zealously Catholic, ruler was arresting and imprisoning Jews for no crime, raiding their houses, seizing their cash, and removing their religious books. They were then released only if they paid a new "tax". A historian has argued that organised and official persecution of the Jews became a normal feature of life in southern France only after the Albigensian Crusade, because it was only then that the Church became powerful enough to insist that measures of discrimination be applied.
Expulsions from England, France, Germany, Spain and Portugal
In the Middle Ages in Europe, persecutions and formal expulsions of Jews were liable to occur at intervals, although it should be said that this was also the case for other minority communities, whether religious or ethnic. There were particular outbursts of riotous persecution in the Rhineland massacres of 1096 in Germany accompanying the lead-up to the First Crusade, many involving the crusaders as they travelled to the East. There were many local expulsions from cities by local rulers and city councils. In Germany the Holy Roman Emperor generally tried to restrain persecution, if only for economic reasons, but he was often unable to exert much influence. As late as 1519, the Imperial city of Regensburg took advantage of the recent death of Emperor Maximilian I to expel its 500 Jews.
The practice of expelling the Jews, accompanied by confiscation of their property and followed by temporary readmissions for ransom, was utilized to enrich the French crown during the 12th–14th centuries. The most notable such expulsions were: from Paris by Philip Augustus in 1182, from the entirety of France by Louis IX in 1254, by Philip IV in 1306, by Charles IV in 1322, and by Charles VI in 1394.
To finance his war to conquer Wales, Edward I of England taxed the Jewish moneylenders. When the Jews could no longer pay, they were accused of disloyalty. Already restricted to a limited number of occupations, the Jews saw Edward abolish their "privilege" to lend money, choke their movements and activities, and force them to wear a yellow patch. The heads of Jewish households were then arrested, over 300 of them taken to the Tower of London and executed, while others were killed in their homes. (See also the massacres at London and York, 1189–1190.) The complete banishment of all Jews from the country in 1290 led to thousands being killed or drowned while fleeing, and to the absence of Jews from England for three and a half centuries, until 1655, when Oliver Cromwell reversed the policy.
In 1492, Ferdinand II of Aragon and Isabella I of Castile issued the General Edict on the Expulsion of the Jews from Spain (see also Spanish Inquisition), and many Sephardi Jews fled to the Ottoman Empire, some to Palestine.
The Kingdom of Portugal followed suit and in December 1496, it was decreed that any Jew who did not convert to Christianity would be expelled from the country. However, those expelled could only leave the country in ships specified by the King. When those who chose expulsion arrived at the port in Lisbon, they were met by clerics and soldiers who used force, coercion, and promises in order to baptize them and prevent them from leaving the country. This period technically ended the presence of Jews in Portugal. Afterwards, all converted Jews and their descendants would be referred to as "New Christians" or Marranos, and they were given a grace period of thirty years in which no inquiries into their faith would be allowed; this was later extended to end in 1534. A popular riot in 1506 would end in the death of two thousand Jews; the leaders of this riot were executed by Manuel.
In 1744, Frederick II of Prussia limited Breslau to only ten so-called "protected" Jewish families and encouraged similar practice in other Prussian cities. In 1750 he issued Revidiertes General Privilegium und Reglement vor die Judenschaft: the "protected" Jews had an alternative to "either abstain from marriage or leave Berlin" (quoting Simon Dubnow). In the same year, Archduchess of Austria Maria Theresa ordered Jews out of Bohemia but soon reversed her position, on condition that Jews pay for readmission every ten years. This extortion was known as malke-geld (queen's money). In 1752 she introduced the law limiting each Jewish family to one son. In 1782, Joseph II abolished most persecution practices in his Toleranzpatent, on the condition that Yiddish and Hebrew were eliminated from public records and judicial autonomy was annulled. Moses Mendelssohn wrote that "Such a tolerance... is even more dangerous play in tolerance than open persecution".
Anti-Judaism and the Reformation
Martin Luther, an Augustinian monk and an ecclesiastical reformer whose teachings inspired the Protestant Reformation, wrote antagonistically about Jews in his book On the Jews and their Lies, which describes the Jews in extremely harsh terms, excoriates them, and provides detailed recommendations for a pogrom against them and their permanent oppression and/or expulsion. At one point in On the Jews and Their Lies, Luther goes so far as to write "that we are at fault in not slaying them." According to Paul Johnson, it "may be termed the first work of modern antisemitism, and a giant step forward on the road to the Holocaust." In his final sermon shortly before his death, however, Luther preached "We want to treat them with Christian love and to pray for them, so that they might become converted and would receive the Lord." Still, Luther's harsh comments about the Jews are seen by many as a continuation of medieval Christian antisemitism. See also Martin Luther and antisemitism.
- Christianity and antisemitism
- Christianity and Judaism
- Islam and antisemitism
- Islamic-Jewish relations
- Jacob Barnet affair
- Jews in the Middle Ages
- Relations between Orthodox Christians and Jews
- Relations between Roman Catholics and Jews
- Religious antisemitism
- Paley, Susan and Koesters, Adrian Gibbons, eds. "A Viewer's Guide to Contemporary Passion Plays", accessed March 12, 2006.
- See Stéphane Barry and Norbert Gualde, La plus grande épidémie de l'histoire ("The greatest epidemics in history"), in L'Histoire magazine, n°310, June 2006, p.47 (French)
- Hertzberg, Arthur and Hirt-Manheimer, Aron. Jews: The Essence and Character of a People, HarperSanFrancisco, 1998, p.84. ISBN 0-06-063834-6
- Bennett, Gillian (2005), "Towards a revaluation of the legend of 'Saint' William of Norwich and its place in the blood libel legend". Folklore, 116(2), pp 119-21.
- Ben-Sasson, H.H., Editor; (1969). A History of The Jewish People. Harvard University Press, Cambridge, Massachusetts. ISBN 0-674-39731-2 (paper).
- Michael Costen, The Cathars and the Albigensian Crusade, p 38
- Anti-Semitism. Jerusalem: Keter Books. 1974. ISBN 9780706513271.
- "Map of Jewish expulsions and resettlement areas in Europe". Florida Center for Instructional Technology, College of Education, University of South Florida. A Teacher's Guide to the Holocaust. Retrieved 24 December 2012.
- Wood, Christopher S., Albrecht Altdorfer and the Origins of Landscape, p. 251, 1993, Reaktion Books, London, ISBN 0948462469
- Johnson, Paul. A History of the Jews, HarperCollins Publishers, 1987, p.242. ISBN 5-551-76858-9
- Luther, Martin. D. Martin Luthers Werke: kritische Gesamtausgabe, Weimar: Hermann Böhlaus Nachfolger, 1920, Vol. 51, p. 195. | <urn:uuid:4e9ecdda-11e2-454c-a530-9cd4e26a2fb3> | CC-MAIN-2015-35 | https://en.wikipedia.org/wiki/Antisemitism_in_Europe_(Middle_Ages) | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064160.12/warc/CC-MAIN-20150827025424-00105-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.970311 | 3,896 | 3.390625 | 3 |
July 1944 QST
Because many people were reluctant to approach the theoretical aspects of electronics as they applied to circuit design and analysis, QST (the ARRL's monthly publication) included equations and explanations in many of its project-building articles. Occasionally, an article was published that dealt specifically with how to use simple mathematics. The July 1944 edition carries the third installment of at least a four-part tutorial covering resistance and reactance, amplifier biasing, oscillators, feedback circuits, and related topics; Part I appeared in the May 1944 edition and Part IV in the August 1944 edition.
Practical Applications of Simple Math
Part III - Resistance-Coupled Amplifier Calculations
EDWARD M. NOLL, EX-W3FQJ
Fig. 1 - Triode resistance-coupled amplifier circuit.
The design of a resistance-coupled amplifier is a relatively simple
operation involving considerably less formula juggling and mental exertion
than computing all the deductions subtracted from your pay check these
days. The information given in previous installments of this series,
plus some added information on the practical use of vacuum-tube characteristic
curves, will permit the ready calculation of all required design values
for the resistance-coupled amplifier shown in Fig. 1.
Characteristic curves for the type 6J5 tube are shown in Fig. 2. The Ep-Ip
curves, the most common in general use, show the variations in plate
current with changes in plate voltage for various fixed values of grid
bias, the complete set of curves forming a "family" of characteristics
for the particular tube under consideration. These curves represent
static variations in tube potentials and currents when the tube circuit
is not loaded.
When a load is applied, such as the plate resistor
of a resistance-coupled amplifier, an additional line called the load
line must be drawn to represent the dynamic variations in tube potentials
and currents. It is apparent that the plate-current variations through
the load resistance cause a varying voltage drop across the plate resistance,
which is actually a change in plate voltage. Thus, a change in grid
potential with the applied signal does not change the plate current
without changing the plate voltage. In fact, the resultant change in
plate voltage, caused by the variations of plate current through the
load resistance, represents the useful output of the amplifier. Therefore,
a load line representing the plate load resistance (total resistance
between plate and cathode) is drawn on the characteristic curves to
show the actual dynamic changes in tube operation.
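The dynamic relation just described — plate voltage falling as plate current rises through the load — can be illustrated with a few lines of code. This sketch is not from the article; the supply voltage and load resistance are assumed values (they happen to match the triode example worked later in the text):

```python
# Sketch of the load-line relation Ep = Eb - Ip * Rp. The supply voltage
# and load resistance below are assumed values, consistent with the
# triode example worked later in the article.

def plate_voltage(eb, ip, rp):
    """Dynamic plate voltage for a given plate current through load rp."""
    return eb - ip * rp

EB = 270.0     # plate power-supply voltage, volts (assumed)
RP = 77_000.0  # total plate load resistance, ohms (assumed)

# As the grid signal swings the plate current, the plate voltage
# swings in the opposite direction -- the useful output of the stage:
for ip_ma in (1.25, 2.1, 3.0):
    ep = plate_voltage(EB, ip_ma * 1e-3, RP)
    print(f"Ip = {ip_ma} ma.  ->  Ep = {ep:.1f} volts")
```

Note how the computed extremes (about 174 and 39 volts) agree with the voltages read graphically from the load line later in the article.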
Many articles have been written on the theory of characteristic curves and
load lines. Since this article is aimed at illustrating the practical
application of the curves, theoretical considerations will be brought
into the discussion only when necessary.

Constructing the Load Line
In constructing the load line on the
characteristic curves, the actual resistance of the load depends upon
the circumstances under which the amplifier is to be operated. Each
set of conditions may require a slightly different value of load resistance
for optimum performance. Optimum values may be chosen, for instance,
for maximum possible undistorted voltage output with a given value of
input signal, or for maximum possible undistorted voltage output with
a definite amount of plate supply voltage. Each of these requirements
may necessitate the use of different values.
As an illustration
of the method used in arriving at the first of these objectives, let
us assume that the maximum peak-to-peak signal delivered to the grid
of the amplifier of Fig. 1 from the preceding stage is 8 volts. It is
necessary to place the load line on the curves in such a position as
to permit the signal to swing over the linear portion of the characteristic
curves. Therefore the signal must not swing into the curved portions
at low plate-current values, nor must it swing into the positive grid
distortion region. In the case of the 6J5 triode curves the least value
of bias that can be employed with an 8-volt signal is -4 volts, permitting
the signal to swing between 0 and -8 volts. Bias in excess of -4 volts
should not be used because it would result in an undesirable reduction
in gain. As a result, the load line must be drawn to permit the grid
signal to swing over the linear region, between the 0- and -8-volt bias curves.
The actual load line might be drawn at any one of a
number of different slopes and in each case the plate voltage swing
each side of the mean value would be equal and therefore distortionless.
However, we are interested in obtaining a maximum plate-voltage swing
with a low value of average plate current and a minimum variation in
plate current. From this consideration, it is evident that the load
line should appear practically horizontal and well down on the characteristic
curves. Since a load line which approaches a horizontal position represents
a high value of resistance (large change in plate voltage with a small
change in plate current) the resistance-coupled voltage amplifier has
a high value of plate resistance in comparison to a power amplifier,
where we are interested in a large plate-current variation to develop
power. Thus we find our 6J5 load line for an 8-volt signal well down
on the curve; in fact, point B on the -8-volt curve was chosen as far
down as possible without moving into the region of distortion as indicated
by excessive curvature.
Using the point on the -8-volt curve
as one point of the load line, a straightedge is moved about this point
as a pivot until an equal plate voltage is set off by the swing of the
signal on each side of the average bias value set on the -4-volt curve.
When this position is found, a line is drawn along the straightedge
which represents the value of plate resistance which permits maximum
nondistorted voltage output. The value of this resistance is readily
calculated by extending the load line until it crosses the plate-voltage
and plate-current coordinates, as shown in Fig. 2. The slope of the
load line, or the resistance represented by the load line, is equal
to the change in plate voltage divided by the change in plate current.
Maximum Voltage Gain
We are now ready
to consider some typical problems.
1) What should the total
plate load resistance, Rp, be for maximum undistorted voltage
gain in the amplifier of Fig. 1, using the characteristics shown in
Fig. 2?

From Fig. 2 we find that the slope of the load line gives

Rp = (change in Ep)/(change in Ip) = 270/0.0035 = 77,000 ohms, approximately.
2) Find the plate power-supply voltage, Eb.
Since the maximum plate voltage is applied to the plate only
when no plate current flows through the load, the plate voltage indicated
at zero plate current is the power-supply voltage. The position at which
the load line crosses the zero plate-current axis is point C, representing
270 volts. Therefore, Eb = 270 volts.
3) Find the required value of the cathode resistor, Rk.

Inspection of the curve shows that the average plate current at our operating bias, point O, is equal to 2.1 ma. Therefore, the resistance required to develop this amount of bias across the cathode resistor is

Rk = 4/0.0021 = 1,900 ohms, approximately.
4) Calculate the required value of the plate load resistor, RL
Since the total plate resistance includes the cathode-biasing
resistance, the actual value required for the plate resistor is the
total plate resistance minus the value of the cathode resistor, or

RL = 77,000 - 1,900 = 75,100 ohms.
5) Determine the value of the
grid resistor, Rg?
The value of the grid resistor
should be at least four times greater than the plate load resistor of
the previous stage, but should not exceed the maximum value set by the
tube manufacturer for safe operation of the tube. In the case of the
6J5, the maximum value set by the manufacturers when using cathode bias
is 1 megohm. In most cases the value used is in the vicinity of 500,000 ohms.
6) Determine the value of the cathode bypass condenser, Ck.
The capacity of the cathode bypass condenser
is set at a value which will pass the lowest frequency to be amplified
with a gain equal to 70.7 percent of the gain over the middle range
of frequencies. (The calculation of capacity values will be elaborated
upon in the next installment. However, it is a basic rule that, if the
reactance of the condenser at the lowest frequency is equal to the resistor
value, the amplifier response will be down to 70.7 per cent at this frequency.)
Since Rk is equal to 1,900 ohms, the reactance of Ck
for a minimum frequency of 60 cycles should be 1905 ohms. The minimum
capacity for Ck
may then be determined as follows:

Ck = 1/(6.28 × 60 × 1,905) = 1.4 µfd., approximately.
7) Determine the value of the coupling condenser, Cc
The coupling condenser, which also causes a loss of low frequencies
because of its reactance, is calculated in like manner with respect
to the grid resistor, or

Cc = 1/(6.28 × 60 × 500,000) = 0.0053 µfd., approximately.
Fig. 2 - Family of plate-voltage vs. plate-current characteristic
curves for the Type 6J5 triode tube.
8) Determine the peak plate-voltage and plate-current variations.
By dropping perpendicular lines to the coordinates from the
points A, O, and B in Fig. 2, which represent the average bias and the
extremities of grid-signal swing, the peak-to-peak plate voltage and
current can be determined by simple subtraction.
Peak-to-peak plate voltage = 175 - 40 = 135 volts.
Peak-to-peak plate current = 3 - 1.25 = 1.75 ma.
9) Determine the peak-to-peak voltage output of the amplifier.
Since the plate-voltage swing represents the variations
in potential between plate and cathode, the portion of the variation
across the cathode resistor is lost. The actual voltage output, Eo
of the stage is

Eo = 135 × 75,100/77,000 = 131 volts, approximately.
10) Determine the voltage gain
of the amplifier stage.
Voltage gain is equal to the output
voltage divided by the input voltage.

Gain = 131/8 = 16.4, approximately.
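The ten steps above can be collected into one short calculation for checking the arithmetic. This is a sketch only; the graphical readings (supply voltage, load resistance, operating-point current, and plate-voltage swing) are taken as assumptions from Fig. 2, and the 500,000-ohm grid resistor is the "typical" value mentioned in step 5:

```python
import math

# Assumed graphical readings from Fig. 2 (not computable from first principles):
EB = 270.0         # plate supply at the zero-current intercept, volts
RP = 77_000.0      # total plate load resistance from the load-line slope, ohms
IP_AVG = 2.1e-3    # average plate current at the -4-volt bias point, amperes
BIAS = 4.0         # cathode bias, volts
EP_SWING = 135.0   # peak-to-peak plate-voltage swing (175 - 40), volts
EIN = 8.0          # peak-to-peak grid signal, volts
F_MIN = 60.0       # lowest frequency to be amplified, cycles per second
RG = 500_000.0     # grid resistor of the following stage, ohms (typical value)

rk = BIAS / IP_AVG                    # step 3: cathode resistor, ~1,900 ohms
rl = RP - rk                          # step 4: plate resistor, ~75,100 ohms
ck = 1 / (2 * math.pi * F_MIN * rk)   # step 6: Xc = Rk at 60 cycles, ~1.4 ufd.
cc = 1 / (2 * math.pi * F_MIN * RG)   # step 7: Xc = Rg at 60 cycles, ~0.0053 ufd.
eo = EP_SWING * rl / RP               # step 9: output across RL, ~131 volts
gain = eo / EIN                       # step 10: stage gain, ~16.4

print(f"Rk = {rk:.0f} ohms,  RL = {rl:.0f} ohms")
print(f"Ck = {ck * 1e6:.2f} ufd.,  Cc = {cc * 1e6:.4f} ufd.")
print(f"Eo = {eo:.0f} volts peak-to-peak,  gain = {gain:.1f}")
```

The small differences from the article's figures (for example 1,905 rather than 1,900 ohms) come only from the article's slide-rule rounding.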
Maximum Power Output
Let us consider
now the case where it is desired to obtain maximum possible undistorted
output for a selected plate-supply voltage. As an example, in the circuit
of Fig. 3 a supply voltage of 200 volts (Eb) is assumed. Since
the maximum value of 200 volts is applied to the 6SJ7 plate only when
no plate current flows, one point on our load line is certain to be
at point A, shown in Fig. 4, where the plate current is zero and the
plate voltage 200. From point A, load lines of various slopes may originate;
the lower the plate load resistance, the steeper the slope. Since, as
in the previous example, we are only interested in obtaining a large
plate-voltage variation with a minimum variation in plate current, the
slope of our load line should be as far down on the curves as possible
and still accommodate the complete grid swing without running into the
distortion region. Therefore, two typical load lines were drawn on the
curves shown in Fig. 4. The load line AB represents a load resistance
of 13,000 ohms which provides for a 5-volt grid signal without distortion,
while load line AC represents a load of 110,000 ohms which provides
for a 1-volt signal. Load-line AC would be the most common, since the
6SJ7 is a high-gain pentode which is designed to amplify small input
signals to a much higher level.
Fig. 3 - Pentode resistance-coupled amplifier circuit.
Inspection of the curve shows that we are operating the tube at a negative
bias of 4 1/2 volts and that the negative peak of the grid signal reaches
-4 volts. In the case of a triode, such an amplifier would not be operating
under optimum conditions. However, the presence of the screen and suppressor
in the pentode permits the plate voltage and plate current to swing
to very low values without distorting even on the higher-bias curves.
Thus we can obtain a large plate-voltage variation at reasonable efficiency
if we do not permit the signal to approach zero on its positive peak.
From the information available we may now proceed to calculate
suitable circuit values and some of the operating conditions:
1) Find the total plate resistance represented by the load line, AC.

Rp = (change in Ep)/(change in Ip) = 200/0.00182 = 110,000 ohms, approximately.
2) Find the proper value for the cathode resistor Rk
Since the bias point, midway between points D and E which represent
the extremities of permissible grid swing without distortion, is at
-4 1/2 volts, the average plate-current flow is 1 ma. and our average
plate voltage is 80 volts. The screen current is approximately 25 percent
of the average plate current and, therefore, the total current passing
through the cathode resistor is 1.25 ma. In order to secure a 4 1/2-volt drop,
the value of Rk is

Rk = 4.5/0.00125 = 3,600 ohms.
3) What should be the value of the plate resistor, RL?
RL = 110,000 - 3,600 = 106,400 ohms.
4) Determine the value of the cathode condenser, Ck.

Ck = 1/(6.28 × 60 × 3,600) = 0.74 µfd., approximately.
5) What should be the value of the coupling condenser, Cc
when using a 1-megohm grid resistor, Rg?

Cc = 1/(6.28 × 60 × 1,000,000) = 0.0027 µfd., approximately.
Fig. 4 - Plate-voltage vs. plate-current characteristic curves
for the Type 6SJ7 pentode tube.
6) The value of the screen-dropping resistor, Rs, is readily
calculated if the screen voltage and screen current are known. The screen
potential must be 100 volts to meet the requirements of the characteristic
curves, which are drawn for a screen potential of 100 volts. Therefore,
the voltage drop required across the series screen resistor is 200 -
100 = 100 volts.

Rs = 100/0.00025 = 400,000 ohms.
7) In order to bypass the screen-dropping resistor adequately, the
reactance of the screen bypass condenser should be not more than 1/10th the resistance of the screen resistor (40,000 ohms) at the lowest frequency.

C = 1/(6.28 × 60 × 40,000) = 0.066 µfd., approximately.
8) If the resistance-coupled amplifier is employed in an audio system
which has three or more stages, it may be necessary to employ a decoupling
filter (Rf and Cf) to prevent feedback through the
common plate impedance. In this case, the power supply voltage must
be increased by an amount sufficient to compensate for the voltage drop
across Rf. The value of Rf
often employed is 1/10
of the value of RL, or

Rf = (0.1)(106,400) = 10,600 ohms, approximately.
9) The condenser Cf
should have a reactance, at the lowest frequency to be passed, of not
more than 10 percent of the resistance of Rf.

Cf = 1/(6.28 × 60 × 1,060) = 2.5 µfd., approximately.
10) The new supply voltage would, of necessity, be 200 volts plus
the voltage drop across Rf:
E = 200 + (Is + Ip)(Rf)
= 200 + (0.00025 + 0.001)(10,600)
= 213 volts, approximately.
11) The total plate-voltage swing as determined by the perpendiculars
of Fig. 4 is 130 - 30 = 100 volts. From the ratio RL/Rp = 106,400/110,000 = 0.97,
we know that 97 percent of the output voltage or 97 volts peak-to-peak
appears across the plate resistor. Since a 1-volt peak-to-peak signal
is applied at the grid, the stage gain is 97/1 = 97.
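As with the triode stage, the pentode arithmetic above can be verified in a few lines. This is a sketch only; the Fig. 4 readings and the 25-per-cent screen-current figure are taken as assumptions from the text:

```python
# Assumed values from Fig. 4 and the text of the pentode example:
EB = 200.0        # selected plate-supply voltage, volts
RP = 110_000.0    # total plate resistance of load line AC, ohms
BIAS = 4.5        # cathode bias, volts
IP = 1.0e-3       # average plate current, amperes
IS = 0.25 * IP    # screen current, taken as 25% of plate current
SWING = 100.0     # peak-to-peak plate-voltage swing (130 - 30), volts

rk = BIAS / (IP + IS)           # step 2: cathode resistor, ~3,600 ohms
rl = RP - rk                    # step 3: plate resistor, ~106,400 ohms
rs = (EB - 100.0) / IS          # step 6: screen-dropping resistor, 400,000 ohms
rf = 0.1 * rl                   # step 8: decoupling resistor, ~10,600 ohms
supply = EB + (IS + IP) * rf    # step 10: required supply, ~213 volts
gain = SWING * rl / RP          # step 11: ~97 volts out per 1-volt input

print(f"Rk = {rk:.0f}, RL = {rl:.0f}, Rs = {rs:.0f}, Rf = {rf:.0f} ohms")
print(f"supply = {supply:.0f} volts,  stage gain = {gain:.0f}")
```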
If a different
screen voltage were selected the curves would change somewhat, calling
for alterations in the values.
In the next installment, covering
the design of a two-stage audio amplifier, an approximate method will
be outlined to convert the curves to a lower screen potential. | <urn:uuid:ed5fe20f-f042-4637-9f19-fb4c929803e9> | CC-MAIN-2015-35 | http://www.rfcafe.com/references/qst/practical-applications-of-simple-math-jul-1944-qst.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645323734.78/warc/CC-MAIN-20150827031523-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.868657 | 3,421 | 2.546875 | 3 |
TEXAS DEPARTMENT OF AGRICULTURE
TEXAS DEPARTMENT OF AGRICULTURE. The Texas Department of Agriculture, with headquarters in Austin, has twelve district offices: in Amarillo, Beaumont, Brenham, Dallas, Houston, Lubbock, Odessa, San Antonio, San Juan, Stephenville, Tyler, and Vernon. Its consumer-services division is one of the largest in the state administration. The department was established by legislation of the Thirtieth Legislature in 1907. Until then, official agricultural business had been conducted by the Bureau of Agriculture, Insurance, Statistics, and History, which had largely ignored its responsibility to collect information and statistics on crops and livestock. Commissioner Robert Teague Milner was appointed until the 1908 general election could be held. The new agency had a staff of four, including the commissioner. Departmental duties included gathering statistics, publishing agricultural information, and holding farmers' institutes to promote advanced farming methods and practices. Under Milner the agency began to reflect the role it would play as a regulatory arm and advocate for cotton farmers. A cotton bureau was established in 1907 to prevent falsifying the size of Texas cotton crops. In his only report to Governor Thomas M. Campbell, Milner complained that the $17,038 department budget was inadequate. In 1908 Milner accepted the presidency of Texas A&M. Edward Reeves Kone was selected to take his place and was elected to office in 1908, 1910, and 1912. Kone frequently traveled the state to talk to farmers. In 1910 he and members of his staff made a 1,280-mile rail tour of Texas to give demonstrations to thousands of farmers. His commitment to the department persuaded a legislature dubious about the agency's future to increase its budget to $30,178 for fiscal year 1909–10.
Fred Davis succeeded Kone in 1915 and worked during his administration to improve the lot of cotton farmers. Although cotton dominated Texas agriculture, prices paid to farmers rarely covered costs. Landowners and tenant farmers, caught in an unending struggle to make ends meet, were unable to diversify into crops that might have pulled them out of debt. Davis urged growers to figure expenses and withhold cotton from the market until a fair predetermined price was reached. Though his campaign failed, it helped farmers develop more accurate estimates of production costs. A severe drought in 1917, the driest year in Texas history, added to producers' economic woes. The department recorded crop and livestock losses in two-thirds of the state. Davis went to the aid of producers and negotiated an agreement with the Texas Grain Dealers Association to furnish seed grain at cost to farmers who could afford it and on a share basis to those who could not. Through his efforts the department helped to get farmers who had food and feed to sell in contact with prospective buyers. This project was a forerunner of the agency's direct-marketing program. The agency attempted to promote a new cash crop and encouraged Texans to eat jackrabbit by holding jackrabbit dinners and publishing jackrabbit recipes. In 1920 Davis retired. During his final year in office the department had 137 employees in seven divisions and a $225,422 budget. George B. Terrell, commissioner from 1921 to 1930, increased the department's services to farmers and promoted the state's youthful citrus industry. A fight against citrus canker doubled the number of citrus trees in Texas from one million to more than two million. In 1925 the department's duties were greatly expanded when it assumed tasks previously handled by the Department of Markets and Warehouses, including inspection of weights and measures, operation of the market news service, and supervision of gins, warehouses, and farmers' co-ops. 
The marketing bureau encouraged sales of more than $2.5 million in Texas agricultural products by introducing sellers to prospective buyers. Terrell continued a battle begun under Davis against the pink bollworm, a destructive pest that threatened the state's cotton industry.
James E. McDonald succeeded Terrell in 1931. During his administration the Low Water Dam and Jacks and Stallions divisions were founded and later eliminated. The Low Water Dam Division encouraged farmers to conserve water by building low dams across sloughs and ravines. Jacks and Stallions distributed registered and high-grade mules and horses over the state for the purpose of breeding. McDonald's administration also saw the establishment of processing plants for Texas fruits and vegetables and the expansion of the sweet potato, tomato, citrus, black-eyed pea, watermelon, truck-farming, poultry, dairy, and nursery and floral industries. McDonald was investigated by a Texas House committee in 1935 on nine charges, including misappropriation of department funds. The House found him guilty of acts ill becoming a state official, but did not impeach him. McDonald was reelected every two years until twenty-five-year-old John C. White, the youngest man ever to hold the office, defeated him in 1950.
In the first major overhaul of the department, White decentralized the agency, initiated the first cooperative effort with Mexico to control insect pests, encouraged state legislation for registration and analysis of agricultural chemicals, and established laboratories to test chemical residues and contaminants before and during harvest. Under White the department inaugurated the Texas Agricultural Products marketing project to promote Texas products. It also began the Family Land Heritage Program, an annual program honoring Texas farmers and ranchers who have worked their land for 100 years or more. From the spring of 1975 to January 1983 the department published TDA Quarterly, a glossy magazine on agricultural issues designed for a popular audience. White resigned in 1977 to become deputy United States secretary of agriculture. Governor Dolph Briscoe appointed Reagan V. Brown to succeed him. In 1978 Brown was elected under the new statute providing four-year terms for statewide elected officials. He was known for his fight for pest and predator control. To prevent the spread of the Mediterranean fruit fly from California to Texas in 1981, Brown required California produce to be fumigated before entering the state. Under special legislation passed during the fruit-fly crisis, the department was authorized to seize or destroy infested products and to stop interstate and intrastate traffic to enforce the law. Brown also worked to halt the spread of the imported fire ant; his failure to remain commissioner has been credited at least in part to his having voluntarily inserted his hand into a fire ant mound in the presence of the press.
Jim Hightower became the eighth commissioner of agriculture on New Year's Day, 1983. His major initiatives included establishing a state network of farmers' markets and wholesale marketing cooperatives, expanding the program to help producers sell directly in international markets, updating the existing Texas Agricultural Products promotion (renamed the Taste of Texas program), helping farmers and ranchers get financing to build processing and marketing facilities, encouraging new agricultural development of such products as wine grapes and native plants, opening an office of natural resources to work with rural residents and farmers on farmland conservation, rural water quality, and water conservation, and increasing the department's voice in shaping national policies that affect Texas agriculture. In addition, the department's Right to Know division, established to implement the 1987 agricultural-hazards communication act, worked to keep farmers and farmworkers informed on hazardous pesticides they might encounter in their work. In 1986 the department began publishing the Texas Gazette, a bimonthly newsletter about department affairs, and the fall of 1987 saw the first issue of Grassroots, an irregularly published serial focusing on the environment and consumer topics. The department also continued its publication of numerous brochures on both specialized and popular topics, ranging from directories of agricultural cooperatives to Texas wines and wineries. In the late 1980s the department had a staff of 575 and administered forty-nine laws that affected numerous agriculture-related matters, including pesticide registration, egg quality, accuracy of scales and gasoline pumps, citrus fruit maturity, seed purity, and fire ant control.
Since the enactment of the Texas Sunset Act in 1977 the Texas Department of Agriculture has been subject to review every twelve years by the Texas Sunset Advisory Commission. When the department came up for its first review in 1989, controversy occurred over the possibility of either making the post of agricultural commissioner an appointed one or eliminating the department entirely. Though eventually the department was continued with an elected commissioner, several changes were implemented in the agency's structure, including the establishment of a nine-member board, chaired by the agriculture commissioner, which was charged with overseeing pesticide regulation. This was an authority that had previously been granted to the commissioner of agriculture alone. In 1994 the commissioner of agriculture was Republican Rick Perry, who defeated Democrat Jim Hightower in the 1992 election. The budget for fiscal year 1995 was $21,584,790. At that time the Texas Department of Agriculture had market and regulatory powers and administered over fifty laws. To perform its duties it had five regional centers, seven suboffices, and eleven divisions. See also AGRICULTURE and articles cross-referenced there.
The Government of the State of Texas (Austin: Von Boeckmann-Jones, 1933). Frank W. Johnson, A History of Texas and Texans (5 vols., ed. E. C. Barker and E. W. Winkler [Chicago and New York: American Historical Society, 1914; rpt. 1916]). Caleb Perry Patterson et al., State and Local Government in Texas (New York: Macmillan, 1940). Texas State Directory (Austin, 1989). University of Texas at Austin LBJ School of Public Affairs, Guide to Texas State Agencies (Austin, 1978). Vertical Files, Dolph Briscoe Center for American History, University of Texas at Austin (Agriculture).
The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article. Valerie Crosswell, "TEXAS DEPARTMENT OF AGRICULTURE," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/mctwc), accessed September 01, 2015. Uploaded on June 15, 2010. Modified on September 4, 2013. Published by the Texas State Historical Association.
Resources for Teachers
General Useful Links
CATESOL’s Internet ESL Resources
Provides many links for teachers.
Extensive links to items of interest to both teachers and students.
CAELA’s ESL Tools
This page has tools for instruction and for program development
From TESL Journal's Links
Includes Lesson Plan Builder, an online tool for creating, sharing and publishing lesson plans that address adult education content standards in the context of CASAS and SCANS competencies.
This site provides a collection of 30 lessons that can help English learners use socially appropriate language in a variety of informal and formal situations.
This is a teacher’s resource site published by the Macmillan Education Group.
Dave's ESL Cafe is one of the most well-known ESL sites, maintained by Dave Sperling. The ESL links are extensive.
Skills and Subjects:
Reading and Writing
This page is an informative resource for language teachers who want to explore and prepare to implement an extensive reading (ER) program for their students. The ER Foundation gives awards to recently published extensive reading materials most used and favored by students and instructors. These awarded texts are listed on this page. The site also offers an annotated bibliography on ER, links to graded readers, an explanation of graded reader difficulty, and links to blogs, an ER Facebook page, and additional ER resources. Additionally, the site introduces the free MoodleReader Software for tracking student reading.
On this webpage Waring offers a definition of ER, provides ideas for instructors to teach and assess ER, and supplies links to other relevant resources.
Robb provides: a definition of ER, guidelines for the implementation of ER in the classroom, examples of successful ER programs, and theoretical background and links to presentation handouts on ER.
This Wiki, maintained by Boyd, provides ideas and materials to help inspire EAP instructors to teach paraphrasing, summarizing, citations, and reading strategies.
Vocabulary and Idioms
This site by Marti Sevier provides links to handouts and PowerPoint presentations for recent TESOL Convention presentations on vocabulary topics, along with information on vocabulary teaching resources.
This Web site, created by Tom Cobb, has a concordancer, vocabulary profiler, exercise maker, interactive exercises, and other resources for both teaching and learning vocabulary and grammar. The site is divided into three sections: Tutorial, Research, and Teachers. See explanation in this TESL-EJ review article by Marti Sevier.
This site helps English learners expand their academic vocabulary using the Academic Word List (AWL), a set of 570 high-frequency words that appear in academic texts.
This long-established ESL listening Web site created by Randall Davis helps ESL/EFL students improve their listening comprehension skills. It contains general listening quizzes with topics related to everyday conversations. The quizzes are categorized by different levels: easy, intermediate, and difficult. Each listening quiz includes pre-listening, during-listening and post-listening exercises.
This site provides listening samples of English speakers from around the world. Most of the listening activities include images, transcripts, interactive quizzes, and downloadable MP3s.
This page offers taped interviews of people on the streets in the U.S. and other countries. Each video clip focuses on a grammatical point or a simple question such as, “What time is it?” or “What is the weather like?”
Yappar includes many video clips about news, celebrity interviews, music videos, funny TV commercials, and sports clips, so could be useful in practicing listening in variable contexts. It also provides a community of people around the world with whom to chat about the topic.
This site is updated daily and offers EFL/ESL materials on current affairs. Each lesson comes in easier or harder versions, including online, PDF, and Word versions of news articles, and communication activities, along with spoken word versions (MP3 audio files) that can be downloaded.
The Voice of America (VOA) is the official external radio and television broadcasting service of the U.S. government. VOA broadcasts help people learn American English while they learn about American life, world news and topics like agriculture, health, and education. The scripts of news reports are adapted for non-native English speakers, with a core vocabulary of 1500 words, and short, simple sentences.
The BBC’s site for ESL learners. Uses simplified news stories. There is a section focusing on grammar, vocabulary, and pronunciation, and a Community section with blogs. 6 Minute English provides students some audios about different everyday topics and situations such as living abroad, mobile phones, cost of living, etc. Audios last for 6 minutes, and there is a lot of repetition of vocabulary.
CNN Student News is a 10-minute, commercial-free, daily news program for middle and high school students produced by the journalists and educators at CNN. The program can be seen as a streamed video or downloaded as a free podcast. Available teacher materials include transcripts for each show, discussion questions, media literacy questions, maps, and in-depth learning activities and materials to help students understand the news.
Material for Specific Audiences
Civics, Literacy, and Life Skills
The electronic journals Language and Civil Society and Language and Life Sciences contain ready-to-use lesson plans designed for intermediate-level EFL learners. Each chapter contains background information, classroom-ready activities, and related resources and references.
This page provides lessons which use participatory, project-based learning strategies to teach adult literacy.
This site has activities that teachers can use with learners from beginning to advanced levels of English language and literacy. Includes activities to orient new learners, to assess needs, to promote interaction, and to promote reading development. Forms, surveys, and questionnaires are included.
This resource uses real-life stories on topics of interest to adults, with photos and videos, to build reading and life skills. Users can read the story and listen to it. Has a separate section For Adult Educators, with explanations of distance learning.
Tune in to Learning is a television series for adults who want to strengthen their literacy skills. TV411 is available though several outlets, including national broadcasts, video, and online, uses authentic materials or real-life situations to make the learning relevant. The site offers over 100 interactive lessons and activities for developing writing, reading, math, and vocabulary skills.
The Center for Digital Storytelling is a non-profit organization based in Berkeley, CA, that provides resources for creating written and visual narratives. The Web site showcases a number of digital stories created by people around the world.
Lesson Plans for K-12
The New York Times Learning Network offers lessons for subjects across the curriculum using New York Times content. (For mainstream young adult classes, not specially designed for ESL)
ReadWriteThink's mission is to provide educators, parents, and afterschool professionals with access to high quality practices in reading and language arts instruction by offering the very best in free materials. This site has Classroom Resources with lesson plans for grades K-12 (L1).
Using Technology in the Classroom
This Web site created by ESL instructors at City College of San Francisco is dedicated to enthusiastic instructors, computer challenged or experts, who want to find interesting, simple and effective ways to incorporate technology into their teaching. The site has articles, book reviews, instructors’ blogs and Web sites, workshop handouts, and more.
TESL EJ ‘s Media Reviews
Reviews of software, Web sites, videos, and non-print media designed for/useful in language learning or teaching, and reviews of development tools (e.g., Web design applications), management tools (e.g., course management systems), research tools, and authoring tools.
TESL EJ ‘s On the Internet
Articles on Internet applications in teaching ESL/EFL.
Susan Gaer’s Instructional Technology Training Web site has training tutorials as well as many samples of Web learning projects and Internet enhanced lessons for ESL.
This site by Tom Robb, Professor at Kyoto Sangyo University in Japan, has tools and tips for using the Internet, examples of interactive activities, and innovative student-created Web sites, many of which inform viewers about various aspects of Japanese history and society.
Edublogs lets you easily create and manage teacher and student blogs.
Provides a technology for schools to connect and for students to learn in a protected, project-based learning network. Participants have included classes in 200 countries and territories.
Provides links and references for the use of corpora, corpus linguistics and corpus analysis in the context of language learning and teaching.
An introduction to corpus linguistic and online corpus resources. This includes discussions of the benefits and challenges of these resources, as well as teaching suggestions and sample tasks.
A spoken language corpus of approximately 1.8 million words (200 hours) focusing on contemporary university speech within the microcosm of the University of Michigan.
A 100 million word collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of current British English, both spoken and written.
COCA is a corpus of contemporary American English. The corpus includes both written and spoken samples from fiction, popular magazines, newspapers, and academic genres.
The TIME Corpus comprises samples of written American English from 1923 to the present as found in TIME magazine.
ELISA is being developed at the University of Tuebingen and the University of Surrey as a resource for language learning and teaching, and interpreter training. It contains interviews with native speakers of English talking about their professional careers.
Building Learner Autonomy
This is Hayo Reinders’ site, especially known for its extensive resources on learner autonomy and self-access learning. It provides a lengthy bibliography on learner autonomy.
Supports self-directed learning; has a list of independent learning/self-access centers (at Asian universities).
A resource for ESL/EFL teachers who are interested in learning how to incorporate dramatic techniques into their lessons. Has sections on improvisation, plays, scriptwriting, process drama, readers’ theater, sample curriculum, and resources.
This theater creates new plays to bring original, thought provoking theater with the highest level of acting and directing to ESL students. Site has lesson plans to go with its productions.
This is “End-of-Chapter Material”, section 3.8 from the book Principles of General Chemistry (v. 1.0M).
This book is licensed under a Creative Commons by-nc-sa 3.0 license. See the license for more details, but that basically means you can share this book as long as you credit the author (but see below), don't make money from it, and do make it available to everyone else under the same terms.
This content was accessible as of December 29, 2012, and it was downloaded then by Andy Schmitz in an effort to preserve the availability of this book.
Normally, the author and publisher would be credited here. However, the publisher has asked for the customary Creative Commons attribution to the original publisher, authors, title, and book URI to be removed. Additionally, per the publisher's request, their name has been removed in some passages. More information is available on this project's attribution page.
For more information on the source of this book, or why it is available for free, please see the project's home page. You can browse or download additional books there. You may also download a PDF copy of this book (147 MB) or just this chapter (8 MB), suitable for printing or most e-readers, or a .zip file containing this book's HTML files (for use in a web browser offline).
Please be sure you are familiar with the topics discussed in Essential Skills 2 (Section 3.7 "Essential Skills 2") before proceeding to the Application Problems. Problems marked with a ♦ involve multiple concepts.
Hydrogen sulfide is a noxious and toxic gas produced from decaying organic matter that contains sulfur. A lethal concentration in rats corresponds to an inhaled dose of 715 molecules per million molecules of air. How many molecules does this correspond to per mole of air? How many moles of hydrogen sulfide does this correspond to per mole of air?
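The conversion in this problem is direct mole-fraction arithmetic; a minimal Python sketch (not part of the original text) using Avogadro's number:

```python
# 715 molecules of H2S per million molecules of air is a mole fraction,
# so the molecule ratio and the mole ratio are the same number.
AVOGADRO = 6.022e23              # molecules per mole

ppm = 715                        # molecules H2S per 1e6 molecules of air
fraction = ppm / 1e6             # mole fraction of H2S

molecules_per_mole_air = fraction * AVOGADRO   # H2S molecules per mole of air
moles_per_mole_air = fraction                  # moles H2S per mole of air
```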
Bromine, sometimes produced from brines (salt lakes) and ocean water, can be used for bleaching fibers and silks. How many moles of bromine atoms are found in 8.0 g of molecular bromine (Br2)?
Paris yellow is a lead compound that is used as a pigment; it contains 16.09% chromium, 19.80% oxygen, and 64.11% lead. What is the empirical formula of Paris yellow?
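Empirical-formula problems like this one all follow the same mass-percent-to-mole-ratio recipe. An illustrative Python sketch (rounded atomic masses) applied to the Paris yellow data above:

```python
# Assume a 100 g sample so that mass percents become gram amounts,
# convert each to moles, then divide by the smallest mole count.
atomic_mass = {"Cr": 52.00, "O": 16.00, "Pb": 207.2}
percent = {"Cr": 16.09, "O": 19.80, "Pb": 64.11}

moles = {el: percent[el] / atomic_mass[el] for el in percent}

smallest = min(moles.values())
ratio = {el: round(moles[el] / smallest) for el in moles}
# ratio -> {"Cr": 1, "O": 4, "Pb": 1}, i.e. PbCrO4
```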
A particular chromium compound used for dyeing and waterproofing fabrics has the elemental composition 18.36% chromium, 13.81% potassium, 45.19% oxygen, and 22.64% sulfur. What is the empirical formula of this compound?
Compounds with aluminum and silicon are commonly found in the clay fractions of soils derived from volcanic ash. One of these compounds is vermiculite, which is formed in reactions caused by exposure to weather. Vermiculite has the following formula: Ca0.7[Si6.6Al1.4]Al4O20(OH)4. (The contents of calcium, silicon, and aluminum are not shown as integers because the relative amounts of these elements vary from sample to sample.) What is the mass percent of each element in this sample of vermiculite?
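The reverse calculation, formula to mass percent, can be sketched the same way. In the Python below (an illustration, not from the book), subscripts are collected per element from the vermiculite formula (Al, O, and H each appear in more than one part), and atomic masses are rounded:

```python
# Mass-percent calculation for Ca0.7[Si6.6Al1.4]Al4O20(OH)4.
atomic_mass = {"Ca": 40.08, "Si": 28.09, "Al": 26.98, "O": 16.00, "H": 1.008}
subscripts = {"Ca": 0.7, "Si": 6.6, "Al": 1.4 + 4, "O": 20 + 4, "H": 4}

element_mass = {el: subscripts[el] * atomic_mass[el] for el in subscripts}
formula_mass = sum(element_mass.values())
mass_percent = {el: 100 * element_mass[el] / formula_mass for el in subscripts}
# The percents sum to 100 by construction.
```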
♦ Pheromones are chemical signals secreted by a member of one species to evoke a response in another member of the same species. One honeybee pheromone is an organic compound known as an alarm pheromone, which smells like bananas. It induces an aggressive attack by other honeybees, causing swarms of angry bees to attack the same aggressor. The composition of this alarm pheromone is 64.58% carbon, 10.84% hydrogen, and 24.58% oxygen by mass, and its molecular mass is 130.2 amu.
Amoxicillin is a prescription drug used to treat a wide variety of bacterial infections, including infections of the middle ear and the upper and lower respiratory tracts. It destroys the cell walls of bacteria, which causes them to die. The elemental composition of amoxicillin is 52.59% carbon, 5.24% hydrogen, 11.50% nitrogen, 21.89% oxygen, and 8.77% sulfur by mass. What is its empirical formula?
Monosodium glutamate (MSG; molar mass = 169 g/mol) is used as a flavor enhancer in food preparation. It is known to cause headaches and chest pains in some individuals, the so-called Chinese food syndrome. Its composition was found to be 35.51% carbon, 4.77% hydrogen, 8.28% nitrogen, and 13.59% sodium by mass. If the “missing” mass is oxygen, what is the empirical formula of MSG?
Ritalin is a mild central nervous system stimulant that is prescribed to treat attention deficit disorders and narcolepsy (an uncontrollable desire to sleep). Its chemical name is methylphenidate hydrochloride, and its empirical formula is C14H20ClNO2. If you sent a sample of this compound to a commercial laboratory for elemental analysis, what results would you expect for the mass percentages of carbon, hydrogen, and nitrogen?
Fructose, a sugar found in fruit, contains only carbon, oxygen, and hydrogen. It is used in ice cream to prevent a sandy texture. Complete combustion of 32.4 mg of fructose in oxygen produced 47.6 mg of CO2 and 19.4 mg of H2O. What is the empirical formula of fructose?
Coniine, the primary toxin in hemlock, contains only carbon, nitrogen, and hydrogen. When ingested, it causes paralysis and eventual death. Complete combustion of 28.7 mg of coniine produced 79.4 mg of CO2 and 34.4 mg of H2O. What is the empirical formula of the coniine?
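Combustion-analysis problems such as the two above follow a fixed recipe: all carbon ends up in the CO2, all hydrogen in the H2O, and the remaining element is found by mass difference. A Python sketch for the coniine data (rounded molar masses; not part of the original text):

```python
M_CO2, M_H2O, M_C, M_H, M_N = 44.01, 18.02, 12.011, 1.008, 14.007

sample_mg, co2_mg, h2o_mg = 28.7, 79.4, 34.4

mol_C = co2_mg / M_CO2            # mmol C (one C per CO2)
mol_H = 2 * h2o_mg / M_H2O        # mmol H (two H per H2O)
mass_N = sample_mg - mol_C * M_C - mol_H * M_H   # nitrogen by difference
mol_N = mass_N / M_N

smallest = min(mol_C, mol_H, mol_N)
ratio = [round(m / smallest) for m in (mol_C, mol_H, mol_N)]
# ratio -> [8, 17, 1], i.e. C8H17N
```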
Copper and tin alloys (bronzes) with a high arsenic content were presumably used by Bronze Age metallurgists because bronze produced from arsenic-rich ores had superior casting and working properties. The compositions of some representative bronzes of this type are as follows:
If ancient metallurgists had used the mineral As2S3 as their source of arsenic, how much As2S3 would have been required to process 100 g of cuprite (Cu2O) bronzes with these compositions?
♦ The phrase mad as a hatter refers to mental disorders caused by exposure to mercury(II) nitrate in the felt hat manufacturing trade during the 18th and 19th centuries. An even greater danger to humans, however, arises from alkyl derivatives of mercury.
Magnesium carbonate, aluminum hydroxide, and sodium bicarbonate are commonly used as antacids. Give the empirical formulas and determine the molar masses of these compounds. Based on their formulas, suggest another compound that might be an effective antacid.
♦ Nickel(II) acetate, lead(II) phosphate, zinc nitrate, and beryllium oxide have all been reported to induce cancers in experimental animals.
♦ Methane, the major component of natural gas, is found in the atmospheres of Jupiter, Saturn, Uranus, and Neptune.
Sodium saccharin, which is approximately 500 times sweeter than sucrose, is frequently used as a sugar substitute. What are the percentages of carbon, oxygen, and sulfur in this artificial sweetener?
Lactic acid, found in sour milk, dill pickles, and sauerkraut, has the functional groups of both an alcohol and a carboxylic acid. The empirical formula for this compound is CH2O, and its molar mass is 90 g/mol. If this compound were sent to a laboratory for elemental analysis, what results would you expect for carbon, hydrogen, and oxygen content?
The compound 2-nonenal is a cockroach repellant that is found in cucumbers, watermelon, and carrots. Determine its molecular mass.
You have obtained a 720 mg sample of what you believe to be pure fructose, although it is possible that the sample has been contaminated with formaldehyde. Fructose and formaldehyde both have the empirical formula CH2O. Could you use the results from combustion analysis to determine whether your sample is pure?
♦ The booster rockets in the space shuttles used a mixture of aluminum metal and ammonium perchlorate for fuel. Upon ignition, this mixture can react according to the chemical equation

Al(s) + NH4ClO4(s) → Al2O3(s) + AlCl3(g) + NO(g) + H2O(g)
Balance the equation and construct a table showing how to interpret this information in terms of the following:
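One balanced form of this equation, worked out for this sketch, is 3 Al + 3 NH4ClO4 → Al2O3 + AlCl3 + 3 NO + 6 H2O. The Python below simply verifies the atom balance, a useful self-check for any balancing exercise:

```python
# Each species is written as (coefficient, {element: subscript}).
reactants = [(3, {"Al": 1}),
             (3, {"N": 1, "H": 4, "Cl": 1, "O": 4})]
products = [(1, {"Al": 2, "O": 3}),
            (1, {"Al": 1, "Cl": 3}),
            (3, {"N": 1, "O": 1}),
            (6, {"H": 2, "O": 1})]

def atom_totals(side):
    """Total atom count of each element on one side of the equation."""
    totals = {}
    for coeff, composition in side:
        for el, n in composition.items():
            totals[el] = totals.get(el, 0) + coeff * n
    return totals

balanced = atom_totals(reactants) == atom_totals(products)   # True
```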
♦ One of the byproducts of the manufacturing of soap is glycerol. In 1847, it was discovered that the reaction of glycerol with nitric acid produced nitroglycerin according to the following unbalanced chemical equation:
Nitroglycerine is both an explosive liquid and a blood vessel dilator that is used to treat a heart condition known as angina.
♦ A significant weathering reaction in geochemistry is hydration–dehydration. An example is the transformation of hematite (Fe2O3) to ferrihydrite (Fe10O15·9H2O) as the relative humidity of the soil approaches 100%:

Fe2O3(s) + H2O(l) → Fe10O15·9H2O(s)
This reaction occurs during advanced stages of the weathering process.
♦ Hydrazine (N2H4) is used not only as a rocket fuel but also in industry to remove toxic chromates from waste water according to the following chemical equation:

4 CrO4^2−(aq) + 3 N2H4(l) + 4 H2O(l) → 4 Cr(OH)3(s) + 3 N2(g) + 8 OH−(aq)
Identify the species that is oxidized and the species that is reduced. What mass of water is needed for the complete reaction of 15.0 kg of hydrazine? Write a general equation for the mass of chromium(III) hydroxide [Cr(OH)3] produced from x grams of hydrazine.
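The mass questions reduce to mole-ratio arithmetic on the 3 N2H4 : 4 H2O : 4 Cr(OH)3 coefficients. A hedged Python sketch (rounded molar masses; the helper names are invented for illustration):

```python
M_N2H4 = 2 * 14.007 + 4 * 1.008          # ~32.05 g/mol
M_H2O = 18.015
M_CrOH3 = 52.00 + 3 * (16.00 + 1.008)    # ~103.02 g/mol

def water_needed_g(hydrazine_g):
    """Grams of water consumed with a given mass of hydrazine (4:3 ratio)."""
    return hydrazine_g / M_N2H4 * (4 / 3) * M_H2O

def chromium_hydroxide_g(hydrazine_g):
    """Grams of Cr(OH)3 produced from x grams of hydrazine (4:3 ratio)."""
    return hydrazine_g / M_N2H4 * (4 / 3) * M_CrOH3

water_kg = water_needed_g(15.0e3) / 1e3   # ~11.2 kg for 15.0 kg N2H4
```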
♦ Corrosion is a term for the deterioration of metals through chemical reaction with their environment. A particularly difficult problem for the archaeological chemist is the formation of CuCl, an unstable compound that is formed by the corrosion of copper and its alloys. Although copper and bronze objects can survive burial for centuries without significant deterioration, exposure to air can cause cuprous chloride to react with atmospheric oxygen to form Cu2O and cupric chloride. The cupric chloride then reacts with the free metal to produce cuprous chloride. Continued reaction of oxygen and water with cuprous chloride causes “bronze disease,” which consists of spots of a pale green, powdery deposit of [CuCl2·3Cu(OH)2·H2O] on the surface of the object that continues to grow. Using this series of reactions described, complete and balance the following equations, which together result in bronze disease:
Equation 1: ___ + O2 → ___ + ___
Equation 2: ___ + Cu → ___
Equation 3: ___ + O2 + H2O →
♦ Iron submerged in seawater will react with dissolved oxygen, but when an iron object, such as a ship, sinks into the seabed where there is little or no free oxygen, the iron remains fresh until it is brought to the surface. Even in the seabed, however, iron can react with salt water according to the following unbalanced chemical equation:

Fe(s) + NaCl(aq) + H2O(l) → FeCl2(s) + NaOH(aq) + H2(g)
The ferrous chloride and water then form hydrated ferrous chloride according to the following equation:

FeCl2(s) + 2H2O(l) → FeCl2·2H2O(s)
When the submerged iron object is removed from the seabed, the ferrous chloride dihydrate reacts with atmospheric moisture to form a solution that seeps outward, producing a characteristic “sweat” that may continue to emerge for many years. Oxygen from the air oxidizes the solution to the ferric state, resulting in the formation of what is commonly referred to as rust (ferric oxide):

FeCl2(aq) + O2(g) → FeCl3(aq) + Fe2O3(s)
The rust layer will continue to grow until arrested.
♦ The glass industry uses lead oxide in the production of fine crystal glass, such as crystal goblets. Lead oxide can be formed by the following reaction:

PbS(s) + O2(g) → PbO(s) + SO2(g)
Balance the equation and determine what has been oxidized and what has been reduced. How many grams of sulfur dioxide would be produced from 4.0 × 103 g of lead sulfide? Discuss some potential environmental hazards that stem from this reaction.
♦ The Deacon process is one way to recover Cl2 on-site in industrial plants where the chlorination of hydrocarbons produces HCl. The reaction uses oxygen to oxidize HCl to chlorine, as shown:

HCl(g) + O2(g) → Cl2(g) + H2O(g)
The reaction is frequently carried out in the presence of NO as a catalyst.
In 1834, Eilhardt Mitscherlich of the University of Berlin synthesized benzene (C6H6) by heating benzoic acid (C6H5COOH) with calcium oxide according to this balanced chemical equation:
(Heating is indicated by the symbol Δ.) How much benzene would you expect from the reaction of 16.9 g of benzoic acid and 18.4 g of calcium oxide? Which is the limiting reactant? How many grams of benzene would you expect to obtain from this reaction, assuming a 73% yield?
Aspirin (C9H8O4) is synthesized by the reaction of salicylic acid (C7H6O3) with acetic anhydride (C4H6O3) according to the following equation:C7H6O3(s) + C4H6O3(l) → C9H8O4(s) + H2O(l)
Balance the equation and find the limiting reactant given 10.0 g of acetic anhydride and 8.0 g of salicylic acid. How many grams of aspirin would you expect from this reaction, assuming an 83% yield?
♦ Hydrofluoric acid etches glass because it dissolves silicon dioxide, as represented in the following chemical equation:SiO2(s) + HF(aq) → SiF62−(aq) + H+(aq) + H2O(l)
♦ Lead sulfide and hydrogen peroxide react to form lead sulfate and water. This reaction is used to clean oil paintings that have blackened due to the reaction of the lead-based paints with atmospheric hydrogen sulfide.
♦ It has been suggested that diacetylene (C4H2, HC≡C–C≡CH) may be the ozone of the outer planets. As the largest hydrocarbon yet identified in planetary atmospheres, diacetylene shields planetary surfaces from ultraviolet radiation and is itself reactive when exposed to light. One reaction of diacetylene is an important route for the formation of higher hydrocarbons, as shown in the following chemical equations:C4H2(g) + C4H2(g) → C8H3(g) + H(g) C8H3(g) + C4H2(g) → C10H3(g) + C2H2(g)
Consider the second reaction.
♦ Glucose (C6H12O6) can be converted to ethanol and carbon dioxide using certain enzymes. As alcohol concentrations are increased, however, catalytic activity is inhibited, and alcohol production ceases.
Early spacecraft developed by the National Aeronautics and Space Administration for its manned missions used capsules that had a pure oxygen atmosphere. This practice was stopped when a spark from an electrical short in the wiring inside the capsule of the Apollo 1 spacecraft ignited its contents. The resulting explosion and fire killed the three astronauts on board within minutes. What chemical steps could have been taken to prevent this disaster?
4.31 × 1020 molecules, 7.15 × 10−4
To two decimal places, the percentages are: H: 0.54%; O: 51.39%; Al: 19.50%; Si: 24.81%; Ca: 3.75%
C, 40.98%; O, 23.39%; S, 15.63%
3Al(s) + 3NH4ClO4(s) → Al2O3(s) + AlCl3(g) + 3NO(g) + 6H2O(g)
|a.||3 atoms||30 atoms, 6 ions||5 atoms|
|b.||3 mol||3 mol||1 mol|
|c.||81 g||352 g||102 g|
|d.||6 × 1023||6 × 1023||2 × 1023|
|a.||4 atoms, 1 molecule||6 atoms, 3 molecules||18 atoms, 6 molecules|
|b.||1 mol||3 mol||6 mol|
|c.||133 g||90 g||108 g|
|d.||2 × 1023||6 × 1023||1.2 × 1022|
Equation 1: 8CuCl + O2 → 2Cu2O + 4CuCl2Equation 2: CuCl2 + Cu → 2CuCl Equation 3: 12CuCl + 3O2 + 8H2O → 2[CuCl2 3Cu(OH)2 H2O] + 4CuCl2
2PbS(s) + 3O2(g) → 2PbO(s) + 2SO2(g) Sulfur in PbS has been oxidized, and oxygen has been reduced. 1.1 × 103 g SO2 is produced. Lead is a toxic metal. Sulfur dioxide reacts with water to make acid rain.
10.8 g benzene; limiting reactant is benzoic acid; 7.9 g of benzene
The disaster occurred because organic compounds are highly flammable in a pure oxygen atmosphere. Using a mixture of 20% O2 and an inert gas such as N2 or He would have prevented the disaster. | <urn:uuid:c394e6f9-82af-4d8e-a5ca-bdacebce4f16> | CC-MAIN-2015-35 | http://2012books.lardbucket.org/books/principles-of-general-chemistry-v1.0m/s07-08-end-of-chapter-material.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645305536.82/warc/CC-MAIN-20150827031505-00279-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.926271 | 3,954 | 3.21875 | 3 |
I have been alerted to an informative, much-needed detailed 2012 Cato Institute asssessment of the 2009 US government report “Global Climate Change Impacts in the United States. See also Judy Curry’s excellent post on the Cato report at
The web page that links to this 2009 US government report starts with the grandiose claims that [highlight added]
This web page will introduce and lead you through the content of the most comprehensive and authoritative report of its kind. The report summarizes the science and the impacts of climate change on the United States, now and in the future.
In addition to discussing the impacts of climate change in the U.S., the report also highlights the choices we face in response to human-induced climate change. It is clear that impacts in the United States are already occurring and are projected to increase in the future, particularly if the concentration of heat-trapping greenhouse gases in the atmosphere continues to rise. So, choices about how we manage greenhouse gas emissions will have far-reaching consequences for climate change impacts. Similarly, there are choices to be made about adaptation strategies that can help to reduce or avoid some of the undesirable impacts of climate change. This report provides many of the scientific underpinnings for effective decisions to be made – at the national and at the regional level.
The new report, to be published by Cato this fall, is titled
with Patrick J. Michaels as Editor-in-Chief. I have been fortunate to know and respect Pat since we meet at the University of Virginia during my tenure there in the 1970s and early 1980s. This Cato report is a very important new addition to providing policymakers with a more robust perspective of climate science. It is refreshing to see a much more objective assessment than prepared by Tom Karl and others in the federal government.
As written in the draft cover letter by Edward H. Crane, President of the Cato Institute,
The Center for the Study of Public Science and Public Policy at the Cato Institute is pleased to transmit to you a major revision of the report, “Global Climate Change Impacts in the United States”. The original document served as the principal source of information regarding the climate of the US for the Environmental Protection Agency’s December 7, 2009 Endangerment Finding from carbon dioxide and other greenhouse gases. This new document is titled “ADDENDUM: Global Climate Change Impacts in the United States”
This effort grew out of the recognition that the original document was sorely lacking in relevant scientific detail. A Cato review of a draft noted that it was among the worst summary documents on climate change ever written, and that literally every paragraph was missing critical information from the refereed scientific literature. While that review was extensive, the restricted timeframe for commentary necessarily limited any effort. The following document completes that effort.
The introduction of the report states that
This report summarizes the science that is missing from Global Climate Change Impacts in the United States, a 2009 document produced by the U.S. Global Change Research Program (USGCRP) that was critical to the Environmental Protection Agency’s December, 2009 “finding of endangerment” from increasing atmospheric carbon dioxide and other greenhouse gases. According to the 2007 Supreme Court decision, Massachusetts v. EPA, the EPA must regulate carbon dioxide under the 1990 Clean Air Act Amendments subsequent to finding that it endangers human health and welfare. Presumably this means that the Agency must then regulate carbon dioxide to the point at which it longer causes “endangerment”.
The conclusion of the Cato report reads
Climate change assessments such as the one produced by the USGCRP suffer from a systematic bias due to the fact that the experts involved in making the assessment have economic incentives to paint climate change as a dire problem requiring their services, and the services of their university, federal laboratory, or agency.
I have just a few comments and recommendations for the final Cato report.
1. The 2005 National Research Council report
National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp
should be discussed. The 2009 US government report “Global Climate Change Impacts in the United States focuses on greenhouse gases at the expense of other human climate forcings. The findings in the 2005 NRC report were ignored. The need to broaden out the consideration of non-greenhouse gas climate forcings is summarized in the article by AGU Fellows in
Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union.
I testified to a congressional subcommittee on the need for a broader view in
Pielke Sr., Roger A., 2008: A Broader View of the Role of Humans in the Climate System is Required In the Assessment of Costs and Benefits of Effective Climate Policy. Written Testimony for the Subcommittee on Energy and Air Quality of the Committee on Energy and Commerce Hearing “Climate Change: Costs of Inaction” – Honorable Rick Boucher, Chairman. June 26, 2008, Washington, DC., 52 pp.
A major finding is the global warming is just a subset of “climate change”. Climate also always has involved change, with or without the human influence. See my discussion on these subjects in my post
and in Shaun Lovejoy’s paper that I posted on in
2. The failure of the climate models to show any decadal and longer regional predictive skill should be highlighted. I recently summarized this failure in the post
and in our articles
Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008.
Pielke Sr., R.A., R. Wilby, D. Niyogi, F. Hossain, K. Dairuku, J. Adegoke, G. Kallos, T. Seastedt, and K. Suding, 2012: Dealing with complexity and extreme events using a bottom-up, resource-based vulnerability perspective. AGU Monograph on Complexity and Extreme Events in Geosciences, in press.
3. The role of land use change as a climate forcing should be discussed in detail. Examples of papers with this perspective include
Pielke Sr., R.A., A. Pitman, D. Niyogi, R. Mahmood, C. McAlpine, F. Hossain, K. Goldewijk, U. Nair, R. Betts, S. Fall, M. Reichstein, P. Kabat, and N. de Noblet-Ducoudré, 2011: Land use/land cover changes and climate: Modeling analysis and observational evidence. WIREs Clim Change 2011, 2:828–850. doi: 10.1002/wcc.144.
Avila, F. B., A. J. Pitman, M. G. Donat, L. V. Alexander, and G. Abramowitz (2012), Climate model simulated changes in temperature extremes due to land cover change, J. Geophys. Res., 117, D04108, doi:10.1029/2011JD016382
4. The very significant problems with the land surface temperature data sets, as used to diagnose global warming, should be presented in detail in the report. Papers that document this issue include
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841″, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.
Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. Christy, and R.A. Pielke Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. J. Geophys. Res., 116, D14120, doi:10.1029/2010JD015146.Copyright (2011) American Geophysical Union.
McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., doi:10.1029/2012JD017578, in press.
5. My experience with the arrogance of the writers of one of the earlier reports used to generate the 2009 report “Global Climate Change Impacts in the United States have been documented in
Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences“. 88 pp including appendices. | <urn:uuid:17aa0e44-f15f-4597-8f74-5af72c24e84c> | CC-MAIN-2015-35 | https://pielkeclimatesci.wordpress.com/2012/07/25/comments-on-the-cato-report-addendum-global-climate-change-impacts-in-the-united-states-by-michaels-et-al-2012/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00044-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.874562 | 2,353 | 2.515625 | 3 |
Track and field has roots in the animalistic nature of man, you could say. A subdivision of the sport Athletics, track and field is often said to have evolved from innate or natural human activity. Most events mimic predatory activities: Think spears, life-and-death chases, and assertions of dominance.
Formalized track and field was first recorded at the Ancient Olympic Games of 776 BC in Olympia, Greece. The single event contested was a footrace of roughly 200 yards called the stade or stadion, from which we get the word “stadium.”
The Ancient Olympics, deeply seeded in Greek mythology, commanded extensive cultural significance. Wars halted for the Games. Qualifiers swore oaths before Zeus, their highest god, that they had adequately prepared for competition. Poets wrote verse about each Olympic champion, and their deeds were chronicled for future generations. Eventually, though, the Games were abolished by early Christians in an effort to wipe out polytheism.
The Spirit Returns
Track and field didn’t exactly die out with the Olympics, but for the most part, international competition did. The 1896 revival of the Olympic Games, the premier display of track and field in the world, began the long journey of track and field back to international popularity.
Moreover, the Modern Olympics revived the Olympic spirit, embodied by track and field as a truly international sport.
For instance, after a terrorist attack at the 1972 Munich Olympics in which Palestinian gunmen killed 11 Israeli athletes, it was unclear whether competition would continue. However, with support by IOC president Avery Brundage and a great majority of athletes and coaches, the Games persevered. It was felt that, in the words of Brundage, “The Games must go on.” These games not only set a precedent, but sent a message: The significance of the Olympics still, despite not-infrequent political exploitation, transcends politics, nationality, and even individual histories in favor of something universal.
In 1912, the International Amateur Athletics Federation (IAAF), track and field’s first international governing body, was formed. Amateurism was firmly established as an Olympic ideal. That year, the winner of the first Olympic decathlon, American Jim Thorpe, was disqualified for having received $25 per week as a baseball player.
Expansion continued in 1921 with the first NCAA Track and Field championships. In 1928, women’s competition was introduced to the Olympics. Women’s events have been added sporadically to the Games ever since.
Early Track Stars
In 1936 Jesse Owens won a then-unprecedented four gold medals. He captured the 100 meter dash, 200 meter dash, long jump, and 4x100 meter relay. Owens was the first African-American to receive sponsorship in the form of a pair of Adidas shoes he received at the Games. Owens’ success in Berlin was all-the-louder because Adolf Hitler, who was attempting to use the Olympics as a platform for his racial politics, essentially ignored Owens’ dominant showing. Both the public and Olympic officials noticed Hitler’s non-action. He was told to either greet every medalist, or none at all. Hitler skipped the rest of the medal presentations. Owens became a worldwide star.
After the Games, Owens was presented with a number of lucrative commercial offers. Upon accepting, however, the Amateur Athletics Union (AAU) revoked his amateur status, thus barring him from further international competition. The commercial offers were rescinded. Track and field athletes would continue to forego compensation for decades.
The next worldwide-head-turner to grace the track was Roger Bannister. Since July 17, 1945, Swede Gunder Hagg had held the world record in the mile. However, three men came to the spotlight in 1953: Australia’s John Landy, America’s Wes Santee, and England’s Roger Bannister. The trio had the audacity to intend to break the four-minute barrier for the first time — a feat many “experts” of the time (mostly sportswriters) deemed humanly impossible. Due perhaps to hype from these sportswriters and perhaps to the simplicity of the undertaking — four quarter-miles in one minute each — the race to sub-four garnered tremendous international attention. On May 6, 1954, Bannister won the race and captured the world record in 3:59.4.
Bannister received the inaugural “Sports Illustrated Sportsman of the Year” in 1954.
American discus-thrower Al Oerter was the first athlete to win gold at four consecutive Olympics. He also broke the world record four times officially and once, unofficially. More impressive still, his 1964 gold-winning throw, despite a neck brace treating chronic back problems and Novocain treating torn rib cartilage, was the first-ever 200-foot toss: 200 feet, 1 inch. Oerter was a star the likes of which the throwers of track and field have yet to eclipse.
Technology & Technique
Several technological advances contributed to track and field’s growing international status. In 1948, starting blocks and wind gauges were introduced, allowing faster sprints and more accurate standards of comparison. In the 1950s came the fiberglass pole for the pole vault. More importantly, the most impactful advancement of track and field to date surfaced: The all-weather track. First using a combination of rubber and asphalt, all-weather tracks replaced surfaces of grass, dirt, and cinder ash. The new surfaces gave way to faster times and higher levels of competition, even in wet conditions. The most common surfaces today are Tartan (polyurethane) and Mondo tracks, with the latter more prevalent in international competition.
New techniques changed track and field as well. Dick Fosbury, the 1968 Olympic high jump champion, popularized the Fosbury Flop. The new jumping technique features a curved approach and making the clearance with the back to the bar. The Fosbury Flop remains the standard to this day. In 1951, American Parry O’Brien invented the glide technique for the shot put. Aleksandr Baryshnikov set a world record in 1976 in the same event with the new spin technique. Both the glide and spin techniques are still prevalent.
The End of Amateurism
Toward the end of the 1970s, amateurism in the United States was coming to a close. While many athletes competing in the European circuit had been paid under the table, conditions for athletes were poor, and many felt that administrators were living large while athletes lived in near-squalor. These feelings of unrest led to a split from the AAU, which was eventually replaced with USA Track and Field. In 1982, the IAAF dropped amateurism from both its name and practices, becoming the International Association of Athletics Federations (still IAAF). The way was cleared for added monetary incentive, and better performances as well.
Carl Lewis burst onto the international scene in the early 80s as amateurism ended. The American won nine Olympic gold medals and one silver over the span of three Olympics. He won 10 world championship medals. He was the first since Jesse Owens to win quadruple gold in a single Games. Lewis went undefeated in 65 consecutive long jumps and it took someone else setting a new world record to end his streak. Track and Field News named him “Athlete of the Year” three times in a row. Sports Illustrated and the IOC followed suit with “Olympian of the Century” and “Sportsman of the Century,” respectively.
However, Carl Lewis largely failed to secure athletic endorsement. Whether or not he was actually arrogant, he was viewed as such. Coca Cola rescinded an offer even after Lewis won four gold medals. Nike dropped him. Nonetheless, Lewis was one of track and field’s first huge international sporting icons. Being a pro in track and field was changing.
Pole-vaulter Sergey Bubka took professionalism in track and field to new heights. Bupka took Olympic gold only once, but broke the world record 35 times. Almost every time Bubka broke the world record, he broke his own record. Each time he did so, he received a large bonus from both the meet promoter and (until its dissolution) the Soviet Union. Knowing this, when Bubka broke his own record, he intentionally did so by only one centimeter at a time. Indeed, track and field had arrived as a money-making opportunity, at least for a select few.
Enter the East Africans
In 1960, Ethiopia’s Abebe Bikila (famously barefoot) won his first of two Olympic marathon golds, both in world record times. In 1980, Bikila’s countryman Miruts Yifter, nicknamed “Yifter the Shifter” for his revolutionarily fast finish, won his second and third Olympic gold medals in the 5,000 and 10,000 meter races. The precedent set by these two men, along with track and field’s newfound earning potential, sparked a revolutionary change in the sport: The rise of East African distance runners. Each year, Kenyans and Ethiopians produce the overwhelming majority of top performances from the 800-meters up.
In 1994, Ethiopian Haile Gebrselassie took his first of 27 world records from distances ranging from the 3,000 meters to the marathon. The margin by which “Geb” lowered major records like the 5,000 and 10,000 meter was unprecedented — 19.22 and 29.48 seconds over his career in the 5,000 and 10,000, respectively. Over the subsequent 12 years, those records have only been lowered some three and five seconds. Geb is still breaking records, most recently in the marathon, running 2:03:59 in 2008. The record before Geb? 2:04:55, by Kenya’s Paul Tergat.
Kenenisa Bekele took the baton of Ethiopian dominance from Gebrselassie. In addition to a slew of world records and four Olympic golds, Bekele set precedents for undefeated streaks — five years in World Cross Country, winning both the 4,000 and 12,000 meter races, and eight-plus years at 10,000 meters.
Performance Enhancing Drugs
Doping entered track and field on the coattails of increased earning potential. The type of drug has changed over the years. There was the anabolic steroid, burgeoning in the Hitler era, reaching a frenzy with the governmental doping regimes of the East Germans in the 80s, and morphing again into designer drugs like “The Clear” made to elude testing. There was Human Growth Hormone, erythropoietin (EPO), and its derivatives, like CERA. Dopers have always had to stay one step ahead of those trying to keep the sport clean. Among those caught in the crossfire between were gold medalists and track celebrities Marion Jones (sprints/jumps) and Rashid Ramzi (1,500m). Both were stripped of their medals.
In 1999, the World Anti-Doping Agency (WADA) was formed by the IOC. WADA sets standards for controlled substances and testing in sport. The most aggressive, controversial tactic implemented is the “Whereabouts System,” in which top-level athletes are required to be available for no-notice drug tests one specified hour of each day.
Doping has also fostered skepticism in the sport. With each new remarkable performance, the question is now implicit: “Are drugs responsible?”
Usain Bolt has done many things for track and field. Alongside the three Olympic and three World Championship gold medals, with five of those six performances producing world record times, Bolt is a character. His popularity has reached beyond the realm of track and field. Bolt brought track and field to programs like ESPN’s SportsCenter. He was twice named “Laureus World Sportsman of the Year.” He’s shattered precedents for earning potential in the sport, securing appearance fees that before would have been considered ludicrous. And he’s done it all while cracking jokes to the camera on the starting line, and claiming that he ate nothing but chicken nuggets before winning his first gold medal.
While typically Bolt-caliber performances would yield nothing but skepticism and doping allegations, skepticism about him is met with a plethora of arguments why he’s clean. Track and field aficionados say: “It’s his biomechanics,” or “It’s his height.” People really want Bolt to be clean. He’s a guy who can not only electrify the world, but might just be doing it with the good, old-fashioned ideals of track and field. Bolt is not only heir to the title of track and field mega-star; he’s a beacon of hope.
Track and field lives or dies by its popularity, not just for fan-base and financial support, but in order to comb the gene pool for the next big star. New media offers expanding promotion. It’s now possible to watch medium-sized track meets across the country live from your computer, or chat with someone halfway around the world about pole-vault technique. That’s good for the sport, on every level. | <urn:uuid:8709d572-87a6-46aa-b476-b86cf41a0639> | CC-MAIN-2015-35 | http://track.isport.com/track-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645305536.82/warc/CC-MAIN-20150827031505-00277-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.966662 | 2,796 | 3.296875 | 3 |
10 Great Economic Myths - Courtesy of Murray Rothbard
One of the Greatest Economic Pioneers: Murray Rothbard
He gives a clear and concise refutation of common economic myths.
Myth 1: Deficits are the cause of inflation; deficits have nothing to do with inflation.
In recent decades we always have had federal deficits. The invariable response of the party out of power, whichever it may be, is to denounce those deficits as being the cause of perpetual inflation. And the invariable response of whatever party is in power has been to claim that deficits have nothing to do with inflation. Both opposing statements are myths.
Deficits mean that the federal government is spending more than it is taking in in taxes. Those deficits can be financed in two ways. If they are financed by selling Treasury bonds to the public, then the deficits are not inflationary. No new money is created; people and institutions simply draw down their bank deposits to pay for the bonds, and the Treasury spends that money. Money has simply been transferred from the public to the Treasury, and then the money is spent on other members of the public.
On the other hand, the deficit may be financed by selling bonds to the banking system. If that occurs, the banks create new money by creating new bank deposits and using them to buy the bonds. The new money, in the form of bank deposits, is then spent by the Treasury, and thereby enters permanently into the spending stream of the economy, raising prices and causing inflation. By a complex process, the Federal Reserve enables the banks to create the new money by generating bank reserves of one-tenth that amount. Thus, if banks are to buy $100 billion of new bonds to finance the deficit, the Fed buys approximately $10 billion of old Treasury bonds. This purchase increases bank reserves by $10 billion, allowing the banks to pyramid the creation of new bank deposits or money by ten times that amount. In short, the government and the banking system it controls in effect "print" new money to pay for the federal deficit.
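The tenfold pyramiding described above can be sketched numerically. This is a stylized illustration of our own (assuming a uniform 10% reserve requirement, no currency drain, and banks lending out every excess dollar), not a model taken from Rothbard's text:

```python
def deposit_expansion(new_reserves: float, reserve_ratio: float,
                      rounds: int = 10_000) -> float:
    """Sum the lend-and-redeposit rounds: banks hold reserve_ratio of
    each new deposit and lend out the rest, which returns to the
    banking system as a fresh deposit somewhere else."""
    total, deposit = 0.0, new_reserves
    for _ in range(rounds):
        total += deposit
        deposit *= 1.0 - reserve_ratio
    return total

# $10 billion of new reserves supports roughly $100 billion of new
# deposits -- the tenfold pyramid described in the text.
print(round(deposit_expansion(10e9, 0.10) / 1e9))  # 100
```

The closed form is simply `new_reserves / reserve_ratio`; the loop just makes the redeposit mechanism explicit.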
Thus, deficits are inflationary to the extent that they are financed by the banking system; they are not inflationary to the extent they are underwritten by the public.
Some policymakers point to the 1982-83 period, when deficits were accelerating and inflation was abating, as a statistical "proof" that deficits and inflation have no relation to each other. This is no proof at all. General price changes are determined by two factors: the supply of, and the demand for, money. During 1982-83 the Fed created new money at a very high rate, approximately 15% per annum. Much of this went to finance the expanding deficit. But on the other hand, the severe depression of those two years increased the demand for money (i.e., lowered the desire to spend money on goods) in response to the severe business losses. This temporarily compensating increase in the demand for money does not make deficits any less inflationary. In fact, as recovery proceeded, spending picked up, the demand for money fell, and the spending of the new money accelerated inflation.
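The two-factor point — prices determined by the supply of and demand for money — can be illustrated with the equation of exchange, P = MV/Q, where a fall in velocity V stands in for a rise in the demand to hold money. Rothbard argues the point verbally; the framing and the numbers below are invented purely for illustration:

```python
def price_level(money_supply: float, velocity: float,
                real_output: float) -> float:
    """Equation of exchange rearranged: P = M * V / Q."""
    return money_supply * velocity / real_output

base = price_level(100.0, 2.0, 100.0)
# Money supply grows 15%, but the recession raises the demand to hold
# money (velocity falls), masking the inflation for a while:
recession = price_level(115.0, 2.0 / 1.15, 100.0)
# Recovery: velocity rebounds while the new money stays in circulation,
# so prices end up about 15% higher than the base level.
recovery = price_level(115.0, 2.0, 100.0)
print(base, round(recession, 2), round(recovery, 2))
```

The temporary offset in the middle step is exactly the 1982-83 pattern described above: new money plus higher money demand can leave measured prices flat without making the monetary expansion any less inflationary once demand normalizes.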
Myth 2: Deficits do not have a crowding-out effect on private investment.
In recent years there has been an understandable worry over the low rate of saving and investment in the United States. One worry is that the enormous federal deficits will divert savings to unproductive government spending and thereby crowd out productive investment, generating ever-greater long-run problems in advancing or even maintaining the living standards of the public.
Some policymakers once again attempted to rebut this charge with statistics: in 1982-83, they declared, deficits were high and increasing while interest rates fell, thereby indicating that deficits have no crowding-out effect.
This argument once again shows the fallacy of trying to refute logic with statistics. Interest rates fell because of the drop of business borrowing in a recession. "Real" interest rates (interest rates minus the inflation rate) stayed unprecedentedly high, however--partly because most of us expect renewed inflation, partly because of the crowding-out effect. In any case, statistics cannot refute logic; and logic tells us that if savings go into government bonds, there will necessarily be less savings available for productive investment than there would have been, and interest rates will be higher than they would have been without the deficits. If deficits are financed by the public, then this diversion of savings into government projects is direct and palpable. If the deficits are financed by bank inflation, then the diversion is indirect, the crowding-out now taking place by the new money "printed" by the government competing for resources with old money saved by the public.
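The direction of the effect can be made concrete with a toy loanable-funds model. The linear supply and demand schedules below are our own invention, chosen only to show the mechanics, not to estimate any real magnitudes:

```python
def credit_market(gov_borrowing: float):
    """Toy loanable-funds market (hypothetical linear schedules).
    Savings supplied:       S(r) = 100 + 1000*r
    Private credit demand:  I(r) = 300 - 1000*r
    Market clearing:        S(r) = I(r) + gov_borrowing
    """
    rate = (200.0 + gov_borrowing) / 2000.0
    private_investment = 300.0 - 1000.0 * rate
    return rate, private_investment

r0, i0 = credit_market(0.0)    # no deficit: rate 10.0%, investment 200
r1, i1 = credit_market(50.0)   # deficit 50: rate 12.5%, investment 175
print(r0, i0, r1, i1)
```

Government borrowing adds to credit demand, which both raises the clearing interest rate and shrinks the savings left for private investment — the two claims the paragraph above derives from logic rather than statistics.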
Milton Friedman tries to rebut the crowding-out effect of deficits by claiming that all government spending, not just deficits, equally crowds out private savings and investment. It is true that money siphoned off by taxes could also have gone into private savings and investment. But deficits have a far greater crowding-out effect than overall spending, since deficits financed by the public obviously tap savings and savings alone, whereas taxes reduce the public's consumption as well as savings.
Thus, deficits, whichever way you look at them, cause grave economic problems. If they are financed by the banking system, they are inflationary. But even if they are financed by the public, they will still cause severe crowding-out effects, diverting much-needed savings from productive private investment to wasteful government projects. And, furthermore, the greater the deficits the greater the permanent income tax burden on the American people to pay for the mounting interest payments, a problem aggravated by the high interest rates brought about by inflationary deficits.
Myth 3: Tax increases are a cure for deficits.
Those people who are properly worried about the deficit unfortunately offer an unacceptable solution: increasing taxes. Curing deficits by raising taxes is equivalent to curing someone's bronchitis by shooting him. The "cure" is far worse than the disease.
One reason, as many critics have pointed out, raising taxes simply gives the government more money, and so the politicians and bureaucrats are likely to react by raising expenditures still further. Parkinson said it all in his famous "Law": "Expenditures rise to meet income." If the government is willing to have, say, a 20% deficit, it will handle high revenues by raising spending still more to maintain the same proportion of deficit.
But even apart from this shrewd judgment in political psychology, why should anyone believe that a tax is better than a higher price? It is true that inflation is a form of taxation, in which the government and other early receivers of new money are able to expropriate the members of the public whose income rises later in the process of inflation. But, at least with inflation, people are still reaping some of the benefits of exchange. If bread rises to $10 a loaf, this is unfortunate, but at least you can still eat the bread. But if taxes go up, your money is expropriated for the benefit of politicians and bureaucrats, and you are left with no service or benefit. The only result is that the producers' money is confiscated for the benefit of a bureaucracy that adds insult to injury by using part of that confiscated money to push the public around.
No, the only sound cure for deficits is a simple but virtually unmentioned one: cut the federal budget. How and where? Anywhere and everywhere.
Myth 4: Every time the Fed tightens the money supply, interest rates rise (or fall); every time the Fed expands the money supply, interest rates rise (or fall).
The financial press now knows enough economics to watch weekly money supply figures like hawks; but they inevitably interpret these figures in a chaotic fashion. If the money supply rises, this is interpreted as lowering interest rates and inflationary; it is also interpreted, often in the very same article, as raising interest rates. And vice versa. If the Fed tightens the growth of money, it is interpreted as both raising interest rates and lowering them. Sometimes it seems that all Fed actions, no matter how contradictory, must result in raising interest rates. Clearly something is very wrong here.
The problem is that, as in the case of price levels, there are several causal factors operating on interest rates and in different directions. If the Fed expands the money supply, it does so by generating more bank reserves and thereby expanding the supply of bank credit and bank deposits. The expansion of credit necessarily means an increased supply in the credit market and hence a lowering of the price of credit, or the rate of interest. On the other hand, if the Fed restricts the supply of credit and the growth of the money supply, this means that the supply in the credit market declines, and this should mean a rise in interest rates.
And this is precisely what happens in the first decade or two of chronic inflation. Fed expansion lowers interest rates; Fed tightening raises them. But after this period, the public and the market begin to catch on to what is happening. They begin to realize that inflation is chronic because of the systemic expansion of the money supply. When they realize this fact of life, they will also realize that inflation wipes out the creditor for the benefit of the debtor. Thus, if someone grants a loan at five percent for one year, and there is seven percent inflation for that year, the creditor loses, not gains. He loses two percent, since he gets paid back in dollars that are now worth seven percent less in purchasing power. Correspondingly, the debtor gains by inflation. As creditors begin to catch on, they place an inflation premium on the interest rate, and debtors will be willing to pay it. Hence, in the long-run anything which fuels the expectations of inflation will raise inflation premiums on interest rates; and anything which dampens those expectations will lower those premiums. Therefore, a Fed tightening will now tend to dampen inflationary expectations and lower interest rates; a Fed expansion will whip up those expectations again and raise them. There are two, opposite causal chains at work. And so Fed expansion or contraction can either raise or lower interest rates, depending on which causal chain is stronger.
Which will be stronger? There is no way to know for sure. In the early decades of inflation, there is no inflation premium; in the later decades, such as we are now in, there is. The relative strength and reaction times depend on the subjective expectations of the public, and these cannot be forecast with certainty. And this is one reason why economic forecasts can never be made with certainty. | <urn:uuid:d1d36996-feb1-42f5-8b1e-b25437c20b4a> | CC-MAIN-2015-35 | http://caps.fool.com/Blogs/10-great-economic-myths-/177985 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064160.12/warc/CC-MAIN-20150827025424-00107-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.959014 | 2,155 | 3.0625 | 3 |
Fire and Life Safety
The City’s motto “Living Right in Ridgeland” includes living safely. The most well known jobs of a fire department are fire suppression, rescue, and emergency medical care when an emergency occurs. However, we strongly feel that providing fire and life safety information to the public before an emergency occurs is an equally, if not a more important job.
Fire and Life Safety Facts and Statistics
The United States is the worst industrialized nation in the world in terms of fire loss (injuries, deaths, and property loss). In this country, Mississippi annually ranks #1 (and less often #2) in fire deaths per capita. Unfortunately, our state ranks near the top in terms of unintentional (and preventable) injuries from other causes also. Besides fire and burn injuries, areas of concern are motor vehicle injuries, poisoning injuries, fall injuries, firearms injuries, bike and pedestrian injuries, water injuries, choking, suffocation, and strangulation injuries, and injuries caused by severe weather. Risk groups from highest to lowest are children, then senior adults, and finally the rest of the population.
Are preventable injuries really a problem in this country? The answer is absolutely YES. For children in the United States they are worse than a problem—they are an epidemic!
- Unintentional injuries are the leading cause of death for children ages 1 to 14 (nearly 6,700 deaths per year).
- More than 120,000 children become permanently disabled every year due to unintentional injuries.
- One of every four children (approximately 14 million) is injured seriously enough to require medical attention.
- For every child who dies from a preventable injury, 45 others are hospitalized, 1,300 are treated in emergency rooms, and nearly 1,600 visit a doctor’s office.
Why are so many preventable fires and injuries occurring in the U.S., and especially in Mississippi? Primarily for two reasons: lack of knowledge and lack of action. You must make the effort to acquire more safety knowledge and take safety actions in your home, at work, and wherever you are.
Fire and Life Safety Educational Services for the Public
Committed to your “Living Safely in Ridgeland,” the Ridgeland Fire Department provides these public fire and life safety educational services:
- Fire and life safety programs for ages 3 to 103, for schools, home school groups, businesses, health care facilities, hotels/motels, apartment communities, homeowner associations, retirement communities, churches, social and service clubs, scouts, etc. We can go to you, or you can come to one of our fire stations for a combined fire and life safety program / station tour.
Annual programs include Fire Academy for Kids and Fire Prevention Week program at Northpark Mall. Fire Academy for Kids is held every summer; call for schedule information. Fire Prevention Week program is held every October. Other programs are available upon request.
- Literature for over twenty fire and life safety subject areas, available at our fire stations, City Hall, and Ridgeland Public Library.
- Fire safety videos available for loan from Ridgeland Public Library.
- Fire extinguisher training for citizen groups and businesses at your location or at a fire station.
- Fire and life safety inspections for residences.
- Fire and life safety displays and activities for community events.
- Consult service for businesses and citizens, to assist in developing fire emergency plans and severe weather safety plans.
“Fire Academy for Kids” is a fun, educational, action oriented program with two goals: to give children more life safety skills in fire safety, first aid, and injury prevention; and to introduce them up close and personal to firefighters, the equipment they use, and the jobs they perform. The prior will lead to increased life safety behaviors, by preparing the children to be able to prevent a fire or injury, and by preparing them to be able to react correctly should a fire or medical emergency occur. The latter will give them a better appreciation of firefighters and their often dangerous job, and will provide information to those who may be thinking about a future career in the fire/rescue service.
Children will participate in classroom learning activities and will perform hands- on training in fire safety, first aid, and firefighter skills…such as rescuing a “victim” from a simulated “smoke-filled” room while wearing an air mask, using fire hoses (and getting wet!), using extrication tools (Jaws of Life) on a crushed car, etc.
Four homework assignments will be given and are intended to involve the whole family. (After all, life-safety knowledge and skills are important for everyone!) A parent’s signature will be required on each completed assignment for their child to be eligible to receive an awesome homework reward each day!!
All activities will be carefully monitored for safety by fire department personnel. No one will be made to perform an activity with which he or she may feel uncomfortable. “Learning Safety, Being Safe, and Having Fun” is our motto. A graduation ceremony and luncheon reception will be held at Central Fire
Station on Friday of each week at 11:00 a.m. Attending will be Mayor Gene McGee, Fire Chief Matt Bailey, and Bridgette James, Fire Prevention Coordinator with the MS State Department of Health, a statewide sponsor of “Fire Academy for Kids.” Parents and guardians are invited to attend. Please come to share in your child’s accomplishment. See you there!
For more information, call Training Officer Craig Nash at 601-856-3760.
Basic Home Fire Safety Information
Smoke is the biggest killer in a fire, followed by heat and flames. But time is your worst enemy. In as little as two minutes (less time than a television commercial), a fire that just started could be filling your home with smoke and heat, and a whole room could be engulfed in flames! Having the safety knowledge, and taking safety actions in your home before the fire, would help buy the time you need to escape and survive a fire. About 80% of all fire deaths occur in the home. The knowledge and suggested safety actions provided below can help keep you and your family out of this statistic!
Fire safety equipment for the home:
- Smoke detectors provide early warning in the event of a fire. Installing and maintaining them can double your chance of escaping a fire.
- Install detectors on each floor, including the basement.
- Install them outside of sleeping areas.
- Install them in bedrooms, living room, laundry room, etc. The more you have, the faster you will be alerted. Interconnected units are better, but cost more to install, as do monitored alarm systems.
- To prevent false alarms from shower steam and cooking, don’t install detectors too close to bathrooms or the kitchen. Units near these areas should have a silence button so any false alarms can be easily cancelled.
- If you have A/C detectors, make sure they have battery back-ups to keep them working during power failures.
- Test detectors monthly. Use a broomstick to reach the test button, to prevent a fall from a chair or ladder.
- Vacuum detectors regularly to prevent false alarms from dust accumulation, and so they remain clean enough to allow smoke to enter.
- Change batteries immediately when you hear the low-battery warning chirp.
- Replace detectors every 10 years.
- Install extinguishers on each floor, and also place them in vehicles.
- Purchase ABC multipurpose extinguishers.
- Regularly review the use and maintenance instructions on extinguishers.
- Call the fire department to schedule a fire extinguisher training class.
Fire escape ladders
- Portable fire escape ladders are available for two and three story homes.
- Place one in each upstairs room that has a window.
- Demonstrate to children how to place them in a window.
Whistle, flashlight, and house key on a lanyard
- Purchase a whistle on a lanyard (a cord worn around the neck). You can find these in a sports department or store.
- Add a lightweight flashlight to the lanyard.
- Add a house key to the lanyard.
- Place this combination in each bedroom next to the bed. In the event of a fire, tornado, or other emergency, place the lanyard around your neck.
- The whistle can be used to alert family about an emergency. If someone becomes trapped, it can be used to signal their location to firefighters.
- The flashlight can be used to see in the dark or under smoke, and it can be used to signal firefighters from a window if someone is trapped.
- The key will be needed to unlock a double-keyed door from inside. In the event people escape from windows leaving doors locked, it can be given to firefighters outside so they can quickly enter the house to save others.
Residential fire sprinklers
- Fire sprinklers have been used extremely successfully in commercial buildings for well over 100 years. They are now available to be installed in homes.
- Fire sprinklers control (and often extinguish) fires before extreme smoke, heat, and flames take their toll on lives and property.
- They can be installed at relatively low cost, especially when compared to the cost of losing a loved one or losing your home and possessions.
- It has been estimated that nearly 85% of home fire deaths could be prevented by combining the use of fire sprinklers with smoke detectors.
- When you consider that fire sprinklers save lives, property and money, how can you afford not to install them in your home?
The home fire safety equipment listed above (except for residential fire sprinklers) can be purchased at home improvement stores, hardware stores, some discount stores, etc. For installation of residential sprinklers, consult the yellow pages phone book under “Fire Protection Services, Equipment & Supplies.”
Fire escape planning for the home:
Make an Escape Plan
- Draw a floor plan of your home showing all rooms, doors, and windows.
- Draw arrows to show two quick ways from each room to the outside. If one way out is a window, make sure it opens easily and show children how to open it and remove the screen. If you have security bars that don’t open, remove them or change them to bars with quick-release latches.
- Pick a safe family meeting place outside in front of your home (a tree, the mailbox on a post, the neighbor’s driveway, etc.) and show it on the plan.
- Show the 9-1-1 emergency reporting phone number on your plan.
- Hang the plan where it can be easily seen (on the refrigerator, for example), and review it with everyone, including visiting relatives or friends.
Practice EDITH and DAN (Exit Drills In The Home & Drills At Night)
- Practice your home escape plan several times a year by having exit drills. Have everyone meet outside at the safe family meeting place.
- Practice some drills at night with the lights off, so everyone will learn how to quickly escape with low visibility.
- Practice escaping from different rooms and using different exits.
- Use the same escape methods in your drills that you would use in a real fire. Crawl low under heat and smoke. If visibility is limited, follow walls to a door or window. Before opening a door, feel it with the back of the hand up high, on the knob, and along the hinge side. If it is warm or hot,
don’t open it! Use your second way out. If it is not hot, open it just a crack to check for smoke and heat. If there is too much smoke or heat to safely exit, slam the door and use your second way out. If there is little or no smoke or heat, exit through the door, remembering to close it behind you to slow the spread of smoke, heat, and flame. Close every door that you can as you go. Get to the meeting place and have one person go to a neighbor’s house to call 9-1-1. Or call from a portable or cell phone. Do NOT call from inside the burning building, unless you are trapped. Do NOT stop
to get your belongings. Remember, time is your worst enemy in a fire. Get out fast. Never go back in a burning building. Your chances of escaping a second time are slim. Get Out and Stay Out!
- Pick up fire prevention literature at your local fire station, City Hall, or Ridgeland Public Library. Pamphlets are available on cooking safety, heating safety, electrical safety, flammable liquid safety, etc. Other literature available covers fire safety equipment, fire safety planning, severe weather planning, and other injury prevention topics.
- Schedule a fire and life safety program to be presented by the fire department. Encourage others to attend.
- Visit the websites listed below for more information on fire and life safety.
Fire and Life Safety Websites
The following websites contain useful information on these subjects: 1) fire safety, 2) burn prevention, 3) injury prevention, 4) home safety, 5) safety recalls on consumer products, 6) fire sprinklers, 7) college campus fire safety, 8) burn survivor advocacy, 9) trauma survivor advocacy, 10) response to terrorism, 11) natural and man-made disasters (including severe weather).
Numbers after each website indicate subjects covered by that website.
Nat’l Fire Protection Association – 1, 2, 3, 4, 6, 7, 11
U. S. Fire Administration – 1, 2
Home Safety Council – 1, 2, 3, 4
Center for Campus Fire Safety – 7
Safe Kids Worldwide – 1, 2, 3, 4
Coalition for Fire-safe Cigarettes – 1, 2, 4
American Fire Sprinkler Association – 6
National Fire Sprinkler Association – 6
American Burn Association – 2
Phoenix Society for Burn Survivors – 2, 8
MS Burn Camp Foundation – 8
American Red Cross – 1, 2, 3, 4, 11
American Trauma Society – 3
Nat’l Center for Injury Prevention & Control – 3
Brain Injury Association of America – 3
Trauma Foundation – 2, 3, 9
Consumer Product Safety Commission – 5
National Highway Traffic Safety Administration – 3
U.S. Dept. of Homeland Security – 10, 11
State of Mississippi – Click on “State Agencies” at bottom of page, then click on any agency listed below to get information on the subject numbers shown.
Emergency Management Agency, MS 11
Emergency Medical Services for Children (go to “EMSC Programs) 3
Fire Marshal, State 1
Forestry Commission, MS 1
Health Department 1, 2, 3, 10, 11
Homeland Security, Ms Office of 10, 11
Public Safety, Department of 3
Do you need more information?
Would you like to schedule a safety program, fire extinguisher training, or a home fire and life safety inspection?
Would you like the fire department to participate in your event with a fire and life safety display and/or fire and life safety activities?
Can we help with your fire and severe weather safety planning? | <urn:uuid:3204103d-2e32-477f-a535-4118f78f48c2> | CC-MAIN-2015-35 | http://www.ridgelandms.org/city-departments/fire-department/fire-and-life-safety/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315227.83/warc/CC-MAIN-20150827031515-00218-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.916079 | 3,184 | 2.859375 | 3 |
Deep-rooted in the historic tradition of Connecticut, the Charter Oak is one of the most colorful and significant symbols of the spiritual strength and love of freedom which inspired our Colonial forebears in their militant resistance to tyranny. This venerable giant of the forest, already centuries old when it hid the treasured Charter in 1687, finally fell during a great storm on August 21, 1856. Two English kings, a royal agent, a colonial hero and a candle-lit room are the figures and backdrop in one of the most thrilling chapters of America's legend of liberty. The refusal of our early Connecticut leaders to give up the Charter, despite royal order and the threat of arms, marked one of the greatest episodes of determined courage in our history.
On October 9, 1662, the General Court of Connecticut formally received the Charter won from King Charles II by the suave diplomacy of Governor John Winthrop, Jr., who had crossed the ocean for the purpose. Twenty-five years later, with the accession of James II to the throne, Connecticut's troubles began in earnest. Sir Edmund Andros, His Majesty's agent, followed up the failure of various strategies by arriving in Hartford with an armed force to seize the Charter. After hours of debate, with the Charter on the table between the opposing parties, the candle-lit room suddenly went dark. Moments later when the candles were re-lighted, the Charter was gone. Captain Joseph Wadsworth is credited with having removed and secreted the Charter in the majestic oak on the Wyllys estate.
Sir Edmund Andros, the colonial governor, was born in London, England. He became governor of the newly created Dominion of New England (including Massachusetts, Plymouth, Maine, Connecticut, Rhode Island, and New Hampshire) in 1686. His aristocratic manner and Anglican sympathies alienated the Bostonians, and he was overthrown in a citizens' revolt in 1689.
1687 - GOVERNOR ANDROS & THE CHARTER OAK EPISODE
In 1687, King James II revoked the Connecticut charter. Royal Governor Sir Edmund Andros attempted to seize the charter, but Joseph Wadsworth stole away with it. Tradition says it was hidden in the hollow of an oak on Samuel Wyllys's property. This "Charter Oak" became a famous landmark.
- - - - - - - - - -
During King Philip's war, the colonists of Connecticut did not suffer much from hostile Indians, excepting some remote settlers high up the Connecticut River. They furnished their full measure of men and supplies, and their soldiers bore a conspicuous part in that contest between the races for supremacy. But while they were freed from dangers and distress of war with the Indians, they were disturbed by the petty tyranny of Governor Andros, whose advent in New England and New York has been noticed.
Seated at New York, Andros claimed jurisdiction as far east as the Connecticut River. To the mouth of that stream he went, with a small naval force, in the summer of 1675, to assert his authority. Captain Bull, the commander of a small fort at Saybrook, permitted him to land, but when the governor began to read his commission, Bull ordered him to be silent. Andros was compelled to yield to the commander's bold spirit and his superior military power, and in a towering passion he returned to New York, flinging curses and threats behind him at the people of Connecticut in general, and Captain Bull in particular.
For more than a dozen years after this flare-up of ambition and passion, nothing materially disturbed the public repose of Connecticut. Then a most exciting scene occurred at Hartford, in the result of which the liberties of the colony were involved. Andros again appeared as a usurper of authority - the willing instrument of his master King James the Second, who had determined to hold absolute rule over all New England. On his arrival in New York, as we have seen, Andros demanded a surrender of all the colonial charters into his hands. The authorities of all the colonies complied, excepting those of Connecticut. The latter steadily refused to yield their charter voluntarily, for it was the guardian of their political rights. To subdue their stubbornness, the viceroy proceeded to Hartford with sixty armed men, to demand the surrender of the charter in person. On his arrival there on the 31st of October (O. S.), 1687, he found the General Assembly in session in the meeting-house. The members received him with the courtesy due to his rank. Before that body, with armed men at his back, he demanded a formal surrender of the precious document into his own hands.
It was now near sunset. A subject of some importance was under debate, and the discussion was purposely continued until some time after the candles were lighted. Then the charter, contained in a long mahogany box, was brought in and laid upon the table. A preconcerted plan to save it from the grasp of the usurper was now instantly executed. As Andros put forth his hand to take the charter, the candles were all snuffed out and the document was snatched by Captain Wadsworth, whose train-bands were near to protect the Assembly from any violence which the royal soldiers might offer. Wadsworth bore away the charter, the crowd opening as he passed out, and closing behind him, and hid it in the hollow of a venerable oak tree on the outskirts of the village. When the candles were relighted, the members were seated in perfect order, but the charter could not be found. This was the same Captain Wadsworth who afterward silenced Governor Fletcher.
So, again, the tyrannical purposes of Andros were foiled in Connecticut. Wisely restraining his passion at that time, he assumed control of the government, declared the charter annulled, and Secretary Allyn wrote the word FINIS after the last record of the Journal of the Assembly. From that time until he was expelled from the country in 1689, he governed Connecticut as an autocrat - an absolute sovereign. Then the charter was brought out from its place of concealment, in May, 1689; a popular Assembly was convened; Robert Treat was chosen governor, and Connecticut again assumed the position of an independent colony. The tree in which the document was hidden was ever afterward known as the "Charter Oak." It remained vigorous, bearing fruit every year until a little after midnight in August, 1856, when it was prostrated by a heavy storm of wind. It stood in a vacant lot on the south side of Charter street, a few rods from Main street, in the city of Hartford.
About six years after Andros was out-generaled at Hartford, his successor in office Benjamin Fletcher, was foiled, at the same place, in his attempts to exercise control over the militia of Connecticut. From that time, during the space of about three-fourths of a century, the history of Connecticut is intimately woven with that of the other colonies planted in America by English people. The inhabitants of Connecticut, by prudent habits and good government, steadily increased in numbers and wealth. They went hand in hand with those of other colonies in measures for the promotion of the welfare of all and when, in the fullness of time, the provinces were ripe for union, rebellion and independence, the people of Connecticut were foremost in their eagerness to assert their rights as a free people.
- - - - - - - - - -

The Charter Oak
THE Connecticut colonists worked in harmony as brethren of the same nation and creed until their fusion into one commonwealth in 1665. They managed their private and public affairs prudently and were prosperous. Troubles with the Dutch, concerning territorial boundaries, were amicably settled with Stuyvesant when he visited Hartford in 1650; but the mutterings of dissatisfaction which fell from the lips of the neighboring Indian tribes gave them some disquietude, and made them heartily approve and join the New England Confederacy formed in 1643. The following year the little independent colony at Saybrook, at the mouth of the Connecticut River, which had been formed in 1639, was annexed to that of Connecticut at Hartford, and was the precursor of the final union of the three colonies about twenty years afterwards.
The repose of the colonists was broken in 1653, by a war between England and Holland. An alarming rumor had spread over New England that Ninigret, an old, crafty and wily sachem of the allied Niantics and Narragansets, who had spent part of a winter at New Amsterdam, had made a league with Stuyvesant for the destruction of the New England colonies. The majority of the commissioners of the New England Confederacy believed the absurd story, and decided to make war on the Dutch. The Connecticut people were specially eager for war, for they were more immediately exposed to the effects of such a plot than the other colonists. But Massachusetts refused to furnish men and arms for an aggressive war, before an investigation of the matter. Messengers were sent to Ninigret and his associate sachems for the latter purpose. These were questioned separately, and all concurred in the solemn assurance that they had no knowledge of such a plot. Ninigret, who went to New Amsterdam for medical treatment, said with emphasis, in his denial, "I found no such entertainment from the Dutch governor, when I was there, as to give me any encouragement to stir me up to such a league against the English, my friends. It was winter time, and I stood a great part of a winter day knocking at the governor's door, and he would neither open it, nor suffer others to open it, to let me in. I was not wont to find such carriage from the English, my friends."

The story of the Dutch-Indian plot appears to have been a pure invention of Uncas, the crafty sachem of the Mohegans, who was a foe of Ninigret, and was extremely jealous of the supposed friendship between that sachem and the English. It caused the frightened Connecticut colonists, when Massachusetts refused to join them in war upon the Dutch, to ask Cromwell for aid. The Protector sent four ships-of-war, but before their arrival a treaty of peace had ended the war between England and Holland, and blood and treasure were saved in America.
On the restoration of monarchy in England, in 1660, the Connecticut colonists had fears regarding their future. Their sturdy republicanism and independent action in the past might be mortally offensive to the new monarch. The General Assembly of Connecticut, therefore, resolved to make a formal acknowledgment of their allegiance to the crown and ask the king for a charter. A petition was accordingly framed and signed in May, 1661, and Governor John Winthrop bore it to England. He was a son of Winthrop of Massachusetts, and was a man of rare attainments and courtly manners, and then about forty-five years of age. He obtained an interview with the king, and was received with coolness. His name and the people over whom he was the chosen ruler were associated with radical republicanism, and the king received the prayer of the petitioners with disfavor. Winthrop left the royal presence, disappointed but not disheartened, and sought and obtained another interview.
The "merry monarch" was now in a more genial mood. He chatted freely with Winthrop about America - its soil, productions, the Indians and the settlers - yet he hesitated to promise a charter. Winthrop, it is said, finally drew from his pocket a gold ring of great value, which the king's father had given to the governor's grandfather, and presented it to his majesty with a request that he would accept it as a memorial of the unfortunate monarch, and a token of Winthrop's esteem for, and loyalty to King Charles, before whom he stood as a faithful and loving subject. The king's heart was touched. Turning to Lord Clarendon, who was present, the monarch said: "Do you advise me to grant a charter to this good gentleman and his people?" "I do, Sire," responded Clarendon. "It shall be done," said Charles, and he dismissed Winthrop with a hearty shake of his hand and a royal blessing.
The governor left Whitehall with a light heart. A charter was issued on the first of May, 1662. It confirmed the popular constitution of the colony, and contained more liberal provisions that, any yet issued by royal hands. It defined the boundaries so as to include the New Haven colony and a part of Rhode Island on the East, and westward to the Pacific Ocean. The New Haven colony reluctantly gave its consent to the union, in 1665, and the boundary between Connecticut and Rhode Island remained a subject of dispute for more than sixty years. That old charter, engrossed on parchment, is among the archives in the Connecticut State Department. It bears the miniature portrait of Charles the Second, drawn in India ink by Samuel Cooper, it is supposed, who was an eminent London miniature painter of the time.
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Two English Kings, a royal agent, a colonial hero, and a candle-lit room are the backdrop in one of the most thrilling chapters of America's legend of liberty. The refusal of our early Connecticut leaders to give up the Charter, despite royal order and the threat of arms, marked one of the greatest episodes of determined courage in our history.
In 1639, the Fundamental Orders were produced, binding the first three Connecticut towns- Hartford, Windsor, and Wethersfield- into a colonial entity. These Fundamental Orders are considered to be the first constitution in the history of the world, which is why Connecticut is called the Constitution State. The Connecticut Charter recognized the Connecticut Colony by the English Monarchy. On October 9, 1662, the General Court of Connecticut formally received the Charter won from King Charles II by the suave diplomacy of Governor John Winthrop, Jr., who had crossed the ocean for the purpose.
In 1687, twenty-five years later, James II ascended to the throne. This spelled trouble for Connecticut. King James wanted to revoke Connecticut's Charter. The people of Connecticut, however, did not want their Charter taken away because it entitled them to certain rights under British Law. Sir Edmund Andros, His Majesty's agent, followed up the failure of various strategies by arriving in Hartford with an armed escort to seize the Charter.
After hours of debate, with the Charter on the table between the opposing parties, the candlelit room went suddenly dark. Moments later, when the candles were lighted again, the Charter was gone. Captain Joseph Wadsworth is credited with having removed and secreted the Charter in the majestic oak on the Wyllys estate.
- - - - - - - - - - - - - -
Liberties and Legends
Connecticut's history of constitutional government dates back to the seventeenth century and two significant documents: the 1639 Fundamental Orders, which bound the three original towns of Windsor, Wethersfield and Hartford into a colonial entity, and the Royal Charter of 1662 granted by Charles II. Twenty-five years later, when agents of James II attempted to seize the charter, it was spirited away and hidden in a majestic oak tree on the Wyllys estate in Hartford, thereby preserving the charter and the rights of the colonists.
For over a hundred and fifty years, the "charter oak" was a prominent and widely recognized Connecticut landmark. When it was toppled during an 1857 storm, acorns were collected as keepsakes, as were a considerable amount of twigs, leaves, branches and lumber.
The Museum exhibit "Liberties and Legends" tells the story of this venerated icon. The exhibit includes numerous souvenirs made from wood of the original charter oak, including a Colt revolving pistol, picture frames and miniature furniture. Today, several "descendants" of the charter oak are to be found on the grounds of the State Capitol and in Hartford's Bushnell Park. The original charter, preserved in an ornate frame made of "charter oak" wood, is prominently displayed in the museum.
Also on permanent display are the State Constitutions of 1818 and 1964 and Connecticut's copy of the United States Bill of Rights. | <urn:uuid:d8f6b434-12ed-4d60-83ff-fa1ffa6eabc9> | CC-MAIN-2015-35 | http://colonialwarsct.org/1687.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064160.12/warc/CC-MAIN-20150827025424-00106-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.975952 | 3,348 | 3.3125 | 3 |
Airplane Takeoff Video -
More Airplane Takeoff Videos 1
Takeoff is the phase of flight in which an aircraft goes through a transition from moving along the ground (taxiing) to flying in the air, usually starting on a runway. For balloons, helicopters and some specialized fixed-wing aircraft (VTOL aircraft such as the Harrier), no runway is needed. Takeoff is the opposite of landing.
For light aircraft, usually full power is used during takeoff. Large transport category (airliner) aircraft may use a reduced power for takeoff, where less than full power is applied, in order to increase passenger comfort, increase engine life or maintenance intervals or avoid VMC limitation. In some emergency cases, the power used can then be increased to increase the aircraft's performance. Before takeoff, the engines, particularly piston engines, are routinely run up at high power to check for engine-related problems. The aircraft is permitted to accelerate to rotation speed (often referred to as Vr). The term rotation is used because the aircraft pivots around the axis of its main landing gear while still on the ground, usually due to manipulation of the flight controls to make this change in aircraft attitude.
The nose is raised to a nominal 5°–20° nose up pitch attitude to increase lift from the wings and effect liftoff. For most aircraft, attempting a takeoff without a pitch-up would require cruise speeds while still on the runway.
Fixed-wing aircraft designed for high-speed operation (such as commercial jet aircraft) have difficulty generating enough lift at the low speeds encountered during takeoff. These are therefore fitted with high-lift devices, often including slats and usually flaps, which increase the camber of the wing, making it more effective at low speed, thus creating more lift. These are deployed from the wing prior to takeoff, and retracted during the climb. They can also be deployed at other times, such as prior to landing.
The speeds needed for takeoff are relative to the motion of the air (indicated airspeed). A headwind will reduce the ground speed needed for takeoff, as there is a greater flow of air over the wings. Typical takeoff air speeds for jetliners are in the 130–155 knot range (150–180 mph, 240–285 km/h). Light aircraft, such as a Cessna 150, take off at around 55 knots (63 mph, 100 km/h). Ultralights have even lower takeoff speeds. For a given aircraft, the takeoff speed is usually directly proportional to the aircraft weight; the heavier the weight, the greater the speed needed. Some aircraft specifically designed for short takeoff and landing can take off at speeds below 40 knots (74 km/h), and can even become airborne from a standing start when pointed into a sufficiently strong wind.
The takeoff speed required varies with air density, aircraft gross weight, and aircraft configuration (flap and/or slat position, as applicable). Air density is affected by factors such as field elevation and air temperature. This relationship between temperature, altitude, and air density can be expressed as a density altitude, or the altitude in the International Standard Atmosphere at which the air density would be equal to the actual air density.
Operations with transport category aircraft employ the concept of the takeoff V-Speeds, V1, VR and V2. These speeds are determined not only by the above factors affecting takeoff performance, but also by the length and slope of the runway and any peculiar conditions, such as obstacles off the end of the runway. Below V1, in case of critical failures, the takeoff should be aborted; above V1 the pilot continues the takeoff and returns for landing. After the co-pilot calls V1, he/she will call Vr or "rotate," marking speed at which to rotate the aircraft. The VR for transport category aircraft is calculated such as to allow the aircraft to reach the regulatory screen height at V2 with one engine failed. Then, V2 (the safe takeoff speed) is called. This speed must be maintained after an engine failure to meet performance targets for rate of climb and angle of climb.
In a single-engine or light twin-engine aircraft, the pilot calculates the length of runway required to take off and clear any obstacles, to ensure sufficient runway to use for takeoff. A safety margin can be added to provide the option to stop on the runway in case of a rejected takeoff. In most such aircraft, any engine failure results in a rejected takeoff as a matter of course, since even overrunning the end of the runway is preferable to lifting off with insufficient power to maintain flight.
If an obstacle needs to be cleared, the pilot climbs at the speed for maximum climb angle (Vx), which results in the greatest altitude gain per unit of horizontal distance travelled. If no obstacle needs to be cleared, or after an obstacle is cleared, the pilot can accelerate to the best rate of climb speed (Vy), where the aircraft will gain the most altitude in the least amount of time. Generally speaking, Vx is a lower speed than Vy, and requires a higher pitch attitude to achieve.
Balanced field takeoff
In aviation, the balanced field takeoff is the theoretical principle whereby the critical engine failure recognition speed, or V1, is used as a decision speed below which the pilot elects whether to continue the takeoff. The concept at play is that the distance required to complete the takeoff with a failed engine equals the distance required to reject the takeoff and come to a standstill. To achieve this, V1 can be selected within a range, higher V1s leading to increased accelerate-stop distances and lower takeoff distances with one engine inoperative.
Depending on aircraft limitations, it is not always possible to have a balanced field length. If it is possible, however, it results in the highest amount of allowed takeoff weight for the available runway, thus providing operational benefits.
Airworthiness regulations, especially FAR 25 and CS-25 (for large passenger aircraft) require the takeoff distance and the accelerate-stop distance
to be less than or equal to the available runway length, both with and without an engine failure assumed. While upper and lower bounds for V1 exist,
in some cases a range of values exists for which it is possible to fulfill the requirements for a given runway length.
Landing and Takeoff Performance Monitoring Systems are devices aimed at providing to the pilot information on the validity of the performance computation, and averting runway overruns that occur in situations not adequately addressed by the takeoff V-speeds concept.
Using the balanced field takeoff concept, V1 is the maximum speed in the takeoff at which the pilot must take the first action (e.g. reduce thrust, apply brakes, deploy speed brakes) to stop the airplane within the accelerate-stop distance and the minimum speed at which the takeoff can be continued and achieve the required height above the takeoff surface within the takeoff distance.
STOL is an initialism for short take-off and landing, a term used to describe aircraft with very short runway requirements.
The formal NATO definition (since 1964) is:
Short Take-Off and Landing (décollage et atterrissage courts) is the ability of an aircraft to clear a 15 m (50 ft) obstacle within 450 m (1,500 ft) of commencing take-off or, in landing, to stop within 450 m (1,500 ft) after passing over a 15 m obstacle.
Many fixed-wing STOL aircraft are bush planes, though some, like the de Havilland Dash-7, are designed for use on prepared airstrips; likewise, many STOL aircraft are taildraggers, though there are exceptions like the de Havilland Twin Otter, the Cessna 208, and the Peterson 260SE. Autogyros also have STOL capability, needing a short ground roll to get airborne, but capable of a near-zero ground roll when landing.
Runway length requirement is a function of the square of the minimum flying speed (stall speed), and most design effort is spent on reducing this number. For takeoff, large power/weight ratios and low drag help the plane to accelerate for flight. The landing run is minimized by strong brakes, low landing speed, thrust reversers or spoilers (less common). Overall STOL performance is set by the length of runway needed to land or take off, whichever is longer.
Of equal importance to short ground run is the ability to clear obstacles, such as trees, on both take off and landing. For takeoff, large power/weight ratios and low drag result in a high rate of climb required to clear obstacles. For landing, high drag allows the aeroplane to descend steeply to the runway without building excess speed resulting in a longer ground run. Drag is increased by use of flaps (devices on the wings) and by a forward slip (causing the aeroplane to fly somewhat sideways though the air to increase drag).
Normally, a STOL aircraft will have a large wing for its weight. These wings often use aerodynamic devices like flaps, slots, slats, and vortex generators. Typically, designing an aircraft for excellent STOL performance reduces maximum speed, but does not reduce payload lifting ability. The payload is critical, because many small, isolated communities rely on STOL aircraft as their only transportation link to the outside world for passengers or cargo; examples include many communities in the Canadian north and Alaska.
Most STOL aircraft can land either on- or off-airport. Typical off-airport landing areas include snow or ice (using skis), fields or gravel riverbanks (often using special fat, low-pressure tundra tires), and water (using floats): these areas are often extremely short and obstructed by tall trees or hills. Wheel skis and amphibious floats combine wheels with skis or floats, allowing the choice of landing on snow/water or a prepared runway. A STOLport is an airport designed with STOL operations in mind, normally having a short single runway. These are not common but can be found, for example, at London City Airport in England. | <urn:uuid:25663f4b-836e-4d94-9e5e-a3692e716207> | CC-MAIN-2015-35 | http://www.livingwarbirds.com/airplane-takeoff-videos.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00160-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.945236 | 2,080 | 3.796875 | 4 |
| Genus List | Species List | Key to Species |
Adelomyrmex is a genus of small, cryptic ants that inhabit wet forest floor leaf litter. Old World representatives of the genus have been described from New Guinea, Fiji, and Samoa. Twenty New World species have been described (Fernández and MacKay 2003, Fernández 2003), ranging from southern Mexico to southern Brazil and Paraguay.
Wheeler (1910) described a new genus and species, Apsychomyrmex myops, from Guatemala. Mann (1922) reported a second collection of myops from Honduras, observing that specimens occurred in small colonies beneath stones, the colonies resembling those of Rogeria. Menozzi (1931) described two additional species, silvestrii and tristani, from the central highlands of Costa Rica. Borgmeier (1937) reported additional collections of silvestrii and tristani from Hamburg Farm, Costa Rica, collected by F. Nevermann. Hamburg Farm is a site in the Atlantic lowlands, north of Limon. Smith (1947) reviewed the genus, providing redescriptions of the three known species of what he called "some of the rarest American ants," and providing a few new locality records. Prior to the description of Apsychomyrmex, Emery (1897) described a new genus and species, Adelomyrmex biroi, from New Guinea. Mann (1921) described Adelomyrmex hirsutus from Fiji, and Wilson and Taylor (1967) described Adelomyrmex samoanus from Samoa. Kempf (1972) synonymized Apsychomyrmex under Adelomyrmex. This synonymy was simply stated in the catalog of Neotropical ants, with no discussion or justification. Kugler (1978) examined the sting apparatus, and concluded that Adelomyrmex was related to the Neotropical genus Lachnomyrmex and the African genus Cyphoidris. Bolton (1981) described the African genus Baracidris, and considered it very close to Adelomyrmex. Fernandez and MacKay (2003) described new species. Fernandez (2003) provided a thorough species-level revision of Adelomyrmex and Baracidris and provided evidence that the two genera form a monophyletic group.
Costa Rican Abundance and Distribution
In Costa Rica, the genus is most abundant in wet forest sites above 500m, and is increasingly rare at lower elevations. Workers occur in nearly every litter sample from the Monteverde area, but are relatively infrequently encountered at La Selva. The caveat should be added that this abundance pattern is that revealed by litter sifting. The variation in abundance could be due to real differences in density, or to differences in nesting or foraging behavior that affect catchability.
The Costa Rican species are of relatively uniform habitus, and vary in details of surface sculpture, size, color, and pilosity. They occur in communities of sympatric species, and show patterns of parapatric distributions along elevational gradients. silvestrii is a broad habitat generalist, occurring in wet forest habitats from sea level to cloud forest. myops and longinoi are moderately common in lowland sites, below 500m. foveolatus and microps are very rare species (or at least difficult to collect). They are known from only 3 and 1 specimen, respectively, from La Selva, where ant sampling intensity has been particularly high. tristani occurs at La Selva but is extremely rare. It increases in abundance at higher elevations, and becomes a relatively common element of the leaf litter in cloud forest. At these higher elevations laevigatus appears, and is a relatively widespread mid-elevation montane species. brevispinosus occurs in a slightly higher elevational zone than laevigatus. It is known from the dripping, fog-bathed cloudforest on the narrow ridge crest above Monteverde, and on the slopes of Volcan Barva. In Monteverde, it appears parapatric with laevigatus. brevispinosus inhabits the wet cloud forest of the ridge crest; laevigatus inhabits the surrounding wet to moist forest in the Monteverde community and in the Penas Blancas Valley.
Nesting Habits and Foraging
In my brief examination of this genus, I can say nothing of nesting habits or foraging behavior, because I have never seen them other than as loose workers and queens in the ethanol of Winkler or Berlese samples. Mann (1922) found nests under stones in Honduras.
Distinctive Clypeo-Mandibular Morphology
All the Adelomyrmex specimens I have examined have exhibited a characteristic morphology of clypeus and mandibles, the adaptive significance of which begs investigation. Smith (1947) also observed this morphology. The clypeus has two portions, a dorsal surface (assuming a prognathous head) with an anteromedian bidentate projection, and an anterior surface perpendicular to the dorsal surface. The anterior surface has a broad, concave median portion and concave lateral portions, divided by projecting longitudinal ridges. The ridge forms a distinct tooth, and the lateral concavity a distinct notch. The basal margin of the mandible has a tooth that fits in the notch (Figure 1). It appears to be either a locking mechanism, to keep the mandibles from sliding laterally, or a gripping mechanism, for tightly clamping soft-bodied prey. In some species there is also a pronounced median tooth on the hypostomal margin (Figure 2). Perhaps the dentate anteromedian projection, the lateral interlocking tooth and notch complex, and the pronounced hypostomal tooth function together to hold earthworms or other soft-bodied, slippery or tapered prey, much like the teeth on a pair of pliers.
In the same Costa Rican cloud forests where Adelomymrmex are common, members of the Stenamma schmidti complex also have mandibles with a tooth and notch on the basal margin of the mandibles, and a corresponding tooth and notch on the anterolateral clypeal margin. The structures are remarkably similar between the two genera, but are presumably a result of convergence and not shared ancestry. It will be exciting to discover whether or not Adelomyrmex and the Stenamma schmidti complex share foraging methods or prey preferences.
Bolton, B. 1981. A revision of six minor genera of Myrmicinae (Hymenoptera: Formicidae) in the Ethiopian zoogeographical region. Bull. Br. Mus. (Nat. Hist.) Entomol. 43:245-307.
Borgmeier, T. 1937. Formigas novas ou pouco conhecidas da America do Sul e Central, principalmente do Brasil (Hym. Formicidae). Arch. Inst. Biol. Veg. (Rio J.) 3:217-255.
Emery, C. 1897. Formicidarum species novae vel minus cognitae in collectione Musaei Nationalis Hungarici quas in Nova-Guinea, colonia germanica, collegit L. Biro. Termeszetr. Fuz. 20:571-599.
Fernández C., F. 2003. Revision of the myrmicine ants of the Adelomyrmex genus-group (Hymenoptera: Formicidae). Zootaxa 361: 1–52.
Fernández C., F., MacKay, W.P. 2003. The myrmicine ants of the Adelomyrmex laevigatus species complex (Hymenoptera: Formicidae). Sociobiology 41:593–604.
Kempf, W. W. 1972. Catalogo abreviado das formigas da regiao Neotropical. Stud. Entomol. 15:3-344.
Kugler, C. 1978. A comparative study of the myrmicine sting apparatus (Hymenoptera, Formicidae). Stud. Entomol. 20:413-548.
Mann, W. M. 1921. The ants of the Fiji Islands. Bull. Mus. Comp. Zool. 64:401-499.
Mann, W. M. 1922. Ants from Honduras and Guatemala. Proc. U. S. Natl. Mus. 61:1-54.
Menozzi, C. 1931. Contribuzione alla conoscenza del "microgenton" di Costa Rica. III. Hymenoptera - Formicidae. Boll. Lab. Zool. Gen. Agrar. R. Sc. Super. Agric. 25:259-274.
Smith, M. R. 1947 ("1946"). Ants of the genus Apsychomyrmex Wheeler (Hymenoptera: Formicidae). Rev. Entomol. (Rio J.) 17:468-473.
Wheeler, W. M. 1910. Three new genera of myrmicine ants from tropical America. Bull. Am. Mus. Nat. Hist. 28:259-265.
Wilson, E. O., Taylor, R. W. 1967. The ants of Polynesia (Hymenoptera: Formicidae). Pac. Insects Monogr. 14:1-109.
John T. Longino, The Evergreen State College, Olympia WA 98505 USA.email@example.com
Go to Ants of Costa Rica Homepage | <urn:uuid:0e625256-b3d1-4053-bc55-12b5a0b88424> | CC-MAIN-2015-35 | http://academic.evergreen.edu/projects/ants/genera/adelomyrmex/Adelomyrmex.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00166-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.832128 | 2,057 | 2.53125 | 3 |
Valerianaceae

Charles D. Bell
Originally described by Linnaeus (1753); Linnaeus (1754), Valerianaceae is traditionally subdivided into three tribes (Graebner, 1906); 1) Patrinieae, 2) Triplostegieae, and 3) Valerianeae. Many authors (see Weberling (1970), Cronquist (1988), Backlund (1996a), Brummitt (1992), and Mabberley (1997)) recognize 14 genera; two in the tribe Patrinieae (Patrinia and Nardostachys), a single genus (Triplostegia) in the tribe Triplostegieae, and 11 genera assigned to five subtribes in the Valerianeae. Six genera besides Valeriana have been recognized in South America (Aretiastrum, Astrephia, Belonanthus, Phuodendron, Phyllactis, and Stangea). Recent treatments of the South American taxa have argued for placing these species in Valeriana, reducing the number of recognized genera within Valerianaceae to eight (Eriksen, 1989 a; Borsini, 1944; Larsen, 1986).
Valerianaceae has long been thought to represent a natural group of ca. 350 species distributed throughout much of the world, with the exception of Australia and New Zealand. The group is characterized by a) sympetalous, asymmetric flowers, b) inferior, three-carpellate ovaries, c) one fertile carpel with a single anatropous ovule, d) an achene fruit type, and e) lack of endosperm in the ripe seed (with the exception of Triplostegia). The presence of iridoids of the Valepotriate type in many of the species, including Triplostegia (Backlund and Moritz, 1998), is also characteristic of the group. Although the group appears to have a center of origin in Asia, the majority of the species occur in the Andes of South America.
Within Valerianaceae there is a degree of differentiation/specialization in flower and fruit morphology. The most noticeable difference in floral morphologies across Valerianaceae is in the number of stamens, which varies from four to one [five stamens have also been reported in Patrinia (Eriksen, 1989 a)]. The trend in the group is toward the reduction in the number of stamens. Donoghue et al. (in press b) inferred from a large chloroplast data set, combined with ITS sequences, that there was an initial reduction in stamen number from the ancestral condition of four to three, then two additional, independent reductions in number to two (in Fedia) and to a single stamen (in species of Centranthus; see Donoghue et al., 2003). The calyx of species of Valerianaceae is either persistent (leafy, as in Nardostachys; reduced to small teeth, as in Fedia and Valerianella; or pappus-like, as in Centranthus and species of Valeriana) or completely lacking. Eriksen (1989 a) hypothesized that the complete reduction of the calyx has occurred independently several times within Valerianaceae (e.g., within the Latin American species of Valeriana). Likewise, the degree of reduction of the two abaxial sterile locules is quite heterogeneous among species of Valerianaceae; from highly reduced to extremely inflated (as seen in some species of Valerianella and Valeriana). These modifications, of both the calyx and the sterile locules, are most likely correlated with means of dispersal.
The relationship of Valerianaceae within Dipsacales has been investigated quite extensively using morphological data (Judd et al., 1994; Backlund, 1996b). These phylogenetic analyses place Valerianaceae as sister to Dipsacaceae. Both of these herbaceous groups are united by having distinctive pollen morphology and chlorophyllous embryos, a trait unique within Dipsacales (Backlund, 1996b). The link between Valerianaceae and Dipsacaceae is further supported by simple vessel perforations, modification of calyx lobes, and reduction in the amount of endosperm (Judd et al., 1994; Backlund, 1996b; Manchester and Donoghue, 1995). Various molecular data sets, including sequence data from the chloroplast genes rbcL (Donoghue et al., 1992) and ndhF (Pyck et al., 1999; Pyck et al., 2002), restriction site data (Downie and Palmer, 1992), as well as combined morphological and molecular data (Backlund, 1996b) have now been analyzed. These molecular phylogenetic studies also support the close relationship between Valerianaceae and Dipsacaceae, with this Valerianaceae-Dipsacaceae clade being sister to the Morinaceae, another herbaceous lineage [the Valerina clade sensu Donoghue et al. (2001)]. These data do not tend to support the grouping of Triplostegia with Valerianaceae, but instead place it in a basal position in the clade consisting of species of Dipsacaceae.
More recently, phylogenetic analyses based on chloroplast DNA (trnL-F intergenic spacer (IGS), trnL intron, ndhF, matK) (Zhang et al., 2003; Bell et al., 2001) have provided additional insights into the relationships within Valerianaceae. All these analyses (with the exception of matK) place Patrinia at the base of Valerianaceae (with very strong bootstrap support, from both maximum parsimony and maximum likelihood analyses), followed by Nardostachys. The chloroplast data also support a clade consisting of Valeriana, Centranthus, and Plectritis that is sister to a clade containing Fedia and Valerianella. These data also consistently find Triplostegia to be more closely related to species of Dipsacaceae than to Valerianaceae. Pyck et al. (2002) investigated the phylogeny of the tribe Patrinieae (Graebner, 1906), which consists of Patrinia and Nardostachys, using ndhF sequence data. The authors' findings were consistent with other molecular data sets (Donoghue et al., 2001b; Bell et al., 2001). Their data suggested that the Patrinieae is not monophyletic and that Patrinia is sister to the rest of Valerianaceae (with the exception of Triplostegia, which was not included in their study), followed by Nardostachys.
An additional study looked at sequence data from the atpB-rbcL intergenic spacer region (Raymundez et al., 2002). These authors found results that were highly consistent with all the other studies. They did not, however, sequence any species of Nardostachys, Plectritis, Triplostegia, or species of Dipsacaceae. This study did, however, find support for the grouping of Fedia and Valerianella as a clade. In addition, their data supported a clade consisting of species of Valeriana and Centranthus.
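The genus-level backbone recovered by these chloroplast studies (Patrinia sister to the rest of the family, then Nardostachys, then a Valeriana + Centranthus + Plectritis clade sister to Fedia + Valerianella) can be summarized compactly in Newick notation. A minimal sketch, with branch lengths and support values omitted and Triplostegia excluded because the molecular data place it with Dipsacaceae:

```python
import re

# Newick encoding of the genus-level backbone described in the text:
# Patrinia is sister to the remaining genera, followed by Nardostachys;
# (Valeriana, Centranthus, Plectritis) is sister to (Fedia, Valerianella).
NEWICK = ("(Patrinia,(Nardostachys,"
          "((Valeriana,Centranthus,Plectritis),"
          "(Fedia,Valerianella))));")

def tip_labels(newick: str) -> list:
    """Extract the taxon (tip) labels from a simple Newick string."""
    return re.findall(r"[A-Za-z_]+", newick)

print(tip_labels(NEWICK))
# → ['Patrinia', 'Nardostachys', 'Valeriana', 'Centranthus',
#    'Plectritis', 'Fedia', 'Valerianella']
```

A tree-reading library such as DendroPy or Bio.Phylo could parse the same string for plotting or for comparison against alternative topologies.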
Backlund, A. A. (1996): Phylogeny of the Dipsacales. Ph.D. thesis, Uppsala University, Sweden.
Backlund, A. A. & Donoghue, M. J. (1996): Morphology and phylogeny of the order Dipsacales. In Phylogeny of the Dipsacales, A. A. Backlund, Doctoral Dissertation. Uppsala: Department of Systematic Botany, Uppsala University, Sweden.
Backlund, A. & Moritz, T. (1998): Phylogenetic implications of an expanded valepotriate distribution in the Valerianaceae. Biochem. Syst. and Ecol. 26:309-335.
Bell, C. D. (2004a): Preliminary phylogeny of Valerianaceae (Dipsacales) inferred from nuclear and chloroplast DNA sequence data. Mol. Phylogen. Evol. 31:340-350.
Bell, C. D. (2004b): Phylogeny and biogeography of Valerina (Dipsacales). Ph.D. thesis, Yale University, USA.
Bell, C. D. & Donoghue, M. J. (2003): Phylogeny and biogeography of Morinaceae (Dipsacales) based on nuclear and chloroplast DNA sequences. Org. Divers. Evol. 3: 227-237.
Bell, C. D. & Donoghue, M. J. (in press): Dating the Dipsacales: comparing models, genes, and evolutionary implications. Am. J. Bot.
Bell, C. D., Edwards, E. J., Kim, S-T., & Donoghue, M. J. (2001): Dipsacales phylogeny based on chloroplast DNA sequences. Harv. Pap. Bot. 6:481-499.
Borsini, O. E. (1944): Genera and species plantarum argentinarum, chapter Valerianaceae, pp. 275-372. Descole, Buenos Aires. Argentina.
Cronquist, A. (1988): The Evolution and Classification of Flowering Plants. New York Botanical Gardens, Bronx, NY, USA.
Donoghue, M.J., Bell, C. D. & Winkworth, R. C. (2003): The evolution of reproductive characters in Dipsacales. Int. J. Plant Sci. 164: S453-464.
Donoghue, M. J., Eriksson, T., Reeves, P. A. & Olmstead, R. G. (2001b): Phylogeny and phylogenetic taxonomy of Dipsacales, with special reference to Sinadoxa and Tetradoxa (Adoxaceae). Harv. Pap. Bot. 6:459-479.
Donoghue, M. J., Olmstead, R. G, Smith, J. F. & Palmer, J. D. (1992): Phylogenetic relationships of Dipsacales based on rbcL sequences. Ann. Missouri Bot. Gard. 79:333-345.
Eriksen, B. (1989): Note on the generic and infrageneric delimitation in the Valerianaceae. Nord. J. Bot. 9:179-187.
Graebner, P. (1906): Die Gattungen der naturlichen Familie der Valerianaceae. Bot. Jahrb. Syst. 37:464-480.
Hidalgo, O., Garnatje, T., Susanna, A. & Mathez, J. (2004): Phylogeny of Valerianaceae based on matK and ITS markers, with reference to matK individual polymorphisms. Ann. Bot. 93: 283-293.
Larsen, B. B. (1986): A taxonomic revision of Phyllactis and Valeriana sect Bracteata (Valerianaceae). Nord. J. Bot. 6:427-446.
Meyer, F. G. (1951): Valeriana in North America and the West Indies (Valerianaceae). Ann. Missouri Bot. Gard. 38:377-503.
Ozaki, K. (1980): Late Miocene Tatsumitoge flora of Tottori Prefecture, southwest Honshu, Japan. Science Reports of the Yokohama National University Sec. 2 2:40-42.
Pyck, N., Roels, P & Smets, E. (1999): Tribal relationships in Caprifoliaceae: evidence from a cladistic analysis using ndhF sequences. Syst. Geogr. Pl. 69:145-159.
Pyck, N. & Smets, E. (2000): A search for the position of the seven-son flower (Heptacodium, Dipsacales): combining molecular and morphological evidence. Plant Syst. Evol. 225:185-199.
Pyck, N., van Lysebetten, A., Stessens, J. & Smets, E. (2002): The phylogeny of Patrinieae sensu Graebner (Valerianaceae) revisited: additional evidence from ndhF sequence data. Plant Syst. Evol. 233:29-46.
Raymundez, M. B., Mathez, J., Xena de Enrech, N. & Duduisson, J.-Y. (2002): Coding of insertion-deletion events of the chloroplast intergene atpB-rbcL for the phylogeney of the Valerianeae tribe (Valerianaceae). C. R. Biologies pp. 131-139.
Weberling, F. (1970): in: Hegig, Illustrierte Flora von Mitteleuropa, Familie Valerianaceae, pp. 97-176. Carl Hansen, Munchen, Germany.
Zhang, W.-H., Chen, Z.-D. , Li, J.-H., Chen, Y.-C. & Tang, H-B. (2003): Phylogeny of the Dipsacales s.l. based on chloroplast trnL and ndhF sequences. Mol. Phylogen. Evol. 26:176-189.
Charles D. Bell
University of New Orleans, New Orleans, Louisiana, USA
Page copyright © 2005 Charles D. Bell
All Rights Reserved.
- First online 15 July 2004
Citing this page:
Bell, Charles D. 2004. Valerianaceae. Version 15 July 2004 (under construction). http://tolweb.org/Valerianaceae/20797/2004.07.15 in The Tree of Life Web Project, http://tolweb.org/ | <urn:uuid:d4e48a04-eb67-48ce-9284-b1c66de5ddf5> | CC-MAIN-2015-35 | http://tolweb.org/Valerianaceae/20797/2004.07.15 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00103-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.784767 | 3,173 | 3.46875 | 3 |
(Note: this is a preprint of an article published with some editorial changes in 1998 in Calow, P. (ed.) Encyclopaedia of Ecology and Environmental Management. Blackwell Press, pp. 709-711.)

species concepts

The concepts, or ideas, that underlie what biologists mean by the term species. Evolution produces diversity at many levels, from genes, races, species, to genera and higher taxa. There are many problems in classifying this diversity, especially with regard to the level in the evolutionary hierarchy in which to place a particular taxon. The species has often been held in special regard as the only "real" taxon. However, there are an extraordinary number of different concepts, ideas and definitions of organic species; only a few can be summarized here.
The ancient philosophical terms genus and species were used in logic to classify objects and ideas as well as living organisms. The species was a collection of objects that had a common underlying "essence"; in fact the Greek terms for species and essence were the same. The genus was a group of such species, with its own, broader essence. For biological organisms, this essentialism fitted well with the biblical story of creation, and Linnaeus adopted this essentialist view of classification in his ambitious project to catalogue every organism known at the time (18th century). Linnaeus pioneered the use of binomial nomenclature, Latin scientific names which serve as a shorthand for descriptions of species. As an example, the scientific name of the yellow-shafted flicker (a woodpecker) is Colaptes auratus, where Colaptes is the generic and auratus the specific epithet.
The rise of evolutionary ideas meant that the old creationist essentialism was no longer tenable. Charles Darwin was interested in showing that species evolved, but to do so he obviously had to develop a species concept which depended neither on creation nor on evolution. If they evolved, varieties and species could be continuous in space and time. Thus Darwin wrote: "hereafter, we shall be compelled to acknowledge that the only distinction between species and well-marked varieties is, that the latter are known, or believed, to be connected at the present day by intermediate gradations, whereas species were formerly thus connected." To Darwin, the origin of species became the origin of the morphological gaps between populations. This species concept has been called the morphological species concept, although it emphasizes the clustering of members of the same species in morphological space, rather than the fact that morphological characters were used in its implementation.
As the geographic representation of well-organized museum collections increased, a major revolution in this Darwinian species concept began to take place (1890-1920). It was shown that some apparently good species of birds and butterflies blended together in areas of overlap or hybrid zones. Related forms that hybridize and replace each other geographically became downgraded to subspecies and were referred to by a novel extension of the Linnaean system, which now could consist of a trinomial: genus-species-subspecies. For example, the red-shafted flicker (formerly Colaptes cafer) is now usually referred to as Colaptes auratus cafer, or the red-shafted race of the common flicker. This species also includes the yellow-shafted flicker, C. auratus auratus. Species with more than one subspecies became known as polytypic species. At the same time, other, more trivial forms or varieties within local populations became excluded from the formal Linnaean taxonomy.
In the 1930s and 1940s, evolutionists became dissatisfied with the Linnaean/Darwinian character-based definitions of species; they felt that species designations could reflect a real underlying biological phenomenon rather than remaining merely as categories for taxonomic convenience, and they wished to formalize this reality by specifying their idea of the important biological process. Drawing on ideas from Buffon and other early biologists, EB Poulton, T Dobzhansky and E Mayr proposed what is now known as the biological species concept, in which species are thought of as populations which do not interbreed, and are therefore reproductively isolated from other species. These ideas were developed along with (though do not necessarily require) the idea that species were important units of evolution, and that isolating mechanisms were protective devices to maintain the genetic integrity of the species.
The biological species concept seems to have been largely accepted by zoologists for about 30 years. On the other hand, botanists never fully accepted the idea because plants often had high rates of hybridization, local variability, and environmentally-induced plasticity. In recent years, however, any semblance of agreement about species concepts, even among zoologists, has been shattered. An explosion of new ideas has occurred. One movement, spearheaded by PR Ehrlich and PH Raven, has claimed that populations, rather than species, are the important and real biological units of evolution. Others claim that biological processes do underlie species, but each has supported a different type of process as the important one. Examples include L Van Valen's ecological species concept, in which species are defined by their ecological niches, and HEH Paterson's recognition concept of species, in which species are defined by sexual signalling or specific mate recognition systems within species (see also isolating mechanisms). The cohesion concept of species was proposed by A Templeton to combine reproductive isolation, ecological selection, and reproductive compatibility within a single species concept. The major advantage of this idea was that both hybridizing and asexual species, which could not be classified under the biological species concept, could be included.
A completely different approach to species concepts has been to include the idea of evolutionary history as opposed to merely the maintenance of current species. The evolutionary species concept, in which a species is a lineage evolving separately from others, was proposed by GG Simpson to allow fossils to be classified as species as well as living organisms. This idea has been formalized recently in various types of phylogenetic species concept, in which the individuals that belong to a species contain all the descendants of a single population of ancestors, that is, they are monophyletic. This group of ideas was developed by J Cracraft and others specifically in response to an increase in the use of cladistics in classification. In cladistics, only apomorphies (uniquely derived traits) are used to unite groups; reproductive compatibility and free hybridization supposedly cannot be used in species definitions because they are primitive or plesiomorphic traits. Unfortunately, hybridization may also allow genes to pass from one taxon to another, and so different genes within groups of organisms may in fact have different phylogenies (phylogenies of single genes are called genealogies). To get around this problem of conflicting data, DL Baum and KL Shaw have suggested a variant phylogenetic species concept based on the consensus of many estimated genealogies of different genes; this is called the genealogical species concept. Finally, A Templeton has recently added phylogenetic and genealogical considerations, as well as ecology and reproductive isolation, to his cohesion concept of species.
Fig. 1. Genotypic cluster diagnosis of species and races in areas of overlap. The frequency of genotypes in two hybrid zones involving Heliconius erato (pure forms are illustrated in Fig. 2). (a) The numbers of individuals with different proportions of colour pattern genes from H. erato emma in the centre of a hybrid zone between H. e. favorinus and H. e. emma near Pongo de Cainarache, San Martin, Peru. (b) The numbers of individuals from one site in a hybrid zone between H. erato (E) and H. himera (H) plotted against the genotypic class to which they belong; pure, first generation hybrids (F1) or backcrosses (F1xE, F1xH). In (a), the forms are considered members of the same species because we can distinguish only one peak or cluster in the genotype distribution. In (b), two genotypic clusters are distinguishable, which are considered separate species.
Most recent species concepts (Ehrlich & Raven's population concept is an exception) attempt to identify the underlying biological "reality" of species, and are therefore, to some extent, modern examples of essentialist thought. From Mayr onwards, there has been a deliberate attempt to exclude any consideration of the usefulness of the term species from the discussions about species concepts. This may be a mistake. The debate can be resolved if we explicitly avoid using evolutionary history or the biological means by which the integrity of a species is maintained, and instead strive for a definition of species which is useful in taxonomy, evolutionary studies, and conservation. Given that we now have abundant genetic data, we could use a genetic version of Darwin's morphological cluster concept, called by J Mallet the genotypic cluster definition. Genotypic clusters can be identified by the presence of gaps between groups of multilocus genotypes within a local area (Fig. 1), in the same way that Darwin's morphological cluster species are identified by morphological gaps; indeed morphology is often a good clue to genotype. The genotypic cluster definition reverts back to the taxonomic practice inherent in the polytypic species (in which races are judged conspecific by means of abundant intermediates in hybrid zones). Genotypic cluster species are indeed very similar to the practical taxonomic application of the biological species concept. It is of course likely that genotypic cluster species will be maintained as distinct entities from one another because of reproductive or ecological traits, and that they will achieve evolutionary, phylogenetic, and genealogical separation through time. But genotypic clusters may violate one or more of these biological or evolutionary principles and yet remain distinct (Fig. 1).
The genotypic cluster definition is therefore related to the cohesion concept in that it allows for multiple means of cluster maintenance and evolution, but, instead of arbitrating between every possible process of cohesion, it merely examines the genetic results of the combination of processes. Species as genotypic clusters are easier to use in taxonomy, in conservation, and in the investigation of speciation (the evolution of genotypic gaps that do not dissolve in sympatry) than species based on idealized evolutionary or biological concepts, because studies of biological and evolutionary processes are not required before the work starts. See the example in Fig. 2.
Fig. 2. Species and subspecies of mimetic butterflies. Warningly coloured Heliconius butterflies, showing local Mullerian mimicry and geographic differences between subspecies and species. Heliconius erato is shown in the left-hand column, H. melpomene on the right. Each row is an example of a local mimicry ring of these species from a different area of Ecuador or northern Peru. The different geographic forms within each column are considered subspecies, because there are abundant intermediates where they meet in hybrid zones (see Fig. 1a). The only exception is H. himera (fourth row), a form closely related to H. erato, and which it replaces in dry valleys in the border region of southern Ecuador and northern Peru where H. melpomene is generally absent. Heliconius himera is considered a separate species because hybrids between himera and erato, known from contact zones with three separate subspecies of H. erato, are much rarer than pure forms (see Fig. 1b).
The profusion of species concepts in the biological literature is currently a major hindrance to the study of biological diversity, and its use in conservation and evolutionary investigations. And yet, biologists no longer argue about "cell concepts" or "gene concepts", presumably because the concepts of cells as units of tissue, and DNA as the genetic material, are now broadly understood. We continue to disagree about species concepts because we do not yet understand species very well; at the same time, we need to make decisions because species themselves are going extinct at an accelerating rate. These arguments may exist because we are asking the wrong questions (perhaps species do not really exist?), but more probably because we still have much to learn about species. In any case, it is to be hoped that a generally applicable and useful idea of species will soon put an end to the current chaos. Darwin's concept may still be the best and most useful solution.
J.L.B.M. (J L B Mallet)
Berlocher, S. & Howard, D. (eds) (1997) Endless Forms: Species and Speciation. Oxford University Press, New York.
Mallet J. (1995) A species definition for the Modern Synthesis. Trends Ecol. Evol. 10, 294-299.
Mayr E. (1982) The Growth of Biological Thought. Diversity, Evolution, and Inheritance. Belknap, Cambridge, Mass. | <urn:uuid:796ae8f5-7fef-49ba-8874-66a743a681df> | CC-MAIN-2015-35 | http://www.ucl.ac.uk/taxome/jim/Sp/speconc.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645281325.84/warc/CC-MAIN-20150827031441-00039-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.950223 | 2,657 | 3.796875 | 4 |
In India there is no greater event in a family than a wedding, dramatically evoking every possible social obligation, kinship bond, traditional value, impassioned sentiment, and economic resource. In the arranging and conducting of weddings, the complex permutations of Indian social systems best display themselves.
Marriage is deemed essential for virtually everyone in India. For the individual, marriage is the great watershed in life, marking the transition to adulthood. Generally, this transition, like everything else in India, depends little upon individual volition but instead occurs as a result of the efforts of many people. Even as one is born into a particular family without the exercise of any personal choice, so is one given a spouse without any personal preference involved. Arranging a marriage is a critical responsibility for parents and other relatives of both bride and groom. Marriage alliances entail some redistribution of wealth as well as building and restructuring social realignments, and, of course, result in the biological reproduction of families.
Some parents begin marriage arrangements on the birth of a child, but most wait until later. In the past, the age of marriage was quite young, and in a few small groups, especially in Rajasthan, children under the age of five are still united in marriage. In rural communities, prepuberty marriage for girls traditionally was the rule. In the late twentieth century, the age of marriage is rising in villages, almost to the levels that obtain in cities. Legislation mandating minimum marriage ages has been passed in various forms over the past decades, but such laws have little effect on actual marriage practices.
Essentially, India is divided into two large regions with regard to Hindu kinship and marriage practices, the north and the south. Additionally, various ethnic and tribal groups of the central, mountainous north, and eastern regions follow a variety of other practices. These variations have been extensively described and analyzed by anthropologists, especially Irawati Karve, David G. Mandelbaum, and Clarence Maloney.
Broadly, in the Indo-Aryan-speaking north, a family seeks marriage alliances with people to whom it is not already linked by ties of blood. Marriage arrangements often involve looking far afield. In the Dravidian-speaking south, a family seeks to strengthen existing kin ties through marriage, preferably with blood relatives. Kinship terminology reflects this basic pattern. In the north, every kinship term clearly indicates whether the person referred to is a blood relation or an affinal relation; all blood relatives are forbidden as marriage mates to a person or a person's children. In the south, there is no clear-cut distinction between the family of birth and the family of marriage. Because marriage in the south commonly involves a continuing exchange of daughters among a few families, for the married couple all relatives are ultimately blood kin. Dravidian terminology stresses the principle of relative age: all relatives are arranged according to whether they are older or younger than each other without reference to generation.
On the Indo-Gangetic Plain, marriages are contracted outside the village, sometimes even outside of large groups of villages, with members of the same caste beyond any traceable consanguineal ties. In much of the area, daughters should not be given into villages where daughters of the family or even of the natal village have previously been given. In most of the region, brother-sister exchange marriages (marriages linking a brother and sister of one household with the sister and brother of another) are shunned. The entire emphasis is on casting the marriage net ever-wider, creating new alliances. The residents of a single village may have in-laws in hundreds of other villages.
In most of North India, the Hindu bride goes to live with strangers in a home she has never visited. There she is sequestered and veiled, an outsider who must learn to conform to new ways. Her natal family is often geographically distant, and her ties with her consanguineal kin undergo attenuation to varying degrees.
In central India, the basic North Indian pattern prevails, with some modifications. For example, in Madhya Pradesh, village exogamy is preferred, but marriages within a village are not uncommon. Marriages between caste-fellows in neighboring villages are frequent. Brother-sister exchange marriages are sometimes arranged, and daughters are often given in marriage to lineages where other daughters of their lineage or village have previously been wed.
In South India, in sharp contrast, marriages are preferred between cousins (especially cross-cousins, that is, the children of a brother and sister) and even between uncles and nieces (especially a man and his elder sister's daughter). The principle involved is that of return--the family that gives a daughter expects one in return, if not now, then in the next generation. The effect of such marriages is to bind people together in relatively small, tight-knit kin groups. A bride moves to her in-laws' home--the home of her grandmother or aunt--and is often comfortable among these familiar faces. Her husband may well be the cousin she has always known she would marry.
Many South Indian marriages are contracted outside of such close kin groups when no suitable mates exist among close relatives, or when other options appear more advantageous. Some sophisticated South Indians, for example, consider cousin marriage and uncle-niece marriage outmoded.
Rules for the remarriage of widows differ from one group to another. Generally, lower-ranking groups allow widow remarriage, particularly if the woman is relatively young, but the highest-ranking castes discourage or forbid such remarriage. The most strict adherents to the nonremarriage of widows are Brahmans. Almost all groups allow widowers to remarry. Many groups encourage a widower to marry his deceased wife's younger sister (but never her older sister).
Among Muslims of both the north and the south, marriage between cousins is encouraged, both cross-cousins (the children of a brother and sister) and parallel cousins (the children of two same-sex siblings). In the north, such cousins grow up calling each other "brother" and "sister", yet they may marry. Even when cousin marriage does not occur, spouses can often trace between them other kinship linkages.
Some tribal people of central India practice an interesting permutation of the southern pattern. Among the Murias of Bastar in southeastern Madhya Pradesh, as described by anthropologist Verrier Elwin, teenagers live together in a dormitory (ghotul), sharing life and love with one another for several blissful years. Ultimately, their parents arrange their marriages, usually with cross-cousins, and the delights of teenage romance are replaced with the serious responsibilities of adulthood. In his survey of some 2,000 marriages, Elwin found only seventy-seven cases of ghotul partners eloping together and very few cases of divorce. Among the Muria and Gond tribal groups, cross-cousin marriage is called "bringing back the milk," alluding to the gift of a girl in one generation being returned by the gift of a girl in the next.
Finding the perfect partner for one's child can be a challenging task. People use their social networks to locate potential brides and grooms of appropriate social and economic status. Increasingly, urban dwellers use classified matrimonial advertisements in newspapers. The advertisements usually announce religion, caste, and educational qualifications, stress female beauty and male (and in the contemporary era, sometimes female) earning capacity, and may hint at dowry size.
In rural areas, matches between strangers are usually arranged without the couple meeting each other. Rather, parents and other relatives come to an agreement on behalf of the couple. In cities, however, especially among the educated classes, photographs are exchanged, and sometimes the couple are allowed to meet under heavily chaperoned circumstances, such as going out for tea with a group of people or meeting in the parlor of the girl's home, with her relatives standing by. Young professional men and their families may receive inquiries and photographs from representatives of several girls' families. They may send their relatives to meet the most promising candidates and then go on tour themselves to meet the young women and make a final choice. In the early 1990s, increasing numbers of marriages arranged in this way link brides and grooms from India with spouses of Indian parentage resident in Europe, North America, and the Middle East.
Almost all Indian children are raised with the expectation that their parents will arrange their marriages, but an increasing number of young people, especially among the college-educated, are finding their own spouses. So-called love marriages are deemed a slightly scandalous alternative to properly arranged marriages. Some young people convince their parents to "arrange" their marriages to people with whom they have fallen in love. This process has long been possible for Indians from the south and for Muslims who want to marry a particular cousin of the appropriate marriageable category. In the upper classes, these semi-arranged love marriages increasingly occur between young people who are from castes of slightly different rank but who are educationally or professionally equal. If there are vast differences to overcome, such as is the case with love marriages between Hindus and Muslims or between Hindus of very different caste status, parents are usually much less agreeable, and serious family disruptions can result.
In much of India, especially in the north, a marriage establishes a structural opposition between the kin groups of the bride and groom--bride-givers and bride-takers. Within this relationship, bride-givers are considered inferior to bride-takers and are forever expected to give gifts to the bride-takers. The one-way flow of gifts begins at engagement and continues for a generation or two. The most dramatic aspect of this asymmetrical relationship is the giving of dowry.
In many communities throughout India, a dowry has traditionally been given by a bride's kin at the time of her marriage. In ancient times, the dowry was considered a woman's wealth--property due a beloved daughter who had no claim on her natal family's real estate--and typically included portable valuables such as jewelry and household goods that a bride could control throughout her life. However, over time, the larger proportion of the dowry has come to consist of goods and cash payments that go straight into the hands of the groom's family. In the late twentieth century, throughout much of India, dowry payments have escalated, and a groom's parents sometimes insist on compensation for their son's higher education and even for his future earnings, to which the bride will presumably have access. Some of the dowries demanded are quite oppressive, amounting to several years' salary in cash as well as items such as motorcycles, air conditioners, and fancy cars. Among some lower-status groups, large dowries are currently replacing traditional bride-price payments. Even among Muslims, previously not given to demanding large dowries, reports of exorbitant dowries are increasing.
The dowry is becoming an increasingly onerous burden for the bride's family. Antidowry laws exist but are largely ignored, and a bride's treatment in her marital home is often affected by the value of her dowry. Increasingly frequent are horrible incidents, particularly in urban areas, in which a groom's family makes excessive demands on the bride's family--even after marriage--and, when the demands are not met, murders the bride, typically by setting her clothes on fire in a cooking "accident." The groom is then free to remarry and collect another sumptuous dowry. The male and female in-laws implicated in these murders have seldom been punished.
Such dowry deaths have been the subject of numerous media reports in India and other countries and have mobilized feminist groups to action. In some of the worst areas, such as the National Capital Territory of Delhi, where hundreds of such deaths are reported annually and the numbers are increasing yearly, the law now requires that all suspicious deaths of new brides be investigated. Official government figures report 1,786 registered dowry deaths nationwide in 1987; there is also an estimate of some 5,000 dowry deaths in 1991. Women's groups sometimes picket the homes of the in-laws of burned brides. Some analysts have related the growth of this phenomenon to the growth of consumerism in Indian society.
Fears of impoverishing their parents have led some urban middle-class young women, married and unmarried, to commit suicide. However, through the giving of large dowries, the newly wealthy are often able to marry their treasured daughters up the status hierarchy so reified in Indian society.
After marriage arrangements are completed, a rich panoply of wedding rituals begins. Each religious group, region, and caste has a slightly different set of rites. Generally, all weddings involve as many kin and associates of the bride and groom as possible. The bride's family usually hosts most of the ceremonies and pays for all the arrangements for large numbers of guests for several days, including accommodation, feasting, decorations, and gifts for the groom's party. These arrangements are often extremely elaborate and expensive and are intended to enhance the status of the bride's family. The groom's party usually hires a band and brings fine gifts for the bride, such as jewelry and clothing, but these are typically far outweighed in value by the presents received from the bride's side.
After the bride and groom are united in sacred rites attended by colorful ceremony, the new bride may be carried away to her in-laws' home, or, if she is very young, she may remain with her parents until they deem her old enough to depart. A prepubescent bride usually stays in her natal home until puberty, after which a separate consummation ceremony is held to mark her departure for her conjugal home and married life. The poignancy of the bride's weeping departure for her new home is prominent in personal memory, folklore, literature, song, and drama throughout India.
Source: U.S. Library of Congress | <urn:uuid:13329fac-0e73-40c0-9304-4d87e9505d38> | CC-MAIN-2015-35 | http://countrystudies.us/india/86.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065828.38/warc/CC-MAIN-20150827025425-00281-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.965504 | 2,884 | 3.375 | 3 |
“Was someone asking to see the soul?”
A terrible fact of our humanity is our isolation from one another. Few of us have the ability Walt Whitman seemed to possess of being able to look into the soul of another person. That we cannot know one another’s thoughts is probably a good thing, but that we cannot share one another’s feelings, or even derive another person’s humanity from our own, is tragic. The German philosopher Arthur Schopenhauer argued that no moral law is possible that does not begin with empathy and the compassion that follows from it. Many of the crimes of the twentieth century—and of the twenty-first as well—were, and are, perpetrated by those who have no conception of their victims’ personhood. After all, it isn’t the physical being of others that provides us with insight into their lives—it is an individual’s inner life that makes her what she is. Anyone who has nursed a sick or dying person knows that long after the body has decayed the real individual—the emoting, feeling, thinking person—struggles to keep living. The inexpressible and genuine part of us exists in silence, just as Wittgenstein noted in the Tractatus.
Despite this melancholy fact of our isolation, art in general, and literature in particular, can provide us with a glimpse of what it feels like to be someone else. The mystery of consciousness hasn’t been solved—I frankly doubt it ever will be—but literature—the novel—comes as close as is possible to making the inner lives of human beings manifest.
And why should fiction’s ability to open up the inner lives of imaginary people matter? Why do novels please or move us? I would argue that it is precisely because good books allow us vicarious insights into the inner lives of other human beings that we feel pleasure when we read. We want to know what others think and feel so that we can make sense of what we think and feel. Recall that the Greek word pathos means ‘experience’ as well as ‘emotion.’ To have empathy is to move within range of another’s experience—to share in their humanity. A person who cares about the real lives of other persons can do no better than to read good books; real people withhold themselves from us, but fictional figures are, by definition, vulnerable—their creators deliberately open their characters up to our inspection so that we can, during the time that we read and perhaps (in the case of great books) forever afterward, have a sense of what it feels like to be this someone created out of the rich fabric of language.
Consider, for example, the following passage:
“She waited, Kate Croy, for her father to come in, but he kept her unconscionably, and there were moments at which she showed herself, in the glass over the mantel, a face positively pale with the irritation that had brought her to the point of going away without sight of him. It was at this point, however, that she remained; changing her place, moving from the shabby sofa to the armchair upholstered in a glazed cloth that gave at once–she had tried it–the sense of the slippery and of the sticky. She had looked at the sallow prints on the walls and at the lonely magazine, a year old, that combined, with a small lamp in coloured glass and a knitted white centre-piece wanting in freshness, to enhance the effect of the purplish cloth on the principal table; she had above all from time to time taken a brief stand on the small balcony to which the pair of long windows gave access. The vulgar little street, in this view, offered scant relief from the vulgar little room […] Each time she turned in again, each time, in her impatience, she gave him up, it was to sound to a deeper depth, while she tasted the faint flat emanation of things, the failure of fortune and of honour. If she continued to wait it was really in a manner that she mightn’t add the shame of fear, of individual, of personal collapse, to all the other shames. To feel the street, to feel the room, to feel the table-cloth and the centre-piece and the lamp, gave her a small salutary sense at least of neither shirking nor lying. This whole vision was the worst thing yet–as including in particular the interview to which she had braced herself; and for what had she come but for the worst? She tried to be sad so as not to be angry, but it made her angry that she couldn’t be sad. And yet where was misery, misery too beaten for blame and chalk-marked by fate like a “lot” at a common auction, if not in these merciless signs of mere mean stale feelings?”
This paragraph, somewhat amended, opens Henry James’ late novel, The Wings of the Dove (1902). Few writers of English display a surer grasp of the psychology of their characters. James’ own life was rather circumscribed. He lived mostly in England, outside of London, and while he did visit friends and acquaintances, his overwhelming preoccupation was storytelling; he created a world of imaginary lives from his quiet, domestic existence.
Despite his unworldliness, James was a master at evoking moods and suggesting the inner lives of his characters. Note the quiet shifts of tone and mood in this paragraph. Kate Croy, a not very laudable person, waits for a father for whom she feels a mixture of affection and exasperation. Imagine a lesser artist’s rendition of the scene: “Kate was angry at her father for keeping her waiting. She had important things to do!” James renders the scene with a subtlety comparable to, but greater than, that of Vermeer’s great painting “Girl Reading a Letter at an Open Window.” (A letter plays an important role in Wings of the Dove as well.)
Vermeer’s picture begs us to ask what is in the mind of the girl. Her facial expression is ambiguous—is she sad or pensive? The detail of the tipped fruit bowl perhaps offers a clue to her feelings—she is too upset, or too elated, to tidy the room. So much is suggested, and so much is held back! James, working with tools of language that are at once less “realistic” (since we know the world most intimately through our eyes) and more evocative, is able to deepen our sense of Kate’s feelings. Listen to James again: “Each time she turned in again, each time, in her impatience, she gave [her father] up, it was to sound to a deeper depth . . . She tried to be sad so as not to be angry, but it made her angry that she couldn’t be sad.” All the resources of the novelist’s art are compressed into these lines, and into this passage. The closely observed surroundings, “vulgar,” “narrow,” “low,” are made to reflect Kate’s state of mind, as do the “sallow prints on the walls”—“sallow”—how perfect a word, “sallow,” drained of color and life—you can imagine yourself the shabby room, its furnishings annoying, tasteless, but magnified in their melancholy by the fact that the person seeing them, the person trapped in the room, is disinclined to think well of anything in her state of “mean stale feelings.”
With James, as with Vermeer, we are eavesdropping on an unguarded moment: the inner life of Kate Croy cannot be reduced to a simple formula such as “happy” or “angry.” Vermeer’s girl is wholly focused on the words before her—they are, we feel, stirring her in some way; but how? It’s impossible to know—she remains closed to us. On the other hand, we know that Kate Croy is irritated, sad, ashamed, bored, and miserable—she doesn’t quite know what she feels. And this emotional ambivalence is to determine Kate’s fate; it is the point of the book and is captured in the opening scenes—we watch it all unfold. Kate does not know how she feels and therefore lacks the one thing most needed—a fully-formed moral self.
James’s real interest in this novel is Milly Theale, a young, dying woman who is offstage for most of the book. James wrote in his preface to Wings of the Dove that “…the case [of the novel] prescribed for its central figure a sick young woman, at the whole course of whose disintegration and the whole ordeal of whose consciousness one would have quite honestly to assist.” [My italics] James is quite clear—he is the midwife who assists in bringing the consciousness of his characters into being; he “focuses his image” of Milly and Kate and the rest of the figures who people his books. And we are there as well—the reader—vicariously living out the last days of the life of a tubercular girl and her tormentors.
I want to be very clear, for there is a serious philosophical point: it is the mind of Henry James that we come to know, for there is no ‘mind’ of Kate Croy or Milly Theale. The complexity of fiction is nowhere more evident than when we seek to untangle the relations among author, character, and reader. James creates the inner lives of two young women, and then we recreate their lives after our own fashion, according to our own proclivities. When I think about Kate Croy I imagine a demure female social climber and a stout male perfectionist genius; the two are forever conflated in my mind, and my empathy, my sharing of experience, extends to both.
The novelist David Foster Wallace reminds us that it is the writer’s mind to which we gain access, via the characters that he or she creates:
“[T]here is this existential loneliness in the real world. I don’t know what you’re thinking or what it’s like inside you and you don’t know what it’s like inside me. In fiction I think we can leap over that wall in a certain way. But that’s just the first level, because the idea of mental or emotional intimacy with a character is a delusion or a contrivance that’s set up through art by the writer. There’s another level that a piece of fiction is a conversation. . . There’s a kind of Ah-ha! Somebody [the writer] at least for a moment feels about something or sees something the way that I do. It doesn’t happen all the time. It’s these brief flashes or flames, but I get that sometimes. I feel unalone–intellectually, emotionally, and spiritually. I feel human and unalone and that I’m in a deep, significant conversation with another consciousness in fiction and poetry in a way that I don’t with other art.”
The greatest literary example known to me of the laying open of an inner life is Marcel Proust’s In Search of Lost Time. Proust’s million-and-a-half–word exploration of his inner life, a life which, like that of James, was both sedentary and melancholy, was intended by the author to be universal. Proust wrote that “Every reader would recognize his own self in what [the novel] says.” The particulars of the novel—dinner parties, love affairs, and long conversations over tea—would seem to have nothing to do with most of our lives, and yet Marcel’s sensibility, his acute powers of observation, and most of all his remembrance of his past life, evoke precisely that empathetic shock of recognition that makes literature so useful in developing a feeling for the inner lives of other human beings. One comes away from months of immersion in Proust’s novel not only with a deep understanding of the narrator (Proust himself) but with a heightened sympathy for the richness and complexity of any person’s inner life. Proust shows us not only what is in his mind, but the process whereby a reflective person uncovers and brings to light the subconscious material of memory. In Search of Lost Time is a case study demonstrating the extent to which we are the memories of our past; it is a reminder that each of us carries within his mind a world that is as real as the material one which we inhabit—the place where the authentic self resides.
There is another way in which great fiction creates, extends, or enriches our empathy. Fiction extends our sympathy for other human beings by reminding us, in a way that philosophy and theology do not, that human lives are messy affairs—that passions rule our moral choices as much as, if not more than, Kant’s “good will” or any utilitarian sense of fair play. Emma Bovary seems a spoiled, narcissistic romantic—which she is—who must be prevented from reading novels because they provide her with an inner life. Humbert Humbert’s tormented obsession with Lolita appears both criminal and demented (it’s both). At the same time Madame Bovary and Lolita open the inner world of flawed humanity for our inspection, and if we can’t approve of their characters’ moral choices, we should at least be moved to broaden our own sense of what an inner life looks like. The late philosopher Bernard Williams thought that sympathy is a far better foundation for an ethical system than any normative rules or essentialist view of “human nature.” We can’t say what “all people” are like, and we cannot know what the “good” is in every case; but we can deepen our ability to feel compassion by getting to know other beings—real persons and profound fictional creations.
[E]ach of us carries within his mind a world that is as real as the material one which we inhabit—the place where the authentic self resides.
The philosopher Richard Rorty also addressed the issue of morality and fiction. Like Williams, Rorty was impatient with philosophy’s awkward attempts to derive normative rules of conduct from the mystery of ‘human nature’ and preferred the empathetic approach that novelists take in deciphering the inner lives of persons. In Rorty’s view, reading fiction keeps us from committing the acts of cruelty that come so easily when we keep ourselves at an empathetic distance from others. The historian John Dower writes in his book The Cultures of War that “[c]reating havoc and suffering from a safe distance stunts the imagination. Morality as well as sensitivity to the psychology of others becomes dulled.” Think of the use of the drone to deliver ‘shock and awe’ on unsuspecting victims—a form of disengagement taken to its logical extreme.
Rorty says of physical torture that “the best way to cause people long-lasting pain is to humiliate them by making the things that seemed most important to them seem futile, obsolete, and powerless. The best way to avoid hurting others in this way is to enter into their fantasies.” Since we value our own “fantasies,” our own dreams and hopes, we are more likely to respect the dreams and hopes of others if we can realize in a non-trivial way what it feels like to be them, whether “they” are expatriate Americans (in James), Russian Counts (in Tolstoy), or Iraqi citizens.
I’d like to close with a final example. In John Williams’ novel Stoner, the aptly named main character lives a life of stoic forbearance. His marriage is unhappy, his only child is an alcoholic, and his professional life is unrewarding. All his hopes and dreams appear to have been thwarted. Then, at mid-life, briefly, he falls in love and has an affair that both he and the reader know is doomed from the start. Despite the fact that Stoner has to abandon the only woman he has ever loved, his internal life, as rendered by Williams, never reflects futility or powerlessness. Indeed, Stoner is ennobled by his resignation, and redeemed by having known love:
“In his extreme youth Stoner had thought of love as an absolute state of being which, if one were lucky, one might access; in his maturity he had decided it was the heaven of a false religion toward which one ought to gaze with an amused disbelief, a gently familiar contempt, and an embarrassed nostalgia. Now in his middle age he began to know that it was neither a state of grace nor an illusion; he saw it as a human act of becoming, a condition that was invented and modified moment by moment and day by day, by the will and the intelligence and the heart.”
Williams’ novel is rich testimony to fiction’s power to allow us a glimpse into a person’s mind and heart—to me, William Stoner feels like a living man. And Williams’ description of love also works as a description of fiction itself—a human act of becoming, both for the writer and the reader, a condition mirroring human life that is modified line by line and page by page by the reader’s will and intelligence and heart.
James Wood, the literary critic, writes in his recent book How Fiction Works: “Of course the novel does not provide philosophical answers (as Chekhov said, it only needs to ask the right questions). Instead, [novels do] what Bernard Williams wanted moral philosophy to do—to give the best account of the complexity of our moral fabric.” Wood ends his discussion of fiction’s moral power by citing a justly famous passage from War and Peace in which Pierre, once a shallow, cynical young man, has a life-changing epiphany: “There was a new feature in Pierre’s relations […] with all the people he now met, which gained for him the general good will. This was his acknowledgment of the impossibility of changing a man’s convictions by words, and his recognition of the possibility of everyone thinking, feeling, and seeing things each from his own point of view.” For Pierre and for us, this recognition is the first step on the path from empathy to compassion to wisdom.
Photo Source: InfoBarrel
Art: Girl Reading a Letter at an Open Window by Johannes Vermeer, 1657. Public Domain.
LESSON TOPIC: 4.5 TITLE: DOCKING/UNDOCKING AND GROUNDING/STRANDING
Contact periods allotted this LESSON TOPIC:
Classroom: 2.5 Test: 0.0
Trainer: 0.0 Total: 2.5
MEDIA: Classroom lecture with visual media
6.0 EVALUATE shipboard stability by evaluating weight and moment considerations. (JTI 3.2.1, 6.0, 6.1, 6.2)
6.25 DESCRIBE initial actions required to preserve stability during an unintentional grounding with respect to ballasting, weight additions, weight shifts, and jettisoning.
6.26 CALCULATE the effect on the ship's center of gravity from docking, beaching, or grounding.
6.27 DESCRIBE the hull stresses created and the appropriate actions to alleviate them when docking, beaching, or grounding.
6.28 DESCRIBE and CALCULATE critical draft.
6.29 DESCRIBE the contents and usage of the Docking Plan, Hull History, and Hull Penetrations Drawing when planning a drydocking.
6.30 DESCRIBE the Docking Master's responsibilities to the ship, and the crew's responsibilities to the Docking Master regarding the addition, removal, or movement of weights while in drydock.
6.31 DESCRIBE the problems associated with the unavailability of the ship's firemain system while in the drydock.
6.32 STATE various compartments which must be sounded or observed during docking and undocking.
Situations in which drydocking may be required for your vessel:
OVERHAUL - Scheduled overhauls as established by CNO. The Fleet Modernization Program (FMP) has extended the overhaul cycle which varies from ship to ship, sometimes as long as 7 years.
EMERGENCIES - Serious hull damage following a collision, grounding, or battle damage. Often necessary to prevent the ship from sinking.
REPAIRS TO UNDERWATER FITTINGS - Any underwater work beyond the capacity of divers.
REMOVE FOULING OF THE HULL - Marine growth resulting in loss of speed, greater fuel consumption, and reduced plant efficiency. Due to funding, this is now the least common reason for drydocking. Navy Divers have the capability to clean the hull while the ship is still afloat.
6 MONTHS PRIOR TO DOCKING
1. Remind your CPOs that docking will be a good opportunity to overhaul or replace the skin valves in your division’s compartments.
2. Order any replacements for skin valves, be sure to get requisition numbers from Supply.
3. Route a memo to the MPA, AUX-O, ASWO, and WEPS so that they can do the same, but make it clear that they will be responsible for obtaining and replacing their own skin valves.
4. Ensure all jobs required to be done to your systems and gear are in the ship’s CSMP file so that they will be picked up in the contract.
DOCKING PLANNING CONFERENCE
The DCA is responsible to ensure the following services are written into the contract:
60 Hz, 450 VAC
Sea water service for diesel or A/C plant
All details are worked out in advance by the Docking Master, SUPSHIPS representative, and the Commanding Officer. Although the following details may not necessarily be your responsibility, they are considerations for docking:
1. Time and date of docking
2. Tugs and pilot to be used
3. Whether bow or stern enter the dock first
4. Proper conditions of list and trim
5. Handling of lines
6. Record of tank soundings before the ship is drydocked
7. Gangways to be used
8. Utilities to be furnished to the ship, such as electric power, steam, and water
9. Sanitary services to be provided
10. Garbage and refuse disposal facilities needed
11. Drydock safety precautions
12. Pumping plans or other instructions or operating directives for ballasting/deballasting floating drydock with or without ship in basin.
The Commanding Officer shall furnish the Docking Master or SUPSHIPS representative with the following information:
1. Place and date of last docking
2. Last docking position
3. Date and file number of last docking report
4. Number of days underway since last docking
5. General itinerary of ship movements (if not classified)
6. Paint history for last complete painting
7. History of touch-up painting
8. Ship weight distribution (including tank soundings)
9. Offload supplies and hazardous stores
10. Lock screws in drydock position
11. Have 0° list and no excessive trim as per NSTM 997
PRIOR TO DOCKING
1. Ensure Dry Docking Bill is completed.
(Details in OPNAVINST 3120.32A SORM pg. 6-65)
a. Provide last plan to Docking Officer
b. Ship has no List
c. Ship has less than 1% Trim
d. Retract all moveable hull appendages
e. Minimize Free Surface Effect - all tanks full or empty
f. Deliver list of all hull fittings below the waterline to the Docking Officer.
2. The Hull Board meets prior to both docking and undocking.
a. Hull Board members - CHENG / 1st LT / DCA / OPS / ASWO
b. Review Docking plan, Hull History, and Hull Penetrations Drawings
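The list and trim conditions in the Dry Docking Bill above can be sketched as a simple pre-docking check. The 0.1-degree list tolerance and the reading of "1% trim" as trim less than 1% of the length between perpendiculars are assumptions for illustration, not requirements drawn from NSTM 997.

```python
def ready_to_dock(list_deg, draft_fwd_ft, draft_aft_ft, length_bp_ft):
    """Check the list/trim conditions from the Dry Docking Bill.

    Assumes trim is the difference between forward and aft drafts, and that
    the 1% criterion is measured against length between perpendiculars.
    """
    zero_list = abs(list_deg) < 0.1  # assumed tolerance for "no list"
    trim = abs(draft_fwd_ft - draft_aft_ft)
    trim_ok = trim < 0.01 * length_bp_ft
    return zero_list and trim_ok

# Example: zero list and 6 inches of trim on a 400-ft ship is acceptable
print(ready_to_dock(0.0, 15.0, 15.5, 400.0))  # True
```

A real pre-docking check would also cover the remaining items in the bill (appendages retracted, free surface minimized, hull fitting list delivered), which are go/no-go items rather than calculations.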
1. Responsibility for the ship shifts from the Commanding Officer to the Docking Officer when the first part of the ship crosses the plane of the drydock sill.
2. Once the ship is positioned in drydock, dewatering of the dock begins. As the ship just touches down on the blocks, pumping is stopped. Divers will verify that the ship is properly resting on the blocks, and that the blocks are in the correct location. Upon verification, dewatering will continue.
3. When the dock is pumped dry, members of the hull board conduct an inspection with the Docking Officer.
a. Ensure ship is positioned properly in the dock
b. Ensure all shores in place
c. Note condition of propellers, rudders, overboards, intakes, and other projections
d. Note condition of zincs/cathodic protection anodes
e. Note details of any known or observed damage
4. NSTM 997 Section 2.11 requires the Docking Master to ensure adequate shoring and side blocking is installed to resist earthquake or hurricane forces.
WHILE IN DRYDOCK
1. DCA will maintain Dry Weight Log, a log of all weight shifts, additions, and removals in excess of 500 lbs.
2. Ensure all removed skin valves are replaced with blank flanges and that no liquids are discharged to the dock without consent of the Docking Officer.
1. Prior to undocking, the Hull Board will:
a. Inspect compartments and tanks below the waterline to verify tightness.
b. Ensure all valves below the waterline are secured.
c. Thoroughly inspect hull and projections.
d. Inspect drydock for chemicals or debris which might pollute the environment, clog intakes, or cause other damage as the ship is refloated.
2. The following spaces are continuously checked for flooding as the ship is refloated:
a. Spaces in contact with the keel and side blocks
b. Tanks and voids
c. Any space with external hull fittings
When a ship is drydocked or aground, there is a profound effect on stability. As the water level decreases, the keel will rest on the blocks or sea floor. A percentage of the ship’s displacement is now supported by these objects. Stability is affected as if removing weight from that point of contact. When weight is removed from the keel, there is a virtual rise in the ship’s center of gravity.
1. As the waterline moves down, the center of buoyancy moves down.
2. As buoyancy moves down, the ship’s metacenter moves up.
3. As the ship’s keel rests on the blocks (a weight removal low), gravity moves up.
Since the center of gravity always rises faster than the metacenter, the two stability points will eventually be in the same position. This results in neutral stability, where no righting arms are being produced. The draft where GM = 0 is called the ship's critical draft.
Calculating the ship’s critical draft is very important. When in drydock, dewatering of the dock stops just before critical draft is reached so it can be verified that the ship is properly supported by the side blocks. When aground, knowing the range of tide will determine if the ship might reach its critical draft.
CALCULATING CRITICAL DRAFT
When aground or in drydock, contact pressure is applied at the keel. This has the same effect as removing a weight (w) at the point of contact. Looking at the KG1 equation for a weight removal:

KG1 = (W0 x KG0 - w x kg) / (W0 - w)

Since weight is being removed at the keel, kg = 0 and (w x kg) = 0. The remaining displacement (W0 - w) is the apparent displacement WA, so the equation reduces to:

KGV = (W0 x KG0) / WA
Where, KGV = the new height of "G" after going aground
W0 = ship's original displacement (before grounding)
KG0 = ship's KG before grounding
WA = apparent displacement after grounding (from draft readings)
Using the Stranding Calculation Sheet and the Draft Diagram and Functions of Form, the ship's critical draft can be calculated:

1. Choose a trial draft, beginning with the draft read after grounding.

2. For that draft, read the apparent displacement (WA) and KM from the Draft Diagram and Functions of Form.

3. Solve for KGV using the equation.

4. GM = KM - KGV

5. If GM > 0, choose a draft 3 inches below the last draft and complete steps 2 through 4 until GM < 0.

6. Plot GM values (top) for drafts (left) on graph. Curve will cross zero at critical draft.
EXAMPLE: The ship is aground at high tide and the range of tide is 2 feet. Will the ship reach its critical draft, and if so, at what draft? Initial conditions: KG0 = 18.51 ft Draft = 15’3"
The completed Stranding Calculation Sheet tabulates WA, KM, KGV, and GM for each trial draft. Plotting the GM values on the graph, the critical draft is 13’5". Because a 2-foot fall of tide will reduce the draft from 15’3" to about 13’3", the ship will reach its critical draft before low tide, and action must be taken to lower the center of gravity.
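The iterative procedure above can be sketched in Python. The hydrostatic table, displacement, and KG values below are invented for illustration; in practice WA and KM for each trial draft are read from the ship's Draft Diagram and Functions of Form, and the tabulation is done on the Stranding Calculation Sheet.

```python
def virtual_kg(w0, kg0, wa):
    # Weight is "removed" at the keel (kg = 0), so KGv = (W0 x KG0) / WA
    return (w0 * kg0) / wa

def critical_draft(w0, kg0, hydrostatics):
    """Step down through the tabulated drafts until GM goes negative, then
    linearly interpolate the draft at which GM = 0 (the critical draft)."""
    prev = None  # (draft, GM) at the last draft where GM was still positive
    for draft in sorted(hydrostatics, reverse=True):
        wa, km = hydrostatics[draft]
        gm = km - virtual_kg(w0, kg0, wa)
        if gm < 0 and prev is not None:
            d_pos, gm_pos = prev
            # GM crosses zero between d_pos and draft
            return d_pos - (d_pos - draft) * gm_pos / (gm_pos - gm)
        prev = (draft, gm)
    return None  # stability never became neutral in the tabulated range

# Illustrative data only: draft (ft) -> (apparent displacement WA in tons, KM in ft)
HYDRO = {
    15.25: (4100.0, 22.0),
    15.00: (3900.0, 22.1),
    14.75: (3700.0, 22.2),
    14.50: (3500.0, 22.3),
    14.25: (3300.0, 22.4),
}

d_crit = critical_draft(w0=4100.0, kg0=18.51, hydrostatics=HYDRO)
# With these numbers GM is still positive at 14.50 ft and negative at
# 14.25 ft, so the critical draft falls between those two trial drafts.
```

Note how KGV grows as WA shrinks: as the tide falls and more of the ship's weight rests on the bottom, the virtual center of gravity rises toward the metacenter, exactly as described above.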
GROUNDING / STRANDING
In most stranding cases, the following considerations will ordinarily constitute good procedure:
1. Attempts SHOULD NOT be made to refloat the ship under her own power if wind and sea conditions indicate the possibility of the ship working harder aground, pounding, or broaching to sea.
2. Anchors to seaward should be quickly laid if possible to prevent the ship from working further ashore.
3. The ship should be weighted down, not lightened: first, to help keep the ship from working harder on the beach, and second, to prevent damage caused by working and pounding of the ship on the bottom.
When a ship goes aground, the initial reaction on the bridge is to back down using the engines. Before attempts are made, consideration should be given to:
- Depth of Water
- Sea Floor Composition
- Possible Damage to Propellers and Hull
Surface Ship Survivability, NWP 3-20.31, paragraph 5.5.1 states, "If propellers are reversed and there is no tendency of the ship to back away, no further attempts to move the ship by means of the screws should be made."
The primary reasons we do not continue to use propulsion in a grounding situation:
1. The ship's screws become less effective in shallow water and the ship may squat. Propellers may also be damaged due to contact with the sea floor.
2. Propeller wash will drive silt and/or bottom aggregate in and around the hull, possibly causing a suction when the ship is pulled from its location.
3. This silt and aggregate can be sucked into sea chests, fouling necessary cooling equipment required to maintain the ship's propulsion systems.
WEIGHT DOWN THE SHIP
If attempts at backing down fail, the ship should be weighted down to firmly fix the hull in position. This is especially important if the tide is expected to rise and adverse sea conditions exist; tides or heavy surf may drive the ship further aground or cause it to broach. Weighting down is accomplished by ballasting tanks and if necessary, flooding low compartments.
INVESTIGATE FOR DAMAGE
After the ship has been weighted down, a careful investigation should be made: sound all voids, check fuel tanks for leakage, and examine the interior of the hull for signs of structural damage.
DETERMINE TONS AGROUND
Determine displacement prior to grounding using daily draft report and Draft Diagram and Functions of Form. Read drafts after grounding and determine new displacement. The difference is the amount of tons aground.
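The determination above is a simple subtraction; the numbers below are illustrative only, and in practice both displacements come from the Draft Diagram and Functions of Form.

```python
def tons_aground(displacement_before, displacement_after):
    # Ground reaction = displacement from the daily draft report minus the
    # apparent displacement read from the drafts after grounding.
    return displacement_before - displacement_after

# Example: 4100 tons before grounding; post-grounding drafts give 3500 tons
print(tons_aground(4100.0, 3500.0))  # 600.0 tons supported by the bottom
```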
CRITICAL DRAFT CALCULATIONS
Use the Stranding Calculation Sheet to calculate critical draft. If the tide is receding, determine whether or not stability will become critical. If so, lower the ship’s center of gravity by adding more weight low, jettisoning weight high, or shifting weight down.
Ground tackle should be rigged and kedge anchors laid seaward as quickly as possible. This will help to keep the ship from broaching. When a ship is broached, scouring occurs. Sand and gravel under the hull are washed away by the action of the surf. Currents produced by the swells breaking against the ship sweep around the bow and stern with great velocity. These currents remove sea floor material from under the ship and deposit it in a sand spit amidships on the inboard side. As the material is cut away from under the ship, an extreme hogging condition results that will eventually cause failure of the hull.
The ship's boat should be launched to take soundings around the hull, determining the slope and nature of the bottom. These soundings should be continued in the direction toward which the ship is to be hauled off, in order to locate rock formations, coral ledges, or other underwater obstructions. Currents which may affect the ship as she comes off should be noted.
CHECK FOR HOGGING/SAGGING STRESSES
The drafts are also checked to ensure that the ship is neither hogging nor sagging. If the ship is aground at one end, sagging stresses are increased, resulting in the need to remove weight amidships and relocate it at the bow and stern. If aground on a ledge or pinnacle amidships, hogging stresses are increased. Weight should be removed from the bow/stern and relocated amidships. Irregular rock or coral formations or sharp changes in gradient produce concentrated pressures that can crush hull plating and result in flooding. This damage can be intensified if the hull works or shifts position.
HOGGING (aground on a ledge or pinnacle amidships):

Main deck: In Tension
Keel: In Compression
To relieve these stresses:
- Jettison FWD and AFT
- Ballast Amidships
SAGGING (aground at one end):

Main deck: In Compression
Keel: In Tension
To relieve these stresses:
- Ballast FWD and AFT
- Jettison Amidships
A request for salvage assistance should be made immediately, not delayed while refloating attempts are made. Early mobilization and dispatch of salvage assistance might mean the difference between success and failure of the salvage operation. When a request for salvage assistance is made, the following information should be provided:
- An accurate position of the grounding site, including latitude and longitude, applicable chart numbers, and means of fixing the position.
- Ship’s draft at last port and estimated time of stranding.
- Drafts forward, amidships, and aft, following stranding with time taken and the state of tide.
- Soundings along the ship from bow to stern, corrected to the datum of the chart area.
- Course and speed at the time of grounding.
- Ship's heading after grounding with details of changes.
- Liveliness of the ship.
- Weather conditions, to include: wind direction and velocity, current weather at the grounding site, and any weather forecasts.
- Sea and ocean current conditions, to include: direction and height of seas and swells.
- Extent and type of damage to the ship.
- Location of grounding points and estimated ground reaction.
- Type of sea floor at the grounding site.
- Status of ship's machinery.
- Ship's cargo list or manifest.
- Amount and location of known hazardous materials.
- Help available at the scene or in the area, such as tugs, large boats, bulldozers, cranes, etc.
There is global concern regarding HIV drug-resistance, drug toxicity, and increasing drug costs. Many health care professionals believe that eradicating HIV will require development of a vaccine that prevents infection by the virus. Yet, one by one, classical vaccine approaches used for combating other infections have proved ineffective for HIV in clinical trials.
Nearly three decades of research has been invested in understanding how HIV overcomes immune defenses and why candidate HIV vaccines have been ineffective. Immune defense is provided by an ensemble of molecules and cells with innate and adaptive capability to counter infectious microbes. The innate immune capabilities have evolved over millions of years of evolution. Their functional importance resides in the immediate blockade of infection, for example, killing of microbes by macrophages that migrate to the infection site due to an inflammatory response. Vaccines generally work by inducing adaptive immunity developed over days to weeks. The functional mediators of adaptive immunity are antibodies and T lymphocytes that specifically recognize microbial antigens and help remove the microbe. HIV has developed various mechanisms to overcome natural immune defenses of humans. These mechanisms are also the reason for the failure of classical immunological approaches to yield an effective HIV vaccine.
Thousands of different HIV-1 strains have emerged. Most infections are initiated by strains that utilize chemokine coreceptor CCR5 for entry into host cells. Coreceptor CXCR4-dependent strains emerge with time. Both types of strains use CD4 as the primary host receptor to infect T cells and macrophages. Different parts of the world are dominated by strains belonging to different HIV-1 subtypes.1 Subtype C strains are found primarily in the developing world and account for a majority of infections globally.
A central problem is that most exposed components of HIV mutate rapidly, generating structural variations of the viral coat proteins. The mutable coat regions are also its dominant antigenic regions, also known as epitopes, against which the immune system produces antibodies and T lymphocytes.2 The original infecting strain induces a robust immune response, but new quasi-strains develop over the course of infection, and protection against the virus is transient at best. Similarly, the antigenic constituents incorporated in previously-tested candidate vaccines were drawn from a single HIV strain or at most a few strains. The candidate vaccines induced antibody responses and T cell responses mostly directed to the mutable coat protein regions, compromising their efficacy against structurally divergent virus strains in different individuals and in different parts of the world.
Over the course of the humoral immune response, antibody complementarity determining regions (CDRs) undergo rapid mutations under the selective pressure of antigen binding. This process generally generates neutralizing antibodies capable of high affinity antigen binding. One of the few immune vulnerabilities of HIV is the maintenance of its exposed CD4 binding site (CD4BS) on the surface of the coat protein gp120 in mostly constant form. The CD4BS is essential for virus-host cell binding and infection. Despite minimal chemical variability of the CD4BS, the immune system fails to mount a sufficiently protective antibody response to the CD4BS.
The reasons are complex. First, individual epitopes within the CD4BS are conformationally plastic, that is, the three-dimensional epitope structure can change during the process of infection. Initial CD4 binding at the CD4BS region located in the outer gp120 domain (CD4BSod) may induce a conformational change of the CD4BS core region composed of amino acids 421-433 (CD4BScore) that is essential for stable HIV-host cell binding. Consequently, the CD4BScore might exist in a conformation vulnerable to immune attack only transiently during the process of CD4BS-CD4 binding. Second, HIV utilizes an unusual evolutionary trick to preclude production of a protective antibody response by B lymphocytes. The CD4BScore expresses superantigen character.3,4 Superantigens bind specifically to innately-produced antibodies expressed on the surface of B lymphocytes, the B cell receptors. Unlike the stimulatory binding of traditional antigens to the B cell receptor, superantigen binding occurs at the antibody framework regions, and the functional consequence is down-regulation of B cell differentiation, premature cell death and failure to mount an adaptive antibody response. We suggested that the innate superantigen recognition capability of antibodies was originally developed by Darwinian evolution processes over millions of years as a defense against primordial microbes.5 HIV appears to have evolved a CD4BS with superantigenic character as the means to preclude an adaptive antibody response.
Induction of neutralizing antibodies is the cornerstone of effective vaccination. Following failure of candidate protein and polypeptide vaccines to induce sufficient neutralizing antibodies to the free virus,6 the focus shifted to developing candidate DNA vaccines that induce cytotoxic T cells directed to HIV infected cells.7 This approach was also ineffective. The RV144 vaccine composed of full-length gp120 protein and a canary pox vector expressing the gp120/gag/protease genes reduced the risk of infection by 31%.8 It is unclear whether this is a statistically or clinically meaningful effect. Many in the field of HIV vaccine development believe that combined induction of neutralizing antibody and cytotoxic T cells is the favored approach. As the individual antibody and cytotoxic T cell responses to the mutable HIV regions are ineffective, it is not clear how combining these responses can be the basis for effective vaccination. Our view is that HIV vaccination will be feasible once an immunogen is identified that induces a sufficient immune response to a structurally constant region of HIV essential for virus infection and propagation.
The coat protein gp41 expresses certain structurally conserved regions. The vaccine approach of Barton Haynes at Duke University entails an epitope of the HIV gp41 coat protein located in the proximity of the lipid membrane.9 Polyspecific antibodies that recognize this epitope in conjunction with membrane lipids neutralize genetically divergent HIV strains. Membranes of uninfected cells also contain the lipids as self-antigens. The immune system is generally tolerant to the self-antigens, and anti-HIV antibodies that react with self-antigens can exert deleterious effects on the host. Nonetheless, there is strong interest in the notion that breaking tolerance to self-antigens may guide development of an immunogen capable of inducing HIV neutralizing antibodies.
Concerning the epitopes of the CD4BS, there is no evidence for insufficient physical exposure as the cause of insufficient antibody production. Similarly, an intrinsic defect in the CDR adaptive mutational process is theoretically possible, but there is no evidence that this is the reason for insufficient anti-CD4BS antibody production following HIV infection or administration of the previously-tested vaccine candidates. Burton and coworkers have identified rare antibodies that recognize a segment of the CD4BS (the CD4BSod) and neutralize genetically divergent HIV strains comparatively broadly.10 Reverse-engineering of peptides with structure complementary to the neutralizing antibody binding site can be conceived as a route to a vaccine that induces the synthesis of similar neutralizing antibodies upon administration to humans. A peptide immunogen designed using as template a neutralizing antibody to a segment of the CD4BS did not induce broadly neutralizing antibodies.11 Targeting a larger CD4BS surface area by a reverse-engineered immunogen could be more fruitful.
Our studies have identified the CD4BScore as the proverbial Achilles heel of the virus. In the rare circumstances that anti-CD4BScore antibodies are produced, they neutralize HIV strains from across the world with exceptional potency.12,13 Such antibodies were found in non-infected patients with lupus, an autoimmune disease that is rarely associated with concurrent HIV infection, and in long-term survivors of HIV infection. It appears that HIV is highly vulnerable to neutralization by specific antibodies to the CD4BScore region, but the adaptive immune response to the region is insufficient to control infection under normal circumstances. A clear path to an HIV vaccine that induces broadly neutralizing antibodies can be foreseen if the following milestones can be reached: a) Reproduction of the correct CD4BScore conformation in the vaccine candidate, and b) Rapid adaptive production of neutralizing anti-CD4BScore antibodies upon administration of the vaccine candidate.
Taken together, our studies indicate the feasibility of developing an HIV vaccine capable of directing the innate CD4BS recognition capability of B cells towards a favorable maturational pathway, eventually resulting in synthesis of broadly neutralizing antibodies.
Reversible CD4BS binding by antibodies alone is sufficient to neutralize HIV. A subset of antibodies produced by B cells express the ability to catalyze the breakdown of peptide bonds, destroying gp120 permanently.19 A single catalytic antibody molecule is reused to cleave thousands of gp120 molecules over its biological half-life in blood (1-3 weeks). The neutralization potency of catalytic antibodies, therefore, is superior to traditional antibodies that bind the antigen reversibly on a 1:1 basis. Antibody catalytic sites belong to the serine protease enzyme family, consisting of nucleophilic sites similar to the archetypical Serine-Histidine-Aspartate catalytic triad of trypsin. Catalysis occurs by formation of a covalent intermediate and water attack on the intermediate, regenerating an antibody molecule that is reused for additional catalytic cycles.
Catalytic cleavage of gp120 occurs by noncovalent CD4BScore binding followed by cleavage of peptide bonds. The catalytic sites are present in antibodies produced without exposure to HIV.20,21 Sexual transmission of HIV generally occurs through the rectal and vaginal mucosal surfaces. Only a minority of sexual intercourse events with an infected individual results in transmission of the virus. Secretory IgA class antibodies found at mucosal surfaces of non-infected humans catalyze rapid gp120 cleavage and neutralize HIV in tissue culture.22 It may be hypothesized that the catalytic IgAs constitute a natural defense against mucosal HIV transmission.
In addition to inducing reversibly-binding antibodies, the covalent vaccination approach described in the preceding section stimulates adaptive improvement of the nucleophilic function of antibodies. This is feasible because covalent binding of the electrophilic vaccine candidate selects B cell receptors with the greatest nucleophilic reactivity.23,24 In turn, the improved nucleophilic reactivity enhances antibody inactivation of HIV as follows. First, specific pairing of the antibody nucleophile with the weakly electrophilic carbonyls of gp120 forms stable immune complexes with covalent character. Covalently binding antibodies were induced by immunization with the electrophilic analogs of full-length gp120 and a synthetic gp120 peptide. Reversibly bound antibodies dissociate from HIV readily. As the covalent bond is very strong, the covalent antibody-HIV complexes do not dissociate, increasing the HIV neutralization potency. Second, if the antibody combining site supports water attack on the covalent gp120-antibody complex, catalytic gp120 cleavage occurs. A subset of antibodies obtained by immunization with the electrophilic CD4BScore peptide catalyzed the cleavage of gp120 rapidly.
Treatment of HIV using reverse transcriptase and protease inhibitors requires vigilant management because of the potential for toxicity and emergence of drug-resistant strains. This has generated interest in passive immunotherapy using monoclonal antibodies. Control of viremia upon infusion of reversibly binding anti-HIV antibodies in humans was transient, suggesting emergence of antibody-resistant viral mutants. Very large quantities of the antibodies were necessary to reduce viral load, a reflection of modest antibody neutralizing potency. Can catalytic antibodies be used for passive immunotherapy of HIV infection? The answer depends on the epitope specificity and neutralizing potency of the catalysts. Targeting the CD4BScore minimizes the opportunity for development of antibody resistant strains, as CD4 binding and mutations in the CD4BScore are predicted to result in loss of CD4 binding activity. Indeed, anti-CD4BScore antibodies from long-term survivors of HIV infection neutralized the autologous HIV strain potently. There is no evidence, therefore, for emergence of resistant strains despite the selective pressure imposed by the anti-CD4BScore antibodies over prolonged durations. Anti-CD4BScore antibodies neutralize HIV in tissue culture with nanogram/ml potency, supporting their potential therapeutic application.
Intracellular expression of catalytic antibodies to these proteins holds potential for early blockade of viral propagation via interference with copying viral RNA into proviral DNA and DNA integration into the host genome. Gene therapy protocols for intracellular antibody expression27 can be conceived for persistent delivery of catalytic anti-HIV antibodies. Reactivation of HIV infection can occur due to integration of the viral genome into host DNA. Drugs that deplete proviral DNA reservoirs are under investigation to address the problem of HIV latency.28 Catalytic antibodies combined with a proviral DNA-depleting drug may be suitable for consideration as an alternative therapy for the infection.
A prophylactic vaccine and a cure for patients infected with HIV are needed urgently. However, there is considerable pessimism because of repeated clinical failure of candidate vaccines. The seemingly insoluble nature of HIV has even inspired an argument for use of the limited available funding for improved delivery of available anti-retroviral drugs to infected patients rather than further research investment. This argument is misguided. Innovative preclinical approaches are essential if the objective of eradicating HIV infection is to be met. Our positive preclinical studies using the covalent vaccination and catalytic antibody approaches are an example. These approaches were developed under basic immunology grants funded by the National Institute of Health over the past two decades. Additional developmental efforts will be necessary to obtain a standardized covalent vaccine and catalytic antibody candidates for human trials, but there is hope for translation of the preclinical immunological advances into clinical success.
In the U.S., elaborate governmental arrangements are in place to prioritize the competing developmental approaches for funding, including excellent scientific peer-review arrangements. However, programmatic allocation of funds is inspired at least in part by non-scientific reasons. The literature is replete with claims of potential clinical advances. On the other hand, most HIV vaccine development projects are likely to yield incremental advances at best. An example is the continued testing of vaccine formulations that induce immune responses primarily to mutable regions of HIV. Likewise, intensive efforts have been undertaken to identify immune markers correlating with the marginal risk reduction observed in the RV144 vaccine trial. As there is doubt whether the vaccine candidate really reduced the risk of infection, it is hard to accept that meaningful correlates of risk reduction will emerge. A policy change that forthrightly admits the limited utility of classical vaccine approaches and explicitly encourages credible, novel approaches would be a welcome event.
Scientific approaches that diverge radically from established paradigms are invariably subject to rigorous peer evaluation. Independent reproduction of the evidence is usually necessary prior to widespread acceptance of the new scientific approach. These are essential safeguards against mistaken conclusions and spurious claims. Antibodies obtained by the covalent vaccine approach have been independently verified to neutralize diverse HIV strains in tissue culture. Factors that might result in artifactual neutralization have been carefully eliminated.29,30 Similarly, the chemical and immunological principles underlying antibody catalysis have been amply validated by researchers across the world. Occam's razor is yet another safeguard against unproductive science -- when confronted with alternative explanations that are equal in other respects, the hypothesis that makes the fewest novel assumptions should be selected for further study. It is necessary to invoke B cell superantigenicity as the cause of poor CD4BS immunogenicity, as no competing hypothesis explains the empirical findings adequately. Similarly, the innovation of covalent bonding of the vaccine candidate to B cells is necessary, as no alternative strategy is available to induce a robust anti-CD4BS antibody response. In summary, the preclinical scientific findings support translation research aimed at realizing the clinical utility of the technology.
Recent immunogenicity and virus neutralization data encourages the belief that it may be possible to develop a covalent HIV vaccine that induces broadly neutralizing antibodies directed at the CD4 binding site of the virus. Catalytic antibodies to HIV appear to be a natural defense mechanism against HIV, and it may be possible to apply broadly neutralizing catalytic antibodies as an alternative therapy for HIV infection.
The authors' research was funded by the National Institutes of Health, University of Texas Houston Medical School, Abzyme Research Foundation and Covalent Bioscience Inc. We thank Dr. Carl Hanson for reading the manuscript and discussions.
Sudhir Paul, Stephanie Planque, Miguel Escobar and Yasuhiro Nishiyama have a financial interest in patents covering the covalent immunization and catalytic antibody areas. Sudhir Paul and Richard Massey have a financial interest in Covalent Bioscience Inc. Sudhir Paul is scientific advisor for the company.
* Chemical Immunology Research Center and Gulf States Hemophilia and Thrombophilia Center, Departments of Pathology and Pediatrics, University of Texas-Houston Medical School; † Covalent Bioscience Inc; ‡ Abzyme Research Foundation | <urn:uuid:085eb6c3-ebb0-4892-85bf-f3f17f3d5fe0> | CC-MAIN-2015-35 | http://www.thebody.com/content/65297/covalent-vaccination-and-catalytic-antibodies-a-ne.html?ts=pf | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00102-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.924694 | 3,551 | 3.453125 | 3 |
Certainty and Uncertainty in Treatment of Lyme Disease
By PHILIP J. HILTS
Published: July 10, 2001
A study to be reported on Thursday in The New England Journal of Medicine is fueling a running disagreement among medical researchers over the unresolved issues in Lyme disease, a tick-borne illness that is endemic in much of the Northeast and in other pockets around the nation.
The study, by Dr. Mark S. Klempner of Boston University Medical Center, showed that prolonged treatment with antibiotics was no more effective than placebos among those with persistent Lyme disease symptoms.
Citing its importance to patients and doctors, the journal posted the study, along with two others and an editorial, on its Web site (www.nejm.com) a month before the scheduled publication date. The question is, Why do a few patients who appear to have been treated successfully for Lyme disease have symptoms that come back strongly later?
Both sides agree that antibiotics work in 90 percent of patients and that the disease never recurs in those patients, at least not from that tick bite. But among the other patients, symptoms either persist or come back after the standard treatment. Do the symptoms recur because the bacteria have been hiding out in the body, only to emerge again later? Or could the Lyme bacteria, even though they were wiped out by treatment, have brought on a secondary disease, a Lyme autoimmune disorder, in which the body's immune system attacks its own cells as if they were the Lyme disease organism?
Because the patients in Dr. Klempner's study were given a new round of antibiotics, the bacteria should have been killed, and the patients' symptoms should have gone away. Since that did not happen, proponents of the autoimmune theory say, the Klempner study is good evidence for their position.
But critics say more answers are needed. Some doctors have been treating persistent Lyme disease with much heavier doses of antibiotics than the Klempner study used, and they believe that has helped. So, they say, the issue will not be resolved until the heavier doses are tested in experiments.
One patient group, the Lyme Disease Foundation, based in Hartford, has sided with heavy antibiotic use until the questions have been resolved.
Dr. Anthony Lionetti, who works in a New Jersey clinic and is associated with the foundation, said that some patients suffered persistent infection that needed to be treated but that the bacterium might not always show up in tests.
He said he sometimes asked patients to take as many as 10 tests before accepting that a patient with symptoms did not have a persistent bacterial infection.
One complicating issue is that among patients who have recurrent Lyme symptoms, at least two distinct groups seem to have emerged.
One group's symptoms are centered on arthritis symptoms like pain and swelling, usually in the knees. A second group has nervous system symptoms like memory loss, confusion, fatigue, muscle weakness and numbness or tingling.
The two sides in the dispute tend to accept the arthritis symptoms as an autoimmune reaction. The dispute is centered on the other patients.
''We really believe these patients are in pain,'' Dr. Klempner said in a telephone interview, ''and what is needed now is to find out what the cause is, if it is not persistent infection.''
But Dr. Brian A. Fallon, a psychiatrist at Columbia University and director of the Lyme Disease Program at the New York Psychiatric Institute in Manhattan, said that despite the Klempner study, which he described as important and illuminating, it was too soon to give up the ''persistent infection'' hypothesis.
Dr. Fallon is beginning to recruit patients who complain of chronic Lyme disease and have neurological and cognitive impairments for a study financed by the National Institutes of Health. They will be treated with 10 weeks of intravenous antibiotics in hopes of killing off any bacteria that may be hiding in the brain or other tissues.
Dr. Phillip Baker, chief of the Lyme disease program at the National Institute of Allergy and Infectious Diseases, said researchers there were also planning a study to look into whether the constellation of symptoms called chronic Lyme disease might actually be an autoimmune disorder. That study will look to see if the body's defenses attack its own tissues in the same way they attack the bacteria.
Within two or three years, the researchers say, they hope to have some useful, and maybe even decisive, information from this research.
Meanwhile, as the tick season begins, here are questions and answers about Lyme disease.
Q. What is Lyme disease?
A. Lyme disease is a bacterial infection caused by Borrelia burgdorferi, a spiral-shaped bacterium, or spirochete. But it is not passed from person to person; rather, its carrier is a tiny arachnid, the so-called deer tick (in the Ixodes ricinus group). In the eastern United States, this tick feeds on the blood of white-tailed deer, white-footed mice and other mammals, which commonly carry the Lyme bacterium in their blood, though it does not make them sick. When an infected tick bites a person and begins feeding, it can transmit the bacteria.
Because the disease depends on the deer, mice, ticks and bacteria, it is limited geographically to the areas where all three live. Eight states, from Maryland north to Massachusetts, account for about 90 percent of this country's reported cases. Other pockets are in Minnesota, Wisconsin and Northern California. In these areas the ticks and the bacteria may be slightly different.
About 16,000 cases of Lyme disease are reported each year to the federal Centers for Disease Control and Prevention, making it the most common disease carried by insects or arachnids. The agency also says the true number of cases is almost certainly far higher.
Q. How do the ticks infect people?
A. About three-quarters of the people who get the disease never spot the tiny ticks, which tend to hide in the hair, groin and armpits and at the back of the knees.
But people who do spot the ticks have a good chance of avoiding the disease. Only about 1 percent of deer ticks are actually infected, and the infections are most often transmitted by the nymph-stage ticks that are most common from May to September and peak in June and July.
Ticks are slow feeders, so if they are spotted and removed within 72 hours after they make contact, Lyme disease can usually be avoided.
Q. How can people avoid being infected?
A. The ticks live in wooded areas or shady grasslands, often hiding under leaf debris. The best way to avoid them in affected areas is to stay to the center of paths and to wear long pants tucked into socks, long-sleeve shirts and boots. Chemicals can be sprayed on clothing and skin to help repel the ticks. And people should check themselves regularly for ticks.
Q. How is Lyme disease diagnosed?
A. About three-quarters of the cases are diagnosed by the symptoms that show up after the tick bite, often but not always beginning with a characteristic bull's-eye rash, an area with a red center and an outer red ring. Blacks develop the same rash as whites, though it is harder to detect on darker skin. The size of the rash typically increases from half an inch to several inches across, and it may get much larger. Accompanying or following the rash may be other symptoms like fever, stiff neck, headache, body aches and fatigue.
These initial symptoms occur within the first few weeks of infection. Later, other symptoms may appear. A common one is Lyme arthritis, which can bring on pain, redness and swelling in the joints, particularly the knees. The arthritis can last from a few days to a few months, but if the Lyme infection goes untreated the arthritis may become a chronic problem.
Other late-onset symptoms include severe headaches, temporary paralysis of some facial muscles (Bell's palsy), numbness of the limbs and poor coordination. Some patients report mental problems like memory loss or inability to concentrate.
Doctors diagnose Lyme disease based on symptoms and patient history, but they can also use blood tests to spot antibodies to the Lyme bacteria or tests that spot the genetic material of the bacterium if it is present. Sometimes, however, the disease is present when tests are negative, or tests are positive when the disease is absent.
The lack of a clear-cut diagnostic test is a major problem. Researchers at the National Institutes of Health and elsewhere are developing new tests.
Q. What are the treatments and the prognosis?
A. If left untreated, 60 percent of people with the disease develop intermittent attacks of joint swelling and pain. On the other hand, some people have such mild cases that they may never know they have had Lyme disease. In a few cases, untreated disease can lead to severe, chronic and disabling illness, according to the Centers for Disease Control. In any event, taking antibiotics for 14 to 21 days cures more than 90 percent of the patients. Most of the rest are cured after a second course.
Q. Once you have had Lyme disease, can you get it again?
A. Yes, getting the disease once does not protect you from getting it again. You can be reinfected repeatedly.
Q. What is happening with people who continue to suffer symptoms?
A. Some patients show up with strong symptoms long after treatment. In some cases, when the bacteria can be spotted in the body, another treatment with antibiotics may work. If the bacteria are not spotted by tests, doctors have differing advice about what to do.
Most doctors who treat Lyme disease believe that continuing symptoms are not caused by the presence of the bacteria themselves. It may be, they say, that before the Lyme bacteria were eradicated from the body, they caused some damage that led to recurring symptoms. For example, the bacteria may set off a reaction that leads the body to attack its own tissues instead of attacking the bacteria.
Or, they say, patients are suffering from symptoms of some other condition unrelated to tick bites.
Some researchers have suspected that the re-emergence of the disease comes because the bacteria remain hidden in the body and emerge later. They say people who continue to suffer symptoms must be treated with long-term courses of antibiotics.
The Klempner study challenged this view. It found that extended treatment with antibiotics did not help people who believed they had persistent Lyme infection, a finding that suggested that their symptoms were unrelated to the bacteria. But believers in the long-term Lyme theory criticized the design of the study. Meanwhile, other research is under way.
Photos: Deer and white-footed mice are carriers of the bacteria that cause Lyme disease. Deer ticks feed on the blood of those animals and can spread the disease to humans they bite. (Keith Meyers/The New York Times); A ''bull's-eye rash,'' top and above, is often the first sign of Lyme infection. The disease is caused by the Borrelia burgdorferi, left, a spiral-shaped bacterium. It is generally transmitted by deer ticks in their tiniest stage as nymphs. (Photographs Courtesy of Lyme Net); (Courtesy of Dr. Claire Fraser); (Centers for Disease Control and Prevention) Map of the United States shows the reported cases of Lyme Disease. Each dot represents a single case in 1999. (Source: Centers for Disease Control and Prevention) | <urn:uuid:a3448cc7-17bf-4183-bebf-fbeb363c441b> | CC-MAIN-2015-35 | http://www.nytimes.com/2001/07/10/health/certainty-and-uncertainty-in-treatment-of-lyme-disease.html?pagewanted=all&src=pm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064951.43/warc/CC-MAIN-20150827025424-00165-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.957864 | 2,345 | 2.859375 | 3 |
Volume 23, Issue 3, May 1951
Index of content:
23(1951); http://dx.doi.org/10.1121/1.1906756
An equivalent‐tone method for calculating the loudness of sounds is described. With this method the spectrum of the sound is divided into frequency bands which are treated as pure tones in determining their loudness. The individual values of loudness are added to obtain the total loudness of the sound. Calculations for bands of white noise and for complex tones are compared with subjectively obtained data of Pollack and Fletcher and Munson. The agreement between calculated and experimental values is good. It is felt that improvement of the method must await further psychoacoustic data.
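The band-summation procedure described in this abstract lends itself to a short sketch. The Python fragment below is a modern, hypothetical illustration, not the authors' actual computation: it assumes a simplified pure-tone loudness function in sones (one sone at 40 dB, doubling for every 10 dB increase, a rough form of Stevens' rule) and then sums the band loudnesses as the abstract describes.

```python
def band_loudness_sones(level_db):
    """Very rough pure-tone loudness in sones: 1 sone at 40 dB,
    doubling for each 10 dB increase (an illustrative simplification)."""
    if level_db < 0:
        return 0.0
    return 2.0 ** ((level_db - 40.0) / 10.0)

def total_loudness(band_levels_db):
    """Equivalent-tone method: treat each frequency band as a pure tone,
    convert its level to a loudness, and add the individual loudnesses."""
    return sum(band_loudness_sones(lvl) for lvl in band_levels_db)

# Example: three bands at 40, 50, and 60 dB give 1 + 2 + 4 = 7 sones.
print(total_loudness([40.0, 50.0, 60.0]))
```

Refining such a sketch would mean replacing `band_loudness_sones` with empirically grounded pure-tone loudness data, which is precisely the further psychoacoustic evidence the abstract says the method awaits.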
23(1951); http://dx.doi.org/10.1121/1.1906757
In the course of previous investigations, Dix, Hallpike, and Hood have concluded that the phenomenon of loudness recruitment is attributable to disordered function resulting from injury or disease of the hair cells of Corti's organ.
In the present paper, an account is given of an experimental study of certain loudness changes caused by adaptation of the hair cell responses to pure tone stimulation.
Two outstanding characteristics of the adapted state are defined: “on‐effect normality” and “relapse.” Both resemble very closely those described by Matthews in the case of adaptation occurring in the muscle stretch receptors of the frog.
Further studies in a number of human subjects show that both of these characteristics are exhibited regularly by the unadapted hair cell responses of individuals suffering from degenerative changes of the hair cells.
These studies appear to establish a very close similarity between “on‐effect normality” occurring in the case of the adapted normal sense organ and the phenomenon of loudness recruitment, as demonstrated by the alternate binaural loudness balance procedure of Fowler in the case of the unadapted but diseased sense organ.
The experimental findings are considered to favor the view that loudness recruitment is one element of a complex disturbance of cochlear function due to disease or injury of specific elements of the cochlear sensory apparatus, namely the hair cells of Corti's organ.
23(1951); http://dx.doi.org/10.1121/1.1906758
The method of best beats has been employed to estimate the intensities of aural harmonics and of combination tones. It has been generally assumed that the listener hears best beats when the exploring tone produces in the cochlea a disturbance that is equal in magnitude to that of the aural harmonic or of the combination tone being measured. However, as the experiments to be reported will show, when the tone to be measured is near the absolute threshold or is partially masked, the most prominent beats will be heard when the intensity of the exploring tone exceeds that of the unknown tone. Consequently, since aural harmonics and combination tones are partially masked, their intensities will be overestimated when the method of best beats is used. The present paper concerns the magnitude of this error of overestimation.
A procedure is presented by which a better estimate of the intensity of an aural harmonic or of a combination tone may be made. When the tone being measured by the method of best beats is well above threshold, the listener hears beats over a wide range of intensities of the exploring tone, and the error of over‐estimation is small. However, when the tone being measured is near threshold, there is a small range of audible beats, and the error of overestimation is of considerable magnitude. It is therefore possible to correct for this error by determining the range of intensities of the exploring tone over which beats are audible as well as the intensity of the exploring tone required for best beats. Application of this procedure to the measurement of a second aural harmonic is illustrated.
An explanation is given of the fact that the intensity of a tone near its absolute threshold will be overestimated by use of the method of best beats. This explanation is formulated in terms of the relation of the minimum and the maximum of the envelope of the beating complex to the listener's threshold for the tones that beat.
23(1951); http://dx.doi.org/10.1121/1.1906759View Description Hide Description
A discussion is made of two lines of evidence that bear upon the problem of the relative importance of place and frequency principles in auditory theory. The first line of evidence has to do with the relation between pitchdiscrimination and the degree of specificity of action in the cochlea and brings us to an evaluation of the principle of maximum stimulation. A consideration of loudnessdiscrimination data discloses a serious limitation on our ability to appreciate the point of maximum in the cochlear activity and, in general, to make use of a spatial pattern. We must conclude that for the low tones whose patterns are broad the place principle contributes little or nothing to pitchdiscrimination, and the frequency principle has to serve this function alone.
A study of the evolution of the ear brings out still more clearly the relative roles of place and frequency principles. Hearing in some degree is present throughout the vertebrate series, and even the lowest forms have frequency ranges of several octaves and exhibit a fair degree of tonal discrimination. Yet, these lower forms possess an ear of exceeding simplicity, an otic sac in which there is little evidence of mechanical differentiation. It follows that these primitive ears must operate as frequency receptors, with ranges and discriminative abilities that depend simply upon the patterns of impulses conveyed by the auditory nerve. It appears, therefore, that in the history of the ear the frequency principle came first as the basis of tonal reception, and only sometime later in the course of evolutionary development was it joined by the place principle.
23(1951); http://dx.doi.org/10.1121/1.1906760View Description Hide Description
One of the central problems in auditory theory is to reconcile (a) the acute perception of slight changes in pitch displayed by the human listener with (b) the broad tuning of his cochlear analyzing mechanism. This paper attempts to describe and to relate a number of theoreticalsolutions to that problem. The hypotheses involve mechanisms, both mechanical and neural, for sharpening the analysis inherent in the cochlear transformation from frequency of stimulation to locus of vibration. These mechanisms operate in the domain of place—they are place theories that supplement the classical place theory. In a future paper, we plan to describe and discuss other sharpening mechanisms that operate in the domain of time.
23(1951); http://dx.doi.org/10.1121/1.1906761View Description Hide Description
An artificial mastoid for testing bone conduction receivers is described. It consists essentially of a stiff metal bar which has a fundamental resonant frequency above the measurement range, with a strain‐sensitive translating element fastened to the under side and a compliant pad on top.
The translating element is a slab of activated ceramic, which is essentially invariant to humidity and temperature changes. The compliant pad generally used is Koroseal No. 74, which simulates the flesh over the mastoid prominence reasonably well. Koroseal No. 15 simulates flesh better but is less rugged. Calibration methods are discussed.
23(1951); http://dx.doi.org/10.1121/1.1906762View Description Hide Description
Speech articulation and quality tests were made on an all‐pass system capable of advancing or delaying one frequency band relative to the rest of the spectrum. Parameters in the investigation were (a) width of the advanced or delayed band, (b) amount of advance or delay, and (c) position in the frequency spectrum of the advanced or delayed band. Data were taken at signal‐to‐noise ratios of 30 db and 0 db. The results indicate that maximum impairment of speech intelligibility and quality occurs when the delays and advances are of the order of , and when the band that is advanced or delayed is near the center of the speech spectrum and has an articulation index equal to 0.5. These findings are related to data from statistical studies of the timing of speech energy bursts.
23(1951); http://dx.doi.org/10.1121/1.1906763View Description Hide Description
The fundamental wavelength of a spherical resonator with a circular aperture is calculated. The result, , where a denotes the aperture and R the sphere radius, is the correct form of an expansion as far as terms of relative order (a/R)2, inclusive. A procedure for approximating to the wavelength of a resonator with arbitrary shape is also described.
23(1951); http://dx.doi.org/10.1121/1.1906764View Description Hide Description
A general expression is derived for the force owing to radiation pressure acting on an object of any shape and having an arbitrary normal boundary impedance. It is shown that boundary layer losses may lead to forces that are several orders of magnitude greater than the forces owing to classical radiation pressure. Steady forces arising from an asymmetric wave form are compared with the other forces. A sound wave, consisting of equal parts of fundamental and second harmonic components, can cause forces ten or more orders of magnitude greater than the forces owing to radiation pressure to be exerted on small particles.
23(1951); http://dx.doi.org/10.1121/1.1906765View Description Hide Description
A method for treating the scattering and absorption of sound in the presence of non‐uniform boundary conditions is applied to the case of an infinite strip of material (of given width) placed on an infinite, otherwise hard, wall. The strip is assumed to be characterized by a normal acoustic impedance that may possess a resistive component.
The wave equation is reformulated as an integral equation over the strip. A variational expression is found for the amplitude of soundscattered by the strip in any direction. The total cross section for scattering plus absorption is determined from the amplitude of specular reflection according to a well‐known cross‐section theorem. Comparison with Pellam's results for absorption by such a strip shows that the methods used have an accuracy better than 2 percent over the entire frequency range.
23(1951); http://dx.doi.org/10.1121/1.1906766View Description Hide Description
Sound scattering from a sphere of arbitrary size is treated theoretically when the acoustic properties of the sphere are near those of the surrounding medium. Closed form analytic expressions are found for the reflectivity and total cross section. These expressions become exact only in the limit as the ratio of the densities and ratio of the speeds approach unity, but it is shown that for ratios as large as 5/4 and probably as large as 3/2 the approximate reflectivity and total cross section compare favorably with results of exact calculation. Calculations of reflectivity and total cross section, which would require weeks if made from the exact solution of the wave equation, may be completed in a few hours using the approximate closed form expressions.
23(1951); http://dx.doi.org/10.1121/1.1906767View Description Hide Description
The reflection of a spherical sound wave from a wall with the boundary conditions expressed in terms of a normal impedance independent of the angle of incidence is treated. It is shown that the integral for the reflected wave can be evaluated exactly in closed form under certain conditions. The solution given for an arbitrary normal impedance involves a slight approximation of the integral. The reflected wave is brought into a form such that it can be considered originating from an “image source” having a certain amplitude and phase. Graphs for determining this amplitude and phase are given in terms of a “numerical distance,” which depends on the normal impedance and the position of the field point. Pressure distributions around point sources for different wall impedances are shown. The limitations in simulating plane wave conditions at a boundary and the corresponding effect on free field methods of measuringacoustic impedance are discussed.
23(1951); http://dx.doi.org/10.1121/1.1906768View Description Hide Description
The nonspecular reflection of plane waves of sound from certain surfaces composed of absorbent bosses (semicylinders or hemispheres of arbitrary impedance) on an infinite plane of ∞ or 0 impedance is considered. Exact solutions are obtained for the problem of the single boss and then extended, subject to the single‐scattering hypothesis, to obtain far field solutions for certain planar distributions of bosses of radii small compared with the wavelength. The results are compared with those obtained previously for non‐absorbent bosses, and it is shown that the effects of the finite impedance are most pronounced in the simple source terms of the scattered components and may lead to either a decrease or an increase in the radiation reflected at the specular angle. Another effect of the finite impedance (for the small finite distributions) is to shift the critical value of the angle of incidence for which the reflection at the specular angle consists only of the specular component—below this value the reflection at the specular angle being a minimum and above it a maximum. For the infinite uniform random distributions it is found that the effect of the bosses is essentially but to change the impedance of the plane—these effective impedances being functions of the angle of incidence and the parameters and distribution of the bosses. The effect of the finite impedance of the bosses is most pronounced for these distributions yielding terms much lager than those previously retained for the nonabsorbent bosses. The results for the analogous distributions of cylinders and spheres are also given.
23(1951); http://dx.doi.org/10.1121/1.1906769View Description Hide Description
The measured transmittivity of a steel plate in water is presented as a function of the angle of incidence and the product of frequency and plate thickness over wide ranges.
The normal velocity of the plate surface can attain an amplitude necessary for good transmission only by constructive interference among internal reflections. It is shown that the ideal conditions can be met in a plate of finite width in only a few cases. In general, the conditions for a transmission maximum are the conditions for the existence of appropriate types of stable traveling waves in a plate of infinite extent; these conditions, however, are modified by edge effects. An apparent effect of this modification is to produce changes in the divergence of the transmitted beam and hence in the observed transmittivity.
23(1951); http://dx.doi.org/10.1121/1.1906772View Description Hide Description
It is assumed that acoustic background noise is caused by a distribution of random “white” noise sources whose physical mechanism is unspecified. A law expressing the amplitude‐distance attenuation characteristic of the medium is also assumed. Several distributions of noise sources are considered: uniform volume distributions, uniform surface dipole distributions, and two mixed cases. The noise drop‐off with frequency at a point below the surface is found for each case. For an infinite volume of noise sources, this drop‐off is 6 db/octave at all frequencies. It is shown how this simple model can be generalized to other attenuation laws and other spatial and amplitude distributions of noise sources.
23(1951); http://dx.doi.org/10.1121/1.1906773View Description Hide Description
To obtain information the characteristics of relaxation phenomena in liquids, measurements of ultrasonic absorption were performed in seven binary mixtures having nitrobenzene as one component.
The absorption coefficients in some systems (benzene, chloroform) confirm the results obtained previously in mixtures formed by an unassociated, very absorbing liquid and another component with much smaller absorption coefficient. In this case, the absorption coefficient decreases quickly when small quantities of the second liquid are added to the first one. The explanation of this behavior seems to be found in the fact that an increase of the efficiency of collisions between the molecules occurs. When different molecules are added to the high absorbing liquid, they decrease the time necessary to establish equilibrium among the internal degrees of freedom of the high absorbing molecules.
In other mixtures the two components have similar absorption coefficients. These systems are of great interest, because the characteristic behavior of the ultrasonic absorption in the mixture then depends on the nature of both pure liquids. If the two liquids are unassociated and their molecules do not interact strongly between themselves, the curve of the absorption coefficientversus concentration presents a very clear minimum. If, instead, there are strong interactions, the shape of the curve is altered. This would indicate the presence of an additional type of energy loss.
In the systems of nitrobenzene and an alcohol, the absorption coefficient, as a function of the mole fraction, has a maximum at an intermediate concentration, just as happens in some mixtures of water and alcohols.
Physical Factors Involved in Ultrasonically Induced Changes in Living Systems: II. Amplitude Duration Relations and the Effect of Hydrostatic Pressure for Nerve Tissue23(1951); http://dx.doi.org/10.1121/1.1906774View Description Hide Description
The results of experiments with frogs under a hydrostaticpressure demonstrate that cavitation is not an important factor in the mechanism of production of paralysis of the hind legs of frog by ultrasonic (frequency one megacycle) irradiation over the lumbar enlargement region of the spinal cord. Experimental results indicate that a linear relation exists between the reciprocal of the minimum exposure time for paralysis and the acoustic amplitude. This result is readily described in terms of a one factor rate process. On the basis of this experimentally determined relation, it is shown that time rate of change of temperature cannot be correlated with the observations. It is concluded on the basis of a theoretical calculation that absorption of ultrasound at interfaces in the spinal cord does not result in minute hot regions.
Further work on summation of subparalytic doses, spaced apart at various time intervals, indicates that the recovery process following exposure to a subparalytic dose of ultrasonic radiation may not be a monotonic function of time.
- LETTERS TO THE EDITOR
23(1951); http://dx.doi.org/10.1121/1.1906775View Description Hide Description | <urn:uuid:68bedb6b-5033-4fc9-91ca-5c7069e24b0c> | CC-MAIN-2015-35 | http://scitation.aip.org/content/asa/journal/jasa/23/3 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064160.12/warc/CC-MAIN-20150827025424-00104-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.922782 | 3,806 | 3.21875 | 3 |
Social media isn’t really “new.” While it has only recently become part of mainstream culture and the business world, people have been using digital media for networking, socializing and information gathering – almost exactly like now – for over 30 years:
The Phone Phreaking Era (1950s – Early 90s)
Social media didn’t start with computers; it was born on “line” – on the phone. Phone phreaking, or the rogue exploration of the telephone network, started to gain momentum in the 1950s. Phone phreaks weren’t motivated by fraud; rather, they were technophiles and information addicts trapped in a telecom monopoly long before Skype or “free nights and weekends” existed. (Calling a friend in another state could rack up a $40/hr charge.)
These early social media explorers built “boxes“… homemade electronic devices that could generate tones allowing them to make free calls and get access to the experimental back end of the telephone system. Phreaks sniffed out telephone company test lines and conference circuits in order to host virtual seminars and discussions.
Apple Co-founders Steve Jobs (left) and Steve Wozniak (right) phreaking with homemade blueboxes – image: woz.org
The first real “blogs” / “podcasts” took place on hacked corporate voice mail systems called “codelines,” where phone phreaks would hack into unused mailboxes and set up shop until they were discovered and kicked out. You’d call a corporate 1-800 number, enter an extension and hear recorded audio broadcasts packed with social greetings and useful phone phreaking content: hacked calling card codes to make free calls, “bridges” (audio conference call lines), and plugs for other codelines. You could leave your comments and information as a voice mail, and the phreak would likely respond to you in his next update.
image: sahaja meditation @ flickr
The first “tweetup”-type social media events were 2600 meetings. I fondly remember my first one in 1993… in the back of a Ft. Lauderdale bowling alley… with lots of fast food, stolen BellSouth telephone equipment and industrial music-influenced fashion.
Bulletin Board Systems (BBSes) (1979 – 1995)
image: Robert Watson
The first BBS, or electronic “Bulletin Board System,” was developed by Ward Christensen and opened to the public in 1979. The first BBSes were small servers powered by personal computers attached to a telephone modem, where one person at a time could dial in and get access. BBSes hosted social discussions on message boards, community-contributed file downloads, and online games.
In the 1980s, the social media scene had a very edgy, underground flavor. There were some legitimate BBSes that offered “shareware” only, but a fair percentage of them had secret “adult” or pirate software rooms in the back. Many were strictly underground – dedicated exclusively to niches like warez (pirated software), H/P (explicit hacking and phreaking information discussion), Anarchy (articles on fraud, bomb making, drug chemistry), and Virus code for download. “Handles” or online pseudonyms were the norm. Real names were closely guarded and generally only revealed to real-life friends (or in the newspaper story when someone got arrested).
Commercial Online Services (1979 – 2001)
Prodigy offered a clean-shaven, moderated social networking environment in the early 90s
Online services, like Prodigy and CompuServe, were the first large-scale corporate attempts to bring an interactive, “social” online experience to the masses. Online services rose to popularity alongside BBSes and catered to a more corporate, mainstream home-user crowd. They offered a safe, moderated environment for social networking and discussions.
CompuServe was infamous for its high cost ($6 per hour, plus long-distance telephone charges adding up to almost $30/hr) – but it offered the first online chat system, CB Simulator, in 1980. The first real-life wedding of a couple who met via real-time internet chat happened shortly thereafter and was featured on the Phil Donahue show. Prodigy launched nationwide in 1990, growing quickly in popularity for its color interface and lower cost.
AOL brought the social features on the web into the mainstream.
Later, America Online (AOL) gained critical mass with aggressive CD promotions and direct mail campaigns. AOL also scored one of the most epic product placements of all time in the 1998 film “You’ve Got Mail” starring Tom Hanks – bringing “social” online culture and romance into the Hollywood mainstream.
The Dawn of the World Wide Web – 1991
The internet had existed as a network since the late 1960s, but the World Wide Web became publicly available on August 6, 1991.
The Well was a Bay Area BBS that evolved into an ISP and web community.
At the beginning of the 90s, internet access was available only to those with legitimate university, government or military connections (and to hackers). But around 1994 or 1995, private internet service providers (ISPs) began to pop up in most major metro areas in the United States. This gave millions of home users the chance to enjoy unfiltered, unlimited online experiences. Usenet was the first center for most of the high-end discussion – but early internet users were extremely outspoken and opinionated by today’s standards. The first online social media etiquette standards, called netiquette, were proposed in reaction to the rampant flaming, to keep things somewhat civilized.
By the late 90s, internet forums grew in popularity and began replacing Usenet and BBSes as the primary nexus for topical discussions.
IRC, ICQ and Instant Messenger
IRC was a popular way to chat and share links in the 90s
People have been addicted to “tweeting” their real-time status updates (using hash tags (#) and at-signs (@)) for over 20 years. IRC, or Internet Relay Chat, was created in August 1988 by Jarkko Oikarinen. It was notably used to break news on the Soviet coup attempt during the media blackout and to keep tabs on the first Gulf War. Many people stayed logged into IRC constantly… using it to share links and files and keep in touch with their global network – the same way Twitter is used today.
ICQ technology raised many important questions, such as: “What R U wearing?”
IRC clients were primarily UNIX-based… but in 1996 four Israeli technologists invented an instant messenger (IM) system for desktop computers called ICQ. It was quickly purchased by AOL and became a mainstream hit. IM technology helped develop the emotional lexicon of social media, with avatars (expressive images to represent yourself), abbreviations (A/S/L? = age, sex, location?) and emotion icons (or emoticons).
P2P – BitTorrent – and “Social” Media Sharing
The “Summer of Music” in 1999 after Napster’s debut was an exciting time for music consumers.
Napster, a peer-to-peer file-sharing application that went live in June 1999, marked a radical shift of distribution power from record companies to the consumer. I’ll never forget the (unprecedented) technological thrill of downloading an album in .MP3, burning it to CD on an external $500 drive, and playing it in my car. Music started to freely flow across the internet at an astonishing pace, stripped of hype and payola… on the merit of real people’s tastes and personal collections. The online music party raged through 1999 and 2000 (just like the tech stocks), until it was declared “illegal” and Napster was forced to filter out all the copyrighted content.
Competing peer-to-peer applications like Limewire took Napster’s place – until BitTorrent technology arrived and provided a robust, decentralized way to share files without being blocked. The Swedish website The Pirate Bay became a cult online destination for “social” media distribution.
Social Networking & Social News Websites
The first social networking website was SixDegrees, which let people make profiles and connect with friends beginning in 1997. This kind of interactive, social web application style became popularly known as “Web 2.0,” and it really gained momentum with Friendster around 2002-2003… followed by MySpace (2004 – 2006) and then Facebook (2007 onward).
Digg gives people a constant, community-filtered stream of potent & engaging content.
Slashdot got famous for generating tons of traffic and buzz around its editor-picked stories, but the modern social news revolution took off when Digg gained critical mass in late 2006 and sites like StumbleUpon and Reddit followed. Delicious became popular as a way to share bookmarks of static pages.
The Real-Time Statusphere & Location-based Social Web (2008 – ???)
Twitter is a form of communication that people needed, even though they didn’t ask for it.
Location-based software will unlock the mobile experience to its full potential.
The iPhone was the tipping point for hardware: a functional mobile web browser after a decade of delayed hopes and false promises from other manufacturers. Location-based social networking sites like BrightKite allow people to use their mobile devices to “check in” at public locations and be seen by other network members who are physically close by, letting people transcend the awkward social taboos against interacting with strangers in public places.
Google is trying to build an indispensable, real-time social web app with Wave.
What’s around the corner? No one can say for sure, but Google’s Wave looks like a promising new tool to bring productivity to real-time social media… allowing people to actively co-create and collaborate on projects, documents and events… not just announce them.
What’s your history with social media? What were the most exciting moments and milestones on your own personal journey?
Brett Borders, the author of this article, is a professional copywriter who specializes in increasing website sales and signup rates.
common name: predatory gall midge (unofficial common name)
scientific name: Feltiella acarisuga (Vallot) (Insecta: Diptera: Cecidomyiidae)
The predatory gall midge, Feltiella acarisuga (Vallot), is one of the most effective and widespread natural enemies of spider mites (Tetranychidae) (Gagné 1995). It is a particularly important natural enemy of the twospotted spider mite, Tetranychus urticae Koch, in a number of cropping systems (Opit et al. 1997). Feltiella acarisuga could be particularly useful for integrated pest management of spider mites that attack greenhouse crops (Gillespie et al. 1998).
Feltiella quadrata (Gagné 1995)
The genus Feltiella is virtually cosmopolitan and contains eight species: Feltiella acarisuga (worldwide, except for the Neotropical Region), Feltiella pini (Felt) (North and Central America and West Indies), Feltiella curtistylus Gagné (Brazil), Feltiella occidentalis (Felt) (U.S.- California), Feltiella acarivora (Zehnter) (Indonesia- Java), Feltiella insularis (Felt) (eastern U.S., West Indies and Colombia), Feltiella reducta Felt (northeastern U.S. - New York), and Feltiella ligulata Gagné (Cape Verde Is.) (Gagné 1995). Feltiella acarisuga is the most widely distributed species in the genus and is listed from the U.S., Canada, Finland, Germany, U.K., Switzerland, Italy, Morocco, Greece, Israel, India, Sri Lanka, Taiwan, Japan and New Zealand. It is the only species of Feltiella found throughout most of Europe and Asia.
Egg: The shiny, translucent oblong eggs are deposited individually near prey mites on leaves. They are 0.1 x 0.25 mm in size (Koppert 1997). The eggs hatch within two days after oviposition and the larvae immediately begin to feed.
Figure 1. Eggs of the predatory gall-midge, Feltiella acarisuga (Vallot). Photograph by David R. Gillespie, Agassiz.
Larva: The orange-brown larvae vary in length from 0.2 to 2 mm during their four developmental instars (Koppert 1997). They forage for mites on leaves and feed for four to six days, depending on temperature, relative humidity and the abundance of prey mites (Gillespie and Raworth 1999). They feed exclusively on spider mites, attacking all developmental stages of several species. Feltiella acarisuga larvae occur in populations as large as 160 per cm² of eggplant leaf.
Figure 2. Larva of the predatory gall-midge, Feltiella acarisuga (Vallot). Photograph by Lance S. Osborne, University of Florida.
Pupa: The fluffy white, 1 to 1.5 mm long pupa requires four to six days to complete development and produce an adult (Koppert 1997). Pupation occurs mainly on the underside of a leaf next to a vein.
Figure 3. Pupa of the predatory gall-midge, Feltiella acarisuga (Vallot). Photograph by Lance S. Osborne, University of Florida.
Adult: The adult Feltiella acarisuga is a delicate pink-brown fly about 2 mm in length with long legs (Koppert 1997). Females have a five day life span and produce about 30 eggs. Males do not live as long as females. The sex ratio is about 1:1. Adult Feltiella acarisuga are not predaceous but drink water and nectar.
Figure 4. Adult of the predatory gall-midge, Feltiella acarisuga (Vallot). Photograph by David R. Gillespie, Agassiz.
In climates without extremely dry or cold seasons, every stage of Feltiella acarisuga is present year-round. Feltiella spp. apparently develop from egg to egg in 26 to 33 days, averaging around 29 days (Sharaf 1984); however, Feltiella acarisuga requires about 15 days (Gillespie et al. 1998). Reproduction and development occur at 15-25°C. Eggs and larvae do not survive above 30°C or below 30% relative humidity. At least 50% relative humidity is required for a normal rate of development. The optimum temperature and relative humidity combination is about 20°C and 90% relative humidity. However, with an abundance of prey, the level of predation remains constant over the developmental range of temperature and relative humidity (Gillespie et al. 1998). If prey populations are sub-optimal, larvae can survive by pupating at a smaller size. Larvae also can survive for several days without prey.
Feltiella acarisuga can be used to manage spider mite populations in a variety of greenhouse and field crops, especially when incorporated into a bio-intensive IPM program. In eggplant, for example, Feltiella acarisuga has appeared naturally and reduced spider mite numbers by more than 40% (Sharaf 1984). Each midge larva can consume an average of at least 15 adult mites, 30 mixed developmental stages, or 80 eggs per day. Weekly releases of 1000 individuals per ha have been extremely effective for controlling spider mites on tomato, pepper and cucumber (Gillespie et al. 1998). In addition, Feltiella acarisuga (sold as Therodiplosis persicae) is being used to manage spider mites on strawberries and various ornamental crops. A trial rate of 200 to 1000 individuals per ha, released weekly, is recommended for growers. The weekly release rate is approximately doubled for heavy infestations: 2,500 adults per ha for six successive weeks (Biobest 1999).
It is highly advised that Feltiella acarisuga be released in combination with the predaceous mite Phytoseiulus persimilis, a well-established natural enemy used to control spider mites. Feltiella acarisuga is more mobile as an adult than the predatory mite and, once established, eats at least five times as many spider mites (Biobest 1999). However, Phytoseiulus persimilis should not be released where Feltiella acarisuga is becoming established, because the mites are known to eat midge eggs when prey is limited (Gillespie 1998).
Feltiella acarisuga pupae are commercially available from several producers and suppliers of natural enemies (http://www.anbp.org/ and http://www.cdpr.ca.gov/docs/ipminov/ben_supp/contents.htm). Pupae are shipped on leaves or an inert substance in various containers, such as 1-liter pots. Pots are placed in the crop on the ground at the beginning of rows and their lids are pierced to release the adult midges. It is best if the midges are released near concentrations of spider mites. To establish, Feltiella acarisuga requires fairly large prey populations (Gillespie and Raworth 1999). The pots should be stored in the dark for no more than two days at 10 to 15°C (Koppert 1997). Adults should be released from containers every 24 hours, late at night or early in the morning because of the cooler and more humid conditions. The RH should be kept above 80%, if possible (Gillespie and Raworth 1999).
It is essential to avoid non-target side effects of chemical pesticides, such as Thiodan, Diazinon, and Kelthane; however, most fungicides are safe to use with Feltiella acarisuga (Gillespie and Raworth 1999). Sulfur products used as dusts or sprays do not cause mortality in larvae but females avoid laying eggs on treated plants. Another concern is parasitization of Feltiella acarisuga larvae by Aphanogmus floridanus, potentially a very abundant parasitoid during warmer months. However, if necessary, releases can be timed to avoid the parasitoid because unlike the parasitoid Feltiella acarisuga does not diapause during the cooler months. Feltiella acarisuga parasitized by Aphanogmus floridanus have pupal cases with characteristic round emergence holes.
- Biobest NV. (1999). Therodiplosis persicae. Belgium. http://www.biobest.be/ (10 August 2012).
- Gagné RJ. 1995. Revision of tetranychid (Acarina) mite predators of the genus Feltiella (Diptera: Cecidomyiidae). Annals of the Entomological Society of America 88: 16-30.
- Gillespie DR, Raworth DA. 1999. Biological Control of Twospotted Spider Mites on Greenhouse Vegetable Crops. Agriculture and Agri-Food Canada. P. 30-32.
- Gillespie DR, Roitberg B, Basalyga E, Johnstone M, Opit G, Rodgers J, Sawyer N. 1998. Biology and application of Feltiella acarisuga (Vallot) (Diptera: Cecidomyiidae) for biological control of twospotted spider mites on greenhouse vegetable crops. Pacific Agri-Food Research Centre (Agassiz)Technical Report, No. 145. Agriculture and Agri-Food Canada.
- Koppert BV. (1997). SPIDEND. Feltiella acarisuga. Netherlands. http://www.koppert.nl/ (10 August 2012).
- Opit GP, Roitberg B, Gillespie DR. 1997. The functional response and prey preference of Feltiella acarisuga (Vallot) (Diptera: Cecidomyiidae) for two of its prey: male and female twospotted spider mites, Tetranychus urticae Koch (Acari: Tetranychidae). Canadian Entomologist 129: 221-227.
- Osborne LS, Ehler LE, Nechols JR. (July 1999). Biological control of the twospotted spider mite in greenhouses. (10 August 2012).
- Sharaf NS. 1984. Studies on natural enemies of tetranychid mites infesting eggplant in the Jordan Valley. Zeitschrift für Angewandte Entomologie. 98:527-533. | <urn:uuid:047c5748-d789-4783-9839-47473a426c19> | CC-MAIN-2015-35 | http://entomology.ifas.ufl.edu/creatures/beneficial/f_acarisuga.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00339-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.828331 | 2,300 | 3.21875 | 3 |
The Connecticut State Department of Education (CSDE) has published a resource document on assistive technology (AT) called the Connecticut’s Resource Guide of Assistive Technology, Supports and Accommodations for Daily Instruction and Formative, Interim and Summative Assessments.
The Individuals with Disabilities Education Improvement Act (IDEA) states that AT must be considered in the development of the Individualized Education Program (IEP). Students often need assistive technology supports to access instruction and participate in assessments.
The purpose of Connecticut’s Resource Guide of Assistive Technology is to inform educators, instructional staff, parents and students about available resources for consideration during instruction and highlight elements for individualized supports/accommodations that mirror supports utilized during instruction for assessment such as on the Connecticut’s Alternate Assessment (CTAA) and the Smarter Balanced Assessment Consortium (SBAC).
Utilizing these resources in conjunction with the Connecticut AT Guidelines during instruction as well as assessment, provides students access to enriched educational experiences, prepares them to be career and college ready, and ensures that positive educational outcomes can be realized for all students.
If you have questions related to the AT Resource Guide and/or AT Guidelines, please contact Thomas Boudreau, Bureau of Special Education, at 860-713-6925 or email@example.com.
The Connecticut State Department of Education (CSDE) together with the University of Connecticut A.J. Pappanikou Center for Excellence in Developmental Disabilities Education (UCEDD) are hosting the Summer Symposium for Supervision and Management of Paraeducators focused on helping school districts to improve their supervision and management of paraeducators. The event will be held on July 20-21, 2015, from 8:30 a.m. to 2:30 p.m. at the University of Connecticut Greater Hartford Campus, 85 Lawler Road, West Hartford, CT 06117. The cost to attend is $50.00. If you have questions related to this event, please contact Christine Sullivan at firstname.lastname@example.org or call 860-679-1500.
The ParaPro Assessment is a test developed by Educational Testing Service (ETS) and adopted by the Connecticut State Board of Education to comply with the No Child Left Behind Act. The following are two newly established locations to take the ParaPro Assessment in Connecticut:
The Individuals with Disabilities Education Improvement Act (IDEA) Section 300.172 addresses access to instructional and educational materials in a timely manner by individuals who are blind or have other print-related disabilities through the establishment of the National Instructional Materials Access Center (NIMAC) and the adoption of the National Instructional Materials Accessibility Standard (NIMAS).
Connecticut has adopted the NIMAS standard, and in a recent AEM – Topic Brief defines timely manner as “all reasonable efforts will be made by the local education agency (LEA) to ensure that accessible educational materials (AEM) are provided to children with disabilities who need accessible formats of educational materials at the same time as other children receiving their educational materials”.
If a student is identified by the planning and placement team (PPT) as having a print-related disability (e.g., blindness, visual impairment, physical limitations and specific learning disability in reading), which impacts the student’s ability to access curriculum, then the PPT may determine, as the competent authority, that the student qualifies to receive AEM produced in specialized formats as delineated on the individualized education program (IEP) through an accessible media producer and/or the NIMAC.
A helpful AEM – Flowchart with AEM – Scenarios that indicate how to determine the need for AEM, and sources which may be acquired in order to allow the student access to AEM in a timely manner, are available on the Connecticut State Department of Education website under NIMAS/NIMAC (http://www.sde.ct.gov/sde/cwp/view.asp?a=2626&q=322684).
If you have questions related to AEM and the NIMAS/NIMAC process, please contact Thomas Boudreau, Bureau of Special Education, at 860-713-6925 or email@example.com.
Wrapping up her second month in the position of Special Education Bureau Chief, Dr. Isabelina Rodriguez makes introduction to the field and offers some recently created guidance on Independent Educational Evaluations (IEE) and a Kindergarten Option. Visit the links below to review her introduction and the important guidance documents.
New Special Education Bureau Chief Introduction
Independent Educational Evaluation Guidance Memorandum
The Connecticut State Department of Education, Bureau of Special Education, has identified that the Brigance Inventory of Early Development III (IED-III) – Early Childhood Edition, will be the single statewide assessment instrument that will be utilized to collect and report Early Childhood Outcome (ECO) information for the State Performance Plan (SPP) and Annual Performance Report (APR). By June 30, 2015, all school districts will have received a Brigance IED-III Early Childhood Edition assessment manual and a small package of IED-III Record Booklets that have been purchased by the Bureau of Special Education for school districts. Implementation of the IED-III Early Childhood Edition to collect and report ECO information is effective July 1, 2015.
A letter has been disseminated to all Directors of Special Education identifying the change in the assessment instrument to collect ECO information. To access that letter, please click on the following link: ECO Change Assessment Instrument (3-2015)
To order additional IED-III assessment manuals and/or IED-III Record Booklets, please click on the following link: ECO Ordering the Brigance IED-III
The Connecticut State Department of Education (CSDE), Bureau of Special Education, has directed all school districts to convert to the Brigance® Inventory of Early Development III (IED-III) – Early Childhood Edition, effective July 1, 2015, to collect and report Early Childhood Outcome (ECO) information. The following outlines important timelines for completing all post-tests using the Brigance IED-II and when to start using the Brigance IED-III. Please click on the following link: ECO IED-III Timelines Document
The Bureau of Special Education’s 12th Annual Back to School Meeting will be held on Wednesday, September 16, 2015, at the Crowne Plaza Hotel in Cromwell, CT.
Registration information will be sent out at the end of August.
We look forward to seeing you there!
The Connecticut State Department of Education (CSDE) has appointed a Bureau Chief for the Bureau of Special Education (BSE). Dr. Isabelina Rodriguez was recommended and has accepted the bureau chief position. It is anticipated that Dr. Rodriguez will join the Department in late March.
In year 30 as an educator, Dr. Rodriguez began her career as a special education teacher in the Springfield, Massachusetts public schools before being promoted as a supervisor of special education in that district. After nine years in the Springfield school system, Dr. Rodriquez worked in Northampton, Massachusetts public schools, first as the director of pupil services for 10 years and then as superintendent of schools for six years. Dr. Rodriguez currently serves as the superintendent of schools for the Granby, Massachusetts public schools, a position she has held since January 2011.
As superintendent, Dr. Rodriguez has presented on issues related to special education, English learners, teaching strategies, and school committee relations and budgeting at state and regional conferences. She has also held various adjunct faculty positions in education and for the past six years with the American International College in Springfield, Massachusetts. She has served on many state advisory committees and has represented the Massachusetts’ superintendents at the state level on the executive board of the Massachusetts Association of School Superintendents and as Massachusetts’ national representative of the Association of School Superintendents.
Dr. Rodriguez brings a wealth of experience to the CSDE and Connecticut in the role of state director for special education and chief of the BSE. In addition to her considerable knowledge in the area of special education, Dr. Rodriguez has led systemwide school change efforts to raise the achievement levels of all students, including students with disabilities and other student populations. In her administrative positions, Dr. Rodriguez has consistently demonstrated a collaborative, inclusive approach and has proven to be a highly effective educational leader. She supports Connecticut’s current reform efforts and will be a welcome member of the CSDE management team.
The State Department of Education, Bureau of Special Education is providing guidance on a school district’s ability to disclose personally identifiable information from education records in connection with an education placement made under the Individuals with Disabilities Education Act (IDEA), Part B. The guidance is based upon clarification from the Family Policy Compliance Office (FPCO), which indicates that a student’s education records can be provided to proposed out-of-district placements without parent consent so long as the district’s annual notification of rights includes that the district discloses education records for enrollment purposes or the district makes a reasonable attempt to notify the parent in advance of such disclosure.
To see the entire guidance document, please click on this link:
Sending Educational Records to Proposed Out-of-District Placements without Parent Consent
The Connecticut State Department of Education (CSDE), Bureau of Special Education (BSE) recently issued guidance via e-mail distribution to directors of special education and pupil services. The guidance issued by Dr. Patricia Anderson, consultant with the BSE, was specific to the Individualized Education Program (IEP) form and related documents. The same information provided to directors has also been sent to vendors of electronic IEPs. The e-mail distributed to directors of special education and pupil services as well as IEP vendors is provided along with related IEP pages. Related documents can be accessed by clicking the links below. Please review the information carefully.
Access guidance e-mail and updated documents here:
Email guidance of updates to the Connecticut Individualized Education Program
IEP 2014-15 Page 1 10-1-14 R2 (MS Word Doc.)
IEP 2014-15 Page 1 10-1-14 R2 (PDF)
IEP 2014-15 Page 12 9-29-14 (MS Word Doc.)
IEP 2014-15 Page 12 9-29-14 (PDF)
ED625 -Consent Initial Eval 10-1-14 (MS Word Doc.)
ED626 -Consent Initial Provision Spec.Educ. 10-1-14 (MS Word Doc.)
ED627 -Consent ReEval 10-1-14 (MS Word Doc.) | <urn:uuid:65663c49-9a7b-4e1c-8f3e-c8b0da44a923> | CC-MAIN-2015-35 | http://ctspecialednews.org/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645305536.82/warc/CC-MAIN-20150827031505-00279-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.92703 | 2,232 | 2.609375 | 3 |
Archive for the ‘Detroit Electric’ tag
Thomas Edison’s 1889 Electric Runabout. Images courtesy Automotive Hall of Fame.
Today, electric cars are seen by most consumers as an expensive novelty, delivering modest driving ranges, long recharging times and unattractive sticker prices compared to fossil-fueled alternatives. At the dawn of the 20th century, however, electric cars represented a legitimate (and some said superior) alternative to gasoline-fueled motor cars. A new exhibit at the Automotive Hall of Fame celebrates Thomas Edison’s work on the development of the ideal battery for the electric car, prompting visitors to wonder what may have been had electric vehicles won out over gasoline power.
Edison was a strong proponent of the electric car, and the inventor believed that conventional, readily available lead-acid batteries were a poor choice for use in electric vehicles. Lead-acid batteries were seen as unreliable, and given the poor state of roads in the early years of motoring, too fragile for practical use in automotive applications. After testing numerous combinations of materials, Edison developed a storage battery featuring a nickel cathode and an iron anode, not dissimilar to a modern nickel-cadmium rechargeable battery. In fact, Waldemar Jungner, the inventor of the nickel-cadmium battery, had even experimented with nickel-iron combinations prior to Edison, with little success.
Compared to lead-acid alternatives, Edison’s nickel-iron storage battery served up several benefits, including greater durability, quicker recharging, improved longevity and resistance to overcharging, making them seem ideally suited for a consumer product subject to frequent cycling and punishing use.
A salesman’s sample of the Edison nickel-iron battery.
Edison’s nickel-iron storage battery was patented in 1901, and in the same year Edison founded the Edison Storage Battery Company in
East West Orange, New Jersey, with the primary intent of developing storage batteries for electric cars. Though his company had previously supplied nickel-iron batteries for some Baker Electric cars, in 1912 Edison signed an exclusive deal with Detroit Electric, which soon overtook Baker as the highest-volume electric car manufacturer in the United States.
The Edison Storage Battery Company was quick to point put the advantages of electric power over gasoline power, and chief among them was ease of use. A 1912 consumer pamphlet advised that electric cars were “always ready” (assuming the batteries were charged), and required no cranking to start. Once running, no adjustments to ignition timing or fuel mixture were necessary, and electric cars produced “no noise,” “no odor,” and “no smoke.” Furthermore, there were “no gears to shift,” and “no clutch to throw,” meaning that women were clearly the first targeted demographic for the electric car. Electrics were popular with physicians as well, primarily due to their reliability and ease of starting.
As is often the case with modern electric cars, price was a stumbling block to broader acceptance of Edison Battery-equipped cars. Opting for the premium Edison nickel-iron battery over a lead-acid alternative added $600 (roughly the price of a Ford Model T) to the purchase price of a Detroit Electric car, pushing it beyond affordability for the masses. Few objected to the car’s claimed 80-mile range, and its top speed of 20 MPH was considered sufficient for urban and suburban use.
Cost aside, the nickel-iron battery also suffered from performance issues in cold weather, but its biggest foe would prove to be cheap and readily available gasoline. Improvements in internal combustion engine technology soon made the advantages of electric cars moot, and the Edison Storage Battery Company turned its attention towards finding industrial uses for its products. Nickel-iron batteries found success in railroad and mining applications, and the Edison Battery Company manufactured products until 1972, when it was acquired by Exide Technologies.
The Automotive Hall of Fame exhibit will feature an 1889 Electric Runabout, built by Thomas Edison and used to highlight his battery development efforts. Period photographs and story boards paint a picture of what it was like to manufacture storage batteries, and comparative charts show the advantages and drawbacks of electric and period internal combustion engines. For more information on the museum or on the Thomas Edison and the Electric Storage Battery Exhibit, visit AutomotiveHallofFame.org.
Ford’s 1963 Mustang II concept car. Photo courtesy Ford Motor Company
Detroit is broke. Not the auto industry – that’s coming back quite strong, even if employing a fraction of the workers it did just a few decades ago. It’s the city itself that is broke. And while GM and Chrysler got a reprieve from the gallows by a strong federal push for their “precision” bankruptcies in 2009 that erased their debts, there is virtually no chance of that happening for Detroit.
The city is in danger of being taken over by the Michigan state government as part of the controversial new emergency manager’s law that allows the governor to appoint a non-elected manager to oversee failed cities and school districts. Already, Detroit’s public school system and the cities of Pontiac and Flint are under emergency management.
As recently pointed out by Ronnie Schreiber at Cars In Depth, that emergency manager’s law gives the appointed manager the ability to sell or privatize assets that could be used to pay off municipal debts. Among the most appealing of Detroit’s assets is the Detroit Institute of Arts, a world-class art museum with a value potentially over a billion dollars. DIA’s 60,000-plus-item collection includes several renaissance works by the old masters, impressionists and even iconic works by the likes of pop artist Andy Warhol.
Of greater interest to automotive enthusiasts, it also includes a number of significant cars, including Henry Leland’s personal 1905 Cadillac, John and Horace Dodge’s personal 1919 Dodge Brothers cars, the Detroit Electrics owned by Clara Ford (wife of Henry Ford) and Helen Newberry Joy (wife of Packard chief Henry Joy), a Stout Scarab, a Chrysler Turbine car, a pair of Packard Pan American show cars, the Ford Cougar II concept car, and the 1963 Ford Mustang II show car, each one a headline grabber should it ever come up for sale.
It was the very success of the auto industry that saw Detroit’s rise and it was the patronization of the arts by the captains of that industry many years ago that allowed the museum to build its significant collection. The DIA exists as a testament to the very success of the Motor City during its heyday. Unlike most art museums that exist as wholly separate, non-profit cultural institutions, the DIA is owned outright by the city, and any emergency manager would ostensibly be in charge of it disposition.
Although this is largely conjecture right now, and Deputy Mayor Kirk Lewis has been quoted as saying the sale of the art is “not a part of the plan right now,” the art community is understandably upset. Should there be an attempt to sell the DIA’s holdings, the public uproar and inevitable lawsuits would almost certainly drag on for years and years, rendering any such decision useless in all practicality for settling any immediate debts.
It’s pretty easy to identify what ails the city of Detroit, but selling off some of its most treasured assets would be a true shame.
UPDATE (24.May 2013): Since this article first appeared, Detroit has been placed under emergency management, and the emergency manager, Kevyn Orr, has warned that the DIA’s assets may be put up for sale to to settle the city’s debts. In response, the DIA posted the following statement to its Facebook page:
The DIA strongly believes that the museum and the City hold the museum’s art collection in trust for the public. The DIA manages and cares for that collection according to exacting standards required by the public trust, our profession and the Operating Agreement with the City. According to those standards, the City cannot sell art to generate funds for any purpose other than to enhance the collection. We remain confident that the City and the emergency financial manager will continue to support the museum in its compliance with those standards, and together we will continue to preserve and protect the cultural heritage of Detroit.
While in and around Albany last week to visit the Ford hydroelectric plant on Green Island, I took the opportunity to head over to Schenectady to investigate something I had read on Make just a few days prior: an electric car that was once owned by Charles Steinmetz.
As presented by Make, a couple of the facts were incorrect. This was quite obviously not an electric car from 1889, nor was it an electric car built by Thomas Edison. Rather, it was a 1914 Detroit Electric, and on the surface a rather unremarkable one at that, as 96-year-old electric cars go. But from the post I was able to ascertain that the electric was housed at the Edison Tech Center in Schenectady, a sort of museum and workshop designed to celebrate the engineering heritage of New York’s Capital District. This is the area, after all, that gave birth to General Electric, that witnessed the first demonstration of the electromagnet, and where you can find the oldest continually operating hydroelectric plant (in Mechanicville, several miles upriver from Ford’s plant, generating electricity since 1898).
While Edison and Tesla take credit for many of the advances in understanding and applying electricity around the turn of the century, Steinmetz seems to be the real hero to many electrical engineers. Unlike Edison, Steinmetz formulated a more scientific and less scattershot approach to problem solving, and unlike Tesla, Steinmetz never became a nucleus for free energy crackpot ideas. Born a hunchback, Steinmetz couldn’t easily drive a car, but he did enjoy being driven in one, so he purchased the 1914 Detroit Electric, a Model 48 Duplex Drive Brougham. It wasn’t an unusual choice for Steinmetz, a scientist in the employ of General Electric and a member of Union College’s electrical engineering faculty for a couple decades. Edison himself owned one, as did Henry Ford.
The Model 48 didn’t lead an easy life. Sometime after Steinmetz was finished with it – presumably by the time he decided to build his own electric car in 1922 – it ended up discarded in a field in Glenville, New York, where it would have rotted entirely to the ground had not Union College in Schenectady bought it in 1971 and taken on a full restoration of the car. The one change to the car the Union College professors and students made was to change out the original Edison batteries for modern deep-cycle batteries, though they held on to a few of the Edison batteries, which are now on display alongside the Detroit Electric at the Edison Tech Center.
By the way, the electric is licensed and registered and emerges from the museum once a year as part of the Union College commencement ceremonies. | <urn:uuid:b85d8d7a-123b-4c87-8e2e-031ee593404e> | CC-MAIN-2015-35 | http://blog.hemmings.com/index.php/tag/detroit-electric/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00281-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.961502 | 2,298 | 3.171875 | 3 |
from Educators' eZine
I stood in front of the classroom and turned on the power to the overhead transparency projector. Placing a paper printout on the projector, I was met with a projected shadow of the paper and advice from a polite student at Austin High School; "Mr. Frisk, you have to use a transparency with the projector. Paper will not work." Twenty-three years earlier, I had graduated from this school and had just returned as a non-certified substitute teacher, in front of a classroom for the first time. I had never appreciated the stand-alone transparency projectors, desiring something that would show an image printed on paper. A generation later, I discovered little had changed in the classroom, with the exception of white boards replacing black boards, a computer on the teacher's desk, and resemblances of my old classmates in the faces of their children I was now privileged to instruct. A few years later, on my first night in graduate school, I found the projector I desired.
A Better Way to Present
The document camera, commonly known as a video visualizer/presenter or Elmo, a popular brand, is a modern replacement for the stand-alone overhead transparency projector, a device first developed to train soldiers during WWII (Crystal, 2008). Document camera technology consists of a video camera mounted on a stand with an illuminated display platform under the camera lens. Material is placed on the platform and projected through an overhead-mounted projector or by way of a television set. The document camera can display almost anything placed under its lens, including transparencies, opaque materials such as black and white or colored paper printouts and 3D objects. There is no waiting in line at the copier to create a transparency minutes before class begins, and no standing in front of the class, getting in the way, while presenting material. A teacher can stand at the back of the classroom or off to one side and display material via an S-Video cable attached to the camera and leading through the ceiling to a projector or TV. Any printout or anything in a book can be displayed by setting it beneath the camera lens, and transparencies can be viewed by turning on a base light. The learning curve with this technology is short, often measured in minutes.
Document Camera Advantages
A dozen document camera advantages:
- Display paper printouts, slides and transparencies
- Display text and/or photos from a book
- Display three-dimensional objects
- Display and save "live" images
- Display in color or B & W
- Zoom in and out capability
- Long-lasting fluorescent lighting
- Ease and spontaneity of operation
- Students pay better attention
- Students can show off their work
- Useful across disciplines
- Does not require a computer or networking
Document Camera Disadvantages
The document camera's disadvantages are few, but they do limit its acceptance. They are:
- Far greater cost than stand-alone transparency projectors
- Require a projection unit or TV set
- Lack of familiarity among teachers and school administrators
Content Clarity and Time On Task
The document camera is a marvelous tool for increasing content clarity. A teacher can do live demonstrations under the camera lens to show students exactly what is being asked of them. An example is filling out a standardized test form, under the camera, so students can see how to correctly prepare the form. I did this during student teaching and my mentor teacher commented it was the first time in his twelve years of classroom instruction that students did not pose a question on preparing the test form. Watching me fill out the form, live under the camera lens, trumped any verbal instruction I could have given.
Veteran teachers who have observed my document camera in use have described student time on task as phenomenal. Students pay attention to this exciting technology. I can lecture 8th graders for 25 minutes without a single head hitting a desk, as the nearly unlimited content that can be shown is a huge advantage when it comes to making presentations fun for students of all ages and abilities.
Student Ownership in the Lesson and Equipment
I believe the greatest advantage of the document camera is the increased participation of students in lessons. Students beg for the opportunity to operate the camera! In my classroom, students run the camera and present materials while I lecture. I prepare lecture notes and graphics and divide them into sections by using line breaks in Word-produced documents. Before a student presents, I give him or her a quick instructional session on sliding the paper forward on the camera base to display one section of notes at a time. I allow the students to make mistakes as I teach them how to display material and avoid doing anything that might distract the audience, such as displaying their hands. A copy of the material is given to all students before the presentation begins. When students get to college, they will know how to give professional presentations with this machine.
Since students run the camera, they feel a sense of ownership in the equipment and lesson. They respect the technology and take care of it. In one class, students named my camera after a classmate who was fascinated with it. Students can display their work for all to see when a document camera is present. While teaching the Cornell note style, I allowed students to show their peers how they were taking notes so they could learn from each other. This versatility and spontaneity sets the document camera apart from other classroom tools such as PowerPoint and electronic boards. After modeling presentations, the teacher can put students to work creating group presentations using the camera. I believe retention increases when students teach each other.
Useful Across Disciplines
Document cameras can be used across disciplines. Science teachers can greatly improve their presentations. A teacher stated it would be perfect to display his meteorites, items he did not like to pass around the classroom. Small specimens of just about anything can be clearly displayed under a powerful zoom lens. A biology teacher can show various animal, rock and plant specimens. Dissecting animals can be done under the camera with students following along on their specimens. X-rays can also be displayed.
Math teachers and students can work problems under the live camera lens. Placing math problems on paper and displaying them via the document camera works well when the white board is full. I found the classroom "smart board" to be clumsy for working out problems and school districts are putting them to that use. Electronic boards have a place in education but document cameras are more versatile and do not rely on a computer. If there is a networking or software problem, the smart board goes down with the computer. When subbing in geometry, a teacher forgot to give me access to his computer so I could run the lessons he had ready for his smart board. I had to track him down to gain computer access.
The camera works well in the language arts classroom. English teachers display a lot of text, and the camera is best at showing text. I use size twenty font when putting together presentations, but the zoom feature allows most any font size to be easily read, even from the back of an auditorium or by vision-impaired students. Tiny font sizes often used for the captions of photos can be zoomed in on and easily read. With automatic focus and one-touch zoom, zooming in and out is not a distraction and is nearly instantaneous. In contrast, electronic boards I've used convert imported text files to pictures and blur the text.
In social studies, the document camera can replace map sets. Students enjoy looking at wall-mounted maps and these should be provided, but when it comes to presenting maps in the classroom, nothing beats a document camera. I place maps into three-ring binders and organize them to coincide with my lessons. I can zoom in and out on various parts of a map and show detail that would otherwise be difficult for students to clearly see, especially those sitting in the back of the classroom, and I don't have to worry about getting in the way while displaying a map. I had a professor who would not use the camera, and his presentations suffered. He'd stand to one side of his map and get in the way as he pointed to various locations. To show a photo in a book, he would hold the book in his hands and walk around the classroom. I opined the document camera would help him display maps and photos. He stated he was too old to change.
While I am newly licensed as a secondary school teacher, I have taught in the elementary schools and have found ways to use a document camera in teaching younger children. I believe this technology would be a boon to teachers willing to give it a try, as children enjoy visuals and are not afraid of new technologies. Document cameras are simple for youngsters to operate and are also tough enough to stand up to little hands.
In spite of advantages the document camera offers teachers, I found only one in use in fourteen Minnesota public schools. I believe this is due to cost and a lack of familiarity among school personnel. A school district can purchase ten stand-alone transparency projectors for the cost of a document camera. The document camera is best utilized in conjunction with an overhead projection unit, another costly piece of equipment. Still, with many schools installing ceiling-mounted projection units, it makes sense to take full advantage of the situation by incorporating the document camera. The two technologies are complementary and most electronic boards support document cameras.
What to Look For
I have used cameras by Elmo and Canon with complete satisfaction. A 12X or greater zoom with automatic focus is fine for the classroom. The camera should have a large and unobstructed base with a backlight for displaying transparencies and bright florescent side lamps for showing everything else. Most cameras allow for a variety of hookups, but I prefer to use an S-Video cable. Controls should be on the front of the camera and be easy to operate. The camera should have a nice color display with minimal screen flicker. A small table will easily accommodate it.
Camera prices run from under $1,000 to $4,000. I used ebay to purchase my camera, in unused condition, for $167.50 but schools are not going to be buying on ebay. Teachers, being professionals, should consider weighing the camera cost against the tremendous results and timesaving a document camera can procure for them. Though the cameras are not cheap, prices are falling and costs of not upgrading to them are even greater. For districts looking to provide their educators with the best visual technology currently available to advance content clarity and familiarize students with college-level technology, the document camera is an investment worth making.
8th grade student operates the document camera.
Looking back upon my first day as a substitute teacher, I acknowledge transparency projectors helped to educate students since the Greatest Generation, but something better exists and educators owe it to themselves and their students to try advanced technology. As I walk the schools and observe teachers using traditional methods to display content, such as PowerPoint and transparency projector presentations, I see teachers, not students, presenting material and mainly text being displayed to bored students. In contrast, the document camera opens up a world of colorful possibilities! I have had test scores increase dramatically, close to a full grade, possibly the result of students paying attention to better presentations made possible through the use of advanced technology and a bit of ingenuity.
Additional info. can be found at: http://www.emints.org/ethemes/resources/S00002162.shtml
I am recently certified in 5-12 Social Studies and completed my Master of Arts in Teaching degree out of Minnesota State, Mankato. I have been a part-time substitute teacher in the Austin, Minnesota public school district since 2002.
Crystal, Garry. Wisegeek.com 2008. What is an overhead projector. | <urn:uuid:5daec657-f4b0-4b8b-b95f-17e395345721> | CC-MAIN-2015-35 | http://www.techlearning.com/news/0002/the-document-camera-advancing-classroom-visual-technology/65406 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167592.45/warc/CC-MAIN-20150827031247-00044-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.953071 | 2,416 | 3.046875 | 3 |
From MicrobeWiki, the student-edited microbiology resource
A Microbial Biorealm page on the genus Mycoplasma capricolum
Higher order taxa
Cellular organisms; Bacteria; Firmicutes; Mollicutes; Mycoplasmatales; Mycoplasmataceae; Mycoplasma
Description and significance
Mycoplasma capricolum belongs to the genus Mycoplasma, which is a genus of bacteria that does not have cell wall or murein. This spherical organism is distinguished from other bacteria by its small size (a characteristic of the genus Mycoplasma) and requirement of cholesterol for growth. However, its DNA structure suggests that Mycoplasma capricolum is derivative of Gram-positive bacteria . Though hard to isolate, this microorganism still can be obtained from lungs and pleural fluid of affected animals in necropsy and is readily cultured in cholesterol and serum-containing medium .
Mycoplasma capricolum has a circular genome whose size is 1155.5 kb. Its 25% of GC content is relatively low compared to other organisms. This organism has only one chromosome. Though plasmids are unusual among Mycoplasma species, M. capricolum is found to have a plasmid whose size is around 1.1kb-1.8kb.The extrachromosomal DNA may contribute to antimicrobial resistance since it is found more often in herds that have undergone the use of antimicrobial drugs .
Cell structure and metabolism
M. capricolum has no cell wall but only a lipid bilayer membrane, up to 2/3 of unesterified cholesterol is in the outer membrane. It does not have pili or flagella. This species requires external sources of cholesterol for growth (ie. a natural fatty acid auxotroph), nonetheless, the uptaken fatty acid is not used as substrate for energy production but for phospholipd synthesis instead. The cholesterol is first incorporated into the outer part of membrane then translocated to the inner half. Some experiments also suggest that fatty acid incorporation efficiency is influenced by the concentration of glucose and glycerol, temperature and external pH. Its membrane lipid composition is deduced to be 60% of phosphatidylglycerol and 35% of cardiolipin. M. capricolum can grow on various sterol-containing media, though with different growth rates .
The natural transmission pathway of M. capricolum is through inhalation of infectious droplets, though infection can be achieved through injecting cell culture experimentally. When inhaled, it is bound by the host cell membrane and its surface-exposing lipoproteins induce strong antigenic reactions. However, its sophisticated antigenic variation makes it hard for the host immune system to produce proper antibodies to suppress the infection. An extracellular polysaccharide structure may have capsule-like function.
M capricolum is a causative agent of caprine respiratory diseases, mastitis and severe arthritis. The infection often leads to destructive results in Africa and Asia goat farming industry. Fever is also observed at the end of incubation period. Some subspecies of Mycoplasma capricolum, for example,M. capricolum subsp. mycoides and M. capricolum subsp.capripneumoniae are especially virulent. The morbidity and mortality rate of some subspecies is of 60%-70%. Long-term survival is possible but often accompanied with pleuropneumonia or chronic pleurutis. This bacterium is capable of stimulating macrophages to produce oxygen radicals, TNF-α, IL-6 and nitric oxide. This combination leads the production of peroxynitrite, a very strong oxidant. For the subspcies M. capricolum subsp.capripneumoniae infection, hepatized lesions and necrosis are observed in the lungs. Nevertheless, M capricolum does not cause systemic reaction. As mentioned before, when M capricolum is inhaled, it is bound by the host cell membrane. Consequently, the inflammatory may be a result of acinar epithelial cells attachment of the bacteria. Nutrient absorption of M capricolum from the host cell membrane is probably the initiation of infection. The toxic oxidant accumulation and the hydrolytic enzymes produced by this species also contribute tissue damage . Tissue fibrosis is recorded.
Application to Biotechnology
The membrane of M. capricolum contains factors that are capable of activating cellular macrophage TNFα, which is a tumor necrosis factor. The macrophage could be induced by other bacterial lipopolysaccharide, but this method is limited in application due to the toxicity of bacterial lipopolysaccharide. The relatively stable membrane of M. capricolumis found to be an especially potent and non-lipopolysaccharide activator of macrophage, thus it may have therapeutic value in treating cancer .
Various studies of Mycoplasma capricolum are under process. Those projects include antimicrobials effect, which is determined by flow cytometry techniques. One of these evaluating experiments is done in Turkey to evaluate the efficacy of the drug, danofloxacin (Advocin A180) on goats. The result indicates that this treatment may be able to prevent the spread of Mycoplasma capricolum. Mycoplasma capricolum is also used in an experiment on the incorporation of non-natural amino acids into proteins(ie. amber suppression tRNA experiment). Due to the low incorporation efficiency, the application of amber suppression is limited.M. capricolum is proven to be able to contain tRNA with high specificity.
Swanepoel R., Efstratiou S, Blackburn NK: "Mycoplasma capricolum associated with arthritis in sheep". 1977 Veterinary Record Volume 101,p, 446-447
Rurangirwa, F. R., T. C. Mcguire, N. S. Magnuson, A. Kibor and S. Chema. "Composition of a polysaccharide from mycoplasma (F38) recognized by antibodies from goats with contagious pleuropneumonia Res." The Journal of Veterinary Science 1987 Volume42 p.175-178
E.H. Johnson, D. E. Muirhead and G. J. King;"Ultrastructural Changes in Caprine Lungs Infected with Mycoplasma capricolum Subspecies capripneumonia" Journal of Veterinary Medicine Series B 49 (4), 206–208
Patricia Assunção,Nuno T. Antunes,Ruben S. Rosales,Carlos Poveda,Jose B. Poveda and Hazel M. Davey;"Flow Cytometric Determination of the Effects of Antibacterial Agents on Mycoplasma agalactiae, Mycoplasma putrefaciens,Mycoplasma capricolum subsp. capricolum, and Mycoplasma mycoides subsp. mycoides Large Colony Type ";Antimicrobial Agents and Chemotherapy, August 2006, p. 2845-2849, Vol. 50, No. 8
Elmiro R. Nascimento, Al J. DaMassa, Richard Yamamoto, M. Graça F. Nascimento; "Plasmids in Mycoplasma species isolated from goats and sheeps and their preliminary typing"; Revista de Microbiologia, 1999 30: p.32-36
Masaki Q. Fujitaa, Hiroshi Yoshikawab and Naotake Ogasawarab; "Structure of the dnaA and DnaA-box region in the Mycoplasma capricolum chromosome: conservation and variations in the course of evolution";Gene Volume 110, Issue 1, 2 January 1992, p.17-23
Steve Caplan, Ruth Gallily, Yechezkel Barenholz;"Characterization and purification of a mycoplasma membrane-derived macrophage-activating factor "; Cancer Immunology, Immunotherapy,Volume 39 p.27-33, Number 1 / January, 1994
U. Ozdemir, G. R. Loria, K. S. Godinho, R. Samson, T. G. Rowan, C. Churchward, R. D. Ayling and R. A. J. Nicholas; "Effect of danofloxacin (Advocin A180) on goats affected with contagious caprine pleuropneumonia";Tropical Animal Health and ProductionVolume 38, Numbers 7-8 / October, 2006 p.533-540
Hikaru Taira, Yohsuke Matsushita, Kenji Kojima and Takahiro Hohsaka ; "Development of amber suppressor tRNAs appropriate for incorporation of nonnatural amino acids "; Nucleic Acids Symposium SeriesNo. 50 Pp. 233-234, 2006
Jose M. Odriozola, Ellen Waitzkin, Terence L. Smith, and Konrad Bloch;"Sterol Requirement of Mycoplasma capricolum"; Proceedings of the National Academy of Sciences September 1, 1978 vol. 75 no. 9 p.4107-4109
T H Huang, A J DeSiervo, and Q X Yang; "Effect of cholesterol and lanosterol on the structure and dynamics of the cell membrane of Mycoplasma capricolum. Deuterium nuclear magnetic resonance study"; Biophysical Journal1991 March; 59(3): p.691–702
S Clejan, R Bittman, S Rottem; "Uptake, Transbilayer Distributionm and Movement of Cholesterol in Growing Mycoplasma capricolum cells"; Biochemistry October 31, 1978, Volume 17, Number 22 p.4579-4583
M.M. Darzi, N. Sood, P.P. Gupta and H.S. Banga; "The pathogenicity and pathogenesis of Mycoplasma capricolum subsp. capripneumoniae (F38) in the caprine mammary gland" Veterinary Research Communications Volume 22, Number 3 / April, 1998 p.155-165 | <urn:uuid:2eb0ea2e-646d-4ffd-bfcb-2300e32cd86c> | CC-MAIN-2015-35 | http://microbewiki.kenyon.edu/index.php/Mycoplasma_capricolum | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00284-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.835251 | 2,180 | 3.265625 | 3 |
"Looking Back, Looking Ahead"
At age nine, I told my mother, "Mom, wolves are my favorite animal!" They have been my favorite ever since. Naturally, when my friend suggested I do a school science project on wolves, I was very enthusiastic.
I began my experiments in the fall of 1998. My first experiment introduced me to the complex social hierarchy of the wolf pack, including signs of dominance and subordination. The purpose of this experiment was to test if the social ranking of a pack could be determined based on the physical gestures and interactions of the wolves. To run my experiment, I researched life within a wolf pack and the physical signs used to show ranking. Signs of dominance include walking with raised tail and ears, standing over a subordinate animal, scent marking, and howling (Esker, 1998). Subordinate signals include an averted gaze, tucked tail, crouching, lowered ears, and lying on the back or side (Esker, 1998). I also studied the dominance hierarchy of the pack, which ranges from the alphas, or leaders, to the omega, or outcast. In between is the beta, mid-ranks, pups, and juveniles (Esker, 1998). Next, I observed the wolf pack at Peoria Wildlife Prairie Park over a period of two days. The pack consisted of nine non-breeding wolves ranging in age from nine to twelve years. All males had been neutered to prevent inbreeding and the resulting genetic abnormalities.
My 1998 experiment was successful. I determined the pack's ranking based on different dominant or subordinate postures. Thus, I concluded the pack's hierarchy could be determined by observing the physical actions of the animals (Esker, 1998). I decided to do a follow-up experiment this year.
For ideas on a follow-up, I contacted Pat Goodman at Wolf Park, Indiana. She suggested that I observe calming signals, special physical gestures used by canines to downplay aggression, fear, and stress within the pack (Goodman, 1999). Calming signals are vital to wolf interaction. "Predatory habits lead to strong emotion," (Steinhart, 1995, p. 24) and fighting within the pack is a dangerous activity. "It's unwise to let confrontation escalate further when the individual that could be hurt is a relative . . . whose help in the hunt is crucial for the health and survival of the entire pack." (Greeley, 1996, p. 93).
Calming signals can be classified into two categories, signals that have their origin in infantile behavior, known as redirected behavior patterns, and complete concealment of aggression (Abrantes, Dog Language, 1997). Examples of redirected behaviors are pawing, play position, and muzzle-nudging (Abrantes, Dog Language, 1997). Calming signals that rely on the concealment of aggression are turning the head, licking the nose or air, and walking in a curve (Rigaas, 1997).
I returned to Peoria to run my next experiment with the purpose of discovering whether subordinate wolves use calming signals more often than their dominant pack mates. Upon my return to the park, I learned that the pack's population had dropped to seven members. One wolf had died and another was removed for his protection. Once again, I observed the pack for two days, collecting instances of calming and ranking signaling. Following are my field journal, results, and graphs.
OBSERVATIONS: Saturday, October 23, 1999
3:50 pm: Short Ears enters den. Subordinate wolves spend significant time in this area.
4:00 pm: Short Ears emerges. Walks in a curve toward unknown wolf. This is a calming signal. Canines will not usually approach another animal directly unless other clear signals have been employed (Rigaas, 1997). Appears nervous, tail lowered, a subordinate posture (Esker, 1998). Walks near fence, stops twice to look at observer.
4:10 pm: Cream 2 gets up. Tail in high position, dominant posture (Esker, 1998).
4:15 pm: Light Cream approaches Den Wolf. Den Wolf lies on back, pawing Light Cream's muzzle. Pawing originated in movements pups use to stimulate milk production in the mother (Abrantes, Dog Language, 1997). This is a redirected behavior pattern.
4:20 pm: Daughter Wolf and Den Wolf approach each other. Daughter Wolf turns back to Den Wolf.
4:35 pm: Den Wolf joins Daughter Wolf and White Wolf at fence. Den Wolf muzzle-nudges Daughter Wolf. Muzzle-nudging is another redirected behavior. Pups use this behavior to encourage adults to regurgitate food for them (Abrantes, Dog Language, 1997). Den Wolf notices observer, sniffs ground, a form of aggression concealment (Rigaas, 1997).
OBSERVATIONS: Sunday, October 24, 1999
9:15 am: Wolves wake, stretch, sniff. Sniffing is a calming signal. During a stressful encounter, the wolf will hold his head to the ground until the situation passes (Rigaas, 1997).
9:20 am: Short Ears and Daughter Wolf sniff ground.
9:25 am: Den Wolf sniffs sleeping wolves in prairie. White Wolf approaches fence where observer is standing, looks, turns, walks away. White Wolf approaches Daughter Wolf, sniffs.
White Wolf approaches fence. Turns side to observer, turns back. A wolf turns its entire body away to indicate its unwillingness to fight ("Learn about Wolves", 1999). Remains several minutes, licks air, a calming signal (Rigaas, 1997).
Walks away. Yawns. Yawning is one of the most common calming signals. Canines use it in hundreds of situations (Rigaas, 1997). White Wolf approaches Daughter Wolf, both wolves turn heads. Daughter Wolf stretches, White Wolf follows Daughter Wolf. Wolves muzzle-nudge. Daughter Wolf licks air. White Wolf, Daughter Wolf and Short Ears approach fence. Short Ears sniffs air. Short Ears yawns. All go to edge of fence area and muzzle-nudge. Light Cream joins. All walk toward observer in a curve, stop, heads turned, a sign concealing any aggressive appearance (Rigaas, 1997).
Short Ears yawns. All line up, begin to patrol perimeter of area. Cream 2 lies down with back to observer. Wolves appear to be with observer; indicating calming signals, head turning, yawning, and walking in a curve (Rigaas, 1997) toward observer.
9:40 am: Cream 2 approaches Daughter Wolf. Daughter Wolf turns back, sniffs ground.
9:45 am: Daughter Wolf approaches, Den Wolf sniffs. Cream 2 walks wide of Daughter Wolf. White Wolf raises leg, paws air. Daughter Wolf continues sniffing ground. No wolf is near.
9:50 am: Daughter Wolf sniffs White Wolf. White Wolf leaves. Daughter Wolf turns toward observer. Daughter Wolf walks away with open mouth. White Wolf approaches Den Wolf and Short Ears. Wolves turn heads away from each other. White Wolf approaches, Daughter Wolf turns head. White Wolf approaches observer, turns head to side and back. Short Ears walks to fence, licks air.
Daughter Wolf approaches fence, turns head. Den Wolf approaches fence, turns back to observer. Short Ears yawns as he passes observer. White Wolf and Den Wolf approach fence in curve. Den Wolf turns head. White Wolf and Den Wolf sniff air. Cream 2 turns back. White Wolf and Den Wolf turn sides to observer.
10:00 am: Daughter Wolf submits to Cream 1. Cream 2 yawns at Cream 1. Cream 1's, White Wolf's, and Den Wolf's backs and sides turned to observer. Vocalizations occurred during submission. Short Ears turns back to Den Wolf.
10:07 am: Short Ears lies down, a calming signal (Abrantes, Dog Language, 1997). Daughter Wolf passes, yawns.
10:10 am: White Wolf approaches observer, turns head, sniffs ground. White Wolf muzzle-nudges Den Wolf. Cream 2, Daughter Wolf, Den Wolf and White Wolf growl. Den Wolf licks Cream 2's muzzle. Short Ears crouches as Cream 2, White Wolf, and Daughter Wolf pass, a sign of submission (Esker, 1998).
10:20 am: Daughter Wolf and Light Cream approach observer, and turn their backs. Cream 2 lies down, turns head. All wolves lie down or walk with heads turned from observer.
10:25 am: Cream 2 sniffs air. All wolves approach gate, turn from observer. Daughter Wolf turns back to observer.
11:05 am: Light Cream narrows eyes, blinks. Softening the gaze is a sign used to calm another animal (Rigaas, 1997). Cream 1 stretches, yawns.
11:10 am: Daughter Wolf licks White Wolf.
11:15 am: Daughter Wolf follows Den Wolf into den. Three Creams follow, growl. Den Wolf emerges, Cream 2 forces him to full submission. Den Wolf paws Cream 2, Cream 1 paws ground.
11:25 am: Two Creams, Daughter Wolf, White Wolf approach gate. Den Wolf muzzle-nudges Daughter Wolf and Cream 1.
11:29 am: Cream 1 plays with stick.
11:33 am: White Wolf, Den Wolf Short Ears, and Light Cream approach observer. Light Cream yawns. Light Cream licks air. Daughter Wolf muzzle-nudges Light Cream.
11:39 am: Cream 2 bumps Daughter Wolf. Bumping is a redirected behavior used by pups when they nurse (Knudsen, 1999).
11:40 am: All wolves patrol perimeter of enclosure. Den Wolf goes into den. Wolves follow, look in den. Den Wolf comes out. Den Wolf muzzle-nudges Daughter Wolf, Cream 1, White Wolf. Den Wolf lies on back and paws. Den Wolf blinks often.
11:45 am: Den Wolf stretches. All wolves muzzle-nudge, yawn. Wolves chase Den Wolf. Den Wolf in full submission. Den Wolf walks in wide curve.
11:55 am: Wolves pace, yawn. It is normal feeding time on weekdays. Wolves are excited.
12:14 pm: Daughter Wolf and Short Ears meet. Turn heads away from each other.
12:30 pm: Short Ears rests head on Daughter Wolf. Physical contact promotes calming (Abrantes, Dog Language, 1997).
12:58 pm: Light Cream, tail high, moves into prairie. Light Cream leaves scent-marks. Dominant behaviors (Esker, 1998).
1:09 pm: Light Cream wanders through high grass, scent-marking. Wolves begin moving into prairie section of enclosure. This is the time bones are thrown. Pack becomes excited.
1:12 pm: Den Wolf swings around Cream 2 in a curve. Cream 1 approaches Short Ears. Short Ears turns head, sniffs ground.
1:14 pm: Cream 2 yawns. Den Wolf, Daughter Wolf muzzle-nudge Short Ears. Daughter Wolf licks Cream 1. Short Ears muzzle-nudges Light Cream.
1:28 pm: Light Cream, Cream 2, Den Wolf lick muzzles. Daughter Wolf yawns, stretches. All wolves lick Light Cream's muzzle. Many wolves sniff ground.
1:30 pm: Wolves mill around. Cream 2 walks between Den Wolf, Short Ears, separating them, stemming any possible aggression (Rigaas, 1997). White Wolf lies down.
1:35 pm: Wolves approach outlook. Many sniff ground. Light Cream gets first bone. Light Cream picks and chooses bones, attacks another wolf, dominant behavior (Esker, 1998). Light Cream chases Den Wolf away from bone. Den Wolf licks air.
1:40 pm: Light Cream approaches Cream 2 with bone in mouth. Cream 2 moves into play position. Play is used during confrontations to appease aggressive animals (Zimen, 1997).
1:45 pm: Light Cream attacks Den Wolf, takes bone. Light Cream and Den Wolf sniff as they come closer. Den Wolf turns from Light Cream. Den Wolf walks away from Daughter Wolf. White Wolf stands with mouth drawn back.
From my observations, I concluded that frequent use of calming signals is not dependent on the social rank of the wolf. Rather, these signals are used far more often than originally expected. The data suggests that certain wolves, such as Daughter Wolf, take the role of peacemaker, making it their duty to downplay aggression and comfort their peers. The data also indicates that relations within a wolf pack are indeed extremely complex.
As we enter the twenty-first century, I believe the use of calming signals among wolves will eventually become less pronounced, based on two scenarios. The first suggests that as the wild wolf population decreases, more wolf refuges will be created. The aggressive tendencies of these wolves will decrease due to contact with humans and the fact that hunting will no longer be essential to their survival. With aggressive behavior curbed, the need for calming signals will gradually lessen. We see this phenomenon today in domesticated dogs.
The second scenario is rooted in man's perception of the wolf. If wolf refuges are not founded, and wolf reintroduction projects such as Yellowstone fail, wolves will vanish. As the wolf becomes extinct, so will the use of canine calming signals.
The future of the wolf is uncertain. There are many factors to consider, such as the extinction of these animals or domestication through zoos and refuges. The twenty-first century will be a tumultuous one for the wolf, but, it is hoped, one that will bring about many positive changes, disproving my predictions.
Abrantes, Roger. Dog Language. Naperville, IL: Wakan Tanka Publishers, 1997.
Abrantes, Roger. Evolution of Canine Social Behavior. Naperville, IL: Wakan Tanka Publishers, 1997.
The Boomer Wolf Web site. New England Wolf Education Foundation. Retrieved October 2, 1999, from the World Wide Web: http://www.boomerwolf.com/video.htm
Dutcher, J. and R. Ballantine. The Sawtooth Wolves. Bearsville, NY: Rufus Publications, 1996.
Esker, C. Behavioral Clues to Social Ranking Within the Wolf Pack.Unpublished manuscript, 1998.
Fritts, S.L. "Wolf." Grolier Multimedia Encyclopedia Version 12.0; available as CD-ROM. Canada: Grolier Interactive, 1999.
Goodman, P. Re: Wolf Experiments. E-mail to Claire Esker, September 29, 1999.
Greeley, Maureen. Wolf. New York: Barnes & Noble Books, 1996.
Griffin, D.R. "Progress Toward a Cognitive Ethology." In Carolyn A. Ristau, ed., Cognitive Ethology: The Minds of Other Animals. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991: (pp. 3 17).
Immelmann, K. and Colin Beer. A Dictionary of Ethology. Cambridge, MA: Harvard University Press, 1989.
International Wolf Center. Learn about Wolves page. Retrieved October 2, 1999, from the World Wide Web: http://www.wolf.org/content.htm
Knudsen, J. Wolf Country: Wolf Pack. Retrieved October 16, 1999, from the World Wide Web: http://www.wolfcountry.net/information/WolfPack.html
Larsson, J. (1998). The Wolf Society. Wolfeye. Retrieved October 16, 1999, from the World Wide Web: http://www.route001.se/wolfeye/wolf/
Marler, P., Karakashian S., and M. Gyger. "Do Animals Have the Option of Withholding Signals When Communication is Inappropriate? The audience effect." In Carolyn A. Ristau,ed., Cognitive Ethology: The minds of Other Animals. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991: (pp. 187 208).
McFarland, D., Ed. Oxford Companion to Animal Behavior. Oxford: Oxford University Press, 1982.
Mission: Wolf-Sanctuary for Wolves. Retrieved October 2, 1999, from the World Wide Web: http://www.indra.com/fallline/mw/
Pettijohn, T. "Ethology." In Grolier Multimedia Encyclopedia Version 12.0; available as CD-ROM. Canada: Grolier Interactive, 1999.
Rehms, N. (1999, October 23). Peoria, IL: Peoria Wildlife Prairie Park. (Interview).
Rigaas, T. On Talking Terms with Dogs: Calming Signals. Carlsborg, WA: Legacy By Mail, 1997.
Ryden, Hope. God's Dog. New York: Coward, McCann & Geoghegan, 1975.
Ryon, J. Re: Science Project. E-mail to Claire Esker, October 3, 1999.
Scott, J.P. and J.L. Fuller. Genetics and Social Behavior of the Dog.Chicago: University of Chicago Press, 1965.
Smith, W. J. "Animal Communication and the Study of Cognition." In Carolyn A. Ristau, ed., Cognitive Ethology: The Minds of Other Animals.Hillsdale, NJ: Lawrence Erlbaum Associates, 1991: (pp. 209 230).
Steinhart, P. Company of Wolves. New York: Alfred Knopf, 1995.
Zimen, E. Wolf Species in Danger. New York: Delacorte Press, 1981.
Less than 1 period
Tell students that in the essay they are about to read a student observes a pack of wolves in order to better understand their behavior.
Have students read the essay focusing on how the data were recorded and presented.
When students have finished have them identify how the student recorded and presented the data. Ask:
Allow students time to discuss other aspects of the essay they found interesting. | <urn:uuid:f0ee3c3f-3533-4850-9761-23684a1ebb9b> | CC-MAIN-2015-35 | http://www.amnh.org/learn-teach/young-naturalist-awards/winning-essays2/selected-winning-essays-1998-20032/the-big-chill-calming-signals-among-wolves | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00343-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.858008 | 3,794 | 3 | 3 |
When you flush the toilet, do you know where your shit goes? Sure, in most cities, it flows into the main sewer system until it reaches a waste-water treatment plant somewhere on the outskirts of town. But then what happens to it? Do you have any idea? If your first response is, “Ask somebody who cares,” then you need to read David Waltner-Toews’s The Origin of Feces. Now.
Despite its goofball title and jokey tone, The Origin of Feces is a deeply serious work of environmental science that strives to do for how we think about shit what Eric Schlosser and Michael Pollan have done for how we think about what we eat. In just over 200 breezy, gag-filled pages, Waltner-Toews argues that by crowding people into cities and animals onto factory farms we have turned shit from a vital part of a healthy ecosystem into a toxic waste that must be managed. “We are taking a brilliantly complex diversity of animal, plant, and bacterial species,” he writes, “and transforming them into a disordered mess of bacteria and nutrients. We are transforming a wonderful complex planet into piles of shit.”
Unlike journalists such as Schlosser, Pollan, and Malcolm Gladwell, who have led the charge in recent years to popularize abstruse scientific findings for lay readers, Waltner-Toews is himself a veterinarian and epidemiologist who teaches population medicine at the University of Guelph, near Toronto. Unfortunately for American readers, Waltner-Toews is also a Canadian whose new book is published by an independent Canadian publisher, ECW Press, which means The Origin of Feces will have nowhere near the public profile of a new Gladwell or Pollan tome.
This is a crying shame. I cannot think of a more necessary work of popular science since Pollan’s The Omnivore’s Dilemma and Schlosser’s Fast Food Nation, which together pulled back the curtain from the American agricultural-industrial food complex and helped kick the slow-food movement into gear. In some ways, though, those books had an easier time of it. While industrial feedlots and food processing plants may be largely invisible to most consumers, we eat the results of this industrial approach to food, which in a lot of cases tastes pretty awful. You don’t have to be an organic farming purist to be willing to pay a little extra to buy whole foods that taste better and, by extension, do slightly less damage to the planet.
No matter how pure your eating habits, however, your shit still stinks, and unless you are living in a yurt in the wilderness, it still gets flushed into the same sewage-treatment system that everybody else uses. Like so many of the systems that undergird a modern industrial society, waste management is opaque to everyone outside a tiny coterie of specialists — until, of course, there is an outbreak of food-borne illness or a fish-killing algae bloom caused by agricultural runoff, in which case we run around looking for villains, who are almost by definition not ourselves.
Waltner-Toews aims to tear down the mental wall we have built between ourselves and our crap and show us that what we excrete is not simply toxic sludge, but an essential, nutrient-rich link in the life cycle of our planet. To do this, he says, we must first find a good way to talk about shit. Early on, Waltner-Toews takes his reader on a whirlwind tour through the etymology of dozens of terms we use to describe what comes out of our asses, from the profane (“shit” and “crap”) to the euphemistic (“poop” and “BM”) to the technical (“biosolids” and “fecula”). This chapter is hilarious and often enlightening. Who knew that “excrement” comes from the Latin word excernere, “to sift,” or that the Middle English word “crap” found a place in the modern lexicon in part by its association with Thomas Crapper, who popularized the use of the flush toilet?
But here as elsewhere in the book, Waltner-Toews’s purpose is deadly serious. The way we talk about shit, he points out, lays bare the way we think about this basic byproduct of human life — which is that, most of the time, we’d rather not think about it at all. Shit embarrasses us. It’s dirty and smelly, and in colloquial language it is the go-to term for everything from outrageous lies (“bullshit”) to illegal drugs (“really good shit”) and worthlessness (“a piece of shit”). But when we are forced to think about its real-world consequences, we quickly retreat to vague technical terms like “biosolids” that have the advantage of not having any real meaning to most people.
This matters, Waltner-Toews argues:
We can use precise technical terms when we want the engineers to devise a solution to a specific organic agricultural or urban waste problem…In so doing, however, we alienate the public, who are suspicious of words like biosolids. This public will need to pay for the filtration and treatment plants. They suspect that the solution to chicken shit in the water might not be a better filtration plant, but they don’t have the language to imagine and discuss what the alternatives might be.
Waltner-Toews spends the rest of the book giving his reader the language, and the knowledge, to begin imagining alternatives to our present industrially engineered solutions to our quickly multiplying waste problems. His central point is that in a healthy, bio-diverse ecosystem, shit is neither waste nor a problem. For millions of years, animals have been eating plants and other animals and shitting out whatever their bodies couldn’t use, in the process distributing seeds that have allowed stationary plants to spread and providing nutrients to fertilize the soil and feed billions of insects and smaller organisms.
But by concentrating people and the animals we eat into increasingly industrialized spaces, we have severed the vital link between shit and the natural biological processes that have been cleaning it up and re-using it for as long as there has been life on our planet. One result is pollution, which, as Waltner-Toews suggests, is just a word we use to describe what happens when a substance — carbon dioxide, say, or pig shit — gets concentrated in one place faster than the natural systems can recycle it. Another outcome is a rise in food-borne illnesses like salmonella and E. coli, most of which are caused by animal or human shit finding its way into our food. The separation of people and animals from the surrounding biosphere also contributes to broader systemic imbalances that lead to problems like the extinction of species that depend on healthy ecosystems, famines resulting from nutrient-starved soils, and widespread use of petroleum-based fertilizers designed in part to make up for the lack of natural shit-based fertilizer.
The problem of shit, Waltner-Toews says, is a classic “wicked problem,” meaning a problem that can’t be solved by straightforward science and engineering without creating a whole set of new problems. We can, for instance, pump pig shit into vast manure lagoons and pump the animals themselves full of antibiotics that help them avoid diseases derived from eating shit, but ultimately the toxic brew in those manure lagoons has to go somewhere and antibiotics have a nasty habit of creating antibiotic-resistant strains of bacteria.
The Origin of Feces is better at describing the wickedness of this problem than at articulating solutions, which get high-falutin’ and improbable in a hurry. Drawing on the work of scientists who see the complex interactions in natural ecosystems as “panarchy,” and quoting the philosopher Arthur Koestler, who saw each living thing as a whole unto itself and also a part of something larger, which together he called a “holon,” Waltner-Toews uses the term “holonocracy,” which he says “embodies a way of interpreting nested social and ecological changes and implies a new way to think about management and governance based on those observations.”
Yeah, I know. I didn't really follow that, either. In later pages, Waltner-Toews thankfully returns to plain English and argues that the problem of shit is merely a particularly unpleasant manifestation of the more generally unsustainable nature of our industrial age, which has created "too much shit in the world, in all the wrong places." He details some nifty small-scale solutions involving the repurposing of energy-rich shit into fuel or animal feed. But at the macro-level, he seems to be saying that a comprehensive, systemic problem of this kind demands an equally comprehensive, systemic solution, which, if I am reading him right, means seriously rethinking industrialized agriculture and urbanized population. Which — call me crazy — I don't see happening anytime soon.
But of course the very difficulty Waltner-Toews has explaining his solutions for a non-specialist audience underscores the fundamental wickedness of the problem. The Origin of Feces is a genial book, and often a kick to read, but I put it down thinking two things: 1. I will never look at shit the same way again; and 2. We are in deep shit. That Waltner-Toews, clearly one of the smartest guys in the room when it comes to this issue, cannot explain a solution in terms I can understand makes me think we are in even deeper shit than he claims.
Advances in computers and fabrication technology have allowed architects to create, with relative ease, fantastic designs that in years past would likely have required the labor of countless master craftsmen. Architecture firms like Gramazio Kohler Architects are known for their innovative approach to digital fabrication, adapting technology from a variety of fields. To create this stunning new brick façade for Keller AG Ziegeleien, Gramazio Kohler used an innovative robotic manufacturing process called "ROBmade," which uses a robot to position and glue the bricks together.
When one hears the term masonry architecture, digital fabrication and automated construction processes are probably not the first ideas to come to mind. By its very nature, the architecture produced with stone masonry is often heavy, massive, and incorporates less natural light than alternative methods. However, with their research proposal for "Smart Masonry," ZAarchitects are proposing to change masonry buildings as we know them and open opportunities for digital fabrication techniques in stone and other previously antiquated materials. Read on after the break to get a glimpse of what these new masonry buildings could look like and learn more about the process behind their construction.
Between 1945 and 1981, around 170 million prefabricated (prefab) residential units were constructed worldwide. Now, as part of a study undertaken by Pedro Alonso and Hugo Palmarola of the Pontificia Universidad Católica de Chile between 2012 and 2014, an exhibition at the Tel Aviv Museum of Art features 28 large concrete panel systems from between 1931 and 1981. In so doing, it explores a transnational circulation of these objects of construction, "weaving them into a historical collage of ambitions and short-lived enthusiasm for utopian dreams."
This show, curated by Meira Yagid-Haimovici, is an attempt to reveal "how architecture and urbanism was charged with historical, social, and political narratives, and how the modernist vision promoted the fusion of aesthetics and politics." The models, which are being exhibited as part of the Production Routes exhibition, seek to highlight the richness embodied in 'generic' architecture through the lens of prefab construction methods.
An upcoming conference at the University of Manchester will tackle the idea of Model Making In The Digital Age. Based on the premise that the world of architecture is dominated by digital tools today more than ever, from design and manufacturing to the ways in which we visualise complex spaces and structures physically and virtually, this symposium seeks to shed new light on the practice of model making and its uses.
The 5AXISMAKER is a desktop 5-axis multi-fabrication CNC machine that hopes to expand the possibilities of digital fabrication by making it cheap and more versatile. Should the project receive backing on Kickstarter before the 27th October 2014, the possibility of 5-axis milling will become an affordable reality for manufacturing complex design prototypes. The product in development "provides a large cutting volume for its size," therefore producing "generously sized objects." Its developers, graduates of London's Architectural Association, hope to "shake the manufacturing world with new ways of fabricating using industrial robots right at your desk."
Why do we make models? From sketch maquettes and detail tests to diagrammatic and presentation models, the discipline of physically crafting ideas to scale is fundamental to the architect's design process. For architect and educator Nick Dunn, architectural models ultimately "enable the designer to investigate, revise and further refine ideas in increasing detail until such a point that the project's design is sufficiently consolidated to be constructed." In Dunn's second edition of his practical guide and homage to the architectural model, the significance and versatility of this medium is expertly visualised and analysed in a collection of images, explanations, and case studies.
Location: Plaça de les Glòries Catalanes, 08013 Barcelona, Barcelona, Spain
Project Architects: Daniel Ibáñez, Rodrigo Rubio
Collaborator: FAB LAB NETWORK
Client: IAAC, ENDESA
Project Year: 2014
Photographs: Adrià Goula
Named the 2014 Designer of The Year by Contract Magazine, Krista Ninivaggi of K & Co is an expert in material innovation. In the following interview, Susan S. Szenasy of Metropolis Magazine asks the young designer about her design process, the materials she uses and more.
The Brooklyn-based firm The Principals are known for their interactive design, industrial design and installation work. The video above highlights their latest "bionic" installation, which actually responds and reacts to human movement thanks to myoelectric sensors that pick up voltage increases on the skin when a muscle contracts. To learn more, head over to their website - and make sure to check out all of The Principals' other installations featured on ArchDaily.
In this article, originally appearing on the Australian Design Review as "Tolerance and Customisation: a Question of Value", Michael Parsons argues that the complex forms made possible by digital fabrication may soon be victims of their own popularity, losing their intrinsic value as they become more common and the skill required to make them decreases.
The idea of tolerance in architecture has become a popular point of discussion due to the recent mainstreaming of digital fabrication. The improvements in digital fabrication methods are allowing for two major advancements: firstly, the idea of reducing the tolerance required in construction to a minimum (and ultimately zero), and secondly, mass customisation as a physical reality. Digital fabrication has made the broad-brushstroke approach to fabrication tolerance obsolete and now allows for unique elements and tolerance specific to each element. The accuracy that digital fabrication affords the designer allows for the creation of more complex forms with greater ease and control. So far, this has had great and far-reaching implications for design.
Read on to find out how this ease of form-making could diminish the success of complex forms.
Autodesk has launched the Autodesk Foundation, an organization which will "invest in and support the most impactful nonprofit organizations using the power of design to help solve epic challenges." In an effort to aid those tackling global issues such as "climate change, access to water, and healthcare," the foundation will provide select design-oriented grantees with software, training and financial support.
A total of 68 entries from across the globe, representing 14 countries on 5 continents, were narrowed down to 4 finalists and 4 honorable mentions in July by the First Round jury, consisting of Phil Anzalone, Maria Mingallon, Gregg Pasquarelli, Randy Stratman, and Skylar Tibbits. The Second Round jury of James Carpenter, Neil Denari, Mic Patterson, and William Zahner conferred and selected 3xLP from the finalists. All four finalists were exhibited at the ACADIA Adaptive Architecture Conference at the University of Waterloo in October 2013.
By now, we have all heard the mantra. In twenty years' time, the world's cities will have grown from three to five billion people, and forty percent of these urban dwellers will be living at or below the poverty line, facing the constant threat of homelessness - scary statistics and even scarier implications.
ECOnnect, a Holland-based design firm, envisions a solution for these future housing shortages, one that could build a one-million-inhabitant city per week for the next twenty years for $10,000 per family. Peter Stoutjesdijk, architect at ECOnnect, created the concept after widespread devastation in Haiti caused by a massive earthquake left hundreds of thousands of people homeless and dependent on tents for temporary relief.
Since the dawn of the modern era, there has been a strong relationship between architecture and the car, especially in the works of Le Corbusier.
Le Corbusier was fascinated by his car (the Voisin C7 Lumineuse); the aesthetics of this functional, mass produced machine deeply influenced his designs. Its focus on function translated into his concept that houses should be "machines for living" and inspired a series of experiments of mass produced, pre-fab houses (such as the Maison Citrohan). Most of these concepts were later materialized in the iconic Villa Savoye, whose floorplan was even designed to accommodate the car's turning radius.
Robots fascinate us. Their ability to move and act autonomously is visually and intellectually seductive. We write about them, put them in movies, and watch them elevate menial tasks like turning a doorknob into an act of technological genius. For years, they have been employed by industrial manufacturers, but until recently, never quite considered seriously by architects. Sure, some architects might have let their imaginations wander, like Archigram did for their "Walking City", but not many thought to actually make architecture with robots. Now, in our age of digitalization, virtualization, and automation, the relationship between architects and robots seems to be blooming...check it out.
Keep reading to see five new robots making architecture.
Digital fabrication has been a popular discussion among architecture and design professionals. Students are digitally fabricating their models and building their own personalized 3D printers. What was impossible to build by hand is quickly assembled through digital fabrication. As the technology rapidly evolves, larger objects are being fabricated at more affordable prices. Today we may be digitally fabricating furniture and tomorrow we might be 3D printing our house. Architects and designers are jumping on board and exploring the capabilities of this game changing technology. Diatom Studio is currently working on releasing SketchChair. This program offers easy to use, open-source software that allows you to design your own personalized digital furniture. With a few clicks of a mouse, you can view your masterpiece and digitally occupy it in order to test its comfort level and structural capabilities. Options range from personalized ready-made designs to more advanced features that allow you to design your chair from scratch. Satisfied with your design? Perfect. The SketchChair allows you to export your masterpiece to any digital-fabrication service instantly. In a matter of days, you will receive your customized CNC-milled plywood parts for quick hand construction. Digital fabrication is changing the world of design and becoming available to the masses.
A team of graduate students recently created a temporary installation on the Kent State University, Kent campus in Ohio. The project grew out of an internal challenge in the matR design competition. Designed by graduate students Brian Thoma, Carl Veith, Victoria Capranica, Matt Veith, and Griffin Morris, the tunnel-like structure called "The Passage" was a study to support the conceptualization and actualization of innovative and experimental material research. The students created the initial form in Rhinoceros with a couple of Grasshopper definitions as a waffle structure of 26 vertical ribs and 24 horizontal struts. More images and information after the break.
Recent graduates of the Masters program at Ball State University's College of Architecture and Planning, Adam Buente and Kyle Perry have spent the last couple years developing their unique interests and ideas into a business of their own. Working with fellow students Elizabeth Boone and Eric Brockmeyer, they began a collaborative graduate thesis project focused on exploring the possibilities of design and fabrication via digital equipment as a business platform. After their first year out of school, they have begun to independently manage their Indiana-based company. PROJECTiONE recently produced the ACADIA competition winner HYPERLAXITY and boasts other projects such as EXOtique, bitMAPS, and Radiance. Words and images from the PROJECTiONE team after the break.
By their nature, South Florida’s tropical butterflies have always been ephemeral creatures, coming and going with the rhythms of the life cycle and season. Now they’re just gone.
In what may be an unprecedented die-off, at least five varieties of rare butterflies have vanished from the pine forests and seaside jungles of the Florida Keys and southern Miami-Dade County, the only places some were known to exist.
Marc Minno, a Gainesville entomologist commissioned by the U.S. Fish and Wildlife Service to perform a major survey of South Florida’s butterfly population, filed reports late last year recommending that the Zestos skipper and rockland Meske’s skipper — both unseen for a decade or more — be declared extinct. He believes the same fate has befallen a third, a Keys subspecies called the Zarucco duskywing, and that two more, the nickerbean blue and Bahamian swallowtail, also have disappeared from their only North American niche.
Considering that there have been only four previous presumed extinctions of North American butterflies — the last in California more than 50 years ago — Minno finds the government response to such an alarming wave frustrating.
“There are three butterflies here that have just winked out and no one did a thing about it,’’ Minno said. “I don’t know what has happened with our agencies that are supposed to protect wildlife. They’re just kind of sitting on their hands and watching them go extinct.’’
And the list of the lost could easily grow. South Florida has one of the world’s highest concentrations of rare butterflies. At least 18 others are considered imperiled, reduced to small, isolated populations vulnerable to a host of threats from exotic ants that eat their larvae to a single tropical storm that could blow a colony into oblivion.
Federal wildlife managers insist they are doing all they can do in a state with one of the longest lists of endangered and threatened species in the nation.
“It basically comes down to resources, what we have in terms of money, staffing and those kinds of things that we aren’t always in control of,” said Ken Warren, spokesman for the Fish and Wildlife Service’s South Florida field office in Vero Beach. “We are trying to be as responsive as we can.”
State and federal agencies haven’t ignored the decline. They formed a joint group in 2007 to develop recovery strategies and have supported laboratory breeding programs for signature species like the Schaus’ swallowtail and Miami blue. The service even put a biologist full-time on butterfly problems, Warren said, a level of attention otherwise reserved for high-profile species like the manatee and Florida panther.
But so far, it hasn’t been enough to reverse troubling trends. Experts acknowledge that reviving the rich array of butterflies that once ranged along much of the coast poses significant, possibly insurmountable challenges.
“It’s hard to see for some of the species what really can be done,’’ said Jaret Daniels, assistant curator of Lepidoptera for the Florida Museum of Natural History in Gainesville, who has directed captive breeding efforts for the two others on the brink, Miami blue and Schaus’ swallowtail.
Those booster injections of new butterflies may be the last and best hope, having worked to reinvigorate populations in other states. But so far, they have fizzled in South Florida. Biologists have produced copious quantities of butterflies in protected pens but the lab-bred bugs have never managed to make it in the wild, for reasons still under study.
“With a lot of these butterflies, we don’t necessarily know the ins and outs of them,’’ said Mark Salvato, the service’s lead butterfly biologist. “There are a whole bunch of factors that could be affecting them. It’s hard to find a smoking gun.’’
South Florida’s unceasing growth has clearly hastened the decline. Many of the area’s unique subspecies, originally blown in from Cuba, the Bahamas or other Caribbean islands, developed their distinctive colors or markings in the subtropical comfort of rocky pinewoods and hardwood hammocks, ecosystems now paved over or cut into small pieces.
But there is a long list of additional suspects: Pesticide spraying for mosquitoes can kill delicate larvae. Hurricanes and tropical storms can ravage habitat. Exotic predators have more recently emerged as a major concern, with iguanas eating essential "host plants" that shelter eggs and caterpillars. In some cases, invasive predatory ants may have supplanted native varieties that once protected butterfly larvae in symbiotic relationships.
A lack of breeding partners and genetic diversity also could cripple populations. Climate change and land management may also have impacts.
What is particularly puzzling is why many butterflies have declined in otherwise healthy habitats in places like Everglades and Biscayne National parks, protected areas where mosquito spraying is prohibited. Last summer, for instance, teams scoured Elliott Key in Biscayne Bay, once prime breeding ground for the Schaus, hunting for enough to jumpstart a new breeding effort. They found only a handful — and not one female. They’ll try again in a few months.
And the Miami blue, once common along the coast from Daytona Beach to the Dry Tortugas, has now been reduced to the Marquesas islands west of Key West.
“It makes no sense from an ecological standpoint,’’ Minno said. “They should be up in the biggest islands with the most habitat.’’
While other butterflies are also in decline globally, South Florida’s problems are acute, with roughly a third of the 100 or so varieties known to live south of Lake Okeechobee at risk, said Elane Nuehring, past president of the Miami blue chapter of the North American Butterfly Association.
“We are sort of the capital of declining species,’’ she said.
Though butterfly watching doesn’t rival bird-watching, the delicate creatures are fascinating for many people, said Daniels, calling them “as close as you can get to the panda’’ in the insect world.
Scientists also say their disappearance is more damaging than simply erasing fluttering flecks of color from the landscape. They are part of complex food webs and rank next to bees among the most important pollinators. They also are indicators of the health of the forests and hammocks they call home, Salvato said.
“When you start to lose the butterflies, something broader is going on,” he said.
Not everyone, however, is quite ready to pronounce the missing butterflies dead — at least not yet.
The Fish and Wildlife Service, which received Minno’s extinction recommendations late last year, is pondering its next steps.
Butterflies in the past have vanished for years only to make surprise reappearances. The Miami blue, for instance, was considered unofficially extinct after Hurricane Andrew in 1992 until the discovery of a colony of 50 in Bahia Honda State Park seven years later. Those disappeared in 2001 but more were later found in the Marquesas.
The Meske’s also once before went missing for a decade, Salvato said. “It’s a very indistinct butterfly. It’s not hard to overlook.’’
Daniels agreed it’s too soon to make a pronouncement. Many of the butterflies have brief life spans and live in areas difficult to fully survey. “That’s the inherent challenge, having enough data to verify that something is gone,’’ he said.
The service’s Warren said the butterflies Minno believes are gone also fall in a bureaucratic “gray area.” None of them were yet in the official pipeline for listing. Only two, the Schaus and Miami blue, have endangered status. Two others, the Florida leafwing and Bartram’s scrub-hairstreak, have been elevated to “candidates.” The agency won’t add something just to turn around and stamp it extinct.
“There is no requirement for us to do anything as far as a formal announcement that it’s gone,’’ Warren said. “At this point, I would say the smart thing for us is to take the recommendation under consideration and give it a little time to see what happens.’’
Minno argues something is wrong when butterflies vanish before the agency charged with protecting them even begins its process of declaring them in trouble. Environmental groups have expressed similar frustrations. In 2011, the Arizona-based Center for Biological Diversity sued the Fish and Wildlife Service over a backlog of 757 species awaiting listing.
Federal wildlife managers blame the sluggish action on shortages of money and resources, estimating the cost of simply listing a species as endangered or threatened at $150,000 to $300,000. In many states, there also has been strong political resistance to additional listings from landowners and developers.
Minno is persuaded the Zestos and Meske’s skippers are gone forever. His survey was supposed to take two years, he said, but he spent six on it, logging thousands of hours in the field. Other experts also did the same. No one has spotted any of them, in any stage of life, from larvae to butterfly.
“I thought I was going to find some at some point so I just took a lot more time,’’ Minno said. “They’re just not there.’’ | <urn:uuid:0f5a3ea0-c749-412f-b5bc-9d63b35945ad> | CC-MAIN-2015-35 | http://www.miamiherald.com/incoming/article1950730.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065828.38/warc/CC-MAIN-20150827025425-00282-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.952885 | 2,053 | 2.59375 | 3 |
Benjamin Gough (1805-77), a Wesleyan Methodist, achieved some moderate renown as a poet and hymnodist. His Our National Sins: A Poem of Warning and Exhortation appeared posthumously in 1878, and was dedicated to the evangelical reformer Anthony Ashley-Cooper, the Seventh Earl of Shaftesbury. Given Gough's call for social action at all levels of English culture, his choice of dedicatee is not surprising. Gough's long poem is a jeremiad in rather unambitious heroic couplets that suffer from some clumsy handling. One suspects that "[w]hile hecatombs are hurried to the tomb" (9) does not achieve quite the effect that Gough desired. The poem's interest lies not in its formal quality, or lack thereof, but in its attempt to meld some exceptionally traditional notions about Roman Catholicism (bad), luxury (also bad), and English modernity (necessarily Protestant) with more recent challenges to the national health, including Biblical criticism, vivisection, and Darwinian evolution.
The first five sections take us from "England in the Olden Time" to "England under Queen Victoria," and celebrate England's post-Reformation development into a great imperial power. Gough, like a number of his contemporaries, insists that England is naturally Protestant: the English were "[l]ong tied and bound beneath the priestly bane" (1) and "[h]eld back for centuries, to foul rites confined" (2), suggesting that an entirely foreign Roman Catholic power artificially constrained England's native spiritual impulses. Protestantism, by contrast, proves entirely liberatory. Under Protestantism's influence, the nation "[r]e-lit" its loathing for Roman Catholicism (2)--implying, again, that the Reformation merely releases an innate religious tendency--and "sprang with swift rebound/To a new life, with peace and freedom crowned" (2). As is typical in such narratives, Protestant conversion determines; it is not determined by. That is to say, although national and international politics may imprison or conceal the true, invisible church, they do not produce it in the way that they produce Roman Catholicism. Protestantism's essence, contained in the Bible alone, remains universal and unchangeable, whatever its merely local historical fortunes. By the same token, England's culture, economic power and military might emerge from a uniquely Protestant mindset, instead of shaping it. As the last two poems in this section make clear, a Catholic country like Spain rapidly devolves into the "fifth-rate" (3) because it remains Catholic, and thus resists the Protestant winds of change (which include technological advances like the telegraph). In England, however, the Protestant era of Queen Victoria issues in a new age of international strength and domestic tranquility, apparently turning England into the globe's sole superpower. This is not so much a history of Protestantism as a claim that without Protestantism, England would have no history worth the writing.
As I noted earlier, Gough's historical narrative offers nothing new. Nor does his complaint about the nation's paradoxical collapse under the weight of its own success. The "England's Wealth and Luxury" section signals the poem's transition from celebration to prophecy. Invoking the decline and fall of the Roman Empire, Gough warns his readers that those countries "[...] raised to wealth and luxury, soon were weak/And fell, self-slain, who conquered Goth and Greek" (6-7). In this inevitable cycle of triumph and collapse, the nation strong enough to indulge itself in pleasures, untroubled by war, soon ennervates itself; readers familiar with classical republicanism should recognize the point. Thus, the Victorian age, the site of Protestantism's greatest triumph, also marks its moment of decay. (Anti-Catholic propagandists frequently made a similar argument, sans the republican overtones: Protestantism's very success threatened its ability to resist Catholicism, precisely because Protestants felt sure that nothing could uproot their religious dominance.) Modern man, Gough bluntly warns, has been "[e]masculated" (7) by a course of unlawful indulgences, all enabled by the nation's financial prosperity. Hence the need for this poem...
The next several sections, then, crusade against England's dominant "sins," which develop from the utopia that Gough had been celebrating just a few lines earlier. Gough sees the tide of national corruption beginning (and, indeed, engineered) at the aristocratic top, gathering speed as it races to the working-class bottom. Thus, the "statesmen" who profit from taxes on liquor (8) deliberately encourage working-class drunkenness and, ultimately, the sex trade (10), just as upper-class gamblers facilitate the "dread infection" of their sport (18). Although all sin, Gough clearly holds the upper- and professional classes responsible for modeling Christian virtues--indeed, as in the case of drinking, he charges the wealthy with debauching the nation in pursuit of profit. More interestingly, Gough's attacks on hunting and vivisection intersect with contemporary feminist critiques of both activities: he charts a direct line of descent from riding to hounds to "wife-beating," arguing that "murder crowns what cruelty begins" (26). The purportedly refined, elegant pleasures of the genteel hunt, which kills "God's creatures in their innocence of joy" (24), actually inculcates a disrespect for life in general and the weak in particular. Working-class men, who cannot afford to cloak a lust for killing under the equestrian's fashionable garb, reveal the upper-class hunter's true motivations in their ugliest light. Similarly, the scientific vivisector only demonstrates that "[n]othing is heinous or revolting now,/And murder wears no brand upon its brow!" (29) In both cases, cruelty to animals, however dressed-up or explained, is merely the first step on the road towards murdering human beings--especially weaker human beings, like women.1
With "England's Scepticism," one of the poem's longest sections, Gough changes tack: specifically, he indicts the country's clerical leaders for demolishing the Protestant reverence for the Bible they ought to encourage. Here, his parody of liberal Christianity--"The Bible's but half true and uninspired,/The writers only wrote what they desired" (30)--suggests that he's thinking of the previous decade's controversies over Essays and Reviews and Bishop Colenso, as well as the cautious acceptance of Biblical criticism in their wake. Bearing in mind that Gough identifies Englishness itself with Protestantism, any attack on the Bible threatens not only orthodox religion (however defined), but the very fabric of the nation itself. There is no way to separate belief from England's ongoing viability as an imperial power. Darwinian biology, which claims that "Man's great progenitor in Time's forenoon/Sprang from the oyster and the stark baboon" (33), falls into the same error, abandoning "truth's strong rock/the glory of our land" (34) for the stormy waters of heresy. Both Biblical critics and Darwin's followers reject the authentic lessons of the Reformation in favor of mere, self-aggrandizing novelty, leading the nation on to certain ruin.
But this "intellectual" revolt is nothing compared to Roman Catholicism's project: reversing history itself. Looking back on the joined legacies of Catholic Emancipation and the Oxford movement, Gough accuses Anglo-Catholics of believing that "[t]he Reformation was a downright sin,/So a new Reformation they begin!" (36) Returning us to the beginning of the poem, Gough, in a typically anti-Catholic moment, mourns that "[w]ives, mothers, maidens, and young Children come,/And sell their souls to slavery and Rome" (37). Like the hunters earlier on, Anglo- and Roman Catholics prey on the weak, whom they convince to voluntarily relinquish their hard-won liberties, once again, to a foreign power. By tolerating Catholicism, in other words, England has effectively permitted its own betrayal by "faithless traitors" (39). Thus, Gough's poem imagines a politico-theological cycle that moves from Roman Catholic slavery to Protestant liberty to Protestant decadence to Roman Catholic slavery again--but a slavery that the English willingly choose for themselves. Only a new "Champion" (39), in the mode of the first reformers, can restart the cycle. There will be no champion, however, as long as England dwells in its own forgetfulness, refusing to acknowledge, as Gough puts it in the next section, that "from the hour that Popery returns/'Twill not be long before she brands and burns!" (42). What England lacks, in other words, is any self-consciousness of this dangerous historical cycle, in which Antichrist eternally wars with the forces of God; just as England's national prosperity rests on an unchanging rock, so too does Roman Catholicism's inner corruption.
After admitting, however, that there are still signs of goodness within England, Gough concludes with the one solution to rule them all--the "living gospel" (49). In other words, having posited an apparently inevitable cycle of decline, Gough now argues that the cycle can be broken with a few timely reminders. This is not, however, quite so self-contradictory as it might seem, for Gough's point remains that England already has Protestant truth. Unlike the ancient empires, which imploded because there was no corrective to the corruptions of luxury, England can arrest her own fall by simply reverting to "truth's strong rock"; the cycle begins, in effect, from an entirely different historical location. Those who fall from the rock can climb back up. With true atonement, Gough exults, "[w]hy should not earth its Paradise regain?" (50) With Protestant truth at hand, England can become truly self-aware of her participation in that cycle of triumph and decay...and, so, consciously make her own historical (and transcendent) destiny.
1 On the links beween feminism and anti-vivisectionism during the Victorian period, see Lucy Bending, The Representation of Bodily Pain in Late Nineteenth-Century English Culture (Oxford: Oxford University Press, 2000), 116-76. | <urn:uuid:17df009c-84c3-4f68-b181-0e958a0b29cb> | CC-MAIN-2015-35 | http://littleprofessor.typepad.com/the_little_professor/2009/11/further-adventures-in-victorian-protestant-poetry-our-national-sins.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00101-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.9392 | 2,177 | 2.546875 | 3 |
LADDER III Cruise
What is LADDER all about?
LADDER stands for LArval Deep Dispersal on the East Pacific Rise. This is the third of three LADDER cruises. PIs: Lauren Mullineaux (WHOI), Andreas Thurnherr (LDEO), Jim Ledwell (WHOI), Dennis McGillicuddy (WHOI) and Bill Lavelle (NOAA/PMEL). The project is funded by NSF Grant OCE-0424953. (Visit here for more details.)
The main objective of LADDER III is to answer the following questions:
Bio-1) What are the influences of advection and eddy diffusion on the maximal dispersal distance of vent species with given larval life spans?
1) Recovery of Mooring Array. Seven moorings equipped with 15 current meters, 2 velocity profilers and 3 sediment traps will be recovered from a cross-shaped array centered on the EPR crest at 9:30N. Three axial moorings were positioned within the axial trough between 9:10 and 9:50N; a cross-axial mooring line was deployed at 9:30N between 103:55 and 104:35W. The moorings collected data between 2000 and 2900m.
Current meter being recovered. Photo by Skylar Bayer.
Station map for LADDER III cruise. The colors on the map indicate meters below sea-level. You can see that the average depth of the Ridge is 2500 meters below sea-level (mostly orange). The red in the upper left quarter of the map indicate seamounts. (Image: Andreas Thurnherr)
Left: Angel Ruiz Angulo working on the CTD. Right: Xinfeng Liang with CTD tagline. (Photos: Andreas Thurnherr)
3) Microstructure profiling. A survey using a deep microstructure profiling system (DMP) will be coordinated with the CTD/LADCP deployments, to be carried out at night, and occasionally during daytime, along the EPR axis and on the flanks. This is part of a separate project (described below) with PIs Andreas Thurnherr and Louis St Laurent.
The DMP in its cradle. Photo by Matthew Schwartz.
4) Recover larval sediment trap moorings. Two short moorings with sediment traps will be recovered from a site near 9-50(TrapL1, S of Ty/Io vent, near NA) and near 9-30 (TrapL2, near CA). These were used to monitor temporal variation in larval supply to the vent sites.
Sediment trap being recovered. Photo by Susan Mills.
Plankton pump being deployed. Photo by Susan Mills.
Recruitment sandwich with many vestimentiferans (tubeworms) growing inbetween the plates. Photo by Skylar Bayer.
The Mullineaux group (WHOI)
Lauren Mullineaux (WHOI), Susan Mills (WHOI), Carly Strasser (WHOI), Ben Walther (University of Adelalide), Nica Staglicic (Institute of Oceanography and Fisheries of Croatia), Skylar Bayer (Brown University)
Left to right: Chip, Carly and Ben (by Skylar Bayer); Nica & Lauren (by Susan Mills), Ben & Susan (by Skylar Bayer), Alvin (from WHOI).
The WHOI mooring team
Brian Hogue (Left) & Paul Fraser (Right)
Photo by Skylar Bayer.
Vertical Mixing near the crest of the EPR (PIs: Andreas Thurnherr and Louis St. Laurent) This project is funded by NSF Grant OCE-0727402.
Andreas Thurnherr (LDEO), Louis St. Laurent (FSU), Angel Ruiz-Angulo (LDEO), Xinfeng Liang (LDEO), Eric Howarth (FSU), Ken Decoteau (FSU)
Left: Louis St. Laurent, Ken Decoteau, Eric Howarth. (Photo: Skylar Bayer) Right: Amy Simoneau (SSSG), Angel Ruiz-Angulo and Xinfeng Liang deploying CTD. (Photo: Andreas Thurnherr)
In spite of several decades of research, mixing in the deep ocean remains poorly characterized. In particular, all available data on which current roughness-based mixing measurements are based, were taken on slow-spreading ridges where different processes have been implicated in the observed high levels of mixing. Therefore, it is entirely unknown to what degree these mixing measurements apply on fast-spreading ridges.
Our sampling program consists of examining both the microstructure and finestructure of ocean mixing. Miscrostructure is on a scale of centimeters to millimeters and finestructure is on the scale of meters. The Microprofiler measures the turbulence and mixing rates on the microstructure scale and the lowered acoustic Doppler profiler measures those of the finestructure. This will yield direct estimates of turbulent dissipation rates and finestructure parameters.
Left to Right: The Microprofiler (DMP) being loaded. Grad student Jon Stewart and the LADCP. Photos by Matthew Schwartz.
The microstructure survey is the first to characterize dissipation levels at any fast-spreading ridge site and will thus significantly extend the available mixing data set from the deep ocean to sites with different topographic characteristics. The measured dissipation rates will be used to improve mixing measurements to be used in circulation and climate models.
Water column denitrification (PI: Schwartz)
Matthew Schwartz & Jonathan Stewart (University of Western Florida)
I am collecting water samples from the portion of the water column known as the oxygen deficit zone. This region is about 500-900 m below the ocean surface and is marked by a significant decrease in dissolved oxygen concentrations due to respiration of organic matter (OM) in the form of phytoplankton and such from the shallow, photic water column.
One of the ways that OM is consumed is via a process called denitrification in which nitrate (NO3) is transformed in dinitrogen gas (N2). I am collecting water column samples and will analyze them at the University of West Florida in Pensacola, FL, to determine the concentrations of inorganic nutrients (including nitrate) and dissolved N2. This will allow me to see just how much denitrificaton is occurriing in these waters and if the mixing between various oceanic water masses in this region of the eastern tropical north Pacific alters the rate of denitrification.
This research overlaps with other research that I conduct in estuaries and coastal environments the Pensacola, FL, region of the Gulf of Mexico.
Matt Schwartz sampling from the CTD. Photo provided by Matthew Schwartz.
Suspended Particle Rosette (SUPR) Sampler for Investigating Hydrothermal Plume Particulates (PI: Breier)
Chip Breier (WHOI AOP&E), Brandy Toner (currently WHOI MC&G, very soon Univ. of Minnesota), Chris German (WHOI MG&G)
This is the first at-sea deployment of a new oceanographic tool, a suspended particulate rosette (SUPR) sampling system capable of rapidly filtering 25 large water volume samples (> 100 liters per sample) for suspended particulates during a single CTD cast or moored deployment. In addition to being able to rapidly collect many samples at a time, I designed the SUPR sampler to be compatible with in situ optical analysis techniques we are currently developing back in the laboratory. Being able to collect samples when and where we choose, and eventually being able to carry out a portion of the analysis underwater in their natural environment, will allow us to investigate these fundamental questions,
“How do iron- and manganese-rich, hydrothermal plume particles affect seawater chemistry and to what extent do these particles fuel microbial activity in deep-sea hydrothermal plumes?”
Chip, Carly, Ben, Brian (OS), Ronnie (OS) and Patrick (Bosun) deploying the SUPR for the first time. Photo by Lauren Mullineaux.
The samples we are collecting now at EPR will allow us to take an unprecedented look at spatial variability in the mineralogy and biogeochemistry of non-buoyant hydrothermal plume particles – using a combination of laser Raman, micro-scale x-ray absorption spectroscopy, and bulk and trace elemental analysis.
Hydrothermal Vent Meiobenthos (PI: Bright)
Monica Bright, Sabine Gollner & Ingrid Kolar (University of Vienna)
Left: Ingrid Kolar. Right: Sabine Gollner. Photos by Xinfeng Liang.
Deep-sea hydrothermal vents are globally wide-spread extreme environments located at the mid-ocean ridge system of the largest mountain chain on Earth. Driven by in situ primary production via chemosynthesis, a special vent fauna thrives under highly fluctuating conditions along a gradient of temperature and toxic chemicals such as hydrogen sulfide.
Meiofauna (the small-sized animal and protist community) of the 9o50' N EPR region is a prominent component of all known vent communities there and has been found in low diversity and low abundance. As the volcano of this region erupted in early 2006 and destroyed most of the organisms living there, we have the unique opportunity to study the sofar completely unknown successional patterns of meiobenthos.
Settlement experiment site. Some of the artificial surfaces seen in yellow and green. At the bottom of the picture is the Alvin basket with the austrian Bioboxes on the right. (WHOI Alvin group/LADDER I cruise)
Using artificial settlement devices and control natural collections in a variety of benthic locations with and without vent flux in the axial summit collapse trough, as well as in the pelagial on moorings, we will investigate the temporal and spatial hydrothermal vent communities over a time course of about 6 months to 3 years post eruption.
Close up of yellow, blue and red settlement surfaces (basically brillo pads/sponges). (WHOI Alvin group/LADDER I cruise)
This study on succession, the non-seasonal, directional continuous pattern of colonization and extinction will include the description of new species, the identification, and quantification of the specific meiofauna communities of selected hydrothermal vent habitats in terms of species richness, diversity, and abundance in conjunction with an assessment of the abiotic conditions as well as of the bacterial abundance and particulate organic matter measurements serving as food for this exclusively primary consumer community.
In addition, this study will include the search for vent meiobenthic species in the pelagial in the vicinity the 9o50' N EPR region. This study will be the first of its kind and will lead to a better understanding of the processes and underlying mechanisms of vent meiofauna succession.
The statements above were compiled from and writen by the scientists aboard. Many thanks to them!
This is Atlantis Cruise 15-26
Our main study site is 9 50' N on the East Pacfic Rise (image provided by Susan Mills)
The EPR is a fast-spreading ridge (lots of volcanic activity)
CTD: This device measures Conductivity, Temperature and Depth and collects water samples in Niskin bottles from various depths.
LADCP: Lowered Acoustic Doppler Current Profiler. This device uses acoustic waves to determine velocities in the ocean.
Sediment Trap: Collects particles and larvae moving downward.
Plankton Pump: pumps a certain volume of seawater for a certain amount of time catching anything usually between a centimeter and 63 microns.
PO: Physical Oceanography.
PI: Principle Investigator.
My name is Skylar Bayer, I'm a Brown University undergraduate senior concentrating in marine biology. I worked in the Mullineaux lab this previous summer (2007) as a Summer Student Fellow.
I'm a cruise participant in the science party aboard the Atlantis 15-26 cruise (November 13th - December 3rd 2007).
The purpose of this site is to provide students at high schools in the Westford, MA area, Middlesex high school, and Groton High School with an idea of what oceanographic research is like from a scientist and student's perspective.
If you have any questions, please contact me. | <urn:uuid:39bca561-5bdf-45c3-a4cd-d8255de2dfba> | CC-MAIN-2015-35 | http://www.whoi.edu/science/B/atlantis-15-26/background.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645323734.78/warc/CC-MAIN-20150827031523-00162-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.901031 | 2,632 | 2.625 | 3 |
POLITICAL FILMS OF THE REICH – PART IV * with switchable English subtitles *
DER STAHLHELM IN BERLIN (1927)
Documentary film about Stahlhelm Tag in Berlin in 1927
DEUTSCHLAND FEIERT DEN TAG DER NATIONALEN ARBEIT (1933)
One of the first documentary films about the celebration of May Day in 1933 Germany. The leftist unions in Germany had always demanded May Day be made a workers’ holiday. Hitler complied; the next day, he abolished the unions.
THE WINTERHILFSWERK DRIVES OF 1933 – 1935
Moving documentation about the work of the WHW in the early years of the Nazis’ reign in Germany.
DEUTSCHLAND – GESTERN UND HEUTE (1936)
A comparative study of Germany as it was and now “is” after three years of Nazi rule. Not exactly unbiased.
NS VOLKSSCHULE PLETTENBERG (1937)
Documentary film about a model National Socialist school in Plettenberg.
GESTERN UND HEUTE (1938)
Another Nazi propaganda film contrasting how it once was with how much better life is under the Fuhrer.
UNSER STURM (1941)
Film biography of an NSKK unit’s adventures at home and on the Eastern Front. Interesting, previously unpublished films of ghetto Jews they run into on their way through Poland.
POLITICAL FILMS OF THE REICH – PART V * with switchable English subtitles *
THE RISE OF THE NSDAP, 1921 – 1933
This film traces the major events in the Years of Struggle with its termination being the success of the Nazis to take over power in Germany.
WAHLKAMPF IN FRANKFURT (1932)
A review of the situation in Frankfurt shortly before the national elections, in which the Nazis performed their best to date before taking over the Reich government in toto. The atmosphere in the city among the competing parties vying for power is palpably felt.
THE SA MARSCHIERT
Documentary about the SA put out in 1933-1934, mainly consisting of march music accompanying film bragging about the prowess of the SA.
FRITZ TODT – ZUM BAU DER REICHSAUTOBAHNEN (1933)
Rather short film, in which Fritz Todt discusses the progress of the highway building program in Nazi Germany.
SCHONHEIT DER ARBEIT (1934)
Early National Socialist film praising the joy of working to a country, where unemployment was still massively high. The film documents the rebuilding of the economy through pictures of common people at work. That this reform was started during the Weimar Republic, for which the Nazis were now taking credit, is nowhere documented in this or any other film.
TAG DER ARBEIT – MANNHEIM (1934)
The May Day celebrations in Mannheim to the accompaniment of martial music.
DER FEIERTAG DER ARBEIT
And yet another film about the May Day celebrations in the early Reich years.
POLITICAL FILMS OF THE REICH VI (2012) * with switchable English subtitles *
DER OLYMPIA ZUG (1935)
In 1936, Germany sponsored both the winter and summer Olympic Games. It was Dr. Goebbels intention that everyone in the county get into the spirit of the Olympic Games and view it as both an honor and a mission, through which the bad press about Germany --- and the Nazis --- could be wiped away, if Germany presented itself as a welcoming and courteous host to its foreign guests. And to ensure national awareness of the Games, a mobile exhibition known as “The Olympic Train” toured around Germany with displays, dioramas and information about the upcoming event. This film portrays the preparation of the train for its journey around the Reich and the setting-up of displays wherever it went.
EIN BESUCH IM JUNGVOLKLAGER BEI HEYM (1935)
The Jungvolk was the preparatory group for the Hitler Youth, set up for children, who were still too young to join the HJ. This film takes us on a tour of one of these camps and of the region surrounding it.
TAG DER FREIHEIT (1935)
This movie would no doubt be of great interest to those interested in military history. No other documentary allows us to see how certain equipment is used in its most proper way, how it works, how it is operated and so on. Just take a look at the rapid deployment of the PAK guns; their crews fighting off tanks; flak batteries in an air defense role … Such pictures have great learning potential in and of themselves. Scholars of military history can learn very much about the strategies and military doctrine of pre-Blitzkrieg Germany; especially through the well performed maneuvers shown in this movie. Truly essential documentary for anyone interested in the topic.
EHRE DER ARBEIT (1936)
And yet another Nazi propaganda film extolling the virtues of work and honor.
DIE DEUTSCHE FRAUENKOLONIALSCHULE RENDSBURG (1937)
Propaganda film shows the kind of training women receives at such institutions prior to accompanying their men on the journey to colonize what is to be future conquered lands.
GARDETAG IN DUSSELDORF (1937)
A quick review of veterans and a present-day Wehrmacht honor guard parading in Dusseldorf in commemoration of this day honoring the military.
EXHIBITION: “SCHAFFENDES VOLK” IN DUSSELDORF (1937)
The Reichsausstellung Schaffendes Volk (The Reich's Exhibition of a Productive People) of 1937 was held in the North Park district of Düsseldorf, Germany, along one mile of the Rhine shoreline. It was opened on May 8, 1937 by Hermann Göring. Through October of the same year it attracted more than six million visitors.
Planned in secret and deliberately designed as a rival to the 1937 International Exposition of Modern Life in Paris, the exhibition was meant to showcase the domestic accomplishments of the National Socialists in new housing, art, and science during their four years in power. The fair's director was Dr. Ernst Poensgen.
The exhibition was laid out in four main divisions:
industry and economics
land utilization and city planning
material progress (with an emphasis on progress in synthetics)
arts and culture.
Through the publicity efforts of its CEO, Max Keith, a functioning Coca-Cola GmbH bottling plant stood at the center of the fairgrounds, with a miniature train for children, and immediately adjacent to the Propaganda Office.
REMEMBERING DR. FRITZ TODT (1939)
Interesting homage to the man, who was responsible for the upgrading and new construction of highways in the Reich during the pre-War years. If you’re driving on a highway in Germany, there’s a good chance, he had it built.
REICHSMINISTER SEYSS-INQUART IN LIMBURG (1940)
Just as it says: a short film about the Reichsminister’s visit to Limburg.
REICHSMINISTER SEYSS-INQUART IN THE NETHERLANDS (1940)
This film shows the events in Holland, when the transfer of power to the new minister for internal affairs, Dr. Seyss Inquart, took place.
POLITICAL FILMS OF THE REICH VII (2012) * with switchable English subtitles *
THE NEW GERMANY (1934)
The events leading up to Adolf Hitler being named Reich’s chancellor and the celebratory procession afterwards are detailed in this 1934 Nazi film.
LANDESBISCHOF PETER BESUCHT HALLE (1934)
A rather curious film put out by the Nazis, which has very little to do with the title. Sure, the bishop pays a visit to Halle and one can see he has nothing against the Nazis … quite the contrary. But most of the film seems to have something to do with the wedding nuptials of some royalty at the local St. Moritz Church.
FUR UNS – ZUM APPELL (1937)
NS documentary about the Day of Remembrance and Honor to those who fell in the Beer Hall Putsch in 1923. The upper crust of Germany’s leadership walk to Feldherrnhalle and lay wreaths and the names of the dead are read out … but why Horst Wessel’s name is mentioned and why there’s an honorary column with his name on it when he died seven years later in a street brawl is anyone’s guess.
THE DUCE IN GERMANY (1937)
An official film put out about the Duce’s state visit to Germany in 1937. Followed by …
PRIVATE FILMS FROM MUSSOLINI’S VISIT TO BERLIN (1937)
An amateur photographer captures the city and the events surrounding the abovementioned visit.
REICHSTAGUNG DER AUSLANDSDEUTSCHEN IN STUTTGART (1938)
Propaganda film of the sixth rally of foreign-residing Germans, who all came to Germany to celebrate their Germanness; the New Reich; and to party hardy with their overseas cousins. Rudolf Hess makes a big show of telling them that their host countries shouldn’t feel insecure about they’re being loyal to an aggressive state, which has been licking its lips everytime it looks at its neighbors. Uh huh.
THE POLITICAL EVENTS OF 1938 – 1939
The significant events of 1938 – 1939 are presented here in excerpts from Wochenschauen.
SPORTING EVENT IN THE REICH (1939)
Simply a private color film from 1939, showing some public sporting event somewhere in the Reich.
DR. FRITZ TODT – HIS CALLING AND HIS ACHIEVEMENTS (1942)
A memorial film put out by the National Socialists to honor the recently deceased Dr. Fritz Todt. I know, we’ve introduced several films about Dr. Todt recently; but with his death, I promise you, this will be the last film about him you’ll see put out by us.
POLITICAL FILMS OF THE REICH VIII (2012)
VERRATER VOR DEM VOLKSGERICHT (1944)
More than three-hour long “excerpt” of the trial of the July 1944 conspirators against Hitler’s life.
POLITICAL FILMS OF THE REICH IX (2012) * with switchable English subtitles *
DAS LEBEN UND UBERLEBEN IM DRITTEN REICH
This documentary concentrates on amateur films taken during the Reich years and emphasizes how politics played little part in the amateur photographers pictures … until the War broke out.
DER STURM BRICHT LOS – THE APOCALYPSE BEGINS
Film montage documents the closing years of the War, when the days of victory and glory were but a distant memory.
SHORT FILMS ABOUT HERMANN GOERING
Twelve short films documenting Hermann Goering’s life and political activities.
POLITICAL FILMS OF THE REICH X (2012): * with switchable English subtitles *
ERWERBSLOSE KOCHEN FUR ERWERBSLOSE (1932)
Moving documentary about the unemployed running a soup kitchen for the benefit of the massive numbers of hungry unemployed people out there. As the film states: “30 cents a day will provide a warm lunch for three people! Become a member of the Union of Kitchen Workers for the Unemployed and contribute your monthly subscription of 30 cents!” A message, which unfortunately resonates all too familiarly to this day.
CONSTRUCTION OF THE 05001 BORSIG (1934)
Short documentary film about the construction of a new locomotive in the Reich.
ERNSTES LERNEN – FREUDIGES SCHAFFEN (1937)
Film about young women hired by the NSV to care for young children, the sick, the needy. This particular film deals with their training in dealing with young children. A very heartwarming film, which leaves one wondering at the end about the positive benefits of a community welfare system all must take turns serving in.
GESUNDE FRAU – GESUNDES VOLK (1937)
Another propaganda film along BDM lines, in which the message is beaten home, that a woman who exercises and competes Nordically is not only healthy, but superior to those who don’t. Nice figures to look at, at least.
PRIVATE FILMS FROM THE REICH (1934 – 1942)
A collection of private films covering a variety of topics by different filmmakers, which show that it wasn’t all fun and wargames in Hitler’s Germany.
Dutch-language documentary film about the infamous Leon Degrell, who managed to escape the fate of a lot of his buddies at War’s end by taking a plane away from the battlefield. Useful, if you understand Dutch.
WIR LEBEN IN DEUTSCHLAND (1943)
Very brief excerpt from a now lost film, which spoke volumes about the situation in Germany in 1943. A man arrives in a town unknown to him and stops strangers to ask directions. Unfortunately, there are so many forced laborers and foreign Wehrmacht volunteers in Germany these days, that no one speaks English. Sort of like asking for directions in most of Phoenix, Arizona.
TOTAL DURATION: 1,190 MINUTES (or almost 20 hours!) | <urn:uuid:56d67bca-6c4d-4f36-b85a-6d0771f3a363> | CC-MAIN-2015-35 | http://www.rarefilmsandmore.com/de/10-dvd-set-political-films-of-the-reich-i-x-with-switchable-english-subtitles- | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065828.38/warc/CC-MAIN-20150827025425-00281-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.919863 | 2,970 | 2.71875 | 3 |
Streptococcus mutans
Streptococcus mutans was first described by J.K. Clarke in 1924 after he isolated it from a carious lesion. Clarke also succeeded in producing caries in teeth in vitro with S. mutans, providing early evidence that S. mutans was a major cause of dental caries. Though discovered in the 1920s, the species attracted little sustained interest until the 1960s, when researchers began studying dental caries in detail.
The group of oral streptococci closely related to S. mutans is referred to as the “mutans group” or mutans streptococci. The species name Streptococcus mutans is always written in italics, while the group name mutans streptococci is not. The mutans group consists of S. rattus, S. mutans, S. cricetus, S. macacae, S. sobrinus, and S. downei. S. mutans and S. sobrinus make up the majority of the group and are the only members found in humans. The two species can be distinguished by laboratory tests, but this is not always practical because of cost and time. No selective medium allows the two species to be detected and separated, so laboratory work is instead carried out on the mutans streptococci group as a whole. Because of its greater prevalence, most isolates will in fact be S. mutans.
The two selective media widely used for isolating caries-related streptococci are based on mitis-salivarius agar and on TYC agar to which the antibiotic bacitracin is added (TYCSB). The bacitracin suppresses the growth of most species but allows S. mutans and S. sobrinus to grow. The inclusion of sucrose leads to the formation of glucans and a distinctive colony appearance that aids identification.
One of the harmful effects of the normal flora, and of S. mutans in particular, is bacterial synergism, in which a member of the normal flora facilitates the growth of a potential pathogen. S. mutans creates such conditions by initiating the biofilm to which several oral bacteria adhere and on which they rely. S. mutans readily colonizes tooth surfaces, attaching to the thin film on the tooth called the enamel pellicle. It carries a cell-bound enzyme, glucosyltransferase, that serves as an adhesin for attachment to the tooth, and it produces lactic acid, which demineralizes tooth enamel. Furthermore, if oral streptococci such as S. mutans are introduced into wounds created by dental manipulation or treatment, they may adhere to heart valves and initiate subacute bacterial endocarditis.
S. mutans has a single circular, double-stranded DNA chromosome. The 2,030,936-bp sequence of the Streptococcus mutans strain UA159 genome was completed and the results published in the October 29, 2002, issue of Proceedings of the National Academy of Sciences USA. The genome was first sequenced from 12,000 individual, shotgun-based, double-stranded templates; a directed, custom synthetic-primer-based approach was then used for closure and quality improvement to finish the sequence to a high level of accuracy.
Cell structure and metabolism
S. mutans is a facultative anaerobe that obtains energy through lactic acid fermentation. It is an alpha-hemolytic streptococcus that appears greenish on a blood agar plate. Its spherical cells appear in chains because division occurs in a single plane and the cells remain attached after binary fission. S. mutans is Gram-positive and catalase-negative, so it cannot catalyze the breakdown of hydrogen peroxide into oxygen and water.
In utero the human fetal oral cavity is sterile, but the colonization of bacteria begins at birth. Handling and feeding of the infant after birth leads to the establishment of a stable normal flora in the oral cavity in about 48 hours. Streptococcus salivarius is the first dominant colonizer and may make up 98% of the total oral flora until the appearance of teeth occurs around 6 to 9 months of age. After the eruption of teeth, S. mutans and S. sanguis colonize the oral cavity and persist as long as teeth remain.
S. mutans as well as other bacteria that compose the normal oral flora provide valuable services to the human host. They occupy available colonization sites which makes it more difficult for other microorganisms to establish themselves. The oral flora also contributes to host nutrition through the synthesis of vitamins, and they contribute to immunity by inducing low levels of circulating and secretory antibodies that have the potential to react with other pathogens. In addition, the oral flora exerts microbial antagonism against foreign species through the production of inhibitory substances.
S. mutans is normally present in the oral flora but becomes pathogenic only under conditions of frequent and prolonged acidity. The cariogenic potential of S. mutans stems from the organism's ability to ferment various carbohydrates, producing large amounts of acid, and from its ability to participate in the formation of dental plaque. S. mutans is involved in the initial formation of the plaque biofilm through glucan formation: the dextran-like glucose polymers allow bacteria to stick to the enamel of teeth to form a biofilm. This flora is extensive and may reach a thickness of 300-500 cells on the surface of the teeth. Acid production through lactic acid fermentation drives the dissolution of hydroxyapatite crystals and promotes the growth and development of acid-tolerant bacteria. Acid tolerance allows cariogenic bacteria to thrive and to exclude bacteria that cannot survive at a pH below 5.5.
S. mutans is able to colonize in high proportions through both sucrose-dependent and sucrose-independent mechanisms. Sucrose-independent adhesion involves both specific and nonspecific interactions with salivary glycoproteins and the enamel pellicle. Sucrose-dependent adhesion relies on the synthesis of extracellular glucan polymers from sucrose by glucosyltransferase enzymes. GbpB and GbpC are both part of the S. mutans cell wall: GbpB functions as a peptidoglycan hydrolase and may be necessary for cell wall cycling and synthesis, while GbpC acts as a surface receptor for glucan and is responsible for dextran-dependent aggregation. GbpA and GbpD are secreted Gbps that contribute to sucrose-dependent biofilm production.
The effects of S. mutans can be counteracted through proper oral hygiene and the use of fluoride. Fluoride has been found to be the most effective agent against caries because it acts through topical mechanisms: inhibiting demineralization of enamel and tooth structure, enhancing remineralization at the crystal surfaces, and inhibiting bacterial enzymes such as enolases, phosphatases, ATPases, and pyrophosphatases. Fluoride also alters the physicochemical properties of teeth, making them more resistant to acid dissolution through the formation of fluorapatite and fluorhydroxyapatite.
Application to Biotechnology
One application of S. mutans in biotechnology is the making of paper using glucans, produced by the glucosyltransferase C enzyme, instead of modified starches. Glucans are functionally similar to hydroxyethyl-modified starch and are particularly useful in the coating step of paper manufacture. S. mutans, or plants transformed with the gene encoding the glucosyltransferase C enzyme, could be used to create a paper substitute and reduce the deforestation caused by clearing trees for pulp.
Manganese has been shown to be essential for the expression of S. mutans virulence factors such as the glucan-binding proteins. As such, current research is underway to examine the effects of Mn on the transcription of genes encoding Gbps. The glucan binding proteins (GbpA, GbpB, GbpC, and GbpD) promote adhesion and accumulation of S. mutans on tooth structures.
Researchers are working to interfere with key genes and proteins necessary for the survival of S. mutans, aiming to remove the bacterium's ability to thrive in acidic conditions. Past research has shown that this ability has several components, including the membrane-bound fatty acid biosynthesis enzyme FabM, which when shut down makes S. mutans 10,000 times more vulnerable to acid damage. In addition, early work suggests that FabM helps the organism resist the human body's defenses. As a result, FabM is a major target for the design of new drugs.
The effect of local immunization with Streptococcus mutans on dental caries is being studied in the hope of creating a caries vaccine. It is proposed that salivary immunoglobulin A antibody may act as an ecological determinant in the oral cavity by affecting oral microorganisms and possibly their by-products.
- Clarke, J. K. (1924). "On the Bacterial Factor in the Aetiology of Dental Caries." British Journal of Experimental Pathology.
- "No Tooth Brush, No Cavities? Cavity-causing Bacteria May Be Made To Self-destruct." ScienceDaily, 7 January 2008.
- Taubman, Martin A., and Smith, Daniel J. "Effects of Local Immunization with Streptococcus mutans on Induction of Salivary Immunoglobulin A Antibody and Experimental Dental Caries in Rats." Infect Immun. 1974 June; 9(6): 1079-1091.
- Jeevarathan, J., Deepti, A., Muthu, M., Rathna Prabhu, V., and Chamundeeswari, G. "Effect of Fluoride Varnish on Streptococcus mutans Counts in Plaque of Caries-free Children Using Dentocult SM Strip Mutans Test: A Randomized Controlled Triple Blind Study." Journal of Indian Society of Pedodontics and Preventive Dentistry. 2007; 25(4): 157-163.
Yarrow Point -- Thumbnail History
HistoryLink.org Essay 4212
Yarrow Point, a small peninsula located in King County on the east side of Lake Washington, extends a mile northward into the lake, forming the western shore of Yarrow Bay, just south of Kirkland. It is the farthest east of three fingers of land, the other two being Hunts Point and Evergreen Point. Clyde Hill and the SR-520 highway edge Yarrow Point's southern border. Residents of Yarrow Point can hear carillon bells sound on the quarter hour, echoing across Yarrow Bay from Carillon Point to the east. Pleasure and tour boats crisscross the waters to and from Seattle and Kirkland's harbor, reminders of the busy world across the bay. Before the late nineteenth century, Yarrow Point remained forested and unspoiled, visited only by shifting gray fog and frequent mists. William Easter filed the first homestead claim in 1886. Later came strawberry, vegetable, and holly farms. Yarrow Point incorporated as a town in 1959. Conserving the peninsula's wetlands and woods has been a key focus of the town.
Until the middle of the twentieth century small farming enterprises that grew strawberries, vegetables, and holly still covered much of Yarrow Point’s 231 acres. In 1902, Edward Tremper purchased a large tract of land and imported holly stock from England to plant on it. By the 1920s, he owned the largest holly farm in the United States. Farmers of Japanese descent came to work for Tremper and on leased land where they grew strawberries and vegetables. During World War II, the policy of Japanese American internment forced the Japanese of Yarrow Point and elsewhere in the Northwest into internment camps. Cano Numoto owned and farmed land west of 92nd Avenue NE and was one of only a few who returned to the Eastside after World War II.
Others who settled on Yarrow Point came for the benefits of its country setting. Samuel Curtiss Foster and his wife Harriett filed a "Declaration of Homestead" document and in 1910 built a cabin on the west side of their one-acre Yarrow Point property that fronted on 92nd Avenue NE and extended eastward to 94th Avenue NE. Counseled by his doctor to move to the more salubrious air of the “country,” Mr. Foster and his wife established a permanent home there that three generations of their family enjoyed from 1910 to 1983. Curtiss Foster moved his Seattle plumbing business to the Eastside, doing this work on many of the early homes on the Point and for schools in Bellevue and Kirkland.
Their daughter Wilma commuted daily via boat to Seattle's Garfield High School, from which she graduated in 1926. Foster cultivated the eastern parcel of his property in corn, beans, and peas, using a draft horse he kept in a barn on Clyde Hill to pull the plow. By 1923 the Fosters, appreciating the benefits of living on the Eastside, decided to build a more permanent home, moving the cabin closer to 92nd Avenue NE at the northeast corner of the intersection of NE 42nd Street. Foster built up the present house from the cabin, doing much of the construction himself. The house remains today (2003) essentially as it was when built, lovingly restored by its current owners.
What’s in a Name?
Two individuals are especially significant to Yarrow Point history because of their contributions to the town’s names. Leigh S. J. Hunt, owner of the Seattle Post-Intelligencer, became Yarrow Point’s first land speculator. He bought most of it in 1888 and on its northern shoreline built a large estate he named “Yarrow” after a favorite poem by William Wordsworth. Over time the name “Yarrow” seemed suitable as a name for the location, and the small peninsula became known as Yarrow Point.
In 1907 a Scotsman, George F. Meacham, filed the first development plat for Yarrow Point. He advertised lots for sale and sponsored a contest to name the streets, asking for Scottish names. Sunnybrae, Bonneybrae, Mossgiel, Loch Lane, and Haddin Way continue to appear alongside the numbers on Yarrow Point street signs. In 1913, he deeded two acres for a park that became known as the George F. Meacham-Morningside Park and later the location of the Yarrow Point Town Hall, dedicated in 1990.
Developing a Community
By the 1940s, women on Yarrow Point participated in community service endeavors as members of the Overlake Service League and of the Yarrow Garden Club. In 1946, Yarrow Point established its own Circle of the Overlake Service League. Along with members from neighboring community circles, their efforts included helping the disabled, assisting the Red Cross, providing shoes and sewing clothing for disadvantaged children, and eventually establishing a Thrift Shop off Main Street.
For more than 50 years, since its founding in 1948, the Yarrow Garden Club’s dedication to “knowledge and love of gardening” has contributed to the establishment of beautiful gardens on Yarrow Point and the improvement of its public landscape. Members have also helped beautify Bellevue High School and Clyde Hill Elementary School and contributed to such causes as the Marine Hospital in Seattle and Eastside Handicappers. Founding member Marjorie Baird became a trustee of the University of Washington Arboretum Foundation and chaired the Gardens of the Governor’s Mansion in Olympia.
Yarrow Point citizens voted to incorporate as a town in 1959, and as a result of this decision they began to define and develop the community’s traditions and values.
Yarrow Point’s Fourth of July Celebration
In 1976 celebrations occurred all over the nation to commemorate the bicentennial of the signing of the United States Declaration of Independence. On the east side of Lake Washington so many communities organized celebrations that Bellevue, to avoid competition that year, delayed its own until July 10. For most towns it would be a one-time event. For Yarrow Point, the 1976 Bicentennial Celebration became both a tradition and a transformation.
Minor Lile was Mayor in 1976 and he and his wife Sue gathered together a committee of friends and neighbors to brainstorm what they could do to celebrate. Sue Lile remembers that the idea just came to her: “We really ought to have a Fourth of July celebration on Yarrow Point.” She not only became chair for the next three years, but also launched what would become Yarrow Point’s most important annual community tradition. Neighbors found out they enjoyed celebrating and working on committees together, having a common goal.
As the celebration grew in size and complexity it became more integral to community life, with planning beginning months in advance and more residents focused on it as the commencement of their summer fun and activities. Participation in the annual celebration encouraged friendships and volunteerism in other aspects of community life throughout the year. Chairs of the event sometimes went on to serve on the town’s commissions; one eventually even became mayor! No longer just a place to live, the town began to function as a community, putting forth its own set of values as a cohesive future agenda.
The celebration also became the logical time to commemorate other events. In 1979, it became an anniversary party for the town’s incorporation, and 10 years later, in 1989, citizens memorialized another defining occasion for Yarrow Point with the dedication of the Wetherill Nature Preserve.
Wetherill Nature Preserve
The land, so beautiful and so valuable, continues to be the vital source of the outlook and character of Yarrow Point. In 1894, Jacob Furth purchased from Leigh S. J. Hunt a 22-acre plat on the southwest side of Yarrow Point, and established himself and his family as regular summer visitors to what was still a relatively untouched peninsula of land. The new property, located on the comparatively undeveloped eastside of Lake Washington, was only accessible either by boat or by going around the lake over rough, dirt roads. Nevertheless, the impressive lakefront site showed promise.
The Furths built a comfortable country home there to accommodate their family's summer holiday needs and even gave it a name, calling it Barnabee, after a famous Shakespearean actor. Jacob's wife Lucy loved to recite passages from the plays and sonnets of the famous Bard. Lucy, a farm girl from Indiana, also had an orchard planted. Otherwise, the property was mostly open space with only a few trees. Eventually, the family leased 16 acres of it to the Saiki family to farm.
In 1916 when Lake Washington was lowered nine feet to create the Lake Washington Ship Canal, which provided access from Lake Washington into Puget Sound, the property gained rich lake bottom land at the new lower level along its waterfront boundary. In 1927 Sidonia Furth Wetherill, daughter of Jacob and Lucy Furth, and her husband, Army Colonel Wetherill, took over the Furth estate. Their two daughters, Marjorie and Sidonia loved going there for summer vacations. Later, when daughter Marjorie's husband Hugh Baird was called to war in 1941, she moved there with their two children, and when Hugh returned it became their permanent residence.
After World War II, the leased farm property reverted to a woodland with blackberry vines and small trees thriving where a field of strawberries and vegetables had formerly grown. It became a haven for birds and small animals and even had a resident beaver.
Marjorie and her sister, Sidonia Wetherill Foley, who now lived on the East Coast, became concerned about the preservation of the beautiful piece of land their family had enjoyed for so many years. Eager buyers called Marjorie, inquiring if she would divide it up into parcels for homes. Preferring to conserve its natural beauty, she first contacted the Nature Conservancy, but they wouldn’t guarantee its preservation for perpetuity.
All of this led to what would result in a “gift of a lifetime.” When James Barton, Mayor of Hunts Point, suggested gifting the land to the towns of Hunts Point and Yarrow Point, pledging to guarantee it would be kept as is with the trees, Marjorie and Sidonia could see that gifting the land in this manner would benefit the most people. They officially deeded 16 acres as the Wetherill Nature Preserve on July 4, 1988. Their decision to protect fields and forests from being turned into concrete and housing tracts and to preserve the wildlife is an incredible, unprecedented commitment of individuals to the environment. A sign at the entrance announces that the Wetherill Nature Preserve is a “natural place, a habitat” area. True to that concept, any designs for it have remained simple, primarily giving the public access rather than creating a landscaped garden.
Defining Decisions -- Land and Water Issues
When in 1916 the construction of the Hiram M. Chittenden Locks and the Lake Washington Ship Canal lowered the level of Lake Washington, the additional shoreline of Yarrow Bay created a wetlands area, a natural sanctuary for wildlife. Along Yarrow Point’s eastern boundary, Yarrow Bay has been the focus of several development attempts. Each has resulted in decisions with vital consequences for the town of Yarrow Point.
The first, proposed in the 1950s by the Austin Company, would have resulted in the creation of a little “Venice.” It envisioned a shopping center, home sites, boat moorage and apartments built along canals. A downturn in the economy prevented its realization, but Yarrow Point citizens understood the significance of the Yarrow Bay project and decided to incorporate as a town in order to have the authority to determine how the town would develop.
In the 1970s developers again proposed to develop Yarrow Bay and claimed it would be the largest development north of San Francisco. Citizens of neighboring communities, including Yarrow Point, founded the Yarrow Bay Conservancy Council. They worked for three years to educate public officials and the community about the importance of preserving the Yarrow Bay wetlands.
Supported by guidelines determined by legislation for wetland protection, a consortium of government agencies established an official wetland boundary for Yarrow Bay. This resulted in preservation of two thirds of the area as “undisturbed wetlands.” In the 1980s, the remaining upland parcel near Lake Washington Boulevard eventually was developed.
A Unique History with Regional Significance
Some may be surprised that Yarrow Point, essentially a neighborhood of homes, has a notable and unique history and that its citizens have contributed to issues with regional significance. Its development as a community reflects the transformation from rural to suburban life repeated throughout the Northwest during the past century.
The Year 2000 census recorded 1,008 residents living in 393 homes on Yarrow Point. From early settlers who were Seattle businessmen, farmers, and small landholders to all those citizens who dedicated time and talent to the community and the town, Yarrow Point’s history is about people and what they value. They value the land and want to preserve it, and they adjust to change by becoming involved and finding solutions. Their history ensures Yarrow Point’s future as well.
Robert E. Ficken and Charles P. LeWarne, Washington: A Centennial History (Seattle: University of Washington Press, 1988); Point in Time: A History of Yarrow Point, Washington, ed. by Suzanne Knauss (Yarrow Point, WA: Belgate Printing, 2002); Roger Sale, Seattle: Past to Present (Seattle and London: University of Washington Press, 1976); Jeanne Whiting, Yarrow: a Place (Seattle: 1976).
Licensing: This essay is licensed under a Creative Commons license that encourages reproduction with attribution. Credit should be given to both HistoryLink.org and to the author, and sources must be included with any reproduction. Please note that this Creative Commons license applies to text only, and not to images. For more information regarding individual photos or images, please contact the source noted in the image credit.
The final configuration of the Columbus laboratory module for the International Space Station. Credit: ESA
The cornerstone of Europe's space program, the 1.3-billion-euro Columbus laboratory module embodied great hopes and suffered bitter disappointments, like other human spaceflight efforts at the turn of the 21st century. Conceived in the 1980s as a European path to independence in space, the Columbus program made it to the launch pad years behind schedule and a mere shadow of its original scope. Still, its launch in 2008 marked a spectacular achievement of human ingenuity, perseverance and cooperation.
In early 1980s, the European Space Agency, ESA, considered a manned orbiting station to succeed and expand the capabilities of the Spacelab complex, which could only fly short missions inside the cargo bay of the US Space Shuttle.
With the launch of the NASA Space Station project in January 1984, which was accompanied by an American invitation to international partners to join the effort, Europe had to make a choice about its plans in space. At the beginning of March 1984, NASA Administrator James Beggs visited Europe and Japan to discuss possible cooperation on the space station with their respective agencies.
In January 1985, in Rome, a meeting of science ministers representing countries of the European Space Agency, ESA, approved the Columbus program, the most ambitious effort in space undertaken by the organization to date. The plan, spearheaded by Germany and Italy, envisioned a manned orbital module that would be attached to the American space station, with the capability to evolve into a full-fledged European orbital outpost before the end of the century. (269) The first elements of the Columbus program were expected to fly as early as 1992, to coincide with the 500th anniversary of Columbus' voyage to America.
As in previous space projects, European countries pooled resources to pay the continent's expected $2.5 billion contribution to the Space Station program, with the largest powers within the agency doing the financial "heavy lifting."
In June 1985, Germany's MBB-ERNO took the role of prime contractor in the Columbus program, while Aeritalia was given the responsibility for building habitable sections, derived from Spacelab modules.
Under the original proposal, French aerospace giant Aerospatiale received a contract for the development of a payload servicing module. However, due to disagreements with NASA and the resulting redesign of the Columbus complex, the module was dropped from the program. It was one of the first, but not the last, of the conflicts with NASA, which used its dominant position as provider of the Space Shuttle system to pressure its partners. (268)
NASA was reluctant to transfer crucial information on the project to its European allies, while ESA struggled to ensure equal access for Europeans to the various facilities of the future station. After some wrangling, on June 3, 1985, NASA and ESA signed a Memorandum of Understanding on the Space Station cooperation. Still, the technology transfer issue remained unresolved and it was postponed until the conclusion of the Phase B definition and preliminary design work, which at the time was expected to take two years. The document at least provided Europeans with basic information about the American design that enabled them to do some minimal work.
At the beginning of 1986, ESA and NASA clashed over the very concept of the Columbus program. NASA objected to ESA's original plan to design Columbus as building block of a future European space station. The Americans were concerned that they would facilitate the creation of a potential competitor if the manned space outpost fulfilled its promise as supplier of commercially viable products, such as new materials and pharmaceuticals.
The Europeans and the Americans also clashed over the military potential of the Space Station. The partners eventually agreed to limit research onboard the outpost to peaceful purposes, "as determined by each partner for its own space station module." (139) Despite objections from Pentagon and difficult negotiations, NASA did need European partners. The international status of the project promised to prop up shaky political support for the Space Station program in the US Congress and "spread out" the development cost.
The preliminary design and technology preparation phase of the Columbus program lasted until the end of 1987. A ten-year development phase of the project was expected to take place during 1988-1998. Only on September 29, 1988, NASA, ESA, Japan and Canada signed a final agreement on the Space Station cooperation.
APM and MTFF
Yielding to American pressure, ESA ultimately decided to split the Columbus program into two interdependent parts: the Attached Pressurized Module, APM, which would be permanently docked to the US space station, and a smaller orbital facility, designated the Man-Tended Free Flyer, MTFF, which would be built and launched exclusively by Europe.
By 1987, the service module, which was to provide propulsion, attitude control, electric power, thermal control, communications and data processing for the entire "independent" European space station, was scaled down in size in comparison to the original proposals. The module's pressurized habitation section was eliminated. Germany's Dornier would build the service module.
Unlike the European Columbus module for the US Space Station, the MTFF station would not be permanently occupied, but only visited by astronauts every six months. As such, the MTFF would be well suited to the then commercially promising field of microgravity research, which requires an environment free of even the slightest disturbances, including those that astronauts onboard inevitably cause. At the same time, such an arrangement would negate NASA's objections to European microgravity research onboard the US space station.
The entire two-element MTFF station could be launched by the Ariane-5 rocket and serviced by the then yet-to-be-built Hermes reusable orbiter. Thus, if ever built, the facility would be totally immune from American dictates. At the same time, the "independent" station would be launched into an orbit with an inclination of 28 degrees toward the Equator (the same as the US station's orbital parameters), which would make it possible for the Space Shuttle to access it, if necessary. In the wake of the Challenger accident, it looked to some optimists in Europe as though the MTFF could make it into orbit before the US space station.
In addition to the APM module and the MTFF, the Columbus program also envisioned an autonomous orbital platform in polar orbit. The spacecraft would be dedicated to Earth-observation and other remote-sensing experiments, as a polar orbit would provide global coverage of the Earth surface.
Design and development of the platform would be conducted by the Satcom International consortium with UK's British Aerospace and French company Matra leading the program. As of 1985, the cost of the program was estimated at $360 million and it was expected to climb to $560 million if another proposed platform flying near the main space station was added. (267)
Original plans called for the launch of the polar platform onboard the Space Shuttle, flying from Vandenberg Air Force Base; however, the mission was later switched to the European Ariane-5 rocket. The platform would carry up to one ton of science gear and would support a power supply system with a 4-kW output. The Space Shuttle and the Hermes space plane were expected to visit the platform on an annual basis, delivering new payloads and servicing existing systems. It was expected that after the first visit, the mass of the scientific payload could be increased to two tons and the solar panels enlarged to provide up to 7 kW of power. As of 1992, between 1,700 and 2,400 kilograms of international payloads were to be hosted onboard the spacecraft.
As post-Cold War changes swept the world in the 1990s, neither the scope of the Columbus program nor the size of its hardware could be sustained. The painful unification of Germany took a heavy toll on that country's contribution to the European Space Agency, ESA. In 1991, ESA announced that it would slash the length of the Columbus module by 20 percent. To make matters worse, the "independent" European space station, the MTFF, was "deferred" to 2001 and ultimately killed. The unmanned platform shared a similar fate.
On October 18, 1995, ESA council slashed the length of the Columbus module to 6.7 meters, or half of its original size. To minimize the impact of the shrinking module's ability to house scientific payloads, European developers come up with an ingenious ergonomic design, which utilized walls and ceiling of the laboratory for placement of hardware. A total of 10 experiment racks were housed in the production version of the module. Two experiment platforms -- one facing the Earth and one pointing toward stars -- were available on the exterior of the module.
In addition, terrified by the disastrous maiden flight of the Ariane-5 rocket in 1996, ESA officials decided to launch the irreplaceable laboratory on the US Space Shuttle.
Still, delays in the construction of the International Space Station and the Columbia accident in 2003 pushed the launch of the Columbus module years behind schedule. High hopes circa 1980s about lucrative space ventures, producing miracle drugs, ultra-strong alloys and perfect optics were all but dashed by the turn of the 21st century, turning early quarrels between ESA and NASA into "much to do about nothing." By the time Columbus reached the launch pad in December 2007, the United States had already declared its intention to unilaterally abandon the ISS program in the following decade.
Chronology of the Columbus project:
1985 Jan. 31: In Rome, a meeting of science ministers representing countries of the European Space Agency, ESA, approved the Columbus program.
1985 June: Germany's MBB-ERNO took the role of prime contractor in the Columbus program, while Aeritalia was given the responsibility for building habitable sections, derived from Spacelab modules.
1985 June 3: NASA and ESA signed a Memorandum of Understanding on the Space Station cooperation.
1986 April: Aeritalia proposes to develop a second habitation module, which could become a core of Europe's independent space station, MTFF, in addition to a European module within an American space station.
1987: NASA and ESA agreed that MTFF could periodically dock to the US space station for servicing.
1987 November: In Hague, Ministerial council of ESA members approved the development of an Ariane-5 rocket, the Hermes mini-shuttle, and a three-element Columbus program:
1988 February: British government makes a decision to withdraw from the Columbus program and other related projects.
1988 Sept. 29: NASA, ESA, Japan and Canada signed a final agreement on the Space Station cooperation.
1991: ESA announces that it would slash the length of the Columbus module by 20 percent.
1995 Oct. 18: ESA council slashed the length of the Columbus module to 6.7 meters, or half of its original size.
2003: Columbia disaster and resulting grounding of the Shuttle fleet pushes the launch of the Columbus laboratory to the end of 2007.
2007 December: On-pad technical problems with the Space Shuttle push the launch of the Columbus laboratory to the beginning of 2008.
2008 Feb. 7: Space Shuttle Atlantis lifts off from Cape Canaveral, Florida, carrying the Columbus laboratory in the STS-122/1E mission.
2008 Feb. 9: Space Shuttle Atlantis, carrying the Columbus laboratory, successfully docks to the International Space Station, ISS.
Evolution of characteristics of the Columbus module, attached to the US Space Station:
* Total mass of the module would grow up to 21,000 kilograms after all its payloads are added in orbit
Characteristics of the MTFF space station, as of 1986:
Characteristics of the polar platform, as of 1992:
Page author: Anatoly Zak; Last update: March 12, 2008
Editor: Alain Chabot; Last edit: February 6, 2008
All rights reserved
A concept of the Man-Tended Free Flyer, MTFF, and its future upgrades were considered by European Space Agency, ESA, within the Columbus program. Click to enlarge. Credit: ESA
The Hermes space plane docks to the MTFF space station. Click to enlarge. Credit: Aerospatiale
Continuous redesigns of the Hermes space plane once led to this bizarre configuration, which was needed to dock the vehicle with the Columbus MTFF station. Click to enlarge. Credit: ESA
The European space station as it was envisioned in 1992. Credit: NASA
The European space station as it was envisioned in 1993. Credit: Deutsche Aerospace
A concept of the polar orbiting platform within the Columbus program, which also had gone through several reincarnations before being dropped altogether. Note the use of the cargo pallets from the Spacelab program and solar panels from the Hubble Space Telescope on the early version of the platform. Credit: ESA
A concept of the Columbus module, which would be permanently attached to Space Station Freedom. Click to enlarge. Credit: ESA
The European space station module was conceived on a truly grand scale, as this mockup of exterior and interior shows. Click to enlarge. Credit: ESA
Europe's Ariane-5 was meant to be the carrier of the Columbus laboratory, however the rocket's shaky beginning prompted a switch to the Space Shuttle. Copyright © 2005 Anatoly Zak
The final assembly of the Columbus module. Credit: ESA
Space Shuttle Atlantis, carrying the Columbus laboratory module approaches the ISS Saturday, Feb. 9, 2008. Credit: NASA TV
Columbus is finally attached to the station. Credit: NASA | <urn:uuid:9bb3b13b-a5c9-4076-9dea-cb664c976aad> | CC-MAIN-2015-35 | http://www.russianspaceweb.com/columbus.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00101-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.945629 | 2,759 | 3.75 | 4 |
World historians and others on St.Thomas tradition.
As mentioned in the beginning of the first article,the ancient,continuous , Nazrani tradition, about the Indian apostolate of Saint Thomas,is supported by a good number of World historians of repute, Indian/Kerala (Hindu) historians and Church historians. Let us examine their opinions and conclusions, in a bit more detail.
In the words of British historian, Vincent Smith :”It must be admitted that,a personal visit of the apostle to Southern India,was easily feasible, in the condition of the time and there is nothing incredible in the traditional belief that he(St.Thomas),came by way of Socotra, where an ancient Christian settlement undoubtedly existed”(Early history of India,p250.)
Referring the East Syriac tradition about Thomas, eminent scholar-historian. Alphonse Mingana, who conducted extensive research on Indian history too, observes:
It is the constant tradition of Eastern Church that the Apostle Thomas evangelized India, and there is no historian, no poet, no breviary, no liturgy and no writer of any kind who, having the opportunity of speaking of St. Thomas, does not associate his name with India. Some writers mention also Parthia and Persia, among the lands evangelized by him,but all of them are unanimous in the matter of India. To refer to all the Syrian and Christian Arab authors who speak of India in connection with Thomas would therefore be equivalent to referring to all who have made mention, of the name of Thomas. Thomas and India are in this respect, synonymous.(Dr.A.Mingana, Early Spread of Christianity in India, p.447-448.)
Another historian, Natalis Alexander, in his book, specifically mentioned that the converts of Thomas, in
, include, Brahmins and others. (Quoted by Bernard Thoma, The St.Thomas Christians(Malayalam) i/169). India
Subscribing the rational, but slightly different, analysis of Paoli, reputed early historian, Franscis Day, views the genesis of the first Christian conversion of Malabar as under: It is very probable,that these converts made by St. Thomas, were joined by others from Syria, who had heard of their existence.In the second century, Egyptian marines carried tidings to Alexandria, of the Christians residing in Malabar, who traced their paternity in Syria to St.Paul, and owned the supermacy of the Patriarch of Babylon. Therefore they must have been here , one hundred years prior to the doctrines of Nestorius. It is by no means improbable, that the Jews who came to Malabar, divided themselves into two parties, one of which became Christians ( mixed themselves to the small body of Indian Christians , whose ancestors were formally converted to the Christian faith by the Apostle Thomas ) , and the other retained their ancient faith. ( Franscis Day, The Land of the Perumals, VI /214.)
According to Anglican scholar-historian Buchanan, we have as good authority that Apostle Thomas died in
India, as that Apostle Peter died at Rome. (Dr.Claude Buchanan, Christian Researches in , p.135). India
“ The glory of the introduction of the teachings of Christ to India is,by time- honoured tradition, ascribed to Apostle Saint Thomas. According to this tradition so clearly cherished by the Christians of this Coast, about 52 AD, the apostle landed at Maliankara near Cranganur ( Kodungallur), the Mouziris of the Greeks, or Muyirikode of the Jewish Copper plates”—(Edgar Thurston,British ethnographer, “The Castes and Tribes of Southern India,Vol.vi/429)
Colonel Yule,the translator of ‘Marco Polo’, one of the best authorities, on the tradition, thinks it, so old that it is probably in it’s simple form true’(Quoted in ‘Church history of Travancore, C.M.Agur, 1/4-5)
Specifying the place of rest of the Apostle, Marco polo, the Venetian traveler, who visited India, in 1293,says, “The body of Messer Saint Thomas the Apostle, lies in this ‘province” of Maabar, at a little town having no great population…Both Christians and Saracens, however, greatly frequent it in pilgrimage”. (Henry Yule, editor of The travels of Marco Polo p.338). Here, though he is not naming the place, one can rightly conclude that it is Mylapore of South India.
Dr.A.E.Medlycott in his book 'India and Apostle St.Thomas' , presents a graphic picture of the early Christianity in India, it’s traditions, and connections with St.Thomas.
……..Philostorgius,an Arian Greek Church historian,records the travels of Theophilus to
(sent by emperor Constantius—about 354 AD): India
…….Theophilus after fulfilling his mission to the Home-rites, sailed to his island home. Thence he visited other parts of India reforming many things …….for the Christians of the place heard the reading of the gospel in a sitting etc. This reference to a body of Christians with church,priest,liturgy in the immediate vicinity of Maldives, can only apply to a Christian and faithful on the adjacent of India…….The people referred to were Christians known as a body that had their liturgy in Syriac language, and inhabited the west coast of India i.e. Malabar (India and Apostle Thomas, p.133, 256).
Referring to Saint Thomas tradition, Jacob Canter Visscher, the Dutch author, expresses his belief on it as “a , tale not to be scoffed at”, seeing that it is asserted in the traditions of the old Christians both of Malabar and Cormandel,which agree in indicating certain spots,where he preached, and laboured. (Visscher, 'Letters from Malabar', Edited, by K. P. Padmanabha Menon, p.41).
Critically examining the Nazrani tradition about apostolic origin, William Logan, the English historian, of Coloneal India, writes: evidence as yet available in support of truth of the tradition is by no means perfect. It is certain that the first century AD, a very extensive trade and connection existed directly between India and the Western world, and a precise and expanding knowledge of the geography of the Indian coasts and markets, is manifest in the writings of the author of the ‘Priplus Maris Erythroci’ and several others. Mouziris, in particular which has already been alluded to, was one of the places best known to travellers and merchants from the West, and it was there and thereabouts that the original settlements of Christians were formed……This direct trade connection seems to have been maintained through ……some centuries after birth of Christ,and if the evidence of the Peutingenerian Tables (which are believed to have been constructed about 226A.D) is accepted, the Romans even at that date are said to have had a force of two cohorts (840 -1200 men) at Mouziris to protect their trade, and they had also erected a Temple to Augustus about 226 at the same place. That Christians, among others, found their way to Malabar in the very early centuries after Christ is there fore highly probable (Malabar Manual,William Logan p.234.). This statement is almost akin to the assertion of historian, L.W. Brown that 'There is no doubt that an Apostolic visit in the 1st century,A.D.,whether or not it actually happened , was perfectly possible from a physical point of view' ,
( Indian Christians of St.Thomas,p.59 ).
( Indian Christians of St.Thomas,p.59 ).
Anglican historian Dr..M.Neale also is a staunch supporter of ' Apostolic origin' of this Church. (Primitive Liturgies, p.140.)
Observations of Indian /Kerala historians:
We can see, valuable, positive references about this ‘historical probability' by several eminent secular historians and Church historians.
Let me quote the words of the great Indian States man-historian, Jawaharlal Nehru, in his Autobiography,and work,‘Discovery of India’ : We also visited ,among the backwaters of Malabar,some towns inhabited chiefly by Christians, belonging to the Syrian Churches. Few people realize that Christianity came to India, as early as the first century after Christ, long before Europe turned to it, and established a firm hold in South India (An Autobiography,p.273.).
Mrs.Romila Thapar , the foremost living authority on early Indian History, has no reluctance to accept the Malabar tradition about the visit of St.Thomas as a credible historical probability.(A History of India,I / 134 )
The reputed Malayalee historians of yester years like K.P.Padmanabha Menon, Sardar K.M.Panicker etc. were inclined to respect the tradition as being worthy of acceptance.
Mr.Panicker find it difficult to deny the truth in the St.Thomas tradition, for, as he says, "We have the recorded statements of Pantaenus, the head of the Alexandrian school, who visited
,in the 2nd century that, he found a flourishing Christian Community here" (History of Kerala,K.M.Panicker, p.5.). India
The unbiased observation of, Kerala’s prominent historian, and author of many masterly works in Malayalam and English, A. Sreedhara Menon, is as under: "About three centuries before Christianity was considered as an approved religion of Europe, and Rome, it started flourishing in Malabar coast." "On the background of extensive trade relations existed between Kerala and Mediterranean countries, even before the Christian Era, nothing improbable about the coming of Saint Thomas”( A. Sreedhara Menon, Kerala History p.133-134.).
The author of the Travancore State Manual too, hold a stand quite favorable to Kerala Nazrani tradition. “…Pliny says that in his day voyages were made to India every year, the average length of a voyage being 40 days. This became possible owing to the great discovery of the monsoon winds of the Malabar Coast by Hippalus, whose name was there fore given to the wind itself.It should be remembered here that the discovery of the trade-winds by Hippalus was just before St.Thomas’s visit to Malabar, which tradition fixes at 52.A.D. Thus the route of communication,then most used was quite favourable to the voyage of St.Thomas to
South India.”(T.S.M.Vol.II/p.123. ed. by V. Nagam Aiya.)
Church historians on St.Thomas:
Among the old generation,writers on St.Thomas history, the contributions of Fr. Bernard Thoma and Placid Podipara can hardly be under estimated. High lighting the unique and unbroken tradition exists in
Malabar coast, more particularly in places like Kodungalloor, Chavakkad, Palayoor, Kunnamkulam, Pacid Podipara observes, “The St.Thomas Christians of Malabar have a tradition immemorial, constant, definite and living about their origin from the Apostle Thomas”(The Thomachristians,p.245.).
Now let us see, how Dr. A. Mathias Mundadan, one of the living
historians, who has devoted decades to the study of Indian Christianity, and made commendable contributions to secular history too, view the apostolate of St.Thomas: Scholar Church
An important group of historians, follow a line of argument more or less like the following:The possibility of one or two Apostles of Christ having preached the Gospel in India, and even in China, no serious-minded scholar would object to. At the dawn of Christianity there were trade routes connecting West Asia and the East,routes very much frequented. The land routes reached parts of North India, while the sea routes reached the coasts of Kerala and other parts of South India.The tradition as it is found in the witnesses of various authors and Churches makes this possibility a probability.
Add to this, the living testimony of the community of the St.Thomas Christians and the witness of the tomb of Mylapore,the Little Mount and the Big Mount or St.Thomas Mount in the vicinity of Mylapore, together with the tradition connected with these monuments. These considerations, they think, should incline any earnest inquirer to accept the Indian apostolate of St.Thomas, as established beyond doubt ( Indian Christians,Search for Identity,p.3).
Quoting the letter of St.Franscis Xavier, to St.Ignatius Loyola (dtd.14th Jan.1549), (Late) Mar Varkey Vithayathil, in his doctoral thesis on ‘The mission & life of St.Thomas in India’observes as under:
“There is a city called Cranganore where there are many Christians….descended from those made Christians by St.Thomas”. We have the testimony Western eye-witnesses to the existence of Christian communities from the end of the 2nd century onwards. These Indian Christians came into contact with the Mesopotamian church probably from the first half of the 4th century and subsequently became hierarchically dependent on it. Nevertheless, they have preserved their cultural and ecclesiastical identity.The claim of these Syrian rite Christians of India, known from time immemorial as ‘St.Thomas Christians’,is that St.Thomas, the apostle arrived in the port of Cranganore by sea, converted their ancestors to the faith, ordained priests, erected crosses, founded churches and received the crown of martyrdom in Mylapore, where they still venerate his tomb…….this tradition has no rival any where in the world, (Thomapedia,p.3.).
Benedict Vadakkekara, eminent Church historian of the day, whose works invited praise from secular historians too, argues in favor of accepting tradition as an aid, in the absence of written evidence other than circumstantial evidence, in the case of
studies, provided it should be historically coherent and scientifically verifiable. Saint Thomas
In his own words:
It (the tradition of the Syrian Christians of Kerala / India) is quite unlike a loose and vague belief among the populace precisely because the community has with consistence kept the arrival, the mission, and the death of Apostle Thomas inseparably linked with certain specific families, situations, and places. The tradition points to definite spots as having been in association with the Apostle, e.g. the place where the Apostle landed,or preached or died (Benedict Vadakkekara, Origin of India’s St.Thomas Christians,p.25-27 ). | <urn:uuid:076a7142-d6e9-454c-a167-ed89a592618c> | CC-MAIN-2015-35 | http://antonyka.blogspot.com/2011/09/valdity-of-st.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064865.49/warc/CC-MAIN-20150827025424-00223-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.943693 | 3,206 | 2.890625 | 3 |
History of Portugal
Introduction to History of Portugal
In ancient times, what is now Portugal was inhabited mainly by the Lusitanians. This tribe carried on trade with the Phoenicians at Gades (Cádiz) and later with the Carthaginians who colonized southern Spain.
|Important dates in Portugal|
|1000's B.C.||Phoenicians established settlements in what is now Portugal.|
|100's B.C.||Portugal became part of the Roman Empire.|
|A.D. 711||Muslims invaded the Iberian Peninsula.|
|1143||Portugal became an independent nation.|
|1419||Portugal began its overseas expansion.|
|1500||Pedro Alvares Cabral claimed Brazil for Portugal.|
|1580||Spain invaded and conquered Portugal.|
|1640||Portugal regained its independence.|
|1822||Portugal lost its colony of Brazil.|
|1910||The Portuguese established a republic.|
|1928||Antonio de Oliveira Salazar, who ruled as a dictator for 40 years, began his rise to power.|
|1949||Portugal and 11 other nations formed a military alliance, the North Atlantic Treaty Organization (NATO).|
|1960's||Rebellions against Portuguese rule broke out in the country's African colonies.|
|1974||A revolution overthrew the Portuguese dictatorship.|
|1975||Almost all remaining Portuguese colonies gained independence.|
|1976||Portugal held its first free general elections in more than 50 years.|
|1986||Portugal joined the European Community, an economic organization that later became the basis of the European Union.|
About 200 B.C. the Romans annexed the Iberian Peninsula, and in 60 years had subdued the Lusitanians. In the early fifth century A.D., the Alans, a Scythian tribe, settled in Lusitania. In mid-century the Germanic Suevi, who had settled to the north, invaded Lusitania and sacked Lisbon. About 470 the Visigoths from southern Gaul (France) migrated to the peninsula and founded a kingdom.
In 711 the Moors, Muslims from Africa, began invading Spain. By 719 they had conquered the whole peninsula. Lisbon was besieged by Norsemen in the ninth century, but withstood the attack. In the north of Spain the Christians had slowly pushed the Moors southward, and at the end of the ninth century the northern part of modern Portugal was occupied by the Christian Kingdom of León. By the mid-11th century, the boundary reached almost to Lisbon.
Rise of the Portuguese Kingdom
In 1094 Alfonso VI of León and Castile made Portugal, from the Minho River to Coimbra, a separate county under the rule of his son-in-law, Henry of Burgundy. Alfonso (I) Henriques, Henry's son, took the title of king in 1139. He extended the kingdom south to the Tagus River, and made Coimbra the capital. By the mid-13th century the southern boundary was the coast, and Lisbon became the capital.
Castile made repeated attempts to regain Portugal. When this goal was achieved finally by marriage, the half-brother of the previous ruler led a revolt, defeated the Castilian forces at Aljubarrota in 1385, and took the throne as John I of the House of Aviz. The Treaty of Windsor, 1386, established a permanent alliance between Portugal and England.
Under the Aviz dynasty Portugal underwent its period of greatest achievement. Prince Henry the Navigator, a son of John I, started the search for a route to the Indies. Bartholomeu Dias rounded the Cape of Good Hope in 1488, and Vasco da Gama reached India in 1498. Pope Alexander VI in 1493–94 established the Line of Demarcation, dividing the world's unclaimed lands between Spain and Portugal. Pedro Alvares Cabral reached Brazil and claimed it for Portugal. Trading stations were established in Morocco and on the west and east coasts of Africa, as well as in India. Colonial trade brought wealth and power to the nation.Da Gama's voyage from Portugal to India, 1497-1498. Vasco Da Gama sailed from Portugal to India in 1497 and 1498. This map shows his historic voyage around Africa, which opened a new trade route between Europe and Asia.
Decline In Portugal
When the direct Aviz line died out in 1580, the throne was seized by Philip II of Spain, an heir through his mother. Portugal remained under Spanish rule until 1640, when it revolted, and the House of Braganza came to the throne. Much of Portugal's colonial empire had been lost to the English and Dutch. In 1654 the Portuguese expelled the Dutch from the northern coastal areas of Brazil, which they had seized earlier in the century. However, Portugal continued to decline economically and as a world power.
Spain tried repeatedly to reconquer Portugal, but finally recognized its independence in 1668. In the War of the Spanish Succession (1701–14) there was further fighting between the two countries. During the Seven Years' War (1756–63), Portugal was invaded by Spain and France, who withdrew at the end of the war.
Napoleonic Era and Aftermath
In 1801 France and Spain invaded Portugal in the War of the Oranges (named for oranges sent by the Spanish commander to his queen). In 1807 a French army occupied Portugal, and the royal family fled to Brazil. The Peninsular War started the next year. British forces landed in Portugal and defeated the French, who withdrew. The French invaded again in 1809, but were repelled by the British.
Portugal adopted a constitution in 1820, and the king, John VI, returned from Brazil to rule as a constitutional monarch. Brazil, under his son Dom Pedro, declared itself independent. Another son, Miguel, started a civil war in Portugal to restore absolute monarchy. Dom Pedro succeeded his father to the Portuguese throne in 1826, but abdicated in favor of his infant daughter, Maria. In 1828 Miguel, then regent, seized the throne and abolished the constitution. Dom Pedro, with the aid of England, France, and Spain, defeated him in 1834.
Founding of the Republic
During the rest of the 19th century Portugal's government was in the hands of professional politicians who had little concern for the wishes of the people. Discontent was widespread by the early 20th century, and in 1908 King Carlos I and the crown prince were assassinated. A younger son, Manuel II, came to the throne and restored the constitution, but in 1910 he was forced to leave the country and a republic was declared.
A liberal constitution was adopted in 1911, and Manoel d'Arriaga was elected president. Church and state were made separate, and property of the religious orders was confiscated. In 1916 Portugal entered World War I on the side of the Allies.
Portugal Under Salazar
In 1926 the army seized control of the government. Antonio de Oliveira Salazar became minister of finance in 1928, and was soon dominant. He strengthened the economy and established a fascist type of government. In 1932 he became premier, with the powers of a dictator. A new constitution went into effect in 1933. Salazar supported Erancisco Franco in the Spanish Civil War, and signed a nonaggression pact with Spain in 1939.
Portugal observed neutrality in World War II, but permitted Allied bases in the Azores and in 1949 joined the North Atlantic Treaty Organization. It was refused admission to the United Nations by Soviet veto until 1955, when it became a member.
In Portugal, strong opposition developed against Salazar and his regime. To keep opposition candidates from coming to power, in 1959 the government abolished direct election of presidents in favor of election by an electoral college. In the 1960's revolts began in Portugal's African possessions of Angola, Portuguese Guinea, and Mozambique. In 1961 India seized the territories of Goa, Damo, and Diu.
Portugal After Salazar
In 1968 Salazar became critically ill and his close associate Marcello Caetano replaced him as premier. Under Caetano, government controls were eased slightly and greater internal selfgovernment was authorized for the overseas possessions. However, many repressive practices were maintained. As a result, unrest continued within Portugal while liberation groups intensified the battle for independence in the African possessions. Prolonged warfare in Africa seriously weakened Portugal's economy, contributing to internal discontent.
In 1974 Caetano's government was overthrown by an armed forces coup. A provisional government was formed, and a series of reforms returned many freedoms to Portugal's citizens. A new constitution in 1976 guaranteed civil liberties and established a democratic socialist government.
Meanwhile, the provisional government dismantled Portugal's empire. Portuguese Guinea was granted independence in 1974, followed by Angola, Cape Verde, Mozambique, and Sao Tomé and Príndpe in 1975. Portugal withdrew from Portuguese Timor, now called East Timor, in 1974 but never fully granted it independence. It retained control over Macau, a small territory on the Chinese coast it acquired in 1557.
The government moved increasingly to the left, nationalizing many businesses and collectivizing agriculture. This trend was reversed after the failure of a coup by extreme leftists late in 1975. From 1975 to 1987 no party held a majority of seats in parliament and Portugal had a succession of unstable governments. In 1987 the Social Democratic party won a majority of seats in parliament. Meanwhile, in 1986, Portugal joined the European Community (now the European Union).
The Social Democrats fell from power in 1995 when the Socialists won a majority of seats in parliamentary elections. In 1998, Portugal and Indonesia, after years of acrimonious relations, reached an agreement on an autonomy plan for East Timor, which Indonesia had occupied since 1975. In 1999, Portugal returned Macau to China in accordance with the terms of a 1987 agreement between the two countries.
In elections in 2002, the Social Democrats won the most seats in the Assembly. The Socialist Party regained control of the Assembly in 2005. | <urn:uuid:23e5a9d2-9f86-4157-855d-b794d54a5b2e> | CC-MAIN-2015-35 | http://history.howstuffworks.com/european-history/history-of-portugal.htm/printable | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645371566.90/warc/CC-MAIN-20150827031611-00218-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.967379 | 2,132 | 3.609375 | 4 |
Melbourne was meticulously planned. It began as a barely legal, speculative settlement in 1835; in 1851 the surrounding Port Phillip District separated from New South Wales to become the new colony of Victoria, with Melbourne as its capital. It was fortunate to be blessed with farsighted founders who envisioned a great 19th-century city with an abundance of parks and wide roads and boulevards.
Since Robert Hoddle laid out his grid in 1837, many buildings have been raised and razed within the original ‘town reserve’ bounded by Victoria Street, Hoddle Street and the Yarra River, but the streets and parks remain resolute.
Settlement - foundation and surveying
Melbourne was founded in 1835. Unlike other Australian capital cities, Melbourne did not originate under official auspices. It owes its birth to the enterprise and foresight of settlers from Tasmania, where the land available for pastoral purposes was becoming overstocked. These settlers formed the Port Phillip Association for the purpose of the pastoral exploration of Port Phillip.
On 10 May 1835 John Batman set sail in the 30-tonne schooner ‘Rebecca’ on behalf of the Association to explore Port Phillip for land. After entering Port Phillip Bay on 29 May, Batman and his party anchored their ship a short distance from the heads and made several excursions through the countryside. On 6 June, at Merri Creek near what is now Northcote, Batman purchased 600,000 acres of land, including the sites of both Melbourne and Geelong, from eight Aboriginal chiefs. The Government later cancelled this purchase and, as a result, had to compensate the Port Phillip Association.
On 8 June 1835, Batman and his party rowed up the Yarra River and landed near the site of the former Customs House (now the Immigration Museum). John Batman recorded in his journal: "about six miles up, found the river all good water and very deep. This will be the place for a village." Batman left three white men of his party and five Aborigines from New South Wales behind with instructions to build a hut and commence a garden, and returned to Launceston to report to his association.
John Pascoe Fawkner had made a similar decision to settle at Port Phillip and formed a syndicate in Launceston that purchased the 55-tonne schooner ‘Enterprize’. Fawkner and his party of six set sail from Launceston but due to sea sickness Fawkner had to return to shore and the party sailed without him.
On 29 August 1835, the ‘Enterprize’ sailed up the Yarra River and anchored at the site chosen earlier by Batman as the place for a village. Fawkner’s party then went ashore, landed stores and livestock, and proceeded to erect the settlement’s first home. The ‘Enterprize’ returned to Launceston to collect Fawkner and his family who eventually arrived at the settlement on 10 October that year.
There has been conjecture as to who was Melbourne's rightful founder, John Batman or John Pascoe Fawkner, and indeed the two were rivals during their lives. To find out more about the founders of Melbourne, visit Explore History, the State Library of Victoria's online exhibition portal.
A settlement formed
The Governor of New South Wales, Sir Richard Bourke, issued a proclamation on 26 August 1835 stating that all treaties with Aborigines for the possession of land would be dealt with as if the Aborigines were trespassers on Crown lands. Later that year, Bourke wrote to the Secretary of State, Baron Glenelg, reporting his action and proposing that a township be marked out and allotments sold. On 13 April 1836, Baron Glenelg authorised Governor Bourke to form a settlement.
The settlement lacked the essentials of a town (a governing authority, a legal survey and ownership of lands) but the community was law-abiding.
On 25 May 1836, Governor Bourke sent a Commissioner to report on affairs. In his report he stated that the settlement, which he called ‘Bearbrass’, comprised 13 buildings – three weatherboard, two slate and eight turf huts. At the time, there was a European population of 142 males and 35 females.
Surveying the Settlement
On 4 March 1837, Governor Bourke arrived and instructed the Assistant Surveyor-General Robert Hoddle to lay out the town. The first name suggested by the Colonial Secretary was Glenelg. However, Governor Bourke overruled this and named the settlement Melbourne as a compliment to the Prime Minister of Great Britain.
Hoddle’s plan for Melbourne was approved by Governor Bourke, though it was based largely on the work of Hoddle’s predecessor and junior, Robert Russell. The grid Hoddle designed was controversial: it was much larger than the population of 4,000 needed, and the roads were unusually wide. Governor Bourke initially disapproved of the plan, but Hoddle convinced him that wide streets were advantageous to the health and convenience of the future city.
However, in return for allowing wide main streets, Bourke insisted that every second street running north and south be a mews or little street. This left Melbourne with a legacy of constraint: in the late 1930s, it led the Council to request the enactment of legislation permitting it gradually to buy back a four-foot strip of land on both sides of the little streets as each fronting property was redeveloped.
First and second land sales
Governor Bourke authorised the first sale of Crown land in Melbourne, which was conducted by Robert Hoddle on 1 June 1837. The sale comprised three areas bounded by:
- Swanston Street, Collins Street, William Street and Bourke Street
- King Street, Flinders Street, William Street and Collins Street
- Elizabeth Street, Flinders Street, Queen Street and Collins Street
Each block, as laid out by Hoddle, was subdivided into 20 allotments of approximately half an acre (0.202 hectares) each. Each purchaser covenanted to erect a substantial building on the land within two years. All the land was sold, and the more westerly the block, the more valuable the land.
The Golden Mile
The highest price was paid for the north-east corner of William Street and Collins Street. The lowest price was paid for the allotments on the north side of Collins Street, between Swanston Street and Elizabeth Street – an area later to be known as ‘The Golden Mile’ and the highest-priced real estate in Australia.
Second land sale
On 1 November 1837, five months after the first land sale, the second sale of land took place. The boundary streets were:
- Swanston Street, Flinders Street, Elizabeth Street and Collins Street
- Queen Street, Flinders Street, Market Street and Collins Street
- Swanston Street, Bourke Street, William Street and Lonsdale Street, with the exception of the reserved land where the General Post Office and the Law Courts now stand
Even in this short space of time the price of land in Melbourne had risen, the highest price being paid by John Batman for the allotment on the north-west corner of Swanston Street and Flinders Street.
It was thought that land in Melbourne would fetch higher prices if the auctions were held in Sydney, so the next land sales were conducted in that city; the assumption proved correct.
Newspaper articles from the time show that Melbourne was 'kind of a big settlement' and could not yet be called a town. A census taken on 2 March 1841 showed that the total population of the province was 16,671 and that the inhabitants of Melbourne numbered 4479, comprising 2676 males and 1803 females.
Incorporation of the Town of Melbourne
On 22 October 1841 the settlement of Melbourne was divided into four wards for the purpose of electing commissioners for the management of the Melbourne markets. The internal boundaries of the four wards were the centre lines of Bourke Street and Elizabeth Street prolonged to the settlement’s boundaries.
The first markets were established by the Commissioners at the present sites of:
- St. Paul’s Cathedral (hay and corn markets)
- the National Mutual Centre (Western Market site, fruit and general produce)
- the north-east corner of Elizabeth Street and Victoria Street, opposite the present Queen Victoria Market site (cattle)
A fish market was later established on the present site of the Flinders Street railway station.
Autonomy for Melbourne
From the time of its establishment in 1835, Melbourne had been a province of New South Wales and the affairs of the settlement had been administered by the Parliament of New South Wales. With the growth of the settlement there had been an increasing demand by the inhabitants for greater autonomy over their own affairs. On 12 August 1842, Melbourne was incorporated as a Town.
The Town of Melbourne was then subdivided into four wards, the internal boundaries being the same as those defined by the markets. The names given to the wards were Bourke Ward (north-west), Gipps Ward (north-east), La Trobe Ward (south-east) and Lonsdale Ward (south-west).
Melbourne becomes a city
The Town of Melbourne was raised to the status of a City by Letters Patent of Queen Victoria dated 25 June 1847, just five years after its incorporation. This royal action arose from a desire to establish a bishop’s see of the Church of England in the town, as the establishment of a bishopric required the status of a city.
The Right Reverend Charles Perry was consecrated as the first bishop of Melbourne on 29 June 1847, four days after the granting of the Letters Patent by the Queen. He arrived in Melbourne on board ‘The Stag’ on 23 January 1848, and was installed in the Cathedral Church of St. James.
However, the Letters Patent merely changed the name from Town to City; an Act of the Colonial Legislature was needed to change Melbourne's corporate status. A motion was tabled at a meeting of the Town Council to alter the style and title of Melbourne, and a draft bill was approved and sent to the government for introduction to the legislature.
On 3 August 1849, the City of Melbourne finally found a place in the statute book. Act 13 Victoria No. 14 states: "An Act to effect a change in the Style and Title of the Corporation of Melbourne rendered necessary by the erection of the Town of Melbourne to a City."
In 1851, the state of Victoria was created with the City of Melbourne as its capital. | <urn:uuid:450cd758-af4d-4e2b-8e33-defe10bbfb32> | CC-MAIN-2015-35 | http://www.melbourne.vic.gov.au/AboutMelbourne/History/Pages/SettlementtoCity.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00103-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.977763 | 2,203 | 3.265625 | 3 |
By Shannon Des Roches Rosa, BlogHer
Have you ever wondered why, exactly, vaccines are erroneously associated with autism? I’ll tell you: In 1998, Dr. Andrew Wakefield held a press conference to announce that his research had revealed a possible link between the MMR vaccine and autism. He published his findings in the respected independent medical journal The Lancet, and spent the next few years promoting his vaccine-autism “concerns” through media outlets like the TV news magazine 60 Minutes.
The result was panic, a vaccination rates nosedive, and the resurrection of vaccine-preventable diseases like measles.
In 2004, it was revealed that Wakefield had also been conducting a separate, simultaneous study funded by lawyers seeking compensation for clients who claimed their children suffered from vaccine damage. Ten of Wakefield’s twelve original paper co-authors, horrified by Wakefield’s conflict of interest as well as the public health crisis they’d help cause, issued an official retraction in The Lancet [PDF], stating, “We wish to make it clear that in [Wakefield’s] paper no causal link was established between MMR vaccine and autism as the data were insufficient.”
Read more at: http://www.blogher.com/verdict-vaccination-boogeyman
By Alison Singer
President, Autism Science Foundation
This week, the British General Medical Council (GMC) ruled that Dr. Andrew Wakefield, who first proposed a link between the MMR vaccine and autism, acted “dishonestly and irresponsibly” when he published his research and showed a ‘callous disregard’ for the suffering of children.
The GMC decision came after the longest and most expensive hearing in its 148-year history. The hearing focused on a small study of a dozen children by Dr Wakefield and 12 doctors that linked the MMR vaccine with autism and bowel problems. It was published in the Lancet, the highly respected medical journal, in 1998. At a press conference following the publication, Wakefield said there were “sufficient anxieties for a case to be made” to give the three vaccines separately. Numerous other studies, including one involving three million children, failed to find the link. But that didn’t prevent MMR vaccination rates from plummeting by 12% in Great Britain after Wakefield’s report. And in 2006 a 13-year-old boy died from measles. More deaths followed.
Eventually Wakefield’s collaborators withdrew their names from the Lancet paper and the paper itself was eventually retracted. Later it was revealed that Wakefield had received funds from lawyers representing the children enrolled in his study. And now the GMC has spoken in clear and convincing terms. And let’s not forget that the hearing itself was not even about the science; it was about Wakefield’s methods. The science has been in for some time now. No study has shown a link between autism and MMR. To read the studies visit www.autismsciencefoundation.org/autismandvaccines.html
But will this be the end of the controversy? I doubt it.
Once you put an idea in people’s heads, even in the presence of clear and convincing science, it is very hard to unscare them. Anti-vaccine autism advocates continue to see Wakefield as a hero who remains willing to take on the establishment and fight for their children. In the meantime, Wakefield’s actions have had a lasting negative effect on children’s health, in that some people are still afraid of immunizations. In some cases, the younger siblings of children with autism are being denied life-saving vaccines. This population of baby siblings, already at higher risk for developing autism, is now also being placed at risk of life-threatening, vaccine-preventable disease, despite mountains of scientific evidence indicating no link between vaccines and autism. This is the Wakefield legacy.
UK Daily Mail Online
The doctor at the centre of the MMR controversy ‘failed in his duties as a responsible consultant’, and went against the interests of children in his care, a disciplinary panel ruled today.
Dr Andrew Wakefield also acted dishonestly and was misleading and irresponsible in the way he described research which was later published in The Lancet medical journal, the General Medical Council (GMC) said.
In the late 1990s, Dr Wakefield and two other doctors said they believed they had uncovered a link between the jab and bowel disease and autism.
Today’s ruling will be a setback to campaigners who back Dr Wakefield’s claims but will fuel fears that the controversial doctor has been the victim of a sustained witch-hunt.
Dr Wakefield was absent from today’s hearing but parents who believe their children were damaged by the MMR jab heckled the GMC panel of experts as they delivered their findings.
The hearing – which was the longest and most complex case ever held by the GMC – has sat for 148 days over a two-and-a-half-year period.
Thirty-six witnesses gave evidence at the hearing, which has reportedly cost more than £1 million.
It centred around Dr Wakefield’s study, which sparked a massive drop in the number of children given the triple jab for measles, mumps and rubella.
During the mid 1990s, uptake of the MMR vaccination had stood at 92 per cent, but five years after The Lancet paper, the vaccination level had fallen below 70 per cent in some places. Measles cases in Britain rose from 56 in 1998 to 1,370 in 2008.
Read more: http://www.dailymail.co.uk/news/article-1246775/Doctor-centre-MMR-controversy-failed-duties-responsible-consultant-rules-GMC.html#ixzz0dvDPwLQl
Autism Science Foundation President and Interagency Autism Coordinating Committee (IACC) member Alison Singer joined all her colleagues on the IACC this week in voting to approve the 2010 Strategic Plan for Autism Research.
The plan calls for upwards of $217 million to be devoted to autism research in 2010. It includes new objectives for identification of behavioral & biological markers, and calls for new studies to improve understanding of the biological pathways of genetic conditions related to autism; studies that target the underlying biological mechanisms that co-occur with autism; and studies that investigate what causes phenotypic variation across individuals who share an identified genetic variant. The new plan cites the need for more research on services and supports, as well as a greater focus on lifespan issues. The committee also added a new chapter to the plan calling for infrastructure investments that will support data sharing among researchers, encourage and enable individuals with ASD and their families to participate in research, and improve the speed with which research findings are disseminated. The new chapter also calls for enhancing and expanding autism surveillance efforts.
The plan does not include any references or objectives that imply that vaccines cause autism, and it does not call for additional vaccine research. “Draft materials submitted to the IACC suggesting vaccines and/or vaccine components were implicated in autism were rejected by the committee because the IACC determined that they were not based on good science,” said Singer.
Autism Science Foundation Founder and President Alison Singer today was awarded the Community Impact Award for her role in launching the Autism Science Foundation, a not for profit organization that raises funds to support autism research. The award was presented by Matan, a not-for-profit organization that provides Jewish educational services to children with developmental disabilities, at its 10th Anniversary Gala.
In accepting the award, Singer said “There are so many things we need to do in the autism community. We desperately need more research, and that is what our organization, the Autism Science Foundation, focuses on; research. But the answers that come from the lab are years away, and meanwhile we have our beautiful children who are here now and need help to become the people they are meant to be, and that’s what Matan does so well.”
Later this month, Autism Science Foundation will announce the recipients of its fellowship awards for graduate and medical students interested in pursuing careers in basic and clinical scientific research relevant to autism spectrum disorders. “The launch of the Autism Science Foundation marked the beginning of a new chapter in autism research; one with a deep and unwavering commitment to an evidence-based agenda,” said Dr. Paul Offit, Autism Science Foundation board member and Chief of Infectious Diseases at Children’s Hospital of Philadelphia. “ASF is the best of all worlds: parents and scientists coming together to support research that stands the best chance of making a difference.”
The Autism Science Foundation is a 501(c)(3) public charity. Its mission is to support autism research by providing funding and other assistance to scientists and organizations conducting, facilitating, publicizing and disseminating autism research. The organization also provides information about autism to the general public and serves to increase awareness of autism spectrum disorders and the needs of individuals and families affected by autism.
Matan supports Jewish communities, professionals, and institutions in educating children with special learning needs. Matan is committed to exposing all children to the “wonder” of Jewish life and fostering literate and engaged Jews through creative and multi-sensory approaches. By strengthening the capacity of Jewish institutions to support and sustain more educationally varied programs, Matan is expanding the Jewish community’s ability to fulfill the obligation to include all children – not just typical learners – in their Jewish educational birthright.
To learn more about the Autism Science Foundation or to make an online donation visit www.autismsciencefoundation.org
To learn more about Matan or to make an online donation visit www.matankids.org
The Vaccinate Your Baby project, sponsored by Every Child By Two, has launched a new website featuring video answers to frequently asked questions (FAQs) about vaccines and autism. Several experts in the fields of immunization and autism participated and their answers have been edited into short video clips.
- Paul Offit, MD, Chief, Division of Infectious Diseases and Director, Vaccine Education Center, Children’s Hospital of Philadelphia and Board Member of the Autism Science Foundation
- Alison Singer, Co-Founder & President, Autism Science Foundation and parent of a child with autism
- Mark Sawyer, MD, Professor, Clinical Pediatrics and Pediatric Infectious Disease Specialist, UCSD School of Medicine & Rady Children’s Hospital San Diego
- Mary Beth Koslap-Petraco, DNP(c), CPNP, Coordinator, Child Health Suffolk County Department of Health Services, NY
The questions cover a wide range of topics, including Why Vaccinate?, Why Follow the Recommended Immunization Schedule?, and What does the Science Tell Us About Autism and Vaccines?
To view the video clips, visit http://www.vaccinateyourbaby.org/faq/index.cfm
The Interagency Autism Coordinating Committee (IACC) will be holding a Full Committee Meeting on Tuesday, January 19, 2010 from 9:00 AM – 5:00 PM ET at the William H. Natcher Conference Center, NIH Campus, in Bethesda, MD.
The purpose of the IACC meeting is to discuss and vote on recommendations for the annual update of the IACC Strategic Plan for Autism Spectrum Disorders Research. The meeting will also include a presentation on epigenetics and autism by Dr. Andrew Feinberg of Johns Hopkins University School of Medicine.
The meeting will be open to the public and pre-registration is recommended. Seating will be limited to the room capacity and seats will be on a first come, first served basis, with expedited check-in for those who are pre-registered. The meeting will be remotely accessible by videocast and conference call. Members of the public who participate using the conference call phone number will be able to listen to the meeting, but will not be heard.
To access the conference call:
USA/Canada Phone Number: 888-577-8995
Access code: 1991506
Individuals who participate using this service and who need special assistance, such as captioning of the conference call or other reasonable accommodations, should submit a request to the contact person listed above at least seven days prior to the meeting. If you experience any technical problems with the conference call, please e-mail IACCTechSupport@acclaroresearch.com.
The latest information about the meeting can be found at: http://iacc.hhs.gov/events/2010/full-committee-mtg-announcement-January19.shtml
2013, Oceanography 26(2):191–195, http://dx.doi.org/10.5670/oceanog.2013.27
Katherine E. Mills | School of Marine Sciences, University of Maine and Gulf of Maine Research Institute, Portland, ME, USA
Andrew J. Pershing | School of Marine Sciences, University of Maine and Gulf of Maine Research Institute, Portland, ME, USA
Curtis J. Brown | Gulf of Maine Research Institute, Portland, ME, USA
Yong Chen | School of Marine Sciences, University of Maine, Orono, ME, USA
Fu-Sung Chiang | Institute of Applied Economics, National Taiwan Ocean University, Keelung, Taiwan
Daniel S. Holland | Conservation Biology Division, Northwest Fisheries Science Center, National Marine Fisheries Service, National Oceanic and Atmospheric Administration, Seattle, WA, USA
Sigrid Lehuta | Institut Français pour la Recherche et l'Exploitation de la Mer, unité Halieutique Manche-Mer du Nord, Boulogne sur Mer, France
Janet A. Nye | School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook, NY, USA
Jenny C. Sun | Gulf of Maine Research Institute, Portland, ME, USA
Andrew C. Thomas | School of Marine Sciences, University of Maine, Orono, ME, USA
Richard A. Wahle | School of Marine Sciences, University of Maine, Darling Marine Center, Walpole, ME, USA
Climate change became real for many Americans in 2012 when a record heat wave affected much of the United States, and Superstorm Sandy pounded the Northeast. At the same time, a less visible heat wave was occurring over a large portion of the Northwest Atlantic Ocean. Like the heat wave on land, the ocean heat wave affected coastal ecosystems and economies. Marine species responded to warmer temperatures by shifting their geographic distribution and seasonal cycles. Warm-water species moved northward, and some species undertook local migrations earlier in the season, both of which affected fisheries targeting those species. Extreme events are expected to become more common as climate change progresses (Tebaldi et al., 2006; Hansen et al., 2012). The 2012 Northwest Atlantic heat wave provides valuable insights into ways scientific information streams and fishery management frameworks may need to adapt to be effective as ocean temperatures warm and become more variable.
Mills, K.E., A.J. Pershing, C.J. Brown, Y. Chen, F.-S. Chiang, D.S. Holland, S. Lehuta, J.A. Nye, J.C. Sun, A.C. Thomas, and R.A. Wahle. 2013. Fisheries management in a changing climate: Lessons from the 2012 ocean heat wave in the Northwest Atlantic. Oceanography 26(2):191–195, http://dx.doi.org/10.5670/oceanog.2013.27.
Atlantic States Marine Fisheries Commission. 2009. American Lobster Stock Assessment Report for Peer Review. Stock Assessment Report No. 09-01, Atlantic State Marine Fisheries Commission, Washington, DC, 316 pp.
Belkin, I.M. 2009. Rapid warming of large marine ecosystems. Progress in Oceanography 81:207–213, http://dx.doi.org/10.1016/j.pocean.2009.04.011.
Cheung, W.W.L., J.L. Sarmiento, J. Dunne, T.L. Frolicher, V.W.Y. Lam, M.L.D. Palomares, R. Watson, and D. Pauly. 2013a. Shrinking of fishes exacerbates impacts of global ocean changes on marine ecosystems. Nature Climate Change 3:254–258, http://dx.doi.org/10.1038/nclimate1691.
Cheung, W.W.L., R. Watson, and D. Pauly. 2013b. Signature of ocean warming in global fisheries catch. Nature 497:365–369, http://dx.doi.org/10.1038/nature12156.
Dicolo, J.A., and N. Friedman. 2012. Lobster glut slams prices: Some fishermen keep boats in port; outside Maine, no drop for consumers. Wall Street Journal, July 16, 2012. Available online at: http://online.wsj.com/article/SB10001424052702304388004577529080951019546.html (accessed April 17, 2013).
Edwards, M., and A.J. Richardson. 2004. Impact of climate change on marine pelagic phenology and trophic mismatch. Nature 430:881–884, http://dx.doi.org/10.1038/nature02808.
Frederick, A. 2012. A new fishery in Maine. The Working Waterfront. Available online at: http://www.workingwaterfront.com/articles/A-New-Fishery-in-Maine/14963 (accessed April 17, 2013).
Greene, C.H., J.A. Francis, and B.C. Monger. 2013. Superstorm Sandy: A series of unfortunate events? Oceanography 26(1):8–9, http://dx.doi.org/10.5670/oceanog.2013.11.
Greene, C.H., and B.C. Monger. 2012. An Arctic wild card in the weather. Oceanography 25(2):7–9, http://dx.doi.org/10.5670/oceanog.2012.58.
Hansen, J., M. Sato, and R. Ruedy. 2012. Perception of climate change. Proceedings of the National Academy of Sciences of the United States of America 109:E2415–E2423, http://dx.doi.org/10.1073/pnas.1205276109.
Holland, D.S. 2011. Optimal intra-annual exploitation of the Maine lobster fishery. Land Economics 87:699–711.
Link, J.S., J.A. Nye, and J.A. Hare. 2011. Guidelines for incorporating fish distribution shifts into a stock assessment context. Fish and Fisheries 12:461–469, http://dx.doi.org/10.1111/j.1467-2979.2010.00398.x.
Meehl, G.A., T.F. Stocker, W.D. Collins, P. Friedlingstein, A.T. Gaye, J.M. Gregory, A. Kitoh, R. Knutti, J.M. Murphy, A. Noda, and others. 2007. Global climate projections. Pp. 747–845 in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. S. Solomon, S.D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, and H.L. Miller, eds, Cambridge University Press, Cambridge, UK.
Nghiem, S.V., D.K. Hall, T.L. Mote, M. Tedesco, M.R. Albert, K. Keegan, C.A. Shuman, N.E. DiGirolamo, and G. Neumann. 2012. The extreme melt across the Greenland ice sheet in 2012. Geophysical Research Letters 39, L20502, http://dx.doi.org/10.1029/2012GL053611.
Nye, J.A., J.S. Link, J.A. Hare, and W.J. Overholtz. 2009. Changing spatial distribution of fish stocks in relation to climate and population size on the Northeast United States continental shelf. Marine Ecology Progress Series 393:111–129, http://dx.doi.org/10.3354/meps08220.
Pearce, J., and N. Balcom. 2005. The 1999 Long Island Sound lobster mortality event: Findings of the Comprehensive Research Initiative. Journal of Shellfish Research 24:691–697.
Perry, A.L., P.J. Low, J.R. Ellis, and J.D. Reynolds. 2005. Climate change and distribution shifts in marine fishes. Science 308:1,912–1,915, http://dx.doi.org/10.1126/science.1111322.
Pinsky, M.L., and M. Fogarty. 2012. Lagged social-ecological responses to climate and range shifts in fisheries. Climatic Change 115:883–891, http://dx.doi.org/10.1007/s10584-012-0599-x.
Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, and others. 2007. Climate models and their evaluation. Pp. 589–662 in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. S. Solomon, S.D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, and H.L. Miller, eds, Cambridge University Press, Cambridge, UK.
Reynolds, R.W., T.M. Smith, C. Liu, D.B. Chelton, K.S. Casey, and M.G. Schlax. 2007. Daily high-resolution-blended analyses for sea surface temperature. Journal of Climate 20:5,473–5,496, http://dx.doi.org/10.1175/2007JCLI1824.1.
Simpson, S.D., S. Jennings, M.P. Johnson, J.L. Blanchard, P.-J. Schön, D.W. Sims, and M.J. Genner. 2011. Continental shelf-wide response of a fish assemblage to rapid warming of the sea. Current Biology 21:1,565–1,570, http://dx.doi.org/10.1016/j.cub.2011.08.016.
Taboada, F.G., and R. Anadón. 2012. Patterns of change in sea surface temperature in the North Atlantic during the last three decades: Beyond mean trends. Climatic Change 115:419–431, http://dx.doi.org/10.1007/s10584-012-0485-6.
Tebaldi, C., K. Hayhoe, J.M. Arblaster, and G.A. Meehl. 2006. Going to the extremes: An intercomparison of model-simulated historical and future changes in extreme events. Climatic Change 79:185–211, http://dx.doi.org/10.1007/s10584-006-9051-4.
Wahle, R.A., C. Brown, and K. Hovel. 2013. The geography and body-size dependence of top-down forcing in New England’s lobster-groundfish interaction. Bulletin of Marine Science 89:189–212, http://dx.doi.org/10.5343/bms.2011.1131.
Wahle, R.A., M. Gibson, and M.J. Fogarty. 2009. Distinguishing disease impacts from larval supply effects in a lobster fishery collapse. Marine Ecology Progress Series 376:185–192, http://dx.doi.org/10.3354/meps07803.
Woodard, C. 2012. Cheap Maine lobsters spark protests in Canada. Portland Press Herald, August 3, 2012. Available online at: http://www.pressherald.com/news/maine-glut-rattles-maritimes_2012-08-03.html (accessed April 17, 2013). | <urn:uuid:036d037e-95bc-43e3-88bb-820524bcb9a1> | CC-MAIN-2015-35 | http://www.tos.org/oceanography/archive/26-2_mills.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00281-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.716099 | 2,625 | 2.59375 | 3 |
Leiomyosarcoma (LMS) is a type of soft tissue sarcoma. The information here should ideally be read with our general information about soft tissue sarcomas.
We hope this information answers your questions. If you have any further questions, you can ask your doctor or nurse at the hospital where you're having your treatment.
Sarcomas are rare types of cancer that develop in the supporting or connective tissues of the body. There are two main types, soft tissue sarcomas and bone sarcomas.
Soft tissue sarcomas can develop in muscle, fat, blood vessels, or any of the other tissues that support, surround and protect the organs of the body.
Bone sarcomas can develop in any of the bones of the body, but may also develop in the soft tissue near bones.
Soft tissue sarcomas are rare. Only about 3,300 new cases are diagnosed each year in the UK. There are several different types of soft tissue sarcoma.
Leiomyosarcomas are one of the more common types of sarcoma to develop in adults. They start from cells in a type of muscle tissue called smooth muscle.
Smooth muscles are involuntary muscles that we have no control over. They are found in the walls of muscular organs like the heart and stomach, as well as in the walls of blood vessels throughout the body. This means that leiomyosarcomas can start anywhere in the body. Common places are the walls of the womb (uterus), the trunk of the body, and the arms and legs.
Causes of leiomyosarcoma
The exact causes of leiomyosarcomas are unknown. Researchers are trying to find out as much as possible about them.
Most people with leiomyosarcoma are over the age of 50.
Soft tissue sarcomas may occur in an area that has previously been treated with radiotherapy for another type of cancer. The sarcoma won’t usually develop until at least 10 years after the radiotherapy treatment.
Exposure to some types of chemicals may increase the risk of developing some sarcomas. The chemicals include vinyl chloride (used for making plastics), some types of herbicides (weed killers) and dioxins.
Signs and symptoms of leiomyosarcoma
People with early leiomyosarcoma often don’t have any symptoms. Most leiomyosarcomas are diagnosed after a person develops symptoms. These may include:
a lump or swelling
abdominal discomfort or bloating
swelling or pain in any area of the body
bleeding from the vagina in women who have had the menopause, or a change in periods for women who have not yet had the menopause.
If you notice any of these symptoms, you should contact your GP. Remember that these symptoms can also be caused by conditions other than cancer.
How leiomyosarcoma is diagnosed
Usually you begin by seeing your family doctor (GP), who will examine you. You will be referred to a hospital specialist for any tests that may be necessary and for expert advice and treatment. The doctor at the hospital will take your full medical history, do a physical examination and take blood samples to check your general health.
The following tests are commonly used to diagnose a leiomyosarcoma. The tests you have will depend on the part of the body being investigated. You may have had some of these tests already. If you’re having investigations other than those listed, our cancer support specialists can give you further information.
Hysteroscopy
This test is used to diagnose problems in the womb. The doctor uses a small, thin tube with a light and camera at the end (hysteroscope) to look into the womb and take tissue samples (biopsies) to be looked at under a microscope. The hysteroscope is passed through your vagina and into your womb. You may have this test as an outpatient under local anaesthetic, but sometimes a general anaesthetic is needed.
A hysteroscopy may be uncomfortable but should not be painful. Some women may have mild cramping during the procedure and for a few days afterwards. You may be advised to take mild painkillers, such as paracetamol, 30 minutes before the procedure.
Ultrasound scan
This test uses sound waves to create a picture of the abdomen and surrounding organs. It is done in the hospital's scanning department. You will be asked not to eat, and to drink only clear fluids (nothing fizzy or milky) for 4-6 hours before the scan.
Once you’re lying comfortably on your back, a gel is spread over your abdomen. A small device like a microphone (called a probe) is then rubbed over the area. It emits sound waves that are then converted into a picture using a computer. The test should not be painful and takes about 15-20 minutes.
If a uterine sarcoma is suspected, the probe will also be inserted gently into the vagina to examine the womb more closely.
Ultrasound may also be used to look for a suspected cancer in a limb.
CT (computerised tomography) scan
A CT scan takes a series of x-rays that build up a three-dimensional picture of the inside of the body. The scan is painless and takes 10-30 minutes. CT scans use small amounts of radiation, which is very unlikely to harm you or anyone you come into contact with. You will be asked not to eat or drink for at least four hours before the scan.
You may be given a drink or an injection of dye that allows particular areas to be seen more clearly. This may make you feel hot all over for a few minutes. If you’re allergic to iodine or have asthma, you could have a more serious reaction to the dye, so it’s important to let your doctor know beforehand.
You will probably be able to go home as soon as the scan is over.
We have a video about having a CT scan at macmillan.org.uk/testsandscans
MRI (magnetic resonance imaging) scan
This test is similar to a CT scan but uses magnetism instead of x-rays to build up cross-sectional pictures of your body. During the test, you’ll be asked to lie very still on a couch inside a large metal cylinder that is open at both ends. The whole test may take up to an hour. It can be slightly uncomfortable and some people feel a bit claustrophobic during the scan. It’s also very noisy, but you will be given earplugs or headphones to wear. You will be able to hear, and speak to, the person operating the scanner.
If you have any metal implants such as certain types of surgical clips, or pacemakers, it will not be possible for you to have this test. In this situation, another type of scan may be used.
Biopsy
The results of the previous tests may make your doctor strongly suspect you have cancer. The only way to be sure is to take some cells or a small piece of tissue from the affected area to look at under a microscope. This is called a biopsy. The area is numbed using a local anaesthetic injection and then a fine needle is passed into the tumour through the skin. A CT or ultrasound scan may be used at the same time to make sure that the biopsy is taken from the right place. Sometimes the biopsy is taken during a hysteroscopy (see above).
When the cells are looked at under a microscope, the specialist will be able to tell whether they are benign (not cancerous) or malignant (cancerous). If a sarcoma is diagnosed, further tests may be done on the sample to try to find out exactly what type of sarcoma it is.
Waiting for test results can be an anxious time for you. It may help to talk about your worries with a relative or friend. You could also speak to one of our cancer support specialists.
Grading of leiomyosarcoma
Grading refers to the appearance of cancer cells under a microscope. The grade gives an idea of how quickly a cancer may develop.
Grading of soft tissue sarcomas can sometimes be difficult, especially for the less common types.
Low-grade means that the cancer cells look very much like the normal cells of the soft tissues. They are usually slow-growing and are less likely to spread. In high-grade tumours the cells look very abnormal, are likely to grow more quickly, and are more likely to spread.
Staging of leiomyosarcoma
The stage of a cancer is a term used to describe its size and whether it has spread beyond its original site. Knowing the particular type and the stage of the cancer helps the doctors decide on the most appropriate treatment.
The following is a commonly used staging system for non-gynaecological leiomyosarcoma. A different system is used for leiomyosarcoma arising in the gynaecological organs (the organs of the female reproductive system). Your specialist can explain more if you have this type of leiomyosarcoma.
The tumour is low-grade and small (less than 5cm [2in]). It can be near the surface of the body (superficial) or deep within the body, but with no sign that it has spread to the lymph nodes or other parts of the body.
The tumour is low-grade and large (more than 5cm [2in]). It’s superficial with no sign that it has spread to the lymph nodes or other parts of the body.
The tumour is low-grade and large (more than 5cm [2in]). It’s deep within the body but has not spread to the lymph nodes or other parts of the body.
The tumour is high-grade and small (less than 5cm [2in]). It can be near the surface of the body or deep within the body, but has not spread to the lymph nodes or other parts of the body.
The tumour is high-grade, large (more than 5cm [2in]) and superficial, but has not spread to the lymph nodes or other parts of the body.
The tumour is high-grade, large (more than 5cm [2in]) and deep, but has not spread to the lymph nodes or other parts of the body.
The tumour has spread to lymph nodes in the area or to any other part of the body. This is known as secondary or metastatic soft tissue sarcoma.
This means that a soft tissue sarcoma has come back after it was first treated. It may come back in the tissues where it first started (local recurrence) or in another part of the body (metastasis).
Treatment for leiomyosarcoma
As sarcomas are rare, they are usually treated by a team of doctors and other health care professionals at a specialist hospital. This means that you may have to travel some distance to have your treatment.
The treatment for leiomyosarcoma depends on a number of things, including your general health and the size and position of the tumour in the body. The results of your tests will help your doctor plan the best type of treatment for you. They will then discuss this with you.
The usual treatment for a leiomyosarcoma is surgery, wherever possible, to remove the tumour. This may be followed by radiotherapy to reduce the chance of the cancer coming back.
Chemotherapy is also used for some leiomyosarcomas. It’s mainly used to treat a leiomyosarcoma that has come back (recurred), or that has spread (advanced or metastatic cancer). Chemotherapy may also sometimes be used after surgery to try to reduce the chances of it coming back.
Treatment of sarcomas is discussed in more detail in our general information about soft tissue sarcomas.
Clinical trials for leiomyosarcoma
Research into treatments for leiomyosarcoma is ongoing and advances are being made. Cancer doctors use clinical trials to assess new treatments.
Before any trial is allowed to take place, it must be approved by an ethics committee, which protects the interests of the patients taking part.
You may be asked to take part in a clinical trial. If you decide to take part, your doctor will discuss the treatment with you so that you have a full understanding of the trial and what it means to take part. You may decide not to take part or withdraw from a trial at any stage. You’ll still receive the best standard treatment available.
Follow-up after leiomyosarcoma
After your treatment is completed, you will have regular check-ups and x-rays. Your specialist will advise you on how frequently you need to be seen. Follow-up will continue for several years. If you have any problems, or notice any new symptoms in between your regular appointments, let your doctor know as soon as possible.
Your feelings about leiomyosarcoma
You may have many different emotions, including anger, resentment, guilt, anxiety and fear. You may find yourself tearful, restless and unable to sleep. Or you may have feelings of hopelessness and depression. These are all normal reactions but it can be difficult and distressing to admit to them.
The need for support will vary from person to person and may depend on the treatment you receive and any side effects this causes. Your specialist will tell you about any potential side effects and how to deal with them before you begin any treatment.
Some hospitals have their own emotional support services with trained staff, and some of the nurses on the ward will have had training in counselling. You may feel more comfortable talking to a counsellor outside the hospital environment or to a member of your religious faith, if you are religious.
Everyone has their own way of coping with difficult situations. Some people find it helpful to talk to family or friends, while others prefer to keep their feelings to themselves. There is no right or wrong way to cope, but help is available if you need it. Our cancer support specialists can give you information and support to help you cope. You can also take a look at our Online Community to meet other people in a similar situation.
Cancer52 is an alliance of more than 50 organisations working to address the inequalities that exist in policy, services and research into the less common cancers and to improve outcomes for people with these highly challenging diseases.
Rarer Cancers Foundation
Rarer Cancers Foundation (RCF) is a UK charity that offers general advice and information about rare and less common cancers, facilitates supportive networking between patients and carers, and works to improve services for people with rarer cancers.
Sarcoma UK provides information and support to anyone affected by sarcoma, and aims to achieve the best possible standard of treatment and care for people with sarcoma.
This information has been compiled using a number of reliable sources, including:
DeVita V, et al. DeVita, Hellman, and Rosenberg's Cancer: Principles & Practice of Oncology (9th edition). 2011. Lippincott Williams & Wilkins. Philadelphia.
Grimer R, et al. Guidelines for the Management of Soft Tissue Sarcomas. British Sarcoma Group. 2010.
Raghavan D, et al. Textbook of Uncommon Cancer (3rd edition). 2006. Wiley.
National Institute for Health and Clinical Excellence (NICE). Improving outcomes for people with sarcoma. March 2006.
National Cancer Intelligence Network (NCIN). Bone and Soft Tissue Sarcomas: UK Incidence and Survival 1996 to 2010. November 2013.
Thank you to Dr Robin Crawford, Gynaecological Oncologist; and the people affected by cancer who reviewed this information.
Thanks to people like you
Thank you to all of the people affected by cancer who reviewed what you're reading and have helped our information to grow.
You could help us too when you join our Cancer Voices Network - find out more. | <urn:uuid:d6a2fb95-19da-4ad5-8a94-fad8861453d8> | CC-MAIN-2015-35 | http://www.macmillan.org.uk/Cancerinformation/Cancertypes/Softtissuesarcomas/Typesofsofttissuesarcomas/Leiomyosarcoma.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645371566.90/warc/CC-MAIN-20150827031611-00220-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.953235 | 3,420 | 3.609375 | 4 |
William B. Hincks was born in either 1841 or 1842 in Maine. He would later move to Bridgeport, Connecticut. On July 22, 1862, at either 19 or 20 years of age, young William would enlist in the 14th Connecticut Infantry. He would officially muster into service at Hartford, Connecticut, on August 23, 1862. The 14th, commanded by Colonel Dwight Morris, would arrive in Washington, D.C., on August 25, and would be placed in the Second Brigade, Third Division (US Brigadier General William French) of US Major General Edwin V. Sumner’s II Corps.
Private Hincks was assigned to Company A, and was considered industrious and brave. Hincks and the remainder of the 14th Connecticut would have little time to settle in. After US Major General John Pope’s debacle at Second Manassas, his Army of Virginia would return to Washington, D.C. Abraham Lincoln, with his hands tied, turned back to US Major General George B. McClellan to command the eastern theater. McClellan would waste little time as it was quickly determined that CS General Robert E. Lee was moving into Maryland. McClellan would rapidly push after him along different routes, all leading through passes in South Mountain. This would be Hincks’ first experience in battle – and it would define what the young man would expect from battle.
On September 17, 1862, at the Battle of Antietam, Sumner’s II Corps was ordered to support US Major General Joseph Hooker’s I Corps advance, on the Confederate left, along the Hagerstown Pike. Hooker would advance through the infamous Corn Field, an area of tremendous slaughter, while the Second Division (US Major General John Sedgwick) of Sumner’s II Corps would push diagonally from the East Woods towards the Dunker Church and West Woods. Hincks’ Division, commanded by William French, somehow became disoriented and did not guide on Sedgwick’s Division. Instead, he marched his men in a southerly direction, slowly losing sight of Sedgwick. Coming over a rise, Hincks, the 14th Connecticut and the rest of the division became silhouetted against the sky and were decimated by musketry from the 6th Alabama, commanded by a little known colonel – John Brown Gordon. Part of CS Brigadier General Robert E. Rodes’s Brigade, they would be waiting for the Federals in a sunken road – now called Bloody Lane. As line after line of French’s Division passed over the rise, they were met with severe musketry. However, due to their numerical superiority, and with relief from US Major General Israel B. Richardson’s First Division, which included the Irish Brigade, the Sunken Road became untenable for the Rebels, and they would be forced to pull back. At the end of Antietam, the Second Brigade, now commanded by Dwight Morris, which included Hincks’s 14th Connecticut, would be severely punished, suffering 529 casualties. Hincks and Company A, of the 14th Connecticut, received their “baptism of fire” and proved up to the challenge.
After Antietam, Robert E. Lee would retreat back to the safety of Virginia. Unfortunately, McClellan was slow to react – stating he needed time to re-fit his army. He would be removed from overall command of the Army of the Potomac on November 8, 1862. Hincks, and the rest of the Army of the Potomac, now had a new commander, US Major General Ambrose Burnside. Burnside immediately made plans to cross the Rappahannock River near Fredericksburg before Robert E. Lee could react. Essentially reaching the south side of the Rappahannock first would leave the road to Richmond wide open for the Union Army. Arriving at the Rappahannock ahead of the Army of Northern Virginia, Burnside’s plan was coming together. Unfortunately, his pontoons had not arrived in time. This gave Lee the necessary time to reach Fredericksburg and entrench his army – setting the stage for the Battle of Fredericksburg.
The Army of the Potomac, now divided into three Grand Divisions, brought battle against Robert E. Lee on December 13, 1862. The battle opened on the Federal left, when US Major General William Franklin’s Left Grand Division assaulted CS Lieutenant General Thomas J. “Stonewall” Jackson’s 2nd Corps. Initially, the Federal forces had some success. However, before long, Jackson’s 2nd Corps pushed Franklin’s forces back beyond the Richmond, Fredericksburg and Potomac railroad tracks.
US Major General Edwin V. Sumner’s Right Grand Division, including the II Corps, now commanded by US Major General Darius Couch, assaulted the high ground above Fredericksburg. Hincks’s 14th Connecticut was still in the Second Brigade of William French’s Third Division. It would attack the left side of a “sunken road” heavily defended by CS Lieutenant General James Longstreet’s 1st Corps. With the Confederates having had time to organize behind the stone wall of the Sunken Road, French’s Division had no chance. They would be roughly handled and quickly repulsed before reaching the wall. The 14th Connecticut, now commanded by Lieutenant Colonel Sanford Perkins, would also suffer. At the end of the battle, Burnside’s Army of the Potomac would never reach the Sunken Road and would end up retreating across the Rappahannock River on December 14.
In May 1863 Hincks would again fight with the II Corps at the Battle of Chancellorsville. The 14th Connecticut would support the Federal lines around the Chancellor tavern, and would again suffer significant losses. Due to their losses at Fredericksburg, the 14th was now commanded by Major Theodore Ellis. The Army of the Potomac, under the overall command of US Major General Joseph Hooker, would suffer a terrible defeat at Chancellorsville.
After Chancellorsville the 14th Connecticut would head north, following Robert E. Lee’s Army of Northern Virginia, as it once again headed past the Mason-Dixon Line. Sergeant Major Hincks would provide his country his most valuable service in early July 1863, at the Battle of Gettysburg. The Army of the Potomac, now commanded by US Major General George Gordon Meade, would bring battle against the Army of Northern Virginia July 1–3. By July 3, Meade’s army had established a significant line of battle, shaped like a fish hook, running from Culp’s Hill on the north to Little Round Top on the south. The II Corps, now commanded by US Major General Winfield S. Hancock, held the center of the Union line along Cemetery Ridge. The 14th Connecticut, still in the Second Brigade (Colonel Thomas A. Smyth), Third Division (Brigadier General Alexander Hays) of the II Corps, would be assigned just north of the “Angle” of Hancock’s salient on Cemetery Ridge and would maintain the brigade’s left flank. This portion of the line would be directly in the path of CS Major General George E. Pickett’s famous charge. Commanding the 14th Connecticut, Major Theodore Ellis would brace his men for the coming onslaught from CS Brigadier General James J. Pettigrew’s North Carolinians and Colonel Birkett D. Fry’s mixed brigade of Alabamans and Tennesseans. The Federal troops could see Pickett’s troops coming from nearly a mile away. Bracing, the Federals were told to hold their fire until the Rebels came across the fence north of the Codori Farm, running along the Emmitsburg Road. Once they crossed, the Federal artillery opened large gaping holes in the Confederate formation, which quickly closed as the Rebels reformed. As they approached closer, division, brigade and regimental commanders would allow their commands to open with musketry, further decimating the Rebel formation.
Opposing the 14th Connecticut was CS Captain Bruce Phillips’ 14th Tennessee. As they closed in on Ellis’s regiment, the Tennesseans planted their regimental flag, sporting twelve separate battle honors. With the intensity of the Federal musketry and the canister coming from the artillery, many men of the 14th Tennessee were forced to lie down on the ground to save themselves. Ellis, seeing the flag apparently unprotected, asked for volunteers to capture it. Hincks and two other Connecticut soldiers leaped from behind the wall and ran towards the flag some 50 yards in the distance. Immediately after jumping the wall, one Connecticut soldier was shot. Outrunning his other companion, Hincks would reach the flag under tremendous fire, grab the colors, swing his saber over the prone Confederates, and run back to the safety of his lines.
The Federal defense along Cemetery Ridge would win the day – and the battle – at Gettysburg. Robert E. Lee would never again take his entire Army of Northern Virginia into the North. The 14th Connecticut would remain with the Army of the Potomac for many additional battles, including Mine Run, the Wilderness, Spotsylvania Court House, North Anna, Cold Harbor, Deep Bottom, Petersburg, Sailor’s Creek and Lee’s surrender at Appomattox Court House.
Sergeant Major William B. Hincks would be awarded the highest military honor for his actions at Gettysburg. On December 1, 1864, he would receive the Congressional Medal of Honor. Following is his citation.
During the highwater mark of Pickett’s charge on 3 July 1863 the colors of the 14th Tenn. Inf. C.S.A. were planted 50 yards in front of the center of Sgt. Maj. Hincks’ regiment. There were no Confederates standing near it but several lying down around it. Upon a call for volunteers by Maj. Ellis, commanding, to capture this flag, this soldier and 2 others leaped the wall. One companion was instantly shot. Sgt. Maj. Hincks outran his remaining companion running straight and swift for the colors amid a storm of shot. Swinging his saber over the prostrate Confederates and uttering a terrific yell, he seized the flag and hastily returned to his lines. The 14th Tenn. carried 12 battle honors on its flag. The devotion to duty shown by Sgt. Maj. Hincks gave encouragement to many of his comrades at a crucial moment in the battle.(i)
After the Civil War, Sergeant Major William B. Hincks worked as treasurer of a gas company and Secretary and Treasurer of City Savings Bank. Hincks died at Bridgeport, Connecticut, on November 7, 1903. He was 61 or 62 years old. Sergeant Major Hincks is a true American HERO.
(i) R.J. (Bob) Pfoft, Editor, United States of America’s Medal of Honor Recipients, Fifth Edition, Pg. 897. | <urn:uuid:aa96676e-7c82-496e-b4fa-6c7e1db7dbb1> | CC-MAIN-2015-35 | http://thismightyscourge.com/2009/03/28/william-b-hincks-sergeant-major/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00098-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.971049 | 2,347 | 2.875 | 3 |
Currently approaching its sixth year in space, the CALIPSO satellite collects lidar linear depolarization ratios δ from 0.532 μm laser backscatter, an indicator of particle phase, shape, and orientation. We examine one-year δ averages for day and night periods when the lidar was pointing close to the nadir (0.3°) and off-nadir (3.0°), in terms of geographic location and zonal height averages. For the first time, also given is the dependency of δ on temperature versus latitude for ice clouds. The analysis involves all ice clouds with a cloud top temperature of <−40°C, which include mainly cirrus and altostratus, as well as some polar stratospheric clouds identified by CALIPSO. We find significant differences from ∼−10° to −30°C between the nadir and off-nadir data, consistent with the effects of horizontally oriented plate crystals: overall the off-nadir δ are increased by ∼0.05 globally. Strong dependencies of δ also occur with latitude and height. Day minus night δ differ by 0.02–0.03. The global average day plus night δ for nadir and off-nadir data are 0.318 and 0.365, respectively. As expected from ground-based studies, δ increase steadily with decreasing temperature, which is particularly apparent in the nadir data because of oriented plate effects. These findings have implications for the modeling of radiative transfer through ice clouds.
1. Introduction
Polarization diversity lidar is a basic tool for the probing of clouds and aerosols that dates back to the beginnings of laser atmospheric research [Schotland et al., 1971]. Although a variety of more sophisticated techniques have evolved [Weitkamp, 2005], measuring laser backscatter depolarization is an important application that is seeing increasing use by virtue of its ability to infer particle phase, type, and shape. This includes the dual-polarization channels on the CALIOP lidar system aboard the CALIPSO satellite [Winker et al., 2007] at the 0.532 μm wavelength, which is a constituent of the A-train formation of meteorological satellites [Stephens et al., 2002]. The availability of CALIPSO lidar depolarization data is proving to be an important supplement for microphysical analyses [e.g., Hu et al., 2007; Cho et al., 2008; Sassen and Zhu, 2009; Noel and Chepfer, 2010; Okamoto et al., 2010].
A major justification for the April 2006 launch of CALIPSO was the ability of its polarization lidar to classify the global distribution of clouds, especially the clouds of the middle and upper troposphere that present difficulties for probing using the cloud radar (CloudSat) and passive methods also available from the A-train. According to theoretical ray-tracing simulations of laser backscatter depolarization [e.g., Takano and Liou, 1989], ice crystal shape has a fundamental influence on the lidar linear depolarization ratio (δ, the ratio of the returned laser powers in the planes of polarization orthogonal and parallel to that of the linearly-polarized source). The effects of uniformly oriented crystal populations such as horizontally-oriented ice plates, however, also have a major impact: they produce near-zero δ when probed in the zenith (or nadir) direction [Platt et al., 1978; Sassen and Benson, 2001; Noel and Sassen, 2005] because aerodynamic drag forces align the large mirror-like faces of the plates in the horizontal plane. This effect, however, rapidly diminishes as the incident lidar angle falls off the vertical direction.
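The δ defined above reduces to a simple two-channel division. A minimal sketch with made-up channel values (illustrative numbers only, not CALIPSO data):

```python
import numpy as np

# Toy two-channel backscatter profiles (arbitrary units) for four height bins.
beta_parallel = np.array([1.0, 2.0, 5.0, 1.5])
beta_perp = np.array([0.30, 0.62, 0.10, 0.45])

# Linear depolarization ratio: perpendicular over parallel return power.
delta = beta_perp / beta_parallel
print(delta)  # values: 0.30, 0.31, 0.02, 0.30
```

At normal incidence, a bin dominated by horizontally oriented plates would show up like the near-zero third bin here: a strong specular return concentrated in the parallel channel.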
In an earlier study [Sassen and Zhu, 2009], curious patterns in the global character of ice cloud δ were found, including significant differences with height, latitude, and in day versus night and nadir versus off-nadir measurements. While oriented plate crystal effects could explain many of these findings, we reexamine here the day/night differences and directly correlate the data with cloud temperature using one-year averages of CALIPSO polarized signals prior to and following the 28 November 2007 lidar viewing angle change from 0.3 to 3.0°.
2. CALIPSO Data Analysis
Our data analysis algorithm processes CALIPSO Level 1 and 2 standard data products, using the Version 2 dataset. Level 1B data provide attenuated total (parallel plus perpendicular) and perpendicular polarized backscatter profiles at a horizontal resolution of 333 m and a vertical resolution varying from 30–240 m depending on altitude. From CALIPSO Level 2 data products, we employ the 5-km cloud layer product to initially obtain cloud height boundaries. Various horizontal (5-km, 20-km, and 80-km) signal averaging schemes are used to obtain sufficient signal-to-noise ratios to detect even tenuous cloud and aerosol layers in the atmosphere, but by using the 5-km cloud detection product we have selected relatively strongly-scattering clouds for analysis. This minimizes the inclusion of aerosol layers in our cloud sample, and also diminishes the impact of molecular scattering on lowering laser depolarization in weakly-scattering ice clouds. This differs from that used in Sassen and Zhu [2009], which included all cloud layers detected at any of the three horizontal signal averages. The parallel (total minus perpendicular) and perpendicular backscatter profiles, along with their interpolated temperatures (derived from global models), are then extracted from the Level 1 file.
The data sample is intended to include all types of ice clouds, even those that cause the complete attenuation of the laser pulses. Based on the climatological midlatitude cirrus-cloud dataset in Sassen and Campbell, a maximum cirrus cloud top temperature of −40°C (i.e., the homogeneous freezing point of pure water) is first applied, which reduces the possibility of including clouds containing supercooled water [Pruppacher and Klett, 1997; Sassen, 2002]. A minimum temperature of −10°C is also applied for this reason. Thus, our findings are mainly derived from cirrus and altostratus clouds, although water-ice dominated polar stratospheric clouds (PSCs) likely to be detected by the 5-km CALIPSO detection algorithm [Sassen and Zhu, 2009] are also included in the sample.
Total (molecular plus aerosol plus cloud backscattering) δ profiles for the clouds selected by our criteria (corresponding to the sum of 15 consecutive laser shots at each height) are derived from the total parallel ‖ and perpendicular ⊥ backscatter coefficients (β) extracted from Level 1 data. That is,

δ = (β⊥,m + β⊥,a + β⊥,c) / (β‖,m + β‖,a + β‖,c),

where the subscripts m, a, and c refer to scattering from molecules, aerosols, and clouds.
Such data are included in data files for all orbits for each day in 5° latitude by 5° longitude grid boxes globally. Yearly datasets are derived after summing all the parallel and perpendicular signals at each cloud height and temperature: average δ are not calculated by averaging individual δ calculated at lesser temporal scales.
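The averaging scheme described above, summing the parallel and perpendicular signals before forming δ rather than averaging per-profile δ values, can be sketched as follows. The profile values and grid sizes are synthetic stand-ins for CALIPSO Level 1 signals; only the −40°C cloud-top cut follows the selection criterion in the text:

```python
import numpy as np

# Synthetic per-profile backscatter for one grid box: profiles x height bins.
rng = np.random.default_rng(42)
n_profiles, n_bins = 1000, 30
beta_par = rng.uniform(0.5, 5.0, (n_profiles, n_bins))
beta_perp = beta_par * rng.uniform(0.1, 0.6, (n_profiles, n_bins))
cloud_top_T = rng.uniform(-80.0, -20.0, n_profiles)  # deg C, synthetic

# Keep only profiles whose cloud top is colder than the homogeneous
# freezing threshold used in the study.
keep = cloud_top_T < -40.0

# Sum the two channels over profiles, THEN divide -- the method in the text.
delta_summed = beta_perp[keep].sum(axis=0) / beta_par[keep].sum(axis=0)

# Averaging individual ratios weights weak profiles equally and in
# general gives a different (differently weighted) answer.
delta_mean_of_ratios = (beta_perp[keep] / beta_par[keep]).mean(axis=0)
```

The summed-power form weights each profile by its backscatter strength, which is why the text stresses that average δ are not computed by averaging δ calculated at lesser temporal scales.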
3. Global Depolarization Findings
Given in Figures 1 and 2 are global views of 1-yr average ice cloud δ for nadir (top row) and off-nadir data in terms of day (left of each pair) and night displays of vertically-integrated (Figure 1) and height-resolved zonal (Figure 2) averages. Given in Table 1 are the differences in the global vertically-integrated δ for day, night, day plus night, and day minus night for both nadir and off-nadir CALIPSO data. As described above, these δ are based on the signals summed in the two channels interpolated to the maximum 30-m vertical resolution interval.
Table 1. Yearly-Averaged Global CALIPSO Ice Cloud δ for Nadir (December 2006–November 2007) and Off-Nadir (December 2007–November 2008) Data for Day, Night, Day Plus Night, and Day Minus Night Measurements
[Table values not recovered; surviving labels: rows Day + Night and Day − Night, difference column Off-Nadir − Nadir. The key values are given in the text.]
It is obvious that significant variations occur in connection with geography, latitude, and height, and in nadir versus off-nadir and day versus night. In keeping with the effects of probing horizontally oriented planar crystals at nadir, the nadir δ are considerably lower at low altitudes. Nadir versus off-nadir differences in δ at heights above ∼12 km MSL are not very noticeable in Figure 2, although the data attributable to PSC at high latitudes are more variable due perhaps to the smaller sample size. The specular reflections from plates generate strong backscattered signals, and their effects are more dramatically portrayed in the vertically-integrated δ at middle and high latitudes in Figure 1. Geographically, certain (mostly continental) regions like southern South America, the U. S. Great Basin, the southern Himalayas, central Africa, and Malaysia tend to display higher δ, probably as a result of the local cirrus cloud generating mechanisms. Table 1 shows that the global nadir versus off-nadir δ differences are ∼0.05, while the day minus night differences are ∼0.025.
Differences in day versus night vertically-integratedδare evident by the lower night-timeδ at high latitudes, and the higher δ in the tropics/midlatitudes during day (Figure 1). From Figure 2, diurnal differences are apparent at heights >∼12 km in cirrus, where they are lower at night particularly in the tropics. On the other hand, δdifferences in PSC are ambiguous, and probably correspond to cloud detection threshold effects. The causes of these day/night differences could be attributable to CALIPSO signal noise and thermally-induced signal drift issues, but also likely reflect cloud microphysical variations due to diurnal ice cloud formation, aging, and radiative effects. Diurnal differences in ice cloud formation based on CALIPSO and CloudSat data are discussed inSassen et al. [2008, 2009].
4. Global Temperature Variations
The above depolarization findings may find a basic explanation in terms of cloud level temperature. Theoretical ray-tracing simulations [Takano and Liou, 1989] have established that the basic ice crystal habits generate differing δ according to the major internal ray paths, essentially increasing with increasing hexagonal particle axis ratio (i.e., length over width). When randomly oriented in space, thin plate crystals generate δ ∼ 0.3, whereas long solid column crystals yield δ ∼ 0.6 by virtue of their internal ray paths from multiple refractions and reflections off their large basal faces. (Hollow crystals and more complicated spatial particles are more difficult to treat, but these shapes appear to make a difference.) It is also known that ice crystal habit depends in a fundamental way on temperature [Pruppacher and Klett, 1997], with planar crystals like plates dominating between about −10° to −20°C, and columns and spatial crystal types at colder temperatures. Of course, ice crystal orientation also has a strong influence, particularly for horizontally oriented plates, which as mentioned produces an anisotropic lidar scattering medium involving zero depolarization at normal incidence.
The CALIPSO data considered here are presented in terms of their temperature (in 5°C bins) and latitude dependence in Figure 3for both the nadir (left) and off-nadir measurements. These displays show the averaged day plus nightδ, indicating a steady increasing trend with decreasing temperature within the troposphere. It is again clear that δhave anomalously low values at the warmest temperatures treated here when probed in the nadir. It appears that the effects of widely-fluttering dendrites [Noel and Sassen, 2005], randomly-oriented plates, or both, are manifested at the warmest temperatures even in the off-nadir data.
5. Comparison With Previous Results
Several extended ground-based polarization lidar studies, mostly from midlatitude stations, have demonstrated that cirrus cloudδ tend to increase with increasing height/decreasing temperature [Sassen and Benson, 2001]. As a matter of fact, most single time-height images of deep cirrus cloud systems display this effect. Differences in previous temperature trends are also evident between zenith and off-zenith datasets due to the effects of oriented planar crystals in the expected −10° to −20°C temperature interval. Nonetheless, differences in the magnitude of the basic depolarization trend are readily apparent in the results from different geographical locations/latitudes, such that the question of basic cirrus cloud depolarization, versus lidar systematic errors, remained until recently uncertain.
CALIPSO depolarization data now provide the opportunity to examine the global distribution of ice cloud δ using a single lidar system. In our earlier study [Sassen and Zhu, 2009], we examined a broader sample of ice clouds that included optically thin clouds detected after CALIPSO signal averaging of up to 80 km. The consequences on the results are that the day versus night differences in cloud detection/frequencies are exaggerated at night with respect to the current approach, and importantly, depolarization is biased toward lower values at night because of the contributions of molecular backscattering to the total air plus cloud signal (see equation (1)). Since the pure molecular atmosphere produces δ ≈ 0.02, this Rayleigh scattering lowers the depolarization in clouds with weak backscattering.
Nonetheless, it is apparent that the main depolarization feature noted here, a steady δincrease with increasing height/decreasing temperature, is consistent with earlier ground-based and CALIPSO polarization lidar analyses. The current ∼0.05 off-nadir minus nadir average difference is about twice as large as that reported previously inSassen and Zhu .
Unlike the model treatment of the radiative effects of water clouds containing spherical cloud droplets, ice crystal clouds like cirrus present many uncertainties. The major added complexity is related to the diverse shapes of the crystals present, although obviously additional factors such as the vertical distribution of the size distribution and ice water content come into play. The polarization properties of laser backscattering can be considered as an analog of natural light scattering in the atmosphere, because δ are sensitive to the exact particle shape and orientation.
The data presented here deal mainly with what would be identified by surface weather observers as cirrus and altostratus clouds, although PSC are included because according to depolarization, they often appear to have similar contents as other ice clouds [Sassen and Zhu, 2009], and modeled tropopause heights over the polar regions may not be reliable. It is clear that the effects of oriented plate crystals on ice cloud CALIPSO laser depolarization is significant within the expected crystal growth regime of ∼−10° to −30°C (Figure 3). Plate effects also tend to be more important in high latitude ice clouds (Figures 1 and 2).
In terms of the relative number of oriented plates versus those poorly or randomly oriented, an estimate can be derived from the following equation suggested by Sassen and Benson based on laboratory experiments:
where δis the measured off-nadir depolarization ratio,δi the nadir value, and N and β the number concentration and backscatter coefficient of horizontally (h) and randomly (r) oriented plates. Using the day plus night average δfor off-nadir (0.318) and nadir (0.365) measurements inTable 1, a ratio of non- to oriented- plates of roughly 2,500/1.0 is found fromequation (2)adding all temperatures. This is not a large relative number of oriented plates, but it must be kept in mind that their impact on radiation transfer is highly anisotropic and largely concentrated in certain latitudes and height/temperature ranges. Although the two CALIPSO viewing angles may not be ideal to assess orientated plate effects, the 0.3 and 3.0° nadir angles should yield results representative of the binary lidar outcome according to ground-based scanning lidar research [Noel and Sassen, 2005].
The fundamental result of the observed δ-temperature dependence suggests that improved cirrus cloud radiative transfer models could follow by creating a simplified vertical gradient in ice crystal shape based on theδ trend. The use of a vertically homogeneous ice crystal shape model seems especially inappropriate. Although the tendency for the horizontal orientation of a portion of the plates at relatively warm temperatures is an anisotropic complication, even a model of a gradual change from plates to columns with height should be an improvement.
The geographical/latitudinal differences in δ (Figures 1 and 2) are also quite interesting. These data imply that higher latitudes have lower δ and relatively more oriented plates, while certain continental areas have higher δ, especially in the tropics/subtropics. These differences must reflect the different ice particle shapes that depend on the cirrus cloud formation mechanism (e.g., convective-anvil, orographic, synoptic, etc.), or more specifically on the basic cloud-particle forming aerosol available for crystal formation and the microphysical effects of typical cloud updraft velocities (Sassen, 2002). CALIPSO lidar depolarization data will continue to provide a rich source of information on cloud and aerosol microphysics.
This research has been support by NSF grant ATM-0645644, NASA grant NNX0A056G, and JPL contract NAS-7-1407.
The Editor thanks two anonymous reviewers for assistance evaluating this paper. | <urn:uuid:011431d5-1b46-4198-9330-cb0a6c5aa008> | CC-MAIN-2015-35 | http://onlinelibrary.wiley.com/doi/10.1029/2012GL053116/full?wol1URL=/doi/10.1029/2012GL053116/full&globalMessage=0®ionCode=US-VA&identityKey=f6d05e72-3ea8-4f07-98a8-1eb71f65dc88&isReportingDone=false | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.25/warc/CC-MAIN-20150827025424-00263-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.895445 | 3,820 | 2.8125 | 3 |
The Principles Behind Digital Image Sharpening
Optical imaging, recording, and reproduction systems are
imperfect, and as a result, they
degrade and blur images. To the extent that degradation in image resolution is known and
is reproducible over the area of an image, often it can be reduced or reversed.
Generically, the inversion process is called sharpening, and purpose of the following
material is to explain how sharpening works.
Blurring and its inversion by sharpening can be analyzed in
several ways. One basic
approach considers the spreading of a single sharp point of light. This form of analysis is
analogous to analyzing electronic circuits by their response to a sharp pulse input. The
second basic way to analyze blurring and sharpening considers images to consist of a sum
of signals that oscillate with varying position in the image. Low frequency components
describe properties that vary across large portions of an image and high frequency
components describe small details like the edges of objects.
A perfect lens faithfully images all details of an object.
That is, it perfectly transmits all
spatial frequencies without loss. A real lens may transmit low spatial frequencies with an
efficiency near one, but it attenuates image information carried at the highest frequencies.
For a nice discussion of the phenomenon, see
The loss of high frequency components produces blurring. If
the high frequency
components could be restored, then images would possess their normal contrast for the
smallest objects, that is, they would be sharpened. In reality it is quite easy to boost some
of the higher spatial frequencies and thereby improve the sharpness of an image. The
steps are the following:
Blur a copy of the image. This destroys fine detail, that is,
it eliminates the
high spatial frequencies while it leaves gross features that are described by
low spatial frequencies largely unaltered.
Subtract a fraction, say 0.2, of the blurred image from the
subtracts low frequency components from the original image while leaving the
high frequency components unaltered.
Finally, boost the amplitude of all frequencies (contrast
produces an end result in which the high frequencies have been boosted closer
to what a perfect lens would deliver, and the low frequencies remain as
The procedure described above is the unsharp mask sharpening
http://www.unm.edu/~keithw/astroPhotography/imageSharpening.html. Clearly, the
radius used in the blurring step needs to be carefully chosen for optimum results.
To simplify the following discussion, images will be
considered as one dimensional. The
same principles and conclusions apply to two dimensional images. A one dimensional
image can be represented as a set of pixel intensities along a line. Represent these
intensities as a column vector where x1 provides the intensity of the first pixel of the line,
x2 the intensity of the second, and so on.
A point object P that is positioned so that its image should fall on pixel 4 is represented as
An imaging system somewhat blurs the point object P, and the
blurred image output B is
spread somewhat into adjacent line segments, perhaps as follows
Represent the effect of the imperfect lens and sensor that
produced the above blurred
image as a matrix operator M acting on the point object vector P to produce the blurred
or, writing out the matrices,
If other points along the original image line are similarly
blurred, for example, the next
point above the original point, then another column must be added to the left of the center
of M. Hence, it is clear that the blurring matrix for a complete “image” is the following.
So as to maintain total image intensity and to make
subsequent steps easier, the image
intensities that run off the edges of the matrix are piled up along the edges. Of course, for
a real image thousands of pixels across, edge effects are negligible. The examples shown
here have been chosen to be just large enough to display the general properties.
In some cases, and this is one of them, the inverse of exists such that
where I is the identity matrix with ones along the main diagonal and zeroes everywhere
else. ( times any vector equals the vector.) The inverse of in this case can be found
with an Excel™ spreadsheet and is
In this case it is possible to reverse the blurring because
the blurred image, of a
general object is and hence, . That
is, is the original image . Let us test this by applying our sharpening or
unblurring matrix to our blurred image. Multiplying our blurred point vector
and except for round off errors, we recover the original image.
Areas of an image that possess uniform intensity ought not
to be changed by the
application of the sharpening matrix. This can be seen to be the case because the sum of
the elements in each row is close to one. The sum of the elements in the central row is
1.0000 when the more precise values in the spreadsheet are summed.
The inverse of the blurring matrix that was determined in
the previous section contains an
interesting prescription as to how to increase sharpness. Consider how the intensity of the
central pixel, pixel 4, in the blurred image contributes to the final unblurred image. For
pixel 1, examination of the unblurring matrix shows that the contribution of blurred pixel
4 to the final sharpened intensity of pixel 1 is 0.03 time the intensity of blurred pixel 4.
Similarly, the contribution of blurred pixel 4 to the sharpened intensity of pixel 2 is 0.14
times the intensity of blurred pixel 4. As expected, the strongest contribution of pixel 4 to
the sharpened image is to adjacent pixels 3 and 5, where it contributes –1.1 times the
intensity of pixel 4. This last contribution is nothing other than unsharp masking! That is,
the prescription for unsharp masking is to blur the image and subtract a portion from the
original. That is the same as what is accomplished by the matrix elements of value –1.1.
The unblurring or sharpening matrix contains the combined effects of subtracting a
portion of the blurred original and then multiplying the amplitude so as to restore normal
Photographers were not the first to invent the unsharp mask
procedure for sharpening
images. The human retina is hard wired to perform the unsharp masking operation.
Psychologists call the phenomenon lateral inhibition,
http://www-psych.stanford.edu/~lera/psych115s/notes/lecture3/ because exciting one
sensor in the eye slightly desensitizes nearby sensors. This is the same as subtracting a
portion of the blurred image from the original. The halos which surround objects
following aggressive sharpening of digital images also occur in human vision where they
produce an optical illusion called Mach bands, http://dragon.uml.edu/psych/mach.html.
An interesting property of the unblurring matrix is the fact
that it contains instructions for
improving upon unsharp masking. That is, it also instructs us to ADD a portion of the
intensity of pixel 4 to pixels 2 and 6 and a smaller portion to pixels 1 and 7. One might
wonder whether the presence of both subtractive and additive terms in the unblurring
matrix might be a special property dependent upon the precise shape of the blurring curve
or blurring matrix. Experimenting with a wide collection of peak shapes for the blurring
function shows that this is unlikely to be the case. Hence, this raises the question, can the
unsharp mask sharpening algorithm usefully be improved upon by blurring a real image
and subtracting a portion from the original, and then blurring with about twice the
blurring radius and ADDING a smaller portion of this to the original?
Another question that the above analysis raises is whether
customizing a sharpening
algorithm to a particular picture significantly improves the results. In principle, if one
knew that a particular feature in an image was the blurred image of a point source, then
one could measure the intensities of nearby pixels and determine the elements in the
blurring matrix. Inverting this should give the best possible method of sharpening that
The sharpening matrix can be deduced by applying it to a
suitable image because, as we
will extract the values from M-1 giving the
contribution of the central pixel to surrounding
pixels. Since image editing programs do not record or use negative intensities or
intensities above 1.0, a more useful “image” to process with a sharpening procedure is
Picture Window Pro™. http://www.dl-c.com/
is an image editing program
designed for the serious photographer. Below are shown images prepared
with Picture Window Pro. The first is an enlarged view of an 11 pixel x 11 pixel “image”
in which the background has a luminosity of 0.2 and the central pixel has a luminosity of
Sharpening not only lightens the central pixel, but it also
darkens surrounding pixels as
shown below. The exact values can be obtained with the program's readout tool.
Applying an unsharp mask also clearly lightens the central
pixel, but because more of the
surrounding pixels are darkened, the darkening effect is spread more thinly and is less
Images frequently can be brightened by increasing contrast.
Usually this cannot be done
globally because the process converts dark areas to pure black and light areas to pure
white. Instead, techniques are used that enhance the contrast of light and dark areas
differently. One method is to generate contrast masks so as to allow separate
manipulation of contrast in different parts of an image. While this method can be
laborious, it can also yield beautiful results,
http://www.normankoren.com/PWP_contrast_masking.html. There is, however, a rapid
global method for enhancing contrast that frequently gives good results.
Before describing how one uses an image editing program to
enhance contrast, it is
helpful to see how contrast can be increased without simply turning up contrast on the
entire image. Consider an object in an image with features whose intensities vary
between 9 and 10 units. The contrast in this area is then about one in ten. If in this area of
the image we subtract five units of intensity and then multiply the intensities by two,
points that previously had an intensity of nine, now have an intensity of eight, and points
that previously had an intensity of ten still end with an intensity of ten. By the subtract
and multiply operations, contrast has been increased from one in ten to two in ten, that is,
it has been doubled. Similarly, if there are areas in the image with average intensity of
100 and detail in this area has intensities between 90 and 100, then subtracting 50 and
multiplying by two also doubles the contrast in these areas as well.
Blurring the original image discussed above with a radius on
the order of the major light
and dark objects in the image generates a new image with an intensity of about 10 in the
one area of the image and an intensity about 100 in the other area of the image. That is,
the blurred image is what we need to subtract from the original. If half the blurred image
is subtracted, and then the overall contrast is increased by a factor of two, the local
contrast over most of the image is doubled.
The local contrast enhancement method just described above
boosts the amplitudes of the
higher frequency spatial components of an image by the same method that unsharp mask
boosts the amplitudes of high frequency spatial components. In using the unsharp mask
procedure to enhance local contrast, the radius for the blurring operation, however, needs
to be chosen considerably larger than when the only objective is the sharpening of edges.
The net result is that the amplitudes of some of the spatial frequencies are increased well
above their natural values. When not done to excess, the process adds snap to an image.
Typically a blur radius of a couple hundred pixels and an amplitude of about 20% is used.
Note that the process also boosts the amplitudes of the high frequency components that
define edges. Thus, less edge sharpening is required after local contrast enhancement,
Sometimes sharpening is described in terms of the Laplacian
, or in one dimension . At a slightly blurred edge, the
first and second derivatives of intensity would look roughly as shown, and when a
portion of the second derivative is subtracted from the image, the end result is a slight
halo and a sharpened edge.
The first derivative of intensity at the pixel i, xi,
is the difference between the intensity of
xi+1 and xi, that is xi+1 – xi divided by the distance between i and i + 1 which we shall take
to be one. Similarly, the first derivative of the intensity at pixel i-1 is xi - xi‑1. The second
derivative of intensity at the origin is the difference of the first derivatives, and thus is
xi+1 – 2xi + xi-1. If a portion of this second derivative is to be subtracted from the original
intensity, the operation is - xi+1 + 2xi - xi-1 which is the by now familiar recipe for
subtracting a fraction of a pixel’s intensity from the intensities of adjacent pixels.
August 2, 2005 | <urn:uuid:4f8d6164-8744-44cb-ae54-9737d5edabfc> | CC-MAIN-2015-35 | http://gene.bio.jhu.edu/sharpening/sharpen2.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064445.47/warc/CC-MAIN-20150827025424-00341-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.904905 | 2,899 | 3.640625 | 4 |
International Women's Day 2015 - Empowering Women - Empowering Humanity: Picture It!
The official United Nations theme for International Women's Day 2015 is "Empowering Women - Empowering Humanity: Picture It!"
Celebrated globally on 8 March, International Women's Day will highlight the Beijing Declaration and Platform for Action, a historic roadmap signed by 189 governments 20 years ago that sets the agenda for realizing women's rights. While there have been many achievements since then, serious gaps remain. This is the time to uphold women's achievements, recognize challenges, and focus greater attention on women's rights and gender equality to mobilize all people to do their part. The Beijing Platform for Action focuses on 12 critical areas of concern, and envisions a world where each woman and girl can exercise her choices, such as participating in politics, getting an education, having an income, and living in societies free from violence and discrimination. Read more »
Official UN Observance for IWD 2015
Friday 6 March, 9-10 a.m. EST, Trusteeship Council Chamber, UN Secretariat, New York
This event was held as part of the High-level Thematic Debate on "Advancing Gender Equality and Empowerment of Women and Girls for a Transformative Post-2015 Development Agenda", hosted by the President of the UN General Assembly. With the theme of "Empowering Women, Empowering Humanity: Picture It", the IWD event saw high-level participation including Sam Kahamba Kutesa, President of the UN General Assembly; Ban Ki-moon, UN Secretary-General; Phumzile Mlambo-Ngcuka, UN Women Executive Director; and other dignitaries.
International Women's Day Messages
Message from Ban Ki-moon, UN Secretary-General
Twenty years ago, when the world convened a landmark conference on women’s human rights, the devastating conflict in the former Yugoslavia prompted deserved attention to rape and other war crimes there against civilians. Two decades later, with girls as young as seven not only targeted but used as weapons by violent extremists, it would be easy to lose heart about the value of international gatherings. But while we have a long way to go to achieve full equality – with ending gender-based violence a central goal – progress over the past two decades has proven the enduring value of the 1995 Beijing Conference on Women.
Since the adoption of its Declaration and Platform for Action, more girls have attained more access to more education than ever before. The number of women dying in childbirth has been almost halved. More women are leading businesses, governments and global organizations. I welcome these advances. At the same time, on this International Women’s Day, we must acknowledge that the gains have been too slow and uneven, and that we must do far more to accelerate progress everywhere. Read more »
Message from Phumzile Mlambo-Ngcuka, UN Under-Secretary-General and UN Women Executive Director
In 1995, at the Fourth World Conference on Women in Beijing, world leaders committed to a future where women are equal.
One hundred and eighty-nine countries and 4,000 civil society organizations attended the conference.
Women left Beijing with high hopes, with a well-defined path towards equality, and firm commitments at the highest level. Their hope was that we would see this by 2005.
Today, not one single country has achieved equality. It is more urgent than ever that we define – and stick to – a time frame.
There has been some progress in the last 20 years – although it has been slow and uneven.
Countries have narrowed the gender gap in education and some have even reached gender parity in school enrolment.
They have reduced the toll of maternal mortality and morbidity. Many more women survive pregnancy and childbirth than in 1995. Read more »
International Fund for Agricultural Development (IFAD)
Message from Kanayo F. Nwanze, President of IFAD
Welcome to this year’s celebration of International Women’s Day.
IFAD is proud to be hosting today’s programme in collaboration with the other Rome-based agencies.
I am very pleased to be joined by Ertharin Cousin of the World Food Programme and by Marcela Villarreal of the Food and Agriculture Organization.
I would also like to thank our moderator, and each of the panelists, who will share their insights and knowledge with us today.
It has been said that women are the backbone of rural societies. Not only do they grow and process food, they make sure their families are well-fed and well-nourished.
Too often, however, being the "backbone of society" simply means that rural women are the ones doing the backbreaking work. Read more »
International Labour Organization (ILO)
Message from Guy Ryder, ILO Director-General
Two decades ago the 4th World Conference on Women in Beijing adopted a visionary and far-reaching Declaration and Platform for Action on gender equality and women’s empowerment. What progress there has been since then must be tempered by the reality that it is far less than what we had hoped to see by now.
In the areas of national gender equality policies, and legislation against discrimination based on sex, much has been accomplished. Nevertheless, progress on the ground remains elusive.
Globally, only about half the world’s women are in the labour force, compared to nearly 80 per cent of men – a figure basically unchanged in 20 years. A large gender pay gap hasn’t narrowed much, with women still earning on average 23 per cent less than men. And new evidence is emerging that mothers suffer a wage penalty, often over and above the gender pay gap. Read more »
- International Labour Organization page on International Women's Day
- ILO: Progress on gender equality at work remains inadequate
International Monetary Fund (IMF)
Message from Christine Lagarde, Managing Director of IMF
- International Monetary Fund page on International Women's Day
- Gender equality means better business, say heads of MDBs and IMF
- To mark International Women's Day, find out how much you know about gender, law and the global economy. Take the International Monetary Fund's #genderequality quiz and share your results via social media
International Organization for Migration (IOM)
Message from William Lacy Swing, Director General of IOM
More women are on the move than ever before. They represent approximately half of the world’s one billion migrants and are approximately half of the estimated 51 million displaced persons.
On this year’s International Women’s Day, IOM calls on the international community to ensure the empowerment of migrant and displaced women through the full realization of their human rights.
As we commemorate the historic twenty-year anniversary and review of the Beijing Declaration and Platform for Action, we must acknowledge that, while there have been many significant achievements in realizing women’s rights, serious gaps remain in making gender equality a reality. This is particularly true for migrant women.
Migration can empower women in search of new opportunities and a better life for themselves and their families. The income-generating opportunities, access to education and economic independence found through migration all serve to empower women.
Yet, migration can also be fraught with challenges such as discrimination, exclusion and even violence. Those who feel driven to move irregularly or flee due to disaster or conflicts face additional risks of trafficking, exploitation and marginalization.
To correct this, we must continue to engage and learn from migrant women as we review the accomplishments of the last twenty years and chart a course for the next twenty. Read more »
- International Organization for Migration page on International Women's Day
- Empower Women, Empower Humanity: A Video from IOM
- Picture It!: Photos and Stories of Empowered Women from Around the World: English | Español
- Women on the Move: A Look at Migration, Women and Cities
- "I Want to Inspire More Women to Become Carpenteras"
- Improving Livelihoods and Standing Together: Changing Communities in Sri Lanka
Joint United Nations Programme on HIV/AIDS (UNAIDS)
Message from Michel Sidibé, Executive Director of UNAIDS
As we celebrate International Women’s Day, world leaders and civil society are gathering in New York to take part in the 59th session of the Commission on the Status of Women. There, they will review the progress made since the adoption 20 years ago of the Beijing Declaration and Platform for Action, which set ambitious targets designed to improve the lives of women around the world. The Platform for Action strived to make sure that women and girls could exercise their freedom and realize their rights to live free from violence, go to school, make decisions and have unrestricted access to quality health care, including to sexual and reproductive health-care services.
In the response to HIV, there have been major advances over the past 20 years and new HIV infections and AIDS-related deaths are continuing to decline. However, in reducing new infections this success has not been shared equally.
In 2013, 64% of new adolescent infections globally were among young women. In sub-Saharan Africa, young women aged 15 to 24 are almost twice as likely to become infected with HIV as their male counterparts. Gender inequalities, poverty, harmful cultural practices and unequal power relations exacerbate women’s vulnerability to HIV, but concerted global commitment and action can reverse this.
United Nations Development Programme (UNDP)
Message from Helen Clark, UNDP Administrator
This week, the United Nations Commission on the Status of Women will commemorate the 20th anniversary of the Beijing Declaration and Platform for Action, which remains the world’s best blueprint for achieving gender equality and empowering women. The review of this visionary roadmap, adopted at the Fourth World Conference for Women in 1995, is an opportunity to celebrate the world’s progress toward ensuring the rights and opportunities of women and girls, and also to renew and reinvigorate commitments to achieve gender equality.
One of the great achievements of the Beijing Platform for Action was the clear recognition that women’s rights are human rights. Since that historic gathering in Beijing, when 17,000 participants and 30,000 activists gathered to voice and demonstrate their support for gender equality and women’s empowerment, there has been increasing recognition that gender equality, in addition to being a human right, is also critical to making development progress. If women and girls are not able to fully realize their rights and aspirations in all spheres of life, development will be impeded. Read more »
United Nations Educational, Scientific and Cultural Organization (UNESCO)
Message from Irina Bokova, Director-General of UNESCO
2015 marks the 20th anniversary of the 4th World Conference on Women that culminated in the adoption of the Beijing Declaration and Platform for
Action. In 1995, States and civil society representatives signed a commitment for gender equality, guided by the conviction that "women's empowerment and their full participation on the basis of equality in all spheres of society, including participation in the decision-making process and access to power, are fundamental for the achievement of equality, development and peace." I was among the 17,000 delegates from across the world, who gathered in Beijing in 1995, and I remember leaving Beijing with hope and a sense of accomplishment. Read more (pdf) »
United Nations Environment Programme (UNEP)
Message from Achim Steiner, Executive Director of UNEP
United Nations Industrial Development Organization (UNIDO)
Message from LI Yong, Director General of UNIDO
Empowering women is empowering humanity. Gender equality and women’s empowerment is central to UNIDO’s work as it is not only a matter of human rights, but also a precondition for sustainable development and economic growth, which are drivers of poverty reduction and social integration. When women and men are more equal, economies grow faster, more people are lifted out of poverty and the overall well-being of societies is enhanced. Central to UNIDO’s mission of inclusive and sustainable industrial development (ISID) is the urgent need to harness the economic potential of women – half of the world’s population. Women are powerful drivers of ISID and their role is poised to become even greater in the future.
However, women and girls still make up 70 per cent of the world’s extreme poor. The majority lives in rural areas, where communities are resource-poor and isolated, and most subsist on small-scale productive activities. UNIDO helps develop competitive agro-industries in order to create jobs and sustainable livelihoods for the rural poor. By providing technical assistance, UNIDO aims to strengthen agro-industrial capabilities and linkages to facilitate economic transformation in rural communities, particularly among women and youth. For example, UNIDO provides rural women and men equal access to new agro-technologies and skills upgrading. Assistance is also provided for process optimization, compliance with quality and environmental standards, and the identification of market opportunities.
United Nations Population Fund (UNFPA)
Message from Dr. Babatunde Osotimehin, Executive Director of UNFPA
During the past 20 years, we have witnessed remarkable advances in promoting the human rights and dignity of women and girls and their full and equal participation in society.
The International Conference on Population and Development (ICPD) in Cairo, and the Fourth World Conference on Women in Beijing bolstered progress for women's rights to make their own choices about their bodies and their futures.
For the first time, world leaders proclaimed sexual and reproductive health and reproductive rights as human rights integral to gender equality and women's dignity and empowerment. These rights are essential for the enjoyment of other fundamental rights, for eradicating poverty and for achieving social justice and sustainable development.
Today, on International Women's Day, we celebrate the progress we have made. And, we pledge to redouble efforts to complete these unfinished agendas. We will not stop until we cross the finish line and realize equality between girls and boys and women and men.
Together, we have come a long way. Today, more girls are going to school, more women have joined the labour force, and more women have access to sexual and reproductive health services, including family planning.
United Nations Volunteers (UNV)
Message from Richard Dictus, Executive Coordinator of UNV
The United Nations Volunteers (UNV) programme is proud to celebrate International Women’s Day in this year of Beijing plus 20.
From the Partners for Prevention project in the Asia-Pacific region to the Female Genital Mutilation project in Sudan, UNV supporting its UN partner agencies, has a long history of working towards Gender Equality and Women’s Empowerment and integrating these issues into all of its projects and programmes.
But the most vivid examples of the continued pursuit of gender equality in development and peace, are UN Volunteers themselves making a difference throughout the world.
Ms. Bicharo Gure is a UN Volunteer currently serving as a mechanic in the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo. Bicharo is the picture of an empowered woman, who is empowering humanity.She once said: “Work doesn’t know if you are male or female.” And when she was burdened by gender stereotypes, she would ask herself, “Can I replace a wheel single-handedly? – Check. Can I drive a truck? – Check.” And she would keep working, strongly determined to make a change.
World Food Programme (WFP)
Picture it - a Nepali schoolgirl on the top of Mount Everest. At primary school, where she received WFP school meals, Nimdoma used to dream of doing extraordinary things. She made her dream come true and didn't stop at Everest, which she conquered when she was 17 years old. Nimdoma, a wonderful example of women's empowerment, went on to climb the highest mountain on every continent with a Nepali, all-woman, climbing team.
On International Women's Day (March 8) this year, I want to pay tribute to the girls and women who dream big. Empowering women empowers humanity. At the World Food Programme, we believe that women and girls who are empowered will lead to our ultimate goal, a world with zero hunger. Women and their work – paid and unpaid -- are central to the production, preparation and provision of food, so are essential to food and nutrition security.
While every March 8 we celebrate the many images of women, there is another picture that comes to mind, of women's hunger and deprivation. A WFP gender assessment of one of the poorest countries in the world in 2014 found women had half the time that men had to rest in the course of a day. At 7am, when men woke up to have their breakfast, women had already worked for two hours to prepare the food, fetch water and get the children washed and they also went to bed later.
- World Food Programme page on International Women's Day
- How Being Energy Efficient In Ethiopia Is Helping The Environment
- Niger: School Meals Help Girls Reach Their Full Potential
World Tourism Organization (UNWTO)
Message from Taleb Rifai, Secretary-General of UNWTO
On the occasion of International Women's Day, UNWTO Secretary-General, Taleb Rifai, calls upon the tourism sector to step up policies and business practices that promote gender equality and women's empowerment.
The UNWTO/UN Women Global Report on Women in Tourism shows that tourism can offer significant opportunities to narrow the gender gap in employment and entrepreneurship as women are nearly twice as likely to be employers in tourism as compared to other sectors. The Report also shows that women are well represented in service and clerical level jobs, but poorly represented at professional levels and earn 10% to 15% less than their male counterparts. | <urn:uuid:f57179d5-8f96-4ff9-8cd7-e6c514f8d08d> | CC-MAIN-2015-35 | http://www.un.org/womenwatch/feature/iwd/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065464.19/warc/CC-MAIN-20150827025425-00338-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.94157 | 3,703 | 2.71875 | 3 |
- What are Juniper Berries?
- Juniper berries – Other Names
- Juniper Berry Tree
- Juniper Berries Growing
- Juniper Berries Harvesting
- Juniper Berries Nutrients
- Juniper Berries Substitutes
- Juniper Berries Uses
- Juniper Berries Gin
- Juniper Berries Recipes
- Juniper Berries Side Effects
- Juniper Berry Oil
- Juniper Berries Edible
- Juniper Berries Poison
- Juniper Berry Combination
What are Juniper Berries?
Juniper berries are the female seed cone of the juniper tree. It is a fleshy cone, with merged scales, thus having a berry like appearance. These berries are green in color at a young stage, but later, when they mature, they turn purple-black in color. The juniper berries belong to the family of N.O.Coniferae. Its botanical name is Juniperus communis. It is a since, due to the presence of a primary flavoring agent in gin.
Juniper berries – Other Names
Juniper berries are known by several names in different places. It has the following synonyms:
India – Dhup, Shur
French – Genievre
Italian – Ginepro
Spanish – Enebro
German – Waccholer
The name of Juniper Berry in hindi is Araar.
Juniper Berry Tree
It is a coniferous tree, which belongs to the family of Cupressaceae. It has which bluish-green branches, and bears small yellow-purplish flowers and three cornered seeds. It is a wonderful and a versatile garden plant. It can tolerate a wide range of growing and soil conditions, as well as certain extent of drought. It is distributed throughout Europe, North America and certain parts of northern Asia.
The tree has thick foliage, commonly used as strewing herb to freshen the stale air, due to its air-cleansing fragrance. The juniper trees very in size, ranging from small to tall trees. Their leaves are needle shaped and scale like. Its seeds are cone like and very fleshy, due to which they appear as berries. The foliage of the tree comes in many colors, ranging from silvery-grey to mauve and purple in some seasons.
Juniper Berries Growing
The juniper trees and shrubs grow mostly in all parts of Northern Hemisphere. All the species of juniper trees produce berries. These plants grow from direct planting of berries, and also from the cuttings from new growth. The trees can grow very well in light drained, impoverished soil. The soil to be selected can be alkaline or acidic. This plant is effective against gravel much and paving.
The berries are picked when they are fully ripe, sometime in late summer or early autumn. When ripe, these berries are very dark in color.
Juniper Berries Harvesting
When the juniper berries are ready to be harvested, you need to spread them on a flat ground, in a sunny place. You can also dehydrate the berries, or place them in an oven. But you should make sure that these berries are used within one year of harvesting.
Juniper Berries Nutrients
Juniper berry is rich in nutrients, and provide many health benefits to the people. They possess many important nutrients, which are as follows:
Vitamins and minerals – juniper berries are rich in vitamin C and Vitamin B. they contain small amounts of calcium.
Volatile Oils – it also contains a substantial amount of volatile oil.
The percentage of various nutrients is as follows:
- Protein- 4%
- Lipid – 16%
- Carbohydrate – 46%
- Fiber – 34%
- Ash – dry weight.
The Juniper Berries health benefits are as follows:
Juniper berries are a natural cure for treating flatulence. The tea that is prepared from these berries is often used to wash the joints so as to relive oneself from joint pain and soreness. It treats gout and other similar conditions. Apart from this, it is used for expelling respiratory problems.
Juniper Berries Substitutes
Juniper berries are excellent flavoring agents, and you may not get the exact flavor on using any other spice or flavoring agent. However there are certain spices that provide almost a similar taste, Gin being one of them. Other distant spices that are used as a substitute for juniper berries may include usage of garlic or onion extracts, so as to give some flavor to the cooked food.
Juniper Berries Uses
Juniper berries are primarily used as a spice, providing a great flavor to the food. The outer scales of the berries do not contain any flavor, so they are removed while crushing the seed to extract the spice out of it. The spice can be used in a dry state or in a wet state; however the odor is strongest immediately after harvest, so they should be used fresh. They flavor the liquor such as jenever and sahti beers. It flavors the various meat and veal dishes. The juniper berries extract has an antioxidant activity.
Apart from the use of juniper berries in cooking, junipers play a major role as a landscape attraction in various places. The juniper forests are a rich storehouse of wood, fuel, food and shelter for the people living in and around these forests. In Morocco, the tar of these trees is applied in the dotted designs of the drinking cups.
An essential oil that is extracted from these seeds is used in making perfumes and in aromatherapy. Juniper berries healing properties are also too many, it therefore cats as a medication for many ailments. The seeds inside the berries are used for decoration and in jewellery making.
Juniper Berries: Medicinal Uses
Juniper berries are a diuretic. They have many important medicinal uses. Here is a list of juniper berries benefits:
- The consumption of tonics made out of it can act as an appetite increaser. It can also act as an appetite suppressant.
- It also has remedies for Arthritis and Rheumatism. It is being used for treatment against diabetes.
- In some tribal areas, it is used as a female contraceptive.
- It also helps in combating bacterial infections such as prostates, vaginitis and inflamed kidneys.
- It can be used for treatment of lung disorders.
- It is used as a purifier and overall system cleaner.
- Juniper berries have also acted as an old herbal remedy for digestive tract problems.
- The extracts increase peristalsis and intestinal tone.
Juniper Berries Kidney Stones
Juniper berries can be used to dissolve kidney stones in the prostrate. It has some natural healing properties, due to which the person gats cured without the need of surgery to remove the stones.
Juniper Berries Gin
Gin is a kind of white spirit whose flavor is primarily extracted from juniper berries. It is a kind of intoxicant, made from juniper berries by crushing its seeds and thereby obtaining the flavor. Gin is of two kinds: distilled and compound. Gins produced from juniper berries are of the distilled type.
Juniper Berries Recipes
Juniper Berries are used for preparing a wide range of dishes. They enhance the taste and enhance the flavor of many meat dishes. They can be used for seasoning the meat dishes like that of the wild birds – blackbird, woodcock; pork and also game meats. They also compliment the taste of beef.
Some of the traditional recipes that can be cooked with the help of juniper berries are:
- Choucroute garnie
- Alsatian dish of sauerkraut and meat
- Calico carrots
- Brussel sprouts
- Apple and juniper berry
- Cheesy brussel sprouts
- Venison roast
- Juniper Tilapia
- Beef-pork Goulash
Venison Juniper Berries
Juniper berries are effectively used to cook delicious venison recipes. This spice enhances the distinctive flavor of venison due to its woody fragrance. Some of the yummy venison recipes that are cooked with juniper berries are:
- Wild mushroom and venison stroganoff
- Venison and juniper stew
- Pan-seared venison with blue berries, red wine and shallot
- Venison pot roast
Juniper Berries Side Effects
Juniper berries stimulates the uterus, therefore it should not be consumed by pregnant women, as it may even lead to abortion of the fetus in worst circumstances. Those people with severe kidney infection should absolutely not consume it, as it may increase the chances of its severity. Side effects of this spice include certain urinary problems such as purplish urine and blood in the urine; intestinal pain and diarrhea. Juniper oil should never be applied to open wounds as it may result in swelling and irritation.
Juniper Berries Allergy
Juniper berries have certain allergic reactions like most plants. People who are allergic to it may show symptoms of sneezing, rash, coughing and wheezing. People who handle this plant are more vulnerable to these allergies.
Juniper Berry Oil
Juniper berry oil is generally obtained from raw fruit or from the berries. This oil occurs as a pale green or yellowish limpid liquid, having a terebinthic odor. This oil serves many medical purposes. It acts as a diuretic, as a carmintive and for curing certain diseases of kidney. It can be used in combination with lard to prevent irritation from flies. It is also used for the expulsion of uric acids in the joint and gout. It can be used to detoxify and clear congested skin. Sometimes this juniper berries essential oil is also used as a local stimulant.
Juniper Berries Edible
All the species of the juniper trees grow berries, and almost all are edible. But all are not, and some of them are just too bitter to eat. Therefore you should know which kind of juniper berries you can eat. The Juniperus monosperma should not be eaten as it is unpalatable. Even Savin juniper should not be eaten. The rest are pretty much edible.
Juniper Berries Poison
Some varieties of Juniper Berries are poisonous. You should be careful while picking the berries to make sure that the species that you have selected id edible. You should be sure that the species that you have selected is not poisonous in disguise.
Juniper Berry Combination
Due to the special Juniper berries diuretic properties, and because they help in alleviation of fluid retention, there is a special formula prepared, containing juniper berries and certain other herbs that are beneficial for medical purposes. This is the juniper berry combination. It has proved in improving the genital-urinary problems and supporting the kidney.
The juniper berries are nutritionally rich. They provide many health benefits and are excellent when used as a flavoring agent. Try this spice if you still have not. You will surely feel sorry if you miss this amazing spice. | <urn:uuid:50103529-e51d-4118-8ea1-6bef6dbab2a3> | CC-MAIN-2015-35 | http://www.onlyfoods.net/juniper-berries-uses-health-benefits-side-effects-recipes-and-substitute.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064160.12/warc/CC-MAIN-20150827025424-00104-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.942016 | 2,326 | 2.921875 | 3 |
By Dr. Mercola
A growing body of research clearly shows the absolute necessity of vitamin D for good health and disease prevention. However, despite vitamin D’s role in keeping your body ticking along like a well-oiled clock, you are likely deficient in the “sunshine vitamin”—because the majority of people are.
Our vitamin D levels have dropped as a result of being scared sunless by those spreading misinformation that the sun causes melanoma, a myth that survives by mass promotion but really lacks any factual basis. It has been repeated so many times that most people believe it.
Vitamin D affects your biological function by influencing nearly 3,000 of your genes through vitamin D receptors. In fact, vitamin D receptors are found throughout your body, which should come as no surprise, given we humans evolved in the sun.
Recent research has also revealed yet another benefit of sun exposure beyond the protective benefits of producing vitamin D, namely the production of nitric oxide—a compound that lowers your blood pressure.
According to the researchers, the heart-health benefits from this may outweigh the risk of developing skin cancer. Your vitamin D level varies not only with time of day, season, and geographic location, but also with your genetics.
For example, if you have dark skin, you may need up to 10 times more sun exposure than a person with pale skin to maintain an optimal vitamin D level. Redheads have to be particularly careful, as they appear to be genetically predisposed to developing melanoma, regardless of whether or not they spend time in the sun.
Tens of Thousands of Health Studies Attest to Vitamin D’s Importance
Beyond triggering vitamin D production, sun exposure appears to offer a number of additional health benefits, including:
- Pain-killing (analgesic) properties
- Increased subcutaneous fat metabolism
- Regulation of human lifespan (solar cycles appear to be able to directly affect the human genome, thereby influencing lifespan)
- Daytime sun exposure improves evening alertness
- Conversion to metabolic energy (i.e. we may “ingest” energy directly from the sun, like plants do)
When it comes to vitamin D production, the benefits are simply immeasurable. In fact, correcting a vitamin D deficiency may cut your risk of dying in half, according to an analysis of more than 10,000 individuals.
According to a January 2013 press release by Orthomolecular Medicine, 3,600 medical papers with vitamin D in the title or abstract were published in 2012 alone, bringing the grand total to 33,800. Research to date shows vitamin D has far-reaching benefits for your physical and mental health, with the following chart representing only the tip of the sunbeam.
- Pregnancy outcomes (reduced risk of Cesarean section and pre-eclampsia)
- Childhood language impairment
- Cardiovascular disease
- Type 1 diabetes
- Type 2 diabetes
- Alzheimer’s disease
- Bacterial and viral infections
- Falls and bone fractures
- 16 different types of cancer
- All-cause mortality
Another Way Sun Exposure Protects Your Heart Health
UVB exposure also improves your mood and energy level, helps regulate melatonin, and, as mentioned earlier, increases nitric oxide production, which benefits your cardiovascular system. With regards to the latter:
“Richard Weller, Senior Lecturer in Dermatology, and colleagues, say the effect is such that overall, sun exposure could improve health and even prolong life, because the benefits of reducing blood pressure, cutting heart attacks and strokes, far outweigh the risk of getting skin cancer,” Medical News Today reports.
Weller and colleagues found that the body’s production of nitric oxide is separate from its production of vitamin D… Human skin contains large stores of nitrite (NO2) and nitrate (NO3). The researchers note that while nitrate is “biologically inert,” the action of sunlight can reduce it to active nitrite and nitric oxide (NO). They found that circulating nitrate fell and nitrite rose during UV and heat exposure, but not during exposure to heat only. There was no difference in vitamin D levels.
Weller says in a statement that: ‘We suspect that the benefits to heart health of sunlight will outweigh the risk of skin cancer. The work we have done provides a mechanism that might account for this, and also explains why dietary vitamin D supplements alone will not be able to compensate for lack of sunlight… If this confirms that sunlight reduces the death rate from all causes, we will need to reconsider our advice on sun exposure.'”
Skin Cancer, in Brief
Before we discuss melanoma, you need a basic understanding of the three most common types of skin cancer, each named for the type of cells affected:
- Basal cell carcinoma (BCC): Begins in the basal cell layer of the skin, typically on the face; the most common form of skin cancer and the most common type of cancer in humans; least likely skin cancer to spread.
- Squamous cell carcinoma (SCC): Begins in the squamous cells, typically on the face, neck, ears, lips, and backs of hands; tends to grow and spread a bit more than BCC.
- Melanoma: Begins in the melanocytes (the cells that produce the pigment melanin, responsible for your tan); melanin protects the deeper layers of your skin from excess radiation. Melanoma is more likely than other types of skin cancer to spread to other parts of your body and causes more deaths than any other type of skin cancer.
Don’t Fall for the Melanoma Myth
If you believe the lure of the sun is equivalent to the siren’s call for melanoma, you’ll be relieved to learn melanoma is not actually caused by sun exposure, unlike the other two types of skin cancer, BCC and SCC. Although the reported number of new cases of melanoma in the US has reportedly been increasing for more than 30 years, a landmark study in the British Journal of Dermatology suggests this apparent increase is a result of non-cancerous lesions being misclassified as “stage 1 melanoma.” In other words, people are being diagnosed with melanoma even when they have only a minimal, non-cancerous lesion, and these diagnoses are significantly skewing cancer statistics. The sun is nothing more than a scapegoat in this phenomenon of “increased melanoma.”
But this misdiagnosis is doing more than just skewing statistics—it’s causing a mountain of unnecessary melanoma surgeries. A study in the Journal of the American Academy of Dermatology found that 90 percent of melanoma excisions end up NOT being melanoma at all. But if the sun doesn’t cause melanoma, then what does?
The REAL Role of the Sun in Melanoma
As with all serious diseases, there are multiple interacting factors that cause your immune system to go awry, such as nutrition, environmental toxins, stress, inadequate sleep, etc. But for melanoma, the sun does appear to have a significant role—melanoma may signify too little of it!
Studies show melanoma mortality actually decreases after UV exposure. Additionally, melanoma lesions do not appear predominantly on sun-exposed skin, which is why sunscreens have proven ineffective in preventing it. Exposure to sunlight, particularly UVB, is protective against melanoma—or rather, the vitamin D your body produces in response to UVB radiation is protective. The following passage comes from The Lancet:
“Paradoxically, outdoor workers have a decreased risk of melanoma compared with indoor workers, suggesting that chronic sunlight exposure can have a protective effect.”
“There is solid descriptive, quantitative, and mechanistic proof that ultraviolet rays cause the main skin cancers (basal and squamous). They develop in pale, sun exposed skin, are related to degree of exposure and latitude, are fewer with avoidance and protection, are readily produced experimentally, and are the overwhelmingly predominant tumor in xeroderma pigmentosum, where DNA repair of ultraviolet light damage is impaired. None of these is found with melanoma.”
The bottom line is, by avoiding the sun, your risk for vitamin D deficiency skyrockets, which increases your odds of developing melanoma and a multitude of other diseases. The risks associated with insufficient vitamin D are far greater than those posed by basal cell or squamous cell carcinomas, which are fairly benign by comparison, as you’ll see by reading on.
Vitamin D Could Prevent 90 Percent of Breast Cancers
Theories linking vitamin D deficiency to cancer have been tested and confirmed in more than 200 epidemiological studies, and understanding of its physiological basis stems from more than 2,500 laboratory studies. In the above interview, GrassrootsHealth founder Carole Baggerly believes 90 percent of ordinary breast cancer is related to vitamin D deficiency. In fact, breast cancer has been described as a “vitamin D deficiency syndrome.” The way vitamin D interferes with breast cancer’s ability to spread is by affecting the structure of those cells—without adequate vitamin D, they fall apart and are forced to “overmultiply” in order to survive.
Previous research has shown that optimizing your vitamin D levels can reduce your risk for as many as 16 different types of cancer, including pancreatic, lung, ovarian, breast, prostate, and skin cancers. A study of menopausal women showed that maintaining vitamin D serum levels of 40 ng/ml lowers overall cancer risk by 77 percent.
Two recent papers in the journal Science Express shed light on how cancer might begin. A cancer cell can be created when unusual mutations occur in a small area of its DNA that controls and regulates its genes, as contrasted with mutations in the genes themselves. The mutations spur the cell to make telomerase. One of the functions of telomerase is to prevent telomere shortening, which leads to cell death. According to Harvard researchers, abundant telomerase is so important to cancers that it appears in nine out of ten of them.
In addition to being a strong cancer preventative, vitamin D is crucial for pregnant women and their babies, lowering the risk for preterm birth, low birth weight, and C-section. And sadly, 80 percent of pregnant women have inadequate vitamin D levels.
Low Vitamin D in Pregnancy May Increase Your Baby’s Risk for Multiple Sclerosis Later in Life
Sunshine is so important to your overall health that science is now finding a connection between the strength of your immune system and your birthday, called the “birth month effect.” If you were born in the spring, you are statistically more vulnerable to developing an autoimmune disease such as multiple sclerosis (MS), than if you were born in the fall.
Why would this be?
Some researchers suggest it’s related to a pregnant woman’s vitamin D levels during her baby’s gestation. April and May babies have been gestating during the colder, darker months, as opposed to November and December babies, who’ve been developing over the spring and summer. Now a study in JAMA Neurology shows this hunch may be correct, suggesting a mechanism related to thymic development. Another study suggests sun exposure and vitamin D may play roles in the CNS demyelination associated with MS.
And the sun can lift your mood! New research published in the American Journal of Preventive Medicine shows that Google searches for mental health related issues drop by 15 to 42 percent during the summer months, which could very well be related to the boost in vitamin D. Vitamin D deficiency is a known factor in cognitive impairment and dementia.
Practicing Safe Sunning
Vitamin D3 is an oil-soluble steroid hormone (the term “vitamin” is a misnomer) that forms when your skin is exposed to UVB radiation from the sun or a safe tanning bed. When UVB strikes the surface of your skin, your skin converts a cholesterol derivative into vitamin D3. It takes up to 48 hours for this D3 to be absorbed into your bloodstream to raise your vitamin D levels. Therefore, it’s important to avoid washing your skin with soap for 48 hours after sun exposure. In case you do develop a sunburn, immediately apply raw aloe vera, as it’s one of the best skin remedies.
As a general guideline, research by GrassrootsHealth suggests that adults need about 8,000 IUs per day to achieve a serum level of 40 ng/ml. If you opt for a vitamin D supplement, you also need to boost your intake of vitamin K2 through food and/or a supplement. How do you know if your vitamin D level is in the right range? The most important factor is having your vitamin D serum level tested every six months, as people vary widely in their response to ultraviolet exposure or oral D3 supplementation. Your goal is to reach a clinically relevant serum level of 50-70 ng/ml.
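As a rough back-of-the-envelope illustration of why retesting matters, the sketch below (a hypothetical simplification, not from GrassrootsHealth and not medical guidance) assumes an average response of about 100 IU per day for each 1 ng/ml of serum increase; since individual responses vary widely, any such estimate is only a starting point that a follow-up test must confirm:

```python
# Illustrative sketch only -- NOT medical guidance. The linear
# response rate below is an assumed average for illustration;
# real-world response to vitamin D3 varies widely between people,
# which is exactly why serum testing every six months is advised.

def estimate_extra_daily_iu(current_ng_ml, target_ng_ml, response_iu_per_ng=100):
    """Rough extra daily D3 intake (IU) to move serum 25(OH)D from
    current_ng_ml toward target_ng_ml, assuming ~100 IU/day raises
    the serum level by ~1 ng/ml (a hypothetical average)."""
    gap = max(0, target_ng_ml - current_ng_ml)
    return gap * response_iu_per_ng

# Example: someone testing at 25 ng/ml aiming for the low end of
# the 50-70 ng/ml range discussed above
print(estimate_extra_daily_iu(25, 50))  # → 2500
```

The point of the sketch is the feedback loop, not the numbers: you estimate, supplement, retest, and adjust, because the response coefficient differs from person to person.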
Overuse of Sunscreen May Turn You into a Melanoma Magnet
Following health officials’ advice to slather on sunscreen may increase your melanoma risk instead of decreasing it, which is certainly not what you want. Indeed, you never want to let yourself burn. However, if you practice safe sunning, you will avail yourself of all of the sun’s health benefits with none of the risk.
If you do use a sunscreen, please be careful about which product you choose as many sunscreen products contain chemicals you don’t want absorbed into your body. According to the Environmental Working Group’s 2012 Sunscreen Guide, about 75 percent of sunscreens contain potentially harmful ingredients, such as oxybenzone and retinyl palmitate. Avoid products with SPFs higher than 50, and make sure yours offers protection against both UVA and UVB rays.
Keep in mind SPF only protects against UVBs—but it’s the UVAs that increase your risk for skin cancer and are responsible for photoaging your skin. Recall that it’s the UVBs that stimulate your vitamin D production, so you don’t want to block out too many of them.
Using an “internal sunscreen” is an alternative to topical sunblock agents. Astaxanthin—a potent antioxidant—has been found to offer effective protection against sun damage when taken as a daily supplement, and a number of topical sunscreen products are also starting to include it as an ingredient to protect your skin from damage. As an alternative, you can cover up with lightweight clothing to protect yourself. Sometimes we forget about the simple things, like simply wearing a hat.
For the latest information about vitamin D, please visit our Vitamin D News and Information page.
How Vitamin D Performance Testing Can Help Optimize Your Health
A robust and growing body of research clearly shows that vitamin D is absolutely critical for good health and disease prevention. Vitamin D affects your DNA through vitamin D receptors (VDRs), which bind to specific locations of the human genome. Scientists have identified nearly 3,000 genes that are influenced by vitamin D levels, and vitamin D receptors have been found throughout the human body.
Is it any wonder then that no matter what disease or condition is investigated, vitamin D appears to play a crucial role? This is why I am so excited about the D*Action Project by GrassrootsHealth. It is showing how you can take action today on known science with a consensus of experts without waiting for institutional lethargy. It has shown how, by combining the science of measurement (of vitamin D levels) with the personal choice of taking action and the value of education about individual measures, one can truly be in charge of one’s own health.
In order to spread this health movement to more communities, the project needs your involvement. This is an ongoing campaign during the month of February, and will become an annual event.
To participate, simply purchase the D*Action Measurement Kit and follow the registration instructions included. (Please note that 100 percent of the proceeds from the kits go to fund the research project. I do not charge a single dime as a distributor of the test kits.)
As a participant, you agree to test your vitamin D levels twice a year during a five-year program, and share your health status to demonstrate the public health impact of this nutrient. There is a $65 fee every 6 months for your sponsorship of the project, which includes a test kit to be used at home, and electronic reports on your ongoing progress. You will get a follow-up email every six months reminding you “it’s time for your next test and health survey.”
Read the full article here: http://articles.mercola.com/sites/articles/archive/2013/07/01/vitamin-d-benefits.aspx
We Lost the War on Cancer – Review of Alternative Cancer Therapies
We have lost the war on cancer. At the beginning of the last century, one person in twenty would get cancer. In the 1940s it was one out of every sixteen people. In the 1970s it was one person out of ten. Today one person out of three gets cancer in the course of their life.
The cancer industry is probably the most prosperous business in the United States. In 2014, there will be an estimated 1,665,540 new cancer cases diagnosed and 585,720 cancer deaths in the US. $6 billion of tax-payer funds are cycled through various federal agencies for cancer research, such as the National Cancer Institute (NCI). The NCI states that the medical costs of cancer care are $125 billion, with a projected 39 percent increase to $173 billion by 2020.
The simple fact is that the cancer industry employs too many people and produces too much income to allow a cure to be found. All of the current research on cancer drugs is based on the premise that the cancer market will grow, not shrink.
In Part I, John Thomas explains why the current cancer industry prospers while treating cancer but cannot afford to cure it. In Part II, he surveys the various alternative cancer therapies that have been proven effective, but that are not approved by the FDA.
Read We Lost the War on Cancer – Review of Alternative Cancer Therapies on your mobile device!
British Poetry Revival
"The British Poetry Revival" is the general name given to a loose poetry movement in Britain that took place in the 1960s and 1970s. The revival was a modernist-inspired reaction to the Movement's more conservative approach to British poetry. The poets included Bill Griffiths, Allen Fisher, Iain Sinclair, Gilbert Adair, Lawrence Upton, Peter Finch, Ulli Freer, Gavin Selerie, Frances Presley, Elaine Randell, Robert Sheppard, Adrian Clarke, Clive Fencott, Maggie O'Sullivan, cris cheek, Tony Lopez and Denise Riley.
If the Movement poets looked to Thomas Hardy as a poetic model, the poets associated with the British Poetry Revival were more likely to look to modernist models, such as the American poets Ezra Pound and William Carlos Williams and British figures such as David Jones, Basil Bunting and Hugh MacDiarmid. Although these major British poets had effectively been written out of official histories of 20th century British poetry, by the beginning of the 1960s a number of younger poets were starting to explore poetic possibilities that the older writers had opened up.
These poets included Roy Fisher, Gael Turnbull, Ian Hamilton Finlay, Bob Cobbing, Jeff Nuttall, Tom Raworth, Michael Horovitz, Eric Mottram, Peter Finch, Edwin Morgan, Jim Burns, Elaine Feinstein, Lee Harwood, Dick Russell and Christopher Logue. Many of these poets joined Allen Ginsberg and an audience of 7,000 people at the Albert Hall International Poetry Incarnation on 11 June 1965 to create what was, effectively, the first British happening.
These poets provided a wide range of modes and models of how modernism could be integrated into British poetry. Fisher, also a professional jazz pianist, applied the lessons of William Carlos Williams' Paterson to his native Birmingham in his long poem City. Turnbull, who spent some time in the U. S., was also influenced by Williams. His fellow Scots Morgan and Finlay both worked with found, sound and visual poetry. Mottram, Nuttall, Horovitz and Burns were all close to the Beat generation writers. Mottram and Raworth were also influenced by the Black Mountain poets while Raworth and Harwood shared an interest in the poets of the New York School.
A number of publishing outlets for this new experimental poetry also began to spring up, including Turnbull's Migrant Press, Raworth's Matrix Press and Goliard Press, Horovitz's New Departures, Stuart Montgomery's Fulcrum Press, Tim Longville's Grosseteste Review, Galloping Dog Press and its Poetry Information magazine, Pig Press, Andrew Crozier and Peter Riley's The English Intelligencer, Crozier's Ferry Press, and Cobbing's Writers Forum. In addition to the poets of the revival, many of these presses and magazines also published avant-garde American and European poetry. The first anthology to present a wide-ranging selection of the new movement was Horovitz's Children of Albion: Poetry of the Underground in Britain (1969).
Thanks in no small part to Cobbing's Writers Forum and its associated writers' workshop, London was a hub for many young poets, including Bill Griffiths, Allen Fisher, Iain Sinclair, Gilbert Adair, Lawrence Upton, Peter Finch, Ulli Freer, Gavin Selerie, Frances Presley, Elaine Randell, Robert Sheppard, Adrian Clarke, Clive Fencott, Maggie O'Sullivan, cris cheek, Tony Lopez and Denise Riley.
Griffiths writes a poetry of dazzling surface and deep political commitment that incorporates such matter as his professional knowledge of Anglo-Saxon and his years as a Hells Angel. Both Sinclair and Fisher share a taste for William Blake and an interest in exploring the meaning of place, particularly London, which can be seen in Sinclair's Suicide Bridge and Lud Heat and Fisher's Place sequence of books. O'Sullivan explores a view of the poet as shaman in her work, while Randell and Riley were among the first British women poets to combine feminist concerns with experimental poetic practice. For more on Griffiths's poetry, see William Rowe (ed.), Bill Griffiths (Salt, 2007).
Griffiths started Pirate Press to publish work by himself and others. Allen Fisher set up Spanner for similar reasons, and Sinclair's early books were published by his own Albion Village Press, which also published work by Chris Torrance and Brian Catling. Book production has always been an important part of Revival practice. Many of these writers also participated enthusiastically in performance poetry events, both individually or in groups like Cobbing's Bird Yak and Konkrete Canticle.
Eric Mottram was a central figure on the London scene, both for his personal and professional knowledge of the Beat generation writers and the US poets linked with the New American Poetry more generally, and his abilities as a promoter and poet. In large part through Mottram's presence there, King's College London was another important site for the British Poetry Revival. Poets who attended there (a number of them also students taught by Mottram) included Gilbert Adair, Peter Barry, Sean Bonney, Hannah Bramness, Clive Bush, Ken Edwards, Bill Griffiths, Robert Gavin Hampson, Jeff Hilson and Will Rowe.
Northumbria and Northern England
By the early 1950s, Basil Bunting had returned to live in Newcastle and, in 1966, Fulcrum Press published Briggflatts, which is widely considered to be his masterpiece. A number of younger poets began to gather around Bunting. In 1963, Connie and Tom Pickard started a reading series and bookshop in the Morden Tower Book Room. The first reading was by Bunting, and Ginsberg, Robert Creeley, Lawrence Ferlinghetti and Gregory Corso all read there. They were soon joined by Richard Caddel, brought up in Kent but an honorary Northumbrian, Barry MacSweeney and Colin Simms.
Through Bunting, these younger writers became familiar with the work of the Objectivist poets. Specifically, Louis Zukofsky and Lorine Niedecker were to become important models for Caddel and Simms in their writing about the Northumbrian environment. Pickard and MacSweeney shared Bunting's interest in reviving Northumbrian vowel patterns and verbal music in poetry and all of these poets were influenced by the older poet's insistence on poetry as sounded speech rather than purely written text.
At Easter 1967, MacSweeney organised the Sparty Lea Poetry Festival. This was a ten-day session of reading, writing and discussion. The participants, including the Pickards, MacSweeney, Andrew Crozier, John James, John Temple, Pete Armstrong, Tim Longville, Peter Riley, John Hall, J. H. Prynne and Nick Waite, stayed in a group of four cottages in the village of Sparty Lea. This was to be a pivotal event in the British Poetry Revival, bringing together poets who were separated geographically and in terms of poetic influences and encouraging them to support and publish each other's work.
Although published by Writers Forum and Pirate Press, Geraldine Monk is very much a poet of the North of England. Like Maggie O'Sullivan, she writes for performance as much as for the page and there is an undercurrent of feminist concerns in her work. Other poets associated with the North of England included Paul Buck, Glenda George, and John Seed. Paul Buck and Glenda George for many years edited Curtains, a magazine instrumental in disseminating contemporary French poetry and philosophical/theoretical writing. John Seed had picked up on Objectivism while still in the North-East.
The Cambridge poets were a group centred around J. H. Prynne and included Andrew Crozier, John James, Douglas Oliver, Veronica Forrest-Thomson, Peter Riley, Tim Longville and John Riley. Prynne was influenced by Charles Olson and Crozier was partly responsible for Carl Rakosi's return to poetry in the 1960s. The New York school were also an important influence for many of the Cambridge poets - most obviously in the work of John James. The Grosseteste Review, which published these poets, was originally thought of as a kind of magazine of British Objectivism.
The Cambridge poets in general wrote in a cooler, more measured style than many of their London or Northumbrian peers (although Barry MacSweeney, for example, felt an affinity with them) and many taught at Cambridge University or at Anglia Polytechnic. There was also less emphasis on performance than there was among the London poets.
Wales and Scotland
In the 60s and early 70s Peter Finch, an associate of Bob Cobbing, ran the No Walls Poetry readings and the groundbreaking, inclusive magazine second aeon. He began Oriel Books in Cardiff in 1974 and the shop served as a focal point for young Welsh poets. However, some of the more experimental poets in Wales were not of Welsh origins. Two of the most important expatriate poets operating in Wales were John Freeman and Chris Torrance. Freeman is another British poet influenced by the Objectivists, and he has written on both George Oppen and Niedecker. Torrance has expressed his debt to David Jones. His ongoing Magic Door sequence is widely regarded as one of the major long poems to come out of the Revival.
In Scotland, Edwin Morgan, Ian Hamilton Finlay and Tom Leonard emerged as key individual poets during this time, each interested in, among other forms, sound and visual poetry. The viability of a wider, deeper experimental infrastructure in poetry was helped by the gallery, performance space and bookshop at the Third Eye Centre in Glasgow (later renamed the Centre for Contemporary Arts). Magazines such as Scottish International, "Chapman", and Duncan Glen's magazine Akros maintained links with the modernist legacy of the inter-war and post-war years while publishing contemporary poets; often, however, by mixing the avant-garde with aesthetically conservative texts.
"A treacherous assault on British poetry"
In 1971, a large number of the poets associated with the British Poetry Revival joined the dormant, if not moribund, Poetry Society and, in the elections, became the Poetry Society's new council. The Society had been traditionally hostile to modernist poetry, but under the new council this position was reversed. Eric Mottram was made editor of the society's magazine Poetry Review. Over the next six years, he edited twenty issues that featured most, if not all, of the key Revival poets and carried reviews of books and magazines from the wide range of small presses that had sprung up to publish them.
Nuttall and MacSweeney both served as chairperson of the society during this period and Bob Cobbing used the photocopying facilities in the basement of the society's building to produce Writers Forum books. Around this time, Cobbing, Finch and others established the Association of Little Presses (ALP) to promote and support small press publishers and organise book fairs at which they could sell their productions.
In the late 1970s, in response to the number of foreign poets being featured in Poetry Review, Mottram was removed as editor of the magazine; his editorial practices were described as "a treacherous assault on British poetry". At the same time, the Arts Council set up an inquiry that overturned the result of the Society's elections that had once more brought in a council dominated by those sympathetic to the Poetry Revival.
The 1980s and after
A number of younger poets, many of whom first found an outlet in Poetry Review under Mottram, began to emerge around the end of the 1970s. In London, Bill Griffiths, Ulli Freer, cris cheek, Lawrence Upton, Robert Gavin Hampson, Robert Sheppard, and Ken Edwards were among those who were to the fore. These, and others, met regularly at Gilbert Adair's Subvoicive reading series. Edwards ran Reality Studios, a magazine that grew out of Alembic, the magazine he had co-edited through the 1970s. Through Reality Studios, he helped introduce the L=A=N=G=U=A=G=E poets to a British readership. He also ran Reality Street Editions with Cambridge-based Wendy Mulford, which continues to be a major publisher of contemporary poetry. The London-based Angel Exhaust magazine brought many of the younger poets together - in particular, Adrian Clarke, Robert Sheppard and Andrew Duncan. In the Midlands, Tony Baker's Figs magazine focused more on the Objectivist and Bunting-inspired poetry of the Northumbrian school while introducing a number of new poets.
In 1988 an anthology called The New British Poetry was published. It featured a section on the Revival poets edited by Mottram and another on the younger poets edited by Edwards. In 1987, Crozier and Longville published their anthology A Various Art, which focused mainly on the Cambridge poets, and Iain Sinclair edited yet another anthology of Revival-related work Conductors of Chaos (1996). For an account of some of the work produced by these poets, see Robert Hampson and Peter Barry (eds.), The New British poetries: The Scope of the possible (Manchester University Press, 1993). In 1994 W. N. Herbert and Richard Price co-edited the anthology of Scottish Informationist poetry Contraflow on the SuperHighway (Gairfish and Southfields Press).
The anthology Conductors of Chaos featured another aspect of the Revival: the recovery of neglected British modernists of the generation after Bunting. Poets David Gascoyne, selected by Jeremy Reed; W. S. Graham, selected by Tony Lopez; David Jones, selected by Drew Milne; J. F. Hendry, selected by Andrew Crozier; and Nicholas Moore, selected by Peter Riley, were reappraised and returned to their rightful place in the history of 20th century British poetry. Another interesting development was the establishment of the British and Irish poetry discussion list by Richard Caddel. This continues to provide an international forum for discussion and the exchange of news on experimental British poetry. Much wider publication for Revival poetry was arranged via the USA. Caddel, together with Peter Middleton, edited a selection of new UK poetry for US readers in a special issue of Talisman (1996). With Peter Quartermain, Caddel also edited Other: British and Irish Poetry since 1970 (USA, 1999); whereas Keith Tuma's Anthology of Twentieth-Century British and Irish Poetry (Oxford University Press, USA, 2001) incorporates this poetry into a wider retrospective of the whole century.
Into the 1990s and beyond poets such as Johan de Wit, Sean Bonney, Jeff Hilson, and Piers Hugill have surfaced after direct involvement in the Cobbing-led Writers Forum workshop. An interesting sub-development of the workshop was the instigation of the Foro De Escritores workshop, in Santiago, Chile, run on similar aesthetic principles. This workshop has contributed to the development of Martin Gubbins, Andreas Aandwandter, and Martin Bakero, to name but a few. Those associated with the Barque Press (most obviously Andrea Brady and Keston Sutherland), and more recently Bad Press (in particular, Marianne Morris and Jow Lindsay), have made a similar impact via the Cambridge scene. Perdika Press in North London has been instrumental in bringing to wider attention contemporary Modernist writers such as Nicholas Potamitis, Mario Petrucci, Robert Vas Dias and Peter Brennan; the press was also responsible for the first publication in Britain of Bill Berkson. From Scotland, Peter Manson, who had co-edited the magazine Object Permanence in the mid-1990s, Drew Milne, editor of Parataxis, David Kinloch and Richard Price (previously editors of Verse and Southfields) also emerged more fully as poets in their own right. New writings have arisen from the involvement of cris cheek, Bridgid Mcleer, and Alaric Sumner, under the direction of Caroline Bergvall and John Hall through the Performance Writing programme at Dartington College of Arts, including Kirsten Lavers, Andy Smith, and Chris Paul; from the involvement of Redell Olsen in the MA in Poetic Practice at Royal Holloway, University of London, including Becky Cremin, Frances Kruk, Ryan Ormond, Sophie Robinson, John Sparrow and Stephen Willey; and from Keith Jebb at University of Bedfordshire's Creative Writing programme, including Alyson Torns and Allison Boast.
- For a detailed account of these events, see Peter Barry, The Battle of Earl's Court (Manchester University Press, 2007).
- Talisman: a Journal of Contemporary Poetry and Poetics 16 (1996): 110-173.
- List of related links
- The English Intelligencer Archive
- The Morden Tower
- Archives of the British and Irish poetry discussion list
- The Life and Works of Jeff Nuttall
- Piers Hugill on British Poetry since 1977 | <urn:uuid:246e0e68-ffa3-4b18-9f5f-9a19410f6afc> | CC-MAIN-2015-35 | https://en.wikipedia.org/wiki/British_Poetry_Revival | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00166-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.954269 | 3,610 | 2.640625 | 3 |
July 28, 2005 Volume 4 Number 15
1. View from the Field
2. Pea Aphids Come in Red and Green: Ever Wonder
3. Do You Know How to Scout for Corn Rootworm?
4. Downy Mildew in Soybean
5. European Corn Borer, Armyworm and Corn Earworm
6. Soybean Rust Update
7. Soybean Aphid Management Guidelines
8. Clipboard Checklist
9. NYS Growing Degree Days
10. Contact Information
View From the Field
Soybeans: This week at the Cornell University
Farm in Valatie soybean aphids ranged from 10 to 200 per plant.
The average was about 75 aphids per plant. There were several
lady beetles (adults and larvae) feeding on the aphids. There
were several lady beetle pupae on the leaves also. There were
also green lacewing adults and larvae in the field. Here are
a few pictures of the good beasts:
Alfalfa: Alfalfa fields at the Cornell University Farm in
Valatie were far above threshold for potato leafhopper. In three
samples I counted 255 potato leafhopper nymphs and adults. The
alfalfa was showing the traditional yellowing of the leaves
that potato leafhoppers can cause. Here is a picture of the alfalfa:
I've seen downy mildew in all the soybean fields I've visited
(Wayne, Ontario, Seneca, and Cayuga Counties).
Nancy Glazier (NWNY Team) continues to observe soybean aphids
and mites, particularly on drought-stressed beans, as she scouts
the Orleans County soybean TAg program. This week she also saw phytophthora
root rot on beans in an area where there is heavy soil; it was
indicated that there was standing water there this spring.
Reports from wheat harvest have been favorable so far. Sprouting
has not been of concern.
Jeff Miller and Kevin Ganoe are both reporting soybean fields
with soybean aphids approaching 250+ per plant. Yellow alert
for these fields.... time to increase SBA monitoring frequency
to see if populations maintain, increase or decrease. Natural
enemies are present but variable in number, some fields with
winged aphid forms.
Mike Hunter reports fall and armyworms in a corn field causing
damage. Mike says the field had an obvious giant foxtail and
large crabgrass problem that attracted the moths to the site.
Pea aphids come in red and green… Ever wonder why?
Keith Waldron, NYS IPM
NY alfalfa fields are home to resident populations
of pea aphids. Close observation will detect red and green color
morphs of this aphid species in the same field. What’s that
about?
Because the aphids reproduce parthenogenetically,
the color morphs remain distinct through the summer months.
Pea aphids experience high levels of parasitism by the wasp
Aphidius ervi and heavy predation by several predators,
including ladybird beetles, especially Coccinella septempunctata.
Both the parasitoid and the predator can have a major impact
on pea aphid populations and may be important selective agents.
Research by John Losey (an entomologist now at Cornell) determined
that for these aphids there was an advantage of one color over
another, but it varied… Think predators, think escape…
Losey and his colleagues sampled 100 alfalfa stems in eight
similar locations every 6 days. They counted the number of red
and green aphids, ladybugs, and the mummified remains of aphids
parasitized by wasp larvae. They discovered that the predators
had distinct preferences--ladybugs being partial to red aphids,
and wasps to green aphids.
The result is that the two predators keep the colors balanced.
With ladybugs eating the red aphids and wasp larvae parasitizing
the green, neither form dominates the population and both survive.
Believe it or don’t…. For more information see: Nature 388, 269-272 (17 July 1997).
Do you know how to scout for Corn Rootworm? Ken Wise, NYS IPM
You will need to scout all corn fields that
will be kept in corn next year from emergence of the tassel
until pollination is complete. Pollination occurs for three
weeks and monitoring takes about 20 minutes per field. You will
need to monitor each field once a week until you reach a threshold
or until pollination is over. Look for gravid (i.e., mature,
egg-laying) corn rootworm female beetles--the ones that, when
you squeeze their bodies, release white eggs from their posteriors.
Here’s how you scout:
Are female beetles present? Mature and capable of egg
laying? Conduct the squeeze test (see above) to determine
if they are ready to lay eggs.
Approach a corn plant carefully because the beetles
will fly off if they are disturbed too much.
Grasp the silk with one hand.
Count the beetles on the entire plant.
Start counting at the top working down.
Gently pull leaves away from the stalk so you count
any beetles that may be hiding in the whorls.
For each corn plant monitored, record the total number
of beetles observed. See the sequential sampling chart below.
Since western corn rootworms are potentially more damaging
than their northern cousins, count each western (yellow
striped) beetle observed as “one” and each northern (green
type) as “1/2”.
Check several plants at random (not next to each other!)
in several parts of the field.
Continue sampling at seven-day intervals until the ear
silks are brown, approximately 3 weeks after tassels are
first visible, pollination is complete or an above threshold
number of beetles are found.
Using the Sequential Sampling Card for Corn Rootworm (for
fields with uniform physiological crop age; for variable-age
fields see the CU Guide for Field Crop Management, www.fieldcrops.org)
Keep a running total (RT) of the number of corn rootworm
beetles you have counted on each plant. Each northern corn
rootworm has half the value of each western corn rootworm.
The western corn rootworm does twice as much damage to corn
as the northern does. So if you count 3 westerns and 4
northerns (2 western equivalents) on a plant you would have
a total of 5 beetles.
If the number of corn rootworm beetles observed is smaller
than the “N” (“Not at threshold”) number, stop and scout
7 days later.
If the number of corn rootworm beetles observed is larger
than the “T” (“At threshold” or “Treat”) number, then you
need to manage rootworms next year.
If the number of corn rootworm beetles observed falls
between “N” and “T”, continue sampling additional plants
until you finally go over or under.
In a very low or very high rootworm population a sampling
decision can be made by sampling as few as 3 to 8 plants.
For moderate populations, more samples may be necessary to reach a decision.
Sequential Sampling for Corn Rootworm
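The decision rule above is a classic sequential sampling procedure and can be sketched in code. Since the actual sampling-card stop values are not reproduced in this report, the N/T stop lines below are hypothetical placeholders for illustration only, not the real card numbers:

```python
# Sketch of the sequential sampling decision rule for corn rootworm beetles.
# The real N ("not at threshold") and T ("treat") stop values come from the
# sampling card, which is not reproduced here -- the numbers below are
# HYPOTHETICAL placeholders for illustration only.
HYPOTHETICAL_STOP_LINES = {
    # plants sampled: (N, T)
    3: (1, 20),
    4: (3, 23),
    5: (5, 26),
    6: (7, 29),
    7: (9, 32),
    8: (11, 35),
}

def beetle_value(westerns, northerns):
    """Each western counts as 1; each northern as 1/2 (half the damage)."""
    return westerns + 0.5 * northerns

def sequential_decision(per_plant_values, stop_lines):
    """Keep a running total (RT) plant by plant; stop as soon as RT drops
    below N (rescout in 7 days) or exceeds T (manage rootworms next year)."""
    rt = 0.0
    for n, value in enumerate(per_plant_values, start=1):
        rt += value
        if n in stop_lines:
            lower, upper = stop_lines[n]
            if rt < lower:
                return "stop: not at threshold, rescout in 7 days"
            if rt > upper:
                return "at threshold: plan rootworm management next year"
    return "keep sampling"

# Worked example from the text: 3 westerns + 4 northerns = 3 + 2 = 5 beetles.
print(beetle_value(3, 4))  # 5.0
```

The weighting matches the worked example in the text: 3 westerns plus 4 northerns contribute 3 + 4 × 0.5 = 5 beetles to the running total.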
Downy Mildew in Soybean
Downy mildew is showing up on soybean foliage
in New York State. I’ve observed this fungal disease frequently
on leaves in the upper canopy. Downy mildew can be identified
by the pale yellow or greenish irregular areas on upper leaf
surfaces. These spots show through to the lower leaf surface,
where the affected areas are grayish. Under humid conditions,
grey tufts of the fungus are apparent on these spots on the
underside of leaves (see photo below). Soybean productivity
is generally not affected by downy mildew.
On a severely infected plant, downy mildew also can affect
soybean seed. While pods show no symptoms, seeds inside can
be covered with white fungal mycelia. If this infected seed
were planted, stunted seedlings with mottled leaves would result.
The fungus that causes downy mildew can survive on infected
leaves and seed. A key management strategy for downy mildew
is to not plant contaminated seed. Rotation to a crop other
than soybean or tillage that deeply buries infected crop residue
effectively controls downy mildew.
European Corn Borer, Fall Armyworm, Corn Earworm Trap Catches
Keith Waldron, NYS IPM
Two calls this week about potential
armyworm / fall armyworm issues in corn and mixed alfalfa grass
stands. Positive identification still pending. Either species
could be possible this time of year. Trapping networks reporting
such data indicate low FAW counts. ECB counts have been moderate.
If you are looking for the latest information on trap catches
for these lepidopteran pests in NY, check the Sweet Corn Pheromone Trap
Network for Western NY, Report for 7/12/05
This trapping network data is gathered by Abby Seaman of the NYS IPM Program.
For information on trap catches for these insects in the
states to our south go to the
Soybean Rust Update
Keith Waldron, NYS IPM
Soybean rust has been reported in Florida, some adjacent
counties in Alabama and Georgia as well as Mississippi. The
most recent finds in Mississippi and Georgia were in sentinel
plots. The most recent find in Alabama was the first detection
in a commercial field in 2005. Scouting and spore trapping continues
throughout the soybean production areas of the U.S. Scouting
of sentinel plots in New York State continues this week and
to date no soybean rust has been found. The risk of soybean
rust infection in New York is currently considered to be low
and no fungicide application for soybean rust is advocated at
this time. Septoria brown spot is the most prevalent foliar
disease in all 10 of the research protocol sentinel plots in
New York State over the past two weeks. Bacterial blight has
also been observed and soybean aphid populations are increasing.
Growth stages in New York State sentinel plots range from V5
to R2 stages and several are flowering. (Last updated 7/27/05)
Soybean Aphid Management Guidelines
Expect soybean aphid populations to rise over
the next week or so as “cooler” temperatures (in the 80’s F)
occur over our region.
Guidelines for management? A short review of the guidelines follows,
adapted from: http://www.plantpath.wisc.edu/soyhealth/aglycine.htm
Pay particular attention to late-planted fields, or fields
under moisture stress. Examine the entire plant, particularly
the new growth at the top and side branches.
Use an action threshold of 250 aphids per plant if
populations are actively increasing. This action threshold
should be based on an average of 250 aphids per plant
over 20-30 plants sampled throughout the field. Regular
field visits are required to determine if soybean aphid populations are increasing.
In replicated research trials in the Midwest, this threshold
has worked well in R1 (right at first bloom) to R4 soybeans.
This threshold incorporates an approximate 7-day lead-time between
scouting and treatment to make spray arrangements or handle
weather delays. Spraying at or beyond R6 has not been documented
to increase yield.
Like more discussion? See
“Soybean Aphid Management Recommendations (Consensus recommendations
developed by Ontario and U.S. researchers, Jan. 2004)”
Research to enhance Soybean aphid management guidelines continues.
Data (SBA counts, growth stage, yield, etc.) from Treat No
treat fields is in “short supply”. If you have a situation that
will be treated and have the ability to collect information
including yield checks please consider doing trial. More SBA
data will help us all. For more information contact
Keith Waldron, NYS IPM
• Maintain crop production
activity records by field, including harvest date, pesticides
used, nutrient inputs including manure, etc.
Alfalfa & Hay:
• Continue monitoring for potato leafhopper- harvest early
or spray on basis of need.
• Monitor for diseases record information on type and location
for future cropping decisions.
• Monitor for foliar and stalk diseases, nitrogen and other
nutrient deficiencies, European corn borer, weeds.
• Monitor corn rootworm adults at silking.
• Monitor for soybean aphid, soybean rust, foliar diseases.
• Continue livestock facility sanitation management (manure,
feed bunks and storage areas, waterers, etc.). Cleaner, less
• Mow around facilities to minimize rodent habitat.
• Monitor young stock for cattle lice and mange mites
• Check condition of pastures and animals on pastures
- Evaluate need for face fly and stable fly control measures
- Check and clean pasture water supplies.
NYS Growing Degree Days
Keith Waldron, NYS IPM
March 1 - July 27, 2005
Base 50 F
Julie Stavisky: IPM Area Educator, Livestock
and Field Crops, Western NY
Phone: (315) 331-8415
Fax: (315) 331-8411
Keith Waldron: NYS Livestock and Field Crops IPM Coordinator
Phone: (315) 787 - 2432
Fax: (315) 787-2360
Ken Wise: Eastern NYS IPM Area Educator: Field Crops and Livestock
Phone: (518) 434-1690
Fax: (518) 426-3316 | <urn:uuid:ebcdf46a-c48d-474d-9fde-95ef71e2fc26> | CC-MAIN-2015-35 | http://www.nysipm.cornell.edu/fieldcrops/tag/pestrpt/pestrpt05/7_28_05.asp | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00343-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.87508 | 2,991 | 2.546875 | 3 |
Signs of Power
The Rise of Cultural Complexity in the Southeast
Publication Year: 2004
Traces the sources of power and large-scale organization of prehistoric peoples among Archaic societies.
By focusing on the first instances of mound building, pottery making, fancy polished stone and bone, as well as specialized chipped stone, artifacts, and their widespread exchange, this book explores the sources of power and organization among Archaic societies. It investigates the origins of these technologies and their effects on long-term (evolutionary) and short-term (historical) change.
The characteristics of first origins in social complexity belong to 5,000- to 6,000-year-old Archaic groups who inhabited the southeastern United States. In Signs of Power, regional specialists identify the conditions, causes, and consequences that define organization and social complexity in societies. Often termed "big mound power," these considerations include the role of demography, kinship, and ecology in sociocultural change; the meaning of geometry and design in sacred groupings; the degree of advancement in stone tool technologies; and differentials in shell ring sizes that reflect social inequality.
Published by: The University of Alabama Press
Download PDF (34.8 KB)
Download PDF (44.1 KB)
Download PDF (25.0 KB)
Preface and Acknowledgments
Download PDF (54.9 KB)
This book was born on a mustard-smeared napkin in a foyer of a New Orleans hotel. It all started innocently enough—Joe Saunders, Bob Connolly, Phil Carr, and Jon Gibson brainstorming about a symposium we wanted to offer to the 1999 Southeastern Archaeological Conference in Pensacola. Well, maybe it was not so innocent: scribbling covered both sides of the napkin. ...
1 Big Mounds, Big Rings, Big Power
Download PDF (125.6 KB)
Mounds have quickened the pulse of American antiquarians and archaeologists for generations. They still do. Who among you could stay calm after hacking a trail through a bottomland-hardwood jungle and suddenly realizing that the incline you’re struggling to climb is no natural levee but a lost Indian mound? Or stand atop a mound on a starlit night with a handful of fellow archaeologists and keep from getting caught up...
2 Late Archaic Fisher-Foragers in the Apalachicola– Lower Chattahoochee Valley, Northwest Florida– South Georgia/Alabama
Download PDF (155.1 KB)
The archaeological constructs of the Late Archaic and prehistoric cultural complexity are examined here with a discussion of data from the Apalachicola–lower Chattahoochee River valley in northwest Florida, southwest Georgia, and southeast Alabama (Figure 2.1). The Apalachicola is the largest Florida river, originating at the confluence of the Flint and Chattahoochee Rivers, at the Florida-Georgia border, and ®owing southward to the Gulf of Mexico. ...
3 Measuring Shell Rings for Social Inequality
Download PDF (632.1 KB)
Two basic interpretations of shell rings vie for archaeological acceptance. One posits that rings are the daily subsistence refuse incidentally tossed behind or underfoot of households (Trinkley 1997; Waring and Larson 1968:273; cf. White, this volume). The other suggests that shell rings are among the earliest examples of large-scale public architecture in North America, intentionally built for ritual and ceremony (Cable 1997; Waring 1968a:243). ...
4 Regional-Scale Interaction Networks and the Emergence of Cultural Complexity along the Northern Margins of the Southeast
Download PDF (150.8 KB)
The emergence of culturally complex hunter-gatherer societies has been a topic of great anthropological interest for at least the past 15 years (Arnold, ed. 1996; Johnson and Earle 1987; Price and Brown 1985). In 1985, James Brown published a seminal article on the emergence of cultural complexity in the prehistoric American Midwest. He suggested several indicators of emerging hunter-gatherer complexity including the appearance of permanent habitations, food storage facilities, plant domestication, cemeteries, and interregional exchange. ...
5 The Green River in Comparison to the Lower Mississippi Valley during the Archaic: To Build Mounds or Not to Build Mounds?
Download PDF (109.0 KB)
In the mid-latitude regions of North America along the Illinois, Ohio, and Tennessee river systems, Archaic period hunters and gatherers created extensive, deeply stratified middens exemplified by sites such as Koster and Black Earth in Illinois, Indian Knoll in Kentucky, and Eva in Tennessee. Many archaeologists interpret these sites as evidence of increased sedentary behavior and complex social interaction
6 Cultural Complexity in the Middle Archaic of Mississippi
Download PDF (149.1 KB)
Mississippi is often thought of as a poor state. From an economic standpoint, this is true. However, if one were to argue from a cultural perspective, Mississippi would have to be thought of as a very wealthy state. The musical heritage of this relatively small state is second to none. Mississippi is the birthplace of country music, blues, and rock and roll. The literary heritage is also without peer. ...
7 The Burkett Site (23MI20): Implications for Cultural Complexity and Origins
Download PDF (245.9 KB)
The nature and complexity of Late Archaic O’Bryan Ridge manifestations and their relationship to Poverty Point culture in the Lower Mississippi Valley have been controversial topics for more than half a century. When baked clay objects and other trappings of material culture similar to those in Poverty Point assemblages were first identified at sites in the Cairo Lowlands of southeastern Missouri, Stephen Williams characterized them as a regional variant of Poverty Point (S. Williams 1954). ...
8 Poverty Point Chipped-Stone Tool Raw Materials: Inferring Social and Economic Strategies
Download PDF (153.9 KB)
It is easy to become awed by the Poverty Point site located in northeast Louisiana (Figure 1.1). Poverty Point was occupied by hunter-gatherers but included a built landscape with a geometric layout suggestive of master planning (Clark, this volume; Gibson 1973:69, 1987:19–22). Additionally, an interesting array of material culture indicative of intense and wide-scale trade and significant production activities was present. ...
9 Are We Fixing to Make the Same Mistake Again?
Download PDF (167.8 KB)
The identification of mounds dating to ca. 5000–6000 B.P. has required archaeologists to rethink the process of social evolution. The existence of Archaic mounds provides us with one of those rare research opportunities of a win-win situation. Archaic period mounds are significant if they were constructed by societies with social inequality, and they are equally significant if they were constructed by egalitarian cultures. ...
10 Surrounding the Sacred: Geometry and Design of Early Mound Groups as Meaning and Function
Download PDF (829.3 KB)
Squier and Davis’s (1848) fabulous study of early mound groups ranks as the best early archaeological project in the New World; even today, the data presented and preserved are unsurpassed. Yet, a century and a half after the fact, the early promise of their study remains unrealized. In this essay I revisit their inferences that early mound builders had a standard of measurement, geometry, and engineering skills for planning sites. ...
11 Crossing the Symbolic Rubicon in the Southeast
Download PDF (181.1 KB)
With the discovery of earthen mounds dating to the sixth millennium before present in the American Southeast, the enduring anthropological question of the emergence of cultural complexity returns to an unusual setting. Although the Poverty Point complex of northeast Louisiana once garnered its share of attention as regards emergent complexity (Ford and Webb 1956; Gibson 1974), recent archaeological discourse over its genesis and organization has downplayed the level of sociopolitical development attending mound construction and long-distance exchange...
12 Explaining Sociopolitical Complexity in the Foraging Adaptations of the Southeastern United States: The Roles of Demography, Kinship, and Ecology in Sociocultural Evolution
Download PDF (193.1 KB)
With the discovery of the Watson Brake mound complex in Louisiana (Saunders et al. 1997), archaeologists have had to reevaluate causal factors in the rise of sociocultural complexity in North America. Previously, archaeologists have been strongly influenced by the stage concept of cultural development that sees the rise of sociopolitical complexity as a series of gradual, linear, steplike developments culminating in the Mississippian Tradition...
13 The Power of Beneficent Obligation in First Mound– Building Societies
Download PDF (155.1 KB)
Mound building began in the Lower Mississippi Valley and Florida more than fifty-five hundred years ago. Some mounds were large, and sometimes they were strung together in arrangements that lead us to think the unthinkable. Images of mounds as territorial and identity markers, as cosmic sociograms and creation metaphors, and even as massive earthen calendars aligned with the stars and moon creep into our consciousness (Byers 1998; Charles and Buikstra 1983...
14 Archaic Mounds and the Archaeology of Southeastern Tribal Societies
Download PDF (258.1 KB)
The recognition a decade ago that Southeastern societies engaged in complex shell and earthen mound building more than 5,000 years ago is revolutionizing our thinking about the archaeology of the region. In this chapter I discuss some of the implications of this research and where it will take us in the years to come.1 In brief, the discovery of Archaic mounds has forced us to confront head-on how tribal societies operate...
15 Old Mounds, Ancient Hunter-Gatherers, and Modern Archaeologists
Download PDF (146.1 KB)
When asked to be a discussant of the symposium that led to this book, I jumped at the chance. The papers promised to be informative and provocative, but I also had another interest in the symposium. I feel research on the hunting-and-gathering societies of the southern Eastern Woodlands is likely to gain momentum in the near future, and these essays can play a big part in defining the trajectory of that work. ...
Download PDF (348.1 KB)
Download PDF (37.3 KB)
Download PDF (124.3 KB)
Publication Year: 2004 | <urn:uuid:6b0f0ab9-8419-4618-a726-3b816555b024> | CC-MAIN-2015-35 | http://muse.jhu.edu/books/9780817382797 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066017.21/warc/CC-MAIN-20150827025426-00216-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.917904 | 2,182 | 2.6875 | 3 |
Which organs constitute the urinary system?
The kidneys, ureters, urinary bladder, and urethra are the components of the urinary system.
Why are the kidneys said to be retroperitoneal?
The kidneys are retroperitoneal because they are posterior to the peritoneum.
What structures pass through the renal hilum?
Blood vessels, lymphatic vessels, nerves, and a ureter pass through the renal hilum.
What volume of blood enters the renal arteries per minute?
About 1200 mL of blood enters the renal arteries each minute.
What are the basic differences between cortical and juxtamedullary nephrons?
Cortical nephrons have glomeruli in the superficial renal cortex, and their short loops of Henle penetrate only into the superficial renal medulla. Juxtamedullary nephrons have glomeruli deep in the renal cortex, and their long loops of Henle extend through the renal medulla nearly to the renal papilla.
When cells of the renal tubules secrete the drug penicillin, is the drug being added to or removed from the bloodstream?
Secreted penicillin is being removed from the bloodstream.
Which part of the filtration membrane prevents red blood cells from entering the capsular space?
Endothelial fenestrations (pores) in glomerular capillaries are too small for red blood cells to pass through.
Suppose a tumor is pressing on and obstructing the right ureter. What effect might this have on CHP and thus on NFP in the right kidney? Would the left kidney also be affected?
Obstruction of the right ureter would increase CHP and thus decrease NFP in the right kidney; the obstruction would have no effect on the left kidney.
Why is tubuloglomerular feedback termed autoregulation?
Auto- means self; tubuloglomerular feedback is an example of autoregulation because it takes place entirely within the kidneys.
What is the main function of the tight junctions between tubule cells?
The tight junctions between tubule cells form a barrier that prevents diffusion of transporter, channel, and pump proteins between the apical and basolateral membranes.
How does filtered glucose enter and leave a PCT cell?
Glucose enters a PCT cell via a sodium ion-glucose symporter in the apical membrane and leaves via facilitated diffusion through the basolateral membrane.
Which step in sodium ion movement is promoted by the electrochemical gradient?
The electrochemical gradient promotes movement of sodium ions into the tubule cell through the apical membrane antiporters.
By what mechanism is water reabsorbed from tubular fluid?
Reabsorption of the solutes creates an osmotic gradient that promotes the reabsorption of water via osmosis.
Why is the sodium ion-potassium ion-chloride ion symporter considered secondary active transport?
This is considered secondary active transport because the symporter uses the energy stored in the concentration gradient of sodium ions between extracellular fluid and the cytosol.
Does water reabsorption accompany ion reabsorption in the thick ascending limb of the loop of Henle?
No water is reabsorbed here because the thick ascending limb of the loop of Henle is virtually impermeable to water.
Which hormone stimulates reabsorption and secretion by principal cells, and how does this hormone exert its effect?
In principal cells, aldosterone stimulates secretion of potassium ions and reabsorption of sodium ions by increasing the activity of sodium-potassium pumps and number of leakage channels for sodium ions and potassium ions.
In addition to ADH, which other hormones contribute to the regulation of water reabsorption?
Aldosterone and atrial natriuretic peptide influence renal water reabsorption along with ADH.
Which portions of the renal tubule and collecting duct reabsorb more solutes than water to produce dilute urine?
Dilute urine is produced when the thick ascending limb of the loop of Henle, the distal convoluted tubule, and the collecting duct reabsorb more solutes than water.
Which solutes are the main contributors to the high osmolarity of interstitial fluid in the renal medulla?
The high osmolarity of interstitial fluid in the renal medulla is due mainly to sodium ions, chloride ions, and urea.
In which segments of the nephron and collecting duct does secretion occur?
Secretion occurs in the proximal convoluted tubule, the loop of Henle, and the collecting duct.
What is a lack of voluntary control of micturition called?
Lack of voluntary control over micturition is termed urinary incontinence.
What are the three subdivisions of the male urethra?
The three subdivisions of the male urethra are the prostatic urethra, membranous urethra, and spongy urethra.
When do the kidneys begin to develop?
The kidneys start to form during the third week of development.
What are the major functions of the kidney?
Regulation of blood ionic composition, regulation of blood pH, regulation of blood volume, regulation of blood pressure, maintenance of blood osmolarity, production of hormones, regulation of blood glucose level, and excretion of wastes and foreign substances.
What are the waste products normally excreted by the kidneys?
Urea, creatinine, bilirubin, and ammonia.
This is smooth dense irregular connective tissue that is continuous with the outer coat of the ureter.
Which is the order of blood flow through the kidneys?
Renal artery > segmental arteries > interlobar arteries > arcuate arteries > interlobular arteries > afferent arterioles > glomerular capillaries > efferent arterioles > peritubular capillaries > interlobular veins > arcuate veins > interlobar veins > renal vein
Which is the order of filtrate flow?
Glomerular capsule, proximal convoluted tubule (PCT), loop of Henle, distal convoluted tubule (DCT), collecting duct
This is a nephron process that results in a substance in blood entering the already formed filtrate.
This layer of filtration membrane is composed of collagen fibers and proteoglycans in a glycoprotein matrix.
This occurs when stretching triggers contraction of smooth muscle walls in afferent arterioles.
This occurs when a substance passes from the fluid in the tubular lumen through the apical membrane, across the cytosol, and then into the interstitial fluid.
What are the ways in which angiotensin II affects the kidneys?
It can decrease GFR, it enhances reabsorption of certain ions, and it stimulates the release of aldosterone.
Increased secretion of hydrogen ions would result in a(n) ______________ of blood ____________?
Increased secretion of aldosterone would result in a(n) ______________ of blood ____________?
This layer of the ureter is composed of connective tissue, collagen and elastic fibers.
An increase in permeability of the filtration membrane due to disease, injury, or irritation of kidney cells by substances such as bacterial toxins, ether, or heavy metals indicates which condition?
Stress, causing excessive amounts of epinephrine secretion which stimulates glycogen breakdown, indicates which condition? This condition can also indicate diabetes mellitus.
Excessive urine concentration of a normal breakdown product of hemoglobin, caused by pernicious anemia, infectious hepatitis, jaundice or cirrhosis, indicates which condition?
These are tiny masses of material, hardened in the lumen of the urinary tubule and are flushed out when filtrate builds up behind them:
If the urinary excretion rate of a drug such as penicillin is greater than the rate at which it is filtered at the glomerulus, how else is it getting into the urine?
How does the urinary system help the nervous system?
The kidneys perform gluconeogenesis, providing glucose for neurons.
Which condition is characterized by elevated blood levels of cholesterol, phospholipids and triglycerides with depressed blood levels of albumin?
True or False: In the kidneys, the countercurrent mechanism involves the interaction between the flow of filtrate through the loop of Henle of the juxtamedullary nephrons (the countercurrent multiplier) and the flow of blood through the limbs of adjacent blood vessels (the countercurrent exchanger). This relationship establishes and maintains an osmotic gradient extending from the cortex through the depths of the medulla that allows the kidneys to vary urine concentration dramatically.
Place the following in correct sequence from the formation of a drop of urine to its elimination from the body.
1. major calyx
2. minor calyx
6. collecting duct
3, 6, 2, 1, 5, 4
True or False: In the absence of hormones, the distal tubule and collecting ducts are relatively impermeable to water.
True or False: Ridding the body of bicarbonate ions is an importance feature of tubular secretion.
True or False: Obligatory water reabsorption involves the movement of water along an osmotic gradient.
The juxtaglomerular apparatus is responsible for ________.
Regulating the rate of filtrate formation and controlling systemic blood pressure.
What are the most important hormone regulators of electrolyte reabsorption and secretion?
Angiotensin II and Aldosterone
The mechanism that establishes the medullary osmotic gradient depends most on the permeability properties of the ________.
Loop of Henle
A disease caused by inadequate secretion of antidiuretic hormone (ADH) by the pituitary gland with symptoms of polyuria is ________.
The glomerulus differs from other capillaries in the body in that it ________.
Is drained by an efferent arteriole.
The factor favoring filtrate formation at the glomerulus is the ________.
Glomerular Hydrostatic Pressure | <urn:uuid:46a7a430-cc9f-44c8-a227-c49b768178c4> | CC-MAIN-2015-35 | https://quizlet.com/22061049/chapter-26-the-urinary-system-flash-cards/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00339-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.870631 | 2,141 | 3.03125 | 3 |
Pandemic Flu and the U.S. Food Industry
By Regina Phelps, CEM, RN, BSN, MPA
No matter where in the world an influenza pandemic starts, the U.S. food supply will be impacted very quickly, and the repercussions to the industry could be severe and long-lasting. I recently moderated a roundtable discussion where professionals from six top food service and agriculture companies discussed the intricacies of pandemic planning. Based on the discussion, it’s clear that executives in the food industry are engaged in pandemic preparedness. This paper will address the current threat and the rationale for planning; how the food and agriculture industry could be affected by an influenza pandemic; and a strategic approach to planning. We begin with putting the pandemic threat in perspective.
Defining the Threat
Whether it’s a widespread Salmonella outbreak, a debate about food versus biofuels, or a struggling economy’s drain on retail sales, today’s food safety professionals have no shortage of pressing concerns with which to contend. As a result, preparing for an influenza pandemic may not be at the top of the priority list, but according to world health experts, it should be.
A flu pandemic occurs when a novel strain of influenza that humans have never before been exposed to spreads efficiently through the population, eventually infecting people worldwide. Three pandemics have occurred in the past century, and health officials continue to emphasize that the risk of the next pandemic remains real. In a Reuters article about a United Nations (UN) news conference in June of this year, David Nabarro, the UN’s influenza coordinator, was quoted as saying that the threat of a pandemic had not receded: "We are anticipating that there will be another pandemic at some time…the probability is still there and it hasn’t changed."
Earlier this year, Michael Leavitt, Secretary of the U.S. Department of Health and Human Services (HHS), stated in Pandemic Update V, available on PandemicFlu.gov, that "the media buzz has died down, but the 'bird flu' virus has not." The H5N1 avian flu virus, which is considered by health experts to be the most likely culprit of the next pandemic, continues to spread and become entrenched in bird populations in more countries. According to the World Health Organization (WHO), there have been 385 human cases of H5N1 since 2003, 243 of them fatal. Five countries have also documented human-to-human transmission—in other words, a human infecting another human with the deadly H5N1 virus.
How the Food Industry Will Be Affected
Once a pandemic begins, the nation’s entire food supply chain will be hugely disrupted—everything from manufacturing to packaging to retailing. The availability of goods will be unpredictable, and in the worst-case scenario, some products may be completely unavailable. The transportation industry will be grappling with problems of its own during a pandemic. Most notably, keeping enough healthy drivers on the road will be an ongoing struggle, meaning shipments and deliveries of goods could be delayed. In addition, borders will most likely shut down, leading to travel restrictions. The end result will be major complications in the flow of goods.
Companies operating in the food industry should also prepare for distinct changes in consumer habits. Retailers will notice that some items will be in very high demand (i.e., bottled water, non-perishable foods), while demand for other products, like chicken, may decrease. Consumers are likely to change their shopping patterns in an effort to protect themselves and their families. Pandemics typically occur in two or three waves and last six to eight weeks each, so retailers should prepare for surges in demand as consumers attempt to stockpile and purchase larger quantities of products in fewer visits. It will be much like the clearing of shelves often seen just before a severe snowstorm or hurricane hits, except magnified and spread across a far broader swath of the country. A pandemic is also likely to cause a soaring demand for online shopping and home delivery as consumers travel less often to "brick and mortar" stores, either because of mandated quarantines from the government or "self-quarantining" of individuals.
These issues will only intensify as the pandemic continues, which could be for as long as 12 to 18 months. Despite the obstacles that a flu pandemic will create, food retailers will still be expected to operate essential services and meet government and community obligations throughout a pandemic.
Considering all of the turmoil that a flu pandemic will likely cause, it’s not surprising that it will also have a direct impact on the global economy. The non-profit Trust for America’s Health (TFAH) released a report last year finding that a severe flu pandemic could result in the second-worst recession in the United States since World War II. The U.S. Gross Domestic Product (GDP) could drop more than 5.5%, leading to an estimated $683 billion loss. The report cited specific implications for the food services industry, finding that it could experience an 80% decrease in consumer demand, leading to $67.6 billion in GDP losses for the food and accommodations industry. The agriculture industry would also be affected, though not as severely—a 10% drop in consumer demand and $2.9 billion in GDP losses.
The good news for food industry professionals is that by taking concrete steps now, the overall health of individual companies and the entire industry can be significantly strengthened in advance of a pandemic.
The Pillars of Employee Protection
The food industry is a people-intensive business. Employees are the backbone of the business and the driver of all revenue. Without adequate plans to protect the health and well-being of their workforce during a pandemic, companies are at risk for major problems associated with absenteeism and may even risk closure or bankruptcy.
The best employee protection plans are built upon five distinct pillars, the first being education. To ensure that employees trust them as a valid and reliable source of information before, during and after a pandemic, employers must dedicate appropriate resources to develop education materials that address such topics as hygiene practices, the importance of staying home when sick and methods to avoid contracting the virus. In the food industry, hand hygiene through frequent and thorough hand washing as well as proper cough protocols (i.e., coughing into the sleeve rather than the hands) should be the core components of a solid education program.
The second pillar is personal protective equipment (PPE), which includes items like face masks and respirators that can prevent others from contracting the virus. PPE may be particularly important for the food retail sector since employees are often in close contact with each other and the public. PPE is recommended in areas where staff members are not able to be separated from others by at least six feet, such as at cash registers and customer service areas.
The next pillar is facility cleaning. Customers will judge food retailers primarily by what they see when they enter a store. While companies typically don’t clean during business hours, they may want to reconsider that during a pandemic. Customers will want to see that companies are making hygiene and safety at the facility a top priority. If they see store employees performing additional cleaning measures, it will help to ease their minds about shopping at that particular store. Food retailers should pay special attention to "high-touch" areas such as the check-out lines, shopping carts, point-of-sale credit card machines, cash registers, bathrooms, etc.
Social distancing is the fourth pillar and encompasses all tactics that can create physical space between people—employees, vendors and customers. Some options include providing alternate work locations, staggering work schedules, closing company common areas (i.e., gyms, cafeterias), and limiting face-to-face meetings. Food retail businesses will need to pay special attention to their employees who must interact frequently and more closely with customers or co-workers during a pandemic. As mentioned earlier, these employees should be provided with surgical masks and other PPE.
Pharmaceutical interventions represent the final pillar of solid employee protection planning. When a pandemic strikes, antivirals will be a key line of defense until an effective vaccine can be developed and distributed, which experts estimate will take at least five months. Because antivirals need to be taken within 12 to 48 hours of illness onset, stockpiling them is necessary for advance positioning, according to the WHO. Even though federal and state governments have built stockpiles, they will cover only 25% of the population, with much of that going to a pre-established priority distribution list.
In June 2008, HHS issued proposed guidance encouraging U.S. businesses to consider stockpiling antiviral medications as part of pandemic preparedness plans. Some companies have chosen to provide antiviral medications to all employees in advance of a pandemic, while others have purchased medications for those employees that have been identified as "critical work staff" during a pandemic. A flexible corporate purchase program introduced in June 2008 by Roche, the maker of the antiviral medication Tamiflu, now allows U.S. businesses to gain access to their own stockpile of Tamiflu for a nominal annual fee. Roche will maintain and store the stockpile until the company is ready to take ownership.
Supply Chain Management
Of all the business continuity issues facing companies in the food industry, managing the supply chain may be the most challenging. It is essential that companies assess the mission-critical supply chain components and analyze how they would be impacted during a pandemic. As part of this process, companies should identify minimum inventories required for critical products, determine which products to stock more or less of and collaborate with suppliers and vendors to exchange information and discuss pandemic plans.
Food manufacturers will need to take a close look at the products they produce and make some difficult decisions about what they can realistically provide to customers during a pandemic, given the expected disruptions in the supply chain. This process can be implemented by revisiting their product mix and narrowing production down to a smaller, more manageable list of pre-identified foodstuffs. Retail companies should identify basic goods and increase their inventory of staples that all families are likely to need and use throughout a pandemic.
The supply chain is only as strong as its weakest link, so it’s crucial for companies to open a dialogue with suppliers to understand how they plan to maintain operations throughout a pandemic. These conversations can serve as the perfect opportunity to be sure suppliers are aware of what will be needed of them during a pandemic. It’s possible that a business may decide to approach new vendors if a current vendor’s pandemic planning is not at an appropriate level.
Lastly, security may be an issue during a pandemic. Food manufacturers and retailers will need to work closely with government agencies (state and local) to ensure that products will be delivered safely to store shelves. If deliveries become dangerous, drivers will be less likely to report to work, further worsening a potentially challenging situation.
Keeping Food on the Shelves: It All Comes Down to Planning
When the dust settles after a pandemic hits, the food industry and individual businesses will be judged on how they conducted operations during the pandemic. Without adequate planning, the odds of a company being judged favorably by its employees, customers and other external stakeholders are greatly reduced.
While a pandemic is inevitable, time is on our side, as Secretary Leavitt noted earlier this year: "There is simply no reason to believe that this century will be different than any past century. The difference now is that we better understand the threat, so we can increase our preparedness for a pandemic before it comes, in order to diminish its potential impact."
Regina Phelps, CEM, RN, BSN, MPA, is an internationally recognized expert in the field of emergency management and continuity planning, assisting more than 150 companies in developing domestic and global pandemic plans. Ms. Phelps is the founder of Emergency Management & Safety Solutions (EMSS), a consulting company specializing in emergency management, continuity planning and safety. She can be contacted via www.ems-solutionsinc.com. | <urn:uuid:da96ca40-f2d7-4d1a-a5f5-4be25b4a9eeb> | CC-MAIN-2015-35 | http://www.foodsafetymagazine.com/magazine-archive1/octobernovember-2008/pandemic-flu-and-the-us-food-industry/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.958879 | 2,518 | 2.796875 | 3 |
Coal is a fossil fuel extracted from the ground either by underground mining or strip mining. It is a readily combustible black or brownish-black sedimentary rock. It is composed primarily of carbon and hydrocarbons, along with assorted other elements, including sulfur. Often associated with the Industrial Revolution, coal remains an enormously important fuel and is the most common source of electricity world-wide. In the United States, for example, the burning of coal generates over half the electricity consumed by the nation.
Coal is thought ultimately to derive its name from the Old English col, although at the time that word meant charcoal; mineral coal was not mined before the late Middle Ages, i.e. after ca. 1000 AD. Mineral coal was referred to as sea-coal, since it was occasionally found washed up on beaches.
In folklore, coal is associated with the astrological sign Capricorn. It is said to be carried by thieves to protect them from detection and to help them escape when pursued, and it is an element of a popular ritual associated with New Year's Eve. To dream of burning coals is held to be a symbol of disappointment, trouble, affliction and loss, unless they are burning brightly, in which case the symbol promises uplift and advancement.
Santa Claus is said to leave a lump of coal instead of Christmas presents in the stockings of naughty children.
Coal consists of more than 50 percent by weight and more than 70 percent by volume of carbonaceous material (including inherent moisture). Coal is formed from plant remains that have been compacted, hardened, chemically altered, and metamorphosed by heat and pressure over geologic time. It is suspected that coal was formed from ancient plants that grew in swamp ecosystems. When such plants died, their biomass was deposited in anaerobic, aquatic environments where low oxygen levels prevented their decay and oxidation (rotting and release of carbon dioxide). Successive generations of this type of plant growth and death formed thick deposits of unoxidized organic matter that were subsequently covered by sediments and compacted into carbonaceous deposits such as peat or bituminous or anthracite coal. Evidence of the types of plants that contributed to carbonaceous deposits can occasionally be found in the shale and sandstone sediments that overlie coal deposits, and, with special techniques, within the coal itself. The greatest coal-forming time in geologic history was during the Carboniferous period (280 to 345 million years ago).
Coal is primarily used as a solid fuel to produce heat through combustion. Combustion of coal, like that of any other carbon-containing compound, produces carbon dioxide (CO2), along with varying amounts of sulfur dioxide (SO2) depending on where the coal was mined. Sulfur dioxide reacts with water to form sulfurous acid. If sulfur dioxide is discharged into the atmosphere, it reacts with water vapor and is eventually returned to the Earth as acid rain.
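The masses of these combustion products follow from simple stoichiometry: each mole of carbon burns to one mole of CO2, and each mole of sulfur to one mole of SO2. As a rough illustration (the 70% carbon and 2% sulfur mass fractions below are assumed for the example, not taken from any particular coal):

```python
# Molar masses (g/mol): C = 12.011, CO2 = 44.009, S = 32.06, SO2 = 64.06
# C + O2 -> CO2: each kg of carbon yields 44.009/12.011 kg of CO2.
# S + O2 -> SO2: each kg of sulfur yields 64.06/32.06 kg of SO2.

def combustion_products(coal_kg, carbon_fraction=0.70, sulfur_fraction=0.02):
    """Estimate CO2 and SO2 output (kg) from burning coal_kg of coal.

    carbon_fraction and sulfur_fraction are assumed mass fractions;
    real coals vary widely by rank and by mining region.
    """
    co2 = coal_kg * carbon_fraction * (44.009 / 12.011)
    so2 = coal_kg * sulfur_fraction * (64.06 / 32.06)
    return co2, so2

co2, so2 = combustion_products(1000.0)  # one metric ton of coal
print(f"CO2: {co2:.0f} kg, SO2: {so2:.0f} kg")  # CO2: 2565 kg, SO2: 40 kg
```

Even for this hypothetical coal, a tonne burned releases roughly two and a half tonnes of CO2, which is why the sulfur and carbon content of a deposit matters so much downstream.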
Coal also contains many trace elements, including arsenic and mercury, which are dangerous if released into the environment. Coal also contains low levels of uranium, thorium, and other naturally-occurring radioactive isotopes whose release into the environment may lead to radioactive contamination (see http://www.ornl.gov/info/ornlreview/rev26-34/text/colmain.html and http://greenwood.cr.usgs.gov/energy/factshts/163-97/FS-163-97.html for numbers and details). While these substances are trace impurities, if a great deal of coal is burned, significant amounts of them are released.
When coal is used in electricity generation, the heat is used to create steam, which then powers turbine generators. Approximately 40% of the Earth's current electricity production is powered by coal, and the total known deposits recoverable by current technologies are sufficient for at least 300 years' use. Emissions from coal-fired power plants represent the largest source of artificial carbon dioxide emissions, which most climate scientists regard as a primary cause of global warming. Many other pollutants are present in coal power station emissions. Some studies claim that coal power plant emissions are responsible for tens of thousands of premature deaths annually in the United States alone. In addition, emissions from coal-fired power plants are a major contributor to acid rain in some countries. Modern power plants utilize a variety of techniques to limit the harmfulness of their waste products and improve the efficiency of burning, though these techniques are not widely implemented in some countries, as they add to the capital cost of the power plant.
In the past, coal was converted to make coal-gas, which was piped to customers to burn for illumination, heating, and cooking. At present, the safer natural gas is used instead. Coal can also be converted into liquid fuels like gasoline or diesel. The Bergius process (direct liquefaction by hydrogenation) was used in Nazi Germany, and for many years in South Africa: in both cases because those regimes were politically isolated and unable to purchase crude oil on the open market. Estimates of the cost of producing liquid fuels from coal indicate that domestic US production of fuel from coal becomes cost-competitive with oil priced at around 35 USD per barrel (http://www.findarticles.com/p/articles/mi_m0CYH/is_15_6/ai_89924477), which is well above historical averages, but is now viable due to the spike in oil prices in 2004 and advances in nanotechnology (http://www.coalpeople.com/old_coalpeople/march03/tiny_tomorrow.htm). There is another process to manufacture oil from coal, called low temperature carbonization (LTC), which was perfected by Lewis Karrick, an oil shale technologist at the U.S. Bureau of Mines, in the 1920s (http://www.rexresearch.com/karrick/karric~1.htm).
Anthracite coal is the highest rank of coal, used primarily for residential and commercial space heating. It is hard, brittle, and black lustrous coal, often referred to as hard coal, containing a high percentage of fixed carbon and a low percentage of volatile matter. The moisture content of fresh-mined anthracite generally is less than 15 percent. The heat content of anthracite ranges from 22 to 28 million Btu/ton (26 to 33 MJ/kg) on a moist, mineral-matter-free basis. The heat content of anthracite coal consumed in the United States averages 25 million Btu/ton (29 MJ/kg), on the as-received basis (i.e., containing both inherent moisture and mineral matter). Note: Since the 1980s, anthracite refuse or mine waste has been used for steam electric power generation. This fuel typically has a heat content of 15 million Btu/ton (17 MJ/kg) or less.
Bituminous coal is a dense coal, usually black, sometimes dark brown, often with well-defined bands of bright and dull material, used primarily as fuel in steam-electric power generation, with substantial quantities also used for heat and power applications in manufacturing and to make coke. Bituminous coal is the most abundant coal in active U.S. mining regions. Its moisture content usually is less than 20 percent. The heat content of bituminous coal ranges from 21 to 30 million Btu/ton (24 to 35 MJ/kg) on a moist, mineral-matter-free basis. The heat content of bituminous coal consumed in the United States averages 24 million Btu/ton (28 MJ/kg), on the as-received basis (i.e., containing both inherent moisture and mineral matter).
Lignite coal is the lowest rank of coal, often referred to as brown coal, used almost exclusively as fuel for steam-electric power generation. It is brownish-black and has a high inherent moisture content, sometimes as high as 45 percent. The heat content of lignite ranges from 9 to 17 million Btu/ton (10 to 20 MJ/kg) on a moist, mineral-matter-free basis. The heat content of lignite consumed in the United States averages 13 million Btu/ton (15 MJ/kg), on the as-received basis (i.e., containing both inherent moisture and mineral matter).
Sub-bituminous coal is a coal whose properties range from those of lignite to those of bituminous coal; it is used primarily as fuel for steam-electric power generation. It may be dull, dark brown to black, soft and crumbly at the lower end of the range, to bright, jet-black, hard, and relatively strong at the upper end. Subbituminous coal contains 20 to 30 percent inherent moisture by weight. The heat content of subbituminous coal ranges from 17 to 24 million Btu/ton (20 to 28 MJ/kg) on a moist, mineral-matter-free basis. The heat content of subbituminous coal consumed in the United States averages 17 to 18 million Btu/ton (20 to 21 MJ/kg), on the as-received basis (i.e., containing both inherent moisture and mineral matter). A major source of subbituminous coal in the United States is the Powder River Basin in Wyoming.
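The Btu/ton and MJ/kg figures quoted for each rank above are related by fixed unit conversions (1 Btu ≈ 1055.06 J; 1 short ton ≈ 907.185 kg). A small sketch of that conversion, as an illustration:

```python
BTU_TO_J = 1055.06         # joules per Btu (International Table Btu)
SHORT_TON_TO_KG = 907.185  # kilograms per short ton

def btu_per_ton_to_mj_per_kg(million_btu_per_ton):
    """Convert a heat content in million Btu per short ton to MJ/kg."""
    joules_per_ton = million_btu_per_ton * 1e6 * BTU_TO_J
    return joules_per_ton / SHORT_TON_TO_KG / 1e6  # J/kg -> MJ/kg

# Average U.S. anthracite (25 million Btu/ton) and lignite (13 million Btu/ton):
print(round(btu_per_ton_to_mj_per_kg(25)))  # 29, matching the figure above
print(round(btu_per_ton_to_mj_per_kg(13)))  # 15, matching the figure above
```

The conversion factor works out to about 1.163 MJ/kg per million Btu/ton, which is why the SI values in parentheses are always slightly larger than the Btu figures they accompany.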
Coke is a solid carbonaceous residue derived from low-ash, low-sulfur bituminous coal from which the volatile constituents are driven off by baking in an oven without oxygen at temperatures as high as 2,000 °F (1,000 °C) so that the fixed carbon and residual ash are fused together. Coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. Coke from coal is grey, hard, and porous and has a heating value of 24.8 million Btu/ton (29 MJ/kg). Byproducts of this conversion of coal to coke include coal-tar, ammonia, light oils, and "coal-gas". (Coke can also be made from petroleum.)
Jet is a compact form of lignite that is sometimes polished and has been used as an ornamental stone since the Iron Age.
It has been estimated that, as of 1996, there are around one exagram (1 × 10^15 kg) of total coal reserves economically accessible using current mining technology, approximately half of it being hard coal. The energy value of all the world's coal is well over 100,000 quadrillion Btu (100 zettajoules). There is probably enough coal to last for 300 years.
The United States Department of Energy uses estimates of coal reserves in the region of 1,081,279 million short tons, which is about 4,786 BBOE (billion barrels of oil equivalent) (http://www.eia.doe.gov/emeu/iea/res.html). The amount of coal burned during 2001 was calculated as 2.337 GTOE (gigatonnes of oil equivalent), which is about 46 MBOED (million barrels of oil equivalent per day) (http://www.iea.org/dbtw-wpd/bookshop/add.aspx?id=144). At that rate those reserves will last 285 years. As a comparison, natural gas provided 51 MBOED, and oil 76 MBD (million barrels per day), during 2001.
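The 285-year figure follows directly from the two numbers just quoted. A back-of-the-envelope sketch, taking the DOE reserve estimate and the 2001 consumption rate at face value:

```python
reserves_bboe = 4786       # billion barrels of oil equivalent (DOE estimate)
consumption_mboed = 46     # million barrels of oil equivalent per day (2001)

# Annual consumption in BBOE: million barrels/day * days/year, scaled to billions.
annual_bboe = consumption_mboed * 1e6 * 365 / 1e9   # about 16.8 BBOE per year
years = reserves_bboe / annual_bboe
print(round(years))  # 285
```

Note that this assumes consumption stays flat at the 2001 rate; any sustained growth in coal use would shorten the horizon considerably.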
Coal is also a color.
This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer)
Hello. Lets think about fluids . . .
The first materials we used to design with were stone, and natural composites like wood. After adding fire to ores, we produced metals. Later, we added ceramics and glass the fourth category of construction material. The fifth type of building material is membrane. In earlier times, membranes were pelts, fur, leather, and woven textiles; currently, we are developing technical textiles.
Now for the sixth building material, to which computing opens the door: fluids.
The Fluidic Muscle is a hose consisting of alternate layers of elastomer and fibres and can be operated as an actuator with compressible as well as non-compressible fluids. Unlike a traditional drive, this actuator has no piston or sealing ring no moveable parts. The Fluidic Muscle can be used in tensegrity structures or in dynamic, flexible applications.
Fluids, used with construction membranes, are capable of causing an industrial revolution.
Look at dragonflies. The wings consist of a Y-shaped structure, and this three-point knot was the inspiration source for the exoskeleton used for our Airtecture at Festo with its Y-shaped supporting columns. Air is a fluid, a compressible one like all the gasses, while water is a non-compressible fluid, and you can make use of both, as we do.
Our Airtecture buildings are self-regulating, self-organising, and always changing, depending on the load. Even the ropes and cables can be altered so that they contract, and then you have a fluid actuator. They can be semi-transparent, they can be arrayed in a way so as to give stiffness and structure, and intermediate membranes make for tolerance, because such a building or structure will change according to the ambient conditions of sun, wind, and rain or snow.
The interior is spanned by a fluidic roof, where you have either pressure or non-pressure, in this case a vacuum, to make an undulation, which stiffens the roof. The windows themselves consist of four membranes, and these make up for all the longitudinal changes.
You can make inflatable manned aircraft, which can take off from the runway without the gears rolling on the tarmac. You can have inflatable wings, you can have very easy-to-fly ultra-light aircraft where the trailing edge changes according to the movements, left, right, up and down. Or you can have unmanned delta wings, which are capable of flight.
When you look at leeches, you see two angles under which the muscle fibres meet in a spiral form, and to transfer that, you get to an actuator. They can be miniaturised so that you end up with prosthetic devices, which we may one day use in minimally invasive surgery, or in very, very small components.
Inflatables and membranes are even an influence on the graphic design that we are using at Festo, since our pictograms, even the design language, change to reflect pneumatic elements.
Our Cocoon emergency tent is derived from insect cocoons. It offers the possibility of survival for 24 to 48 hours and weighs only 100 grams. And the mermaids purse, or sharks egg pouch, gave us the inspiration for the office sleeping pillow, which prevents keyboard imprints on the forehead!
Airquarium is a truly mobile architecture using a very special membrane. The membrane changes translucency according to the ambient light level. So when there is no sun, it is translucent, and when there is a lot of sun its getting whiter, more opaque.
It has a membrane torus, which is filled with 120 150 metric tons of water depending on the slope, which acts as a foundation. Over this spans a half-dome of 32 metres diameter and a height of 8 metres, supported by air pressure. And the idea came from raindrops, which naturally form a spherical shape when they land on a water surface.
Airquarium can be deflated and used again; it goes into a 20-foot container, and a second 20-foot container houses all the technical equipment needed to get it up. So you have a portable architecture using fluids, water and air, and the membrane itself behaves in a very intelligent way, as it changes according to the ambient light.
Festos Airfish is an airship based on the form of penguins bodies. When you study penguins, you find that their drag co-efficient is much better than any sports car. When you apply the same aerodynamics to hulls and surface bodies, they have such an amount of freedom in movement they can perform aerobatics. And with six degrees of freedom in movement, they can be very agile with very little energy.
Air on Aqua is an inflatable sports field which floats near the coast, for all kinds of beach sports like volleyball. The players are not hampering the sunbathers, and vice versa. It was inspired by water lilies.
This inflatable mobile theatre has muscles which allow it to fit into different city spaces.
Airhopper is a shoe, which allows you to jump much faster than you can walk.
It is based on kangaroo sinews and it stores the kinetic energy which otherwise would be lost when youre putting your foot down onto earth again, so you can get a bit back from gravity.
Aircruiser, a re-innovated skateboard, works in such a way that you have the suspension, the dampening and the steering intelligently built into the muscle material, just like with this insect. You have a very high clearance, and you can just steer by moving more or less over to different sides of the board.
Airbug is a six-legged walking beetle, an autonomous walking machine dedicated to landmine detection and clearing. It is very low in metals, and very high in composites. It is driven by artificial muscles.
This flat six-muscle engine is for our Airkart, fluidically powered. We are currently working on high-temperature versions, not to burn fossil fuels, which we never do at Festo, but for a chemical reaction where a lot of gas results instantaneously from a drop of fluid and then powers the muscle itself.
Our modular 36-muscle engine allows you to fluidically power energy sources. It allows you to add power in a very discrete way. Normally you cannot change the number of pistons in an engine, but with this system, you just add more and more muscles until you have reached the number of 36.
This is a fully sprung mountain bike, made with the help of intelligent membrane material. The diamond-shaped frame has been calculated so that a maximum of the muscles capability is being used in dampening and springing.
When you think of jellyfish, you come to balloons. This is our inflatable balloon basket the first innovation in wicker basket ballooning since September 1783, the inaugural flight of the Montgolfier brothers!
We do a lot with the membranes in order to build with fluids; you have to design the borders, meaning materials R&D. There is no metal inside. With all the structures, you see a similarity to jellyfish, because they are filled with water in water. The same applies here, you have the same fluid inside and outside. The only difference is gravity of course, and temperature as well as the energy level changes.
And that brings us to another way of buoyancy, an aesthetic one, when you have fluids that are lighter than the surrounding fluid. And then you get lift, and you can of course use the same fluid with more energy in order to create a larger object. This is the largest inflatable airship for hot air use. You use a cold fill first, then you compress the fins with hot air, and then you can lift off. You dont need an airport, and you steer with the help of fluids that change the angle of the membranes of the fins. So left, right, up, and down are all done by temperature changes.
The flat bed truck that transports the airship weighs in at only 7.4 tons, so its not a huge truck, although some 22 metres long. And you just let the hot air out of the membrane, and then you can fold it up, and just drive somewhere else. Now, when you do the very same with a traditional airship, you lose some 100,000 Euros worth of helium. Not so with the hot air version. It costs you only 300 Euros to heat up the air. Another advantage is flexible flying, because you are very lightweight, and the only metal pieces are parts of the engine and the cabin.
You can apply the same principle to a technology centre and cover it with fluidically structured roofing, and cover the fronts with membranes, like sails. On top, you see the air-covered roof which contains certain layers that are all printed in a checkered way. And by changing the internal pressure, you have a continuously variable sunshade. On the right image you see that one is open, while the other is closed. The whole office is wirelessly connected, including keyboards and mice, for some 1500 people.
Here is another Festo world record, the worlds largest single stem umbrella, called Funnbrella inspired by the chanterelle mushroom. It has an edge length of 31.6 by 31.6 metres covering 1,000 square metres, and this very record will hopefully last for another ten years, so that we can further improve our span by the help of new materials.
All of this work was made possible only by the heavy use of computational simulation, as well as fluids.
Q&A WITH JOHN THACKARA
John: Im just getting ready for the panel. Heres a remarkable body of experimentation. How many years of work are we looking at in that presentation?
Axel: Eight years now.
John: And weve heard about biomimicry and about materials innovation, does one depend on the other? Is it now possible to copy a scarab beetle or a membrane in ways that could not have been possible 10 years ago?
Axel: Definitely. It has something to do with the mathematical methods on the one hand, and newly developed materials on the other one. But there is no direct copying. You can only gain inspiration from nature, and find what is useable in that and what could then be translated into a different system.
John: And when you look at, for example the way the fluids are transported from one place to another in the airship - is that just a mental innovation or do you require technical support to do that?
Axel: A lot, of course. We started working with chemical firms in order to have new fibres, new coatings, and new jointing techniques, and in the end also new control techniques with the help of newly developed sensors in order to make everything controllable.
John: And do you find that people from traditional aircraft design understand what youre doing?
Axel: No, they find it totally crazy.
John: Is that pleasing to you? What can you learn from those guys whove been doing it for so long?
Axel: The problem is that in most cases theres very little interaction, because they think its ridiculous. I cannot understand how they can be so conservative, because if they had always been that conservative, there would have been no flight.
John: Here comes the table . . .
Here's an interesting interview with Axel Thallemer published in the "New Scientist":
Castles of air?
Are we ready to live like modern nomads, with ultra-advanced homes that pack up and travel with us wherever we go? For towns that spring up and vanish when they are no longer needed, or spacecraft that could be any shape you wanted. Axel Thallemer certainly is. His big idea is to build real inflatable buildings that are as far removed from the blow-up jokey plastic world of the 1960s as you can imagine. And at Festo, where he is head of corporate design, they already have eight years of experience in building working prototypes that are among the most advanced in the world - and the only ones based on close observation of nature. Liz Else caught up with Thallemer at Festo's headquarters | <urn:uuid:e070e771-2d2b-4b9a-aecd-b3362c37e793> | CC-MAIN-2015-35 | http://flow.doorsofperception.com/content/thallemer_trans.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00043-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.952295 | 2,566 | 3.25 | 3 |
Pity the Fool
It’s April Fools’ Day, and whatever its origins, the Scriptures have something to say about playing the fool.
There is uncertainty about how and when people began mocking the fool on the first day of April. Many think it goes back to sixteenth-century France when the nation changed from the Julian Calendar to the Gregorian. April 1 had been the end of a weeklong festival celebrating the coming of Spring and with it the new year. Now the new year changed to January 1. Some refused to make the switch, or lived in rural areas and didn’t get the word, and were mocked as fools by those who made the change.
Others think the origin may be in a scribal error in Chaucer’s Canterbury Tales that had readers thinking the “Nun’s Priest’s Tale” — and the fox’s fooling of Chauntecleer the vain cock — occurred on April 1 (when Chaucer actually meant May 2). Still others connect the day to celebrations in ancient Rome, Persia, and India.
But however murky the true origin of April Fools’ Day, what’s clear enough is the Christian teaching about what makes a person truly foolish.
Wisdom: The Skill of Living
The Book of Proverbs provides the Bible’s densest teaching about wisdom and folly, and what quickly becomes plain is that the biblical concept is radically God-centered.
God himself is the source of wisdom. Thus, it is the fool who says in his heart there is no God (Psalm 14:1; 53:1), and Proverbs gives us the refrain, “the fear of the Lᴏʀᴅ is the beginning of wisdom” (Proverbs 1:7; 2:5; 8:13; 9:10; 15:33). True wisdom begins with God and has its constant source and supply in God. So, says Tremper Longman, Proverbs teaches us that “relationship precedes ethics” (Intro to the OT, 269).
According to Longman, “wise” is the biblical word to “describe the person who navigates life well” (How to Read Proverbs, 13). Wisdom is
the skill of living. It is a practical knowledge that helps one know how to act and how to speak in different situations. Wisdom entails the ability to avoid problems, and the skill to handle them when they present themselves. Wisdom also includes the ability to interpret other people’s speech and writing in order to react correctly to what they are saying to us.
Wisdom is not intelligence pure and simple. . . . Biblical wisdom is much closer to the idea of emotional intelligence than it is to Intelligence Quotient. Wisdom is a skill, a “knowing how”; it is not raw intellect, a “knowing that.” (14–16)
The biblical concept of wisdom is, in large measure, analogous with the idea of maturity. The wise person is one who is mature in his knowledge of God — based on God’s self-revelation — as well as his understanding of himself and his surroundings. The wise person is able to “navigate life well,” in the real world, as defined by God in the Scriptures.
The Folly of Not Navigating Well
Meanwhile, the fool does not possess such skill. He does not navigate life well in God’s universe, from God’s perspective, in God’s categories. The very essence of foolishness is the suppression of God’s truth (Romans 1:18).
Folly is not just silly, but sinful (Psalm 69:5; 107:17; Romans 1:22). Fools desperately need to “learn sense” (Proverbs 8:5), but instead they hate knowledge (Proverbs 1:22). They are complacent (Proverbs 1:32), easily frustrated (Proverbs 12:16), reckless and careless (Proverbs 14:16), and crooked in speech (Proverbs 19:1). Fools are prone to “a hasty temper” (Proverbs 14:29), “anger lodges in the heart of fools” (Ecclesiastes 7:9). Fools “walk into a fight” and invite a beating (Proverbs 18:6).
The fool despises instruction, even from the ones who love them most (Proverbs 15:5), and thus brings misery to his own biggest fans (Proverbs 17:21).
While the wise have learned the beauty and value of righteousness, “doing wrong is like a joke to a fool” (Proverbs 10:23). While the wise are able to hold back quietly, “a fool gives full vent to his spirit” (Proverbs 29:11). And as Jesus taught, while the wise are “rich toward God,” the fool presumes on “many years” and “lays up treasure for himself” in this life (Luke 12:19–21).
A fool is “like an archer who wounds everyone” (Proverbs 26:10) and “like a dog that returns to his vomit” (Proverbs 26:11). It is better to meet up with “a she-bear robbed of her cubs” than a fool in his folly (Proverbs 17:12).
Prideful, Mouthy, and Alone
Because the biblical notions of wisdom and folly are God-centered, at the very heart of folly is pride and self-sufficiency. The fool is arrogant, and the arrogant are fools. The fool says in his heart there is no God — and sees no need for God, quite frankly. The fool is “wise in his own eyes” (Proverbs 26:12), “right in his own eyes” (Proverbs 12:15), and “trusts in his own mind” (Proverbs 28:26). He feels that he has all his ducks in a row and doesn’t need others’ input — especially not God’s instruction.
The fool is more the talker, less the listener. “A fool takes no pleasure in understanding, but only in expressing his opinion” (Proverbs 18:2). “A fool’s mouth is his ruin, and his lips are a snare to his soul” (Proverbs 18:7). He is one “with many words” (Ecclesiastes 5:3) and “multiplies words” (Ecclesiastes 10:14). “The woman Folly is loud” (Proverbs 9:13). The fool “gives an answer before he hears” (Proverbs 18:13).
While the wise aggressively listen and long for the counsel of others — and “the wise of heart will receive commandments” — ruin comes to “a babbling fool” (Proverbs 10:8). It is “the mouth of a fool” that brings ruin (Proverbs 10:14).
The fool not only suppresses his need for God’s words, but also for the counsel of others. “A wise man listens to advice” (Proverbs 12:15). Keeping company with the wise is essential in learning wisdom (Proverbs 13:20). Fools would rather talk than listen. They may say they love to “have others in their lives,” but they don’t really want to hear any correction. They would rather utter slander (Proverbs 10:18) than heed reproof (Proverbs 15:5).
While wisdom leads to life (Proverbs 3:18; 16:22), folly ultimately leads to death (Proverbs 5:23; 10:21).
All the Treasures of Wisdom
For the Christian, the radical God-centeredness of wisdom in the Proverbs takes a radically Christ-centered shape in the New Testament.
Jesus himself, as the fullest and final revelation of God (John 1:18; Colossians 1:15; Hebrews 1:1–3), is now revealed as the secret to true wisdom. As the God-man, he is the perfect embodiment of divine wisdom in human form — he is the life of God in the soul of man — and in him "are hidden all the treasures of wisdom and knowledge" (Colossians 2:3). To those who are perishing, "the word of the cross is folly," but to those who are being saved, it is God's power and the paragon of wisdom (1 Corinthians 1:18).
If wisdom is the ability to navigate life well, in God’s world, on God’s terms, now we see that it can mean nothing less than having him who is “the way, the truth, and the life,” the only one through whom we may come to the Father (John 14:6). And so to present anyone truly wise, truly mature, it is “him we proclaim, warning everyone and teaching everyone with all wisdom” (Colossians 1:28).
Only in Jesus can those born into folly, increasingly manifesting foolishness, on a crash course for destruction, be set free to true wisdom and ultimate life. “We ourselves were once foolish, disobedient, led astray, slaves to various passions and pleasures, passing our days in malice and envy, hated by others and hating one another” (Titus 3:3). But Wisdom himself saved us (Titus 3:4–5).
Only in Jesus can we truly have Wisdom and then be sufficiently changed to not merely mock folly, but pity the fool.
April 1998 // Volume 36 // Number 2 // Feature Articles // 2FEA2
Establishing Effective Mentoring Relationships for Individual and Organizational Success
This article reports findings from a study conducted to explore and describe mentoring relationships in Pennsylvania State Cooperative Extension's planned mentoring program based on the perceptions and experiences of proteges and mentors in Cooperative Extension. Factors that facilitate or hinder the mentoring relationship were explored and described by the participants. Also, proteges were asked to describe from their perspectives the qualities of an effective mentoring relationship. Data were collected from a series of in-depth qualitative interviews with mentor/protege pairs.
The term mentor is over three thousand years old and has its origins in Greek mythology. When Odysseus went off to fight the Trojans, he left his trusted friend Mentor in charge of his household and his son's education. Mentor's name has been attached to the process of education and care by an older, experienced person.
Mentors have been defined in the literature as higher ranking, influential senior organization members with advanced experience and knowledge, who are committed to providing upward mobility and support to a protege's professional career (Collins, 1983; Kram, 1985; Roche, 1979). According to Zey (1984) the outcomes of a formal mentoring program and the result of the mentoring relationship can benefit the protege, the mentor, and the organization. The protege receives knowledge and skills, support, protection, and promotion. The mentor may realize assistance on the job, prestige, and loyalty. The organization achieves development of employees, managerial success, reduced turnover, and increased productivity.
A study was conducted to explore and describe mentoring relationships in Pennsylvania State Cooperative Extension's planned mentoring program based on the perceptions and experiences of proteges and mentors in Cooperative Extension. Factors that facilitate or hinder the mentoring relationship were explored and described by the participants. Also, proteges were asked to describe from their perspectives the qualities of an effective mentoring relationship. A summary of the methodology and conclusions from this study follow.
The research for the study was descriptive and the approach to the research was qualitative. Collecting qualitative data provided depth and detail about the mentoring experiences for both the mentors and proteges.
Phase one of the research was conducted using a pre-assessment survey sent to newly hired county Extension educators who had completed at least 18 months but no more than 30 months of employment. A return rate of 100 percent (N=33) was obtained with no follow-up reminders.
For phase two of the study, three mentor/protege pairs were randomly selected from those pairs whose proteges indicated that their mentoring experience was "not a good experience--very little interaction, very little useful information shared" up to and including, "a fair experience--a fair amount of interaction, some useful information shared." Three additional mentor/protege pairs were randomly selected whose proteges indicated that their mentoring experience was greater than "a fair experience--a fair amount of interaction, some useful information shared," up to and including, "a very good experience--much interaction, a great deal of useful information shared." This type of purposeful, extreme case sampling was used to increase the likelihood that a range of stories and experiences was heard during the interviews (Patton, 1990).
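The extreme-case selection described above can be sketched in code. The ratings and pair data below are hypothetical, invented only to illustrate the sampling logic of drawing from both ends of the experience scale:

```python
import random

# Hypothetical data: each of 33 proteges rated the mentoring experience on a
# 1-4 scale (1 = "not a good experience" ... 4 = "a very good experience").
rng = random.Random(7)
pairs = [{"pair_id": i, "rating": rng.randint(1, 4)} for i in range(33)]

def extreme_case_sample(pairs, k=3, seed=42):
    """Randomly draw up to k pairs from the low end (rating <= 2) and
    up to k pairs from the high end (rating >= 3) of the scale."""
    r = random.Random(seed)
    low = [p for p in pairs if p["rating"] <= 2]
    high = [p for p in pairs if p["rating"] >= 3]
    return (r.sample(low, min(k, len(low))),
            r.sample(high, min(k, len(high))))

low_cases, high_cases = extreme_case_sample(pairs)
```

Sampling from both extremes, rather than the middle of the distribution, is what increases the chance of hearing a wide range of stories in a small interview pool.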
Phase two involved carrying out a modified version of in-depth phenomenologically-based interviews individually with the six mentor/protege pairs. In-depth interviewing allowed for more complete documentation and understanding of the experiences of the mentors and the proteges and the meaning and value they held regarding their mentoring experiences. A semi-structured schedule of open-ended questions was asked of all of the participants.
The interview data were analyzed using content analysis (Patton, 1990) as a process to identify, code, and categorize primary patterns in the data. Initially, the data from the transcripts were coded according to the major research questions. These data were then organized into sub-themes or topics that relate to each of the research questions.
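A minimal sketch of that coding pass — assigning transcript excerpts to research questions and then grouping them into sub-themes — might look like the following. The codes and excerpts here are invented for illustration and are not drawn from the study's data:

```python
from collections import defaultdict

# Invented coded segments: (research_question, sub_theme, excerpt)
coded_segments = [
    ("facilitators", "proximity", "Our offices were twenty minutes apart."),
    ("facilitators", "shared_program", "We both ran 4-H programs."),
    ("barriers", "role_confusion", "I wasn't sure what the CED had covered."),
    ("facilitators", "proximity", "Being in the same region made meetings easy."),
]

def organize(segments):
    """Group excerpts first by research question, then by sub-theme."""
    themes = defaultdict(lambda: defaultdict(list))
    for question, theme, excerpt in segments:
        themes[question][theme].append(excerpt)
    return themes

themes = organize(coded_segments)
```

The two-level grouping mirrors the analysis sequence in the study: code by major research question first, then organize within each question into sub-themes.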
Similar Programmatic Responsibilities
Assigning mentors who shared the same major programmatic responsibility as their proteges was considered a facilitative factor that contributed to the success of the relationship by all of the mentors (100 percent) and the proteges (100 percent). Having both mentors and proteges who are responsible for conducting similar types of programs provided opportunities for more interaction related to the Extension program planning process, more opportunities to meet and interact during professional development activities, as well as sharing a common interest in programming. The importance of mentors and proteges sharing similar programmatic responsibilities suggests that mentors fulfilled more of a career development mentoring function than a psycho-social mentoring function (Levinson, 1979; Kram, 1983).
Geographic proximity of mentors and proteges was identified as a facilitative factor. All of the mentors and proteges interviewed indicated that mentors should be assigned to proteges from the same Extension region and with geographic proximity. Supporting this concept is research conducted by Burke, McKeen, and McKenna (1993) who found that more frequent interaction and greater success in mentoring relationships occurred in situations where the mentor and the protege had closer offices.
Frequency and Type of Information Shared
Mentors who shared a variety of information frequently were perceived as contributing to successful mentoring relationships. The information identified as being the most helpful to the proteges was the information on program development or technical/subject information. Because program development and technical information were identified as the most useful information shared by mentors, a career development mentoring function is again supported while the psycho-social mentoring function is less evident.
However, the proteges indicated that for mentors to be effective, they need to possess a great deal of organizational as well as program knowledge to share, supporting both the career and psycho-social role of mentors. Mentors who shared limited information on an as-needed basis appeared to inhibit the success of the relationship. Knox and McGovern (1988) identified willingness to share knowledge as an important characteristic of a mentor which supports the finding from this study.
Initiation of the Relationship
Successful initiation of the relationship affected the perceived success of the relationship. Mentors who initiated contact with their proteges as soon as possible and had face-to-face mentoring meetings appeared to contribute to the success of the relationship. After the initial contact, the proteges in this study indicated that regular structured interaction would support an effective mentoring relationship. As both Kram (1985) and Phillips-Jones (1982) found, the first phase of a mentoring relationship is initiation. Mentors and proteges must progress through each phase of the relationship to be successful.
In this study, in those relationships where the initiation phase was not successful, the subsequent relationship was perceived by the protege as not being helpful. Zimmer and Smith (1992) found that the more time mentors and their proteges spent together, the greater the perceived success. The findings from this study indicate that the interaction needs to be frequent even if it cannot be exclusively face-to-face.
Ability to Establish Mentor/Protege Friendship
The ability of mentors and proteges to establish friendships in their mentoring relationships also appeared to facilitate the success of the mentoring relationships. A friendly, empathetic relationship was also identified by the proteges as a characteristic of an effective mentoring relationship. Phillips-Jones (1982) explored mentoring relationships and described six developmental phases in which a mentoring relationship progresses. The last phase is transformation. The primary task during this stage is the development of peer-like friendships. The findings from this study suggest those relationships which were able to progress to the last stage, transformation, either with their formal mentor or an identified informal mentor, were perceived as being successful. This finding supports the psycho-social function of mentoring as described by Levinson (1979) and Kram (1983).
Clearly defined mentor expectations would characterize an ideal and effective mentoring relationship, according to the proteges and the mentors. Three of the six (50%) mentors indicated that they felt prepared for their mentoring role; however, all mentors indicated that if they had a job description or a checklist of expectations and information to cover with the participants, it would have been very helpful.
Having a mentor who is knowledgeable about Extension surfaced as an important factor in an effective mentoring relationship. With the changing nature of Extension work and the need to keep current in their jobs, proteges felt having a mentor who was knowledgeable about the Extension organization was important. Roche (1979) also identified organizational knowledge as an important characteristic for a mentor to possess. His respondents rated knowledge of the organization and the people in it and a willingness to share knowledge and understanding as two of the most important characteristics for a mentor to possess.
An inhibiting factor in the success of some mentoring relationships identified by the mentors was their feeling that the proteges did not need assistance or orientation because of the perceived level of experience the proteges had when joining the organization. However, even the proteges entering Extension with career experience and knowledge about Extension desired frequent interaction. Mentors appeared to be intimidated when asked to mentor new staff members with relevant strong educational and/or experiential backgrounds.
Poor mentor attitudes about Extension were perceived by the proteges as an inhibiting factor in their relationships. Although a positive attitude was not defined in the mentoring literature (Roche, 1979; Knox and McGovern, 1988) as a trait of a successful mentor, sharing and counseling traits were identified. Assuming that successful counselors are positive in their interactions with those whom they counsel, the concept of mentors possessing a positive attitude is supported.
Both the proteges and the mentors interviewed supported an orientation program for mentors. They indicated mentors needed to be oriented toward their mentoring roles and understand their responsibilities. Also, mentor role confusion with the role of the county Extension director (CED) in the new staff orientation process was identified as an inhibiting factor. Mentors were unclear about what information was being covered by the CED and what information was their responsibility to discuss with their proteges.
Hudson (1991) supported this finding, indicating that there are many professionals who are mentors, but very few who have been prepared for the role. Findings from this study strongly support role definition, orientation, and training for the mentor.
Implications for Cooperative Extension Mentoring Programs
The results generated have important implications for cooperative Extension in structuring a mentoring program, pairing mentors and proteges, and developing training and orientation programs for mentors:
1. The study findings support the establishment of guidelines which outline the roles of the mentor and what his/her responsibilities will be. These guidelines should include: (a) the goals of the mentoring program; (b) the Extension mentoring philosophy; (c) the perceived benefits of mentoring to the protege, the mentor, and the organization; (d) information about positive mentoring behaviors (i.e. active listening, envisioning outcomes, productive confrontation), and (e) information about the roles of the mentor.
2. In-service opportunities for mentor self-development should be made available to present and future mentors.
3. Biodata and other relevant information should be shared between the mentor and the protege to assist with successful initiation of the relationship. Suggested guidelines for frequency of contact should be established and communicated to the mentors prior to the initiation of the relationship.
4. An informal needs assessment conducted by the mentor with the protege would be helpful to identify what information is needed and most important for the mentor to share with the protege.
5. A record-keeping system should be developed to monitor mentoring activities and provide a place for mentors to document time spent on their role as well as the type of information shared. These records could help administer and evaluate the program, provide information for future training, and also serve as a prompt for the mentor to continue to maintain contact with his/her protege.
6. A set of recommendations for those administering the mentoring program should be established. These recommendations should include: (a) factors to consider when assigning mentors to proteges; (b) process to follow to initiate the relationship successfully and early; (c) process to monitor and support the mentoring relationship, and (d) sample letters to send to mentors and proteges to initiate the relationship.
7. When selecting staff to serve as mentors, administrative representatives should select mentors who possess the following personal characteristics: (a) knowledge of the Extension organization; (b) empathy towards new staff; (c) program knowledge in their respective fields, and (d) a friendly personality and a positive attitude.
8. Information to be shared by the mentors with the proteges should include a combination of program development and career development information.
9. County Extension directors should be introduced to the roles and responsibilities of the mentor. It is important that county Extension directors who have new staff members in their counties work in concert with the mentors. The county Extension director also needs to be supportive of the role of the mentor and the time commitment which is necessary for successful mentoring.
10. The formal mentoring program should last for one year to assure that the mentoring relationship has been in place for one full program development cycle.
11. The mentoring guidelines and mentoring program need to be institutionalized within the organization to assure continued success.
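Recommendation 5's record-keeping system could be as simple as a structured contact log that captures time spent and information shared. The fields below are one possible layout, assumed for illustration rather than prescribed by the study:

```python
import csv
from dataclasses import dataclass, asdict
from io import StringIO

@dataclass
class MentoringContact:
    date: str      # ISO date of the contact, e.g. "1998-04-01"
    mentor: str
    protege: str
    minutes: int   # time the mentor spent on this contact
    mode: str      # "face-to-face", "phone", or "email"
    topics: str    # type of information shared

def write_log(contacts):
    """Serialize contact records to CSV for program administrators."""
    buf = StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(contacts[0]).keys()))
    writer.writeheader()
    for c in contacts:
        writer.writerow(asdict(c))
    return buf.getvalue()

log = write_log([MentoringContact("1998-04-01", "A. Mentor", "B. Protege",
                                  45, "face-to-face", "program development")])
```

A flat log like this supports all three uses the recommendation names: administering and evaluating the program, informing future training, and prompting mentors to maintain regular contact.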
Burke, R., McKeen, C., & McKenna, C. (1993). Correlates of mentoring in organizations: The mentor's perspective. Psychological Reports, 72, 883-896.
Collins, N.W. (1983). Professional women and their mentors. Englewood Cliffs, New Jersey: Prentice Hall.
Hudson, F.M. (1991). The adult years: Mastering the art of self-renewal. San Francisco: Jossey-Bass.
Knox, P.L., & McGovern, T.V. (1988). Mentoring women in academia. Teaching of Psychology, 15(1), 39-41.
Kram, K. E. (1983). Phases of the mentoring relationship. Academy of Management Journal, 26, 608-625.
Kram, K. E. (1985). Mentoring at work: Developmental relationships in organizational life. Glenview, IL: Scott Foresman.
Levinson, H. (1979). Mentoring: Socialization for leadership. Paper presented at the 1979 Annual Meeting of the Academy of Management, Atlanta, GA.
Patton, M.Q. (1990). Qualitative evaluation and research methods. Newbury Park: Sage.
Phillips-Jones, L.L. (1982). Mentors and proteges. New York: Arbor House.
Roche, G. R. (1979). Much ado about mentors. Harvard Business Review, 57, 14-28.
Zimmer, B., & Smith, K. (1992). Successful mentoring for new agents: Dedicated mentors make the difference. Journal of Extension, 30(1).
Zey, M. G. (1984). The mentor connection. Homewood, IL: Dow Jones-Irwin.
How the World is Changing
To understand the gravity of the national situation, consider the operational environment already shaping and influencing global events and all aspects of U.S. government policy. Synthesized from several DOD analyses, the following categories characterize an environment and geopolitical cause-and-effect that are evolving rapidly and in unprecedented ways. 2
Population: Global population growth over the next 30 years is projected at 60 million per annum. While undeveloped and underdeveloped nations and territories display rapid and significant growth, key developed nations are experiencing population declines and age inversions. These growing demographic imbalances, combined with economic growth failing to keep pace with population expansion, serve as catalysts for regional/global migration and incitement of the disenfranchised. Meanwhile, urbanization continues apace on a massive scale, the majority of it occurring along the world's littorals. To top it all off, non-state ethnic and religious loyalties increasingly eclipse nationalism and create additional demographic friction points.
Globalization: The "shrinking of the world" also seems symptomatic of nationalism eclipsed; the global village is transforming economic, informational, societal, and environmental forces. International alliances, borders, and sentiments are being tested and altered as political, geographic, economic, and cultural fortunes become increasingly intertwined.
Race for Resources: Concomitant with all these growing pains, the world is undergoing a steep rise in the demand for, consumption of, and competition for energy resources—a seismic shift for which existing energy sources, associated industries, and governments are woefully unprepared. Lagging investment in energy, rising energy costs, greater competition for energy resources, and consequences of energy-supply disruption all have cataclysmic economic potential, and are therefore exacerbating national and regional security concerns.
Food for the World: While global food supplies are projected to be adequate well into the future, manipulation of food production/distribution, water distribution, and offshore fishing/mining for political advantage continue as common practice. Whether natural or human-induced, water pollution and water-associated pandemics pose real destabilizing threats.
Jockeying for Dominance: International competition across the global commons—oceans, air, space, and cyberspace—is growing; efforts to achieve dominance threaten regional and global security.
A Changing Climate? The impact of global climate change has many potent variables—rising sea levels, altered/devastated fishing grounds, new polar routes, resource exploration/development, population displacement. The United States' ability and willingness to provide humanitarian relief in the face of climatological disruptions will be viewed as a key factor in influencing national, regional, and global opinion.
The Techno-explosion: The exponential advancement of technology—particularly global communications—is creating virtually unlimited opportunity to increase global awareness and improve standards of living. The playing field is leveled as a greater percentage of the world population gains access to and engages in technology development. Conversely, the process expands the dichotomy between technology "haves" and "have nots."
Confronting this ever more complex global environment, senior U.S. government leadership is challenged to develop viable strategies. While not elected the interagency "team captain," by dint of its capabilities and leadership, DOD frequently becomes the de facto lead - and finds itself squarely on the horns of a dilemma.
The old paradigm? DOD's principal mandate is to fight and win the nation's wars. The emerging paradigm? DOD is a key component in an interagency/international coalition more focused on deterring and preventing war than fighting it. While prevention, preemption, and deterrence of war are critically important, can these be allowed to supersede the Defense Department's principal mandate? Reconciliation of these paradigms has major implications for our national-security policy.
History is rife with examples of why the United States must not abdicate its conventional military high ground or balk at leveraging its capability to reinforce and influence regional and global security. As the distinguished strategist Dr. Colin Gray notes: "A multipolar world is a world without a sheriff . . . . The U.S. hegemon needs to be able to control the geographies of the global commons. Americans will have to be free to use the sea, the air, space, and cyberspace at will, all the while being able to deny such operational liberty to some other states and political entities." 3 What Alfred Thayer Mahan postulated in 1890 in his seminal work, The Influence of Seapower Upon History, 1660-1783, in a specifically naval context is no less valid today in a broader sense, and must be extrapolated to encompass all the global commons. Dominance in all four realms is now critical to winning wars; it is equally important in preventing/preempting war. For the Navy, there is no greater strategic mission than preserving freedom on, across, and through the commons - a reality that must govern strategic planning, messaging, and resourcing for DOD in particular and the federal government in general. That being said . . .
Balancing Job #1 and the 'New Normal'
Too few fully comprehend the tectonic shift that occurred with near spontaneity on 9/11, or the extent of the nation's economic Pearl Harbor. A "new normal" confronts the United States; it is a reality we must face head-on.
In terms of defense, the world is now one of friction between the current and conventional and the emerging and asymmetrical—a friction that parallels the dueling paradigms now vying for DOD's attention. In addressing this balancing act, Secretary of Defense Robert Gates writes, "we must not be so preoccupied with preparing for future conventional and strategic conflicts that we neglect to provide all the capabilities necessary to fight and win conflicts such as those the United States is in today." Seeking an elusive balance in the face of conventional and asymmetric threats, Secretary Gates directed the services to become as competent in irregular warfare (IW) as they are in traditional war—no small task given the economic environment, ongoing combat engagements, and service parochialism. 4
Let's be realistic. Historically, the services do not volunteer to shed big-ticket programs, largely for fear they will end up committing budgetary seppuku. Better to join the chorus espousing fiscal responsibility—and subsequently request a bigger budget based on complex assessments—than volunteer to bite a budget bullet. There is not enough money to pay for all the services' requests. Never has been, never will be; everyone knows it. Slapping an IW bumper sticker on a legacy program and marketing it as the latest defense against emerging threats is a disingenuous and unsound strategy. It is also a not-uncommon service dodge. Change is tough.
Being realistic also requires acknowledging the fact that Congress serves constituent interests as well as addressing national security. Not surprisingly, in the absence of a sense of crisis, national security issues take a congressional backseat.
And finally, don't forget the elephant in the room against which President Dwight D. Eisenhower cautioned in 1961:
Crises there will continue to be. In meeting them . . . there is a recurring temptation to feel that some spectacular and costly action could become the miraculous solution to all current difficulties. A huge increase in newer elements of our defense; development of unrealistic programs to cure every ill . . . a dramatic expansion in basic and applied research . . . may be suggested as the only way to the road we wish to travel.
This conjunction of an immense military establishment and a large arms industry . . . we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.
In the councils of government, we must guard against the acquisition of unwarranted influence . . . by the military industrial complex. The potential for the disastrous rise of misplaced power exists and will persist. . . . 5
For all its guidance and directives, DOD continues to be dragged down by—and often party to—a system that mortgages national security advantages built over many decades in favor of unhealthy compromise. In doing so, U.S. economic security—hence national security—is put at risk.
Secretary Gates understands well how the game is played. Requirements compete; uncertainty is constant; political maneuvering never ceases. He also recognizes IW and conventional warfare are not separate, one-or-the-other options. Lessons learned over the past eight years on and off the battlefield drive this lesson home. To earn their keep, DOD planners must eschew one-dimensional thinking, heed the secretary's message, and develop sustainable, integrated IW/conventional strategies. The services cannot tackle this issue alone; courage and leadership are also required from the Office of the Secretary of Defense, Joint Staff, and Congress. In stepping out, individuals and organizations risk being pilloried by those with vested interests in the status quo. Media reports abound of members of Congress, industry leaders, and lobbyists mongering fear and castigating the Secretary of Defense for undermining national security or abetting "the enemy" as he strives to set DOD on a balanced path.
The 'New Normal' and Meaningful DOD Action
DOD—indeed the greater national security establishment—stands at a crossroads. Yet it is not too late to rethink how we can achieve our strategic objectives without risking economic foreclosure—and challenge Congress to do the same. Under the weight of a far-from-over global recession, U.S. military resourcing will trend downward—a message clearly conveyed by history and the current administration's rhetoric. Embracing this reality and the "new normal" of national security, DOD must exercise bold new thought. Here are a few suggestions to stoke the fires:
- Revamp U.S. government strategic planning procedures. Internecine service warfare to achieve budget preeminence must be relegated to the past. Fiscal reality and operational necessity demand a whole-of-government (W-o-G) approach, but self-imposed interservice/interagency harmony is a bridge too far; cooperation will not be the result of a kumbaya. Disabuse yourself of the notion of rival services and governmental agencies achieving reconciliation through compromise. What is required is a single overarching national-security authority: a disciplined and appropriately empowered National Security Council (NSC) charged with leading the effort and producing a viable W-o-G deliverable. Massive government reorganization is neither required nor desired. What's needed is a reinvigorated and empowered NSC with ultimate responsibility and commensurate authority to bring together senior players from across the government, authorized to speak for and act on the behalf of their respective departments, force cooperation, punish non-participation/non-compliance, and deliver—and maintain—a unified, executable national security plan of action.
Neither novel nor impractical, just such a solution is what the Project on National Security Reform proposes. Sponsored by the Center for the Study of the Presidency, the Project on National Security Reform was created in 2006 to make recommendations for the improvement of the interagency national security system. Following two years of comprehensive study of and interface with the current national security process, the project group issued its report. The guiding coalition for the study "unanimously affirm(ed) that the national security of the United States of America is fundamentally at risk." 6 A focused, centralized national security authority is the entity needed.
- Develop a viable, mutually supporting conventional/IW force balance and appropriate marketing strategy to sell it. Reinforce the enduring necessity of protecting the strategic commons as 21st-century piracy, cyber warfare, and regional hegemony proliferate. For starters, revamp expeditionary strike group (ESG) configuration, training, and deployment to optimize ESG readiness and responsiveness to support both IW and conventional crises. Align and integrate ESG, Naval Expeditionary Combat Command, and Naval Special Warfare IW concept development and training to institute an enduring IW-in-the-littorals strategy. Ensure that future ship and aircraft designs factor in IW/special-operations support requirements. Present Congress a united DOD front and model of IW/conventional interoperability that disrupts congressional two-stepping (i.e., feigning change but foisting real change on a future generation).
- Rethink DOD's boutique approach to research and development (R&D). Too many organizations exist across DOD for development of advanced technologies to be cost-effective or optimally responsive to operational requirements. Largely at fault is an acquisition system overly burdened by bureaucracy and constrained by military-industrial-complex pressures. Think interagency; think 21st-century networking and speed-of-technology. Follow the lead of U.S. Special Operations Command's rapid-exploitation initiative, which comprises a distributed network of operational, technical, and acquisition subject matter experts tasked to identify timely solutions to solve operational problems.
Consider reorienting NASA from a dubious "back-to-the-future lunar program" to overseeing technology incubation for the government. For decades the space program played a huge but largely unsung role in R&D, yielding incredible governmental and commercial spin-offs. There is a definite role for a "new NASA" - revitalizing the U.S. technological edge, achieving efficiencies by integrating full-spectrum R&D, and encouraging industrial partnering without industrial monopolization. Visualize the second- and third-order effects to include reducing national laboratory redundancy, lagging U.S. investment in R&D, and revitalization of higher education.
- Retool the DOD acquisition processes. Bloated, bureaucratic, inequitable, and outdated by the pace of technological development, current acquisition processes are relegated to the dusty-relic status of the rotary telephone. DOD is to the current acquisition process what energy-hungry nations are to OPEC: a hostage. The longer we wait to address the problem, the more costly the solution. The toll of inaction is already clearly visible. Proposals such as the one advanced by the Defense Science Board - create a new Office of the Secretary of Defense organization with a $3-billion annual budget for rapid acquisitions - have merit, but at what cost? Apply a more-is-less mantra and consider U.S. Special Operations Command's "Buy-Try-Decide" model. This capabilities-based acquisition cycle allows the command to (1) purchase, evaluate, and modify technology and systems without being unduly constrained by acquisition processes, and (2) upgrade systems at or near the pace of technology.
Recover, Regroup, Move Forward
A recent article in the Wall Street Journal provides a nose-under-the-tent perspective of the diplomatic, political, and economic realities, give-and-take, and potential consequences of the "new normal." "The Obama administration's scrapping of long-range missile interceptors in Europe wasn't just about security and diplomacy, according to people close to the process: It also came down to money." 7 From a defense perspective, even if the nation recovers from this economic Pearl Harbor, the margins remain too thin for business-as-usual. We must determine to adequately defend the nation under increasingly austere budgets, making the necessary planning and budgeting adjustments quickly enough to avoid compromising tactical, operational, and strategic supremacy.
Failure to recognize the historic nature of the economic crisis of 2008–10 and commit to instituting necessary change is tantamount to cowardice in the face of the enemy. As we work through the current national security crisis, we would be wise to take to heart the words of another Commander-in-Chief who witnessed several seismic events in his lifetime, Harry Truman: "Men make history, and not the other way around. In periods where there is no leadership, society stands still. Progress occurs when courageous, skillful leaders seize the opportunity to change things for the better." 8
1. New York Daily News, 24 September 2008, http://www.nydailynews.com/money .
2. Not intended to predict the future, these studies analyze factors which influence the future environment. Among analyses studied: Joint Operational Environment 2008 (Joint Forces Command, 2008), U.S. Marine Corps Vision & Strategy 2025 (USMC, 2008), and Strategic Appreciation, draft (USSOCOM, 2008).
3. Colin S. Gray, After Iraq: The Search for a Sustainable National Security Policy (Carlisle, Pa.: Strategic Studies Institute, U.S. Army War College, 2009), p. 51.
4. Robert M. Gates, "A Balanced Strategy: Reprogramming the Pentagon for a New Age," Foreign Affairs, Vol. 88, No. 1 (January-February 2009), p. 29. For the Secretary's IW orders, see "Department of Defense Directive 3000.07 - Subject: Irregular Warfare," 1 December 2008.
5. Dwight D. Eisenhower, Farewell Address to the Nation, 17 January 1961, www.americanrhetoric.com/speeches/dwightdeisenhowerfarewell.html .
6. Project for National Security Reform, Forging a New Shield , November 2008, www.pnsr.org/data/files/pnsr_forging_a_new_shield_report.pdf .
7. "Cost Concerns Propelled U.S. Missile Pivot: Obama Decision Is Aimed at Saving Pentagon Funds While Helping Nonproliferation Push; Shift Was Years in the Making," Wall Street Journal, 21 September 2009.
8. Harry S. Truman quotation #3607, www.quotationspage.com/quote/3607.html.
On 4 July, CERN, the European particle physics laboratory near Geneva, Switzerland, grabbed worldwide attention when it announced that it had found a new particle that looked very much like the long-sought Higgs boson. Two teams of scientists working with the Large Hadron Collider (LHC) at CERN—each using one of the two huge particle detectors at the LHC: ATLAS and CMS—reported that all the high-energy atom smashing that they had been doing over the last couple of years had paid off, providing a glimpse of what might well be the missing piece of the standard model of particle physics. “We have reached a milestone in our understanding of nature,” CERN Director General Rolf-Dieter Heuer stated during a seminar held at CERN.
Because the Higgs boson is extremely short-lived, scientists must look for evidence of its existence by recording its decay into various combinations of lighter particles predicted by the standard model. The efforts of the ATLAS and CMS teams—which each included some 3000 physicists—were organized around a complex chain of data analysis and crosschecks. Among those playing a key role in the discovery were many early-career scientists. Science Careers talked to three of them about what it took to get involved in the project and what it was like to be part of such a large and high-profile scientific endeavor.
Christos Anastopoulos, ATLAS
Christos Anastopoulos was first exposed to the ATLAS experiment in 2004 when, as a B.Sc. student at the Aristotle University of Thessaloniki in Greece, he contributed to the construction of the ATLAS muon spectrometer, a part of the machinery that detects electrons' heavier cousins. While working on his master’s degree at Thessaloniki and the first year of his Ph.D. at the University of Sheffield in the United Kingdom, he became involved in the computing side of the ATLAS project, writing and validating algorithms. The LHC was still under construction, so the goal was to put in place the key parts of the software needed to analyze ATLAS data. According to Anastopoulos, the effort worked very well: “Most of the things we did for the discovery were already part of that exercise. We figured them out back then,” he says.
Anastopoulos moved on to developing algorithms to improve the identification and analysis of electrons, which, together with muons, make up a class of particles known as leptons. When the first LHC collisions occurred, at the end of November 2009, Anastopoulos was finally able to work with real particles. When that happened, “the game changed completely,” he says.
Inside the collaboration, those heady early days when real data started rolling in generated as much excitement as the Higgs discovery did 3 years later, Anastopoulos says. As the LHC started ramping up to ever-higher energies, the team's efforts focused on studying the decay of already known particles. Still, “at that time, any data, any peak, any small study … was a huge excitement.”
On 30 March 2010, the LHC reached high enough collision energies for the search for the Higgs to really begin. In the final year of his Ph.D., Anastopoulos joined the search, analyzing electron signatures in the channel corresponding to the predicted decay of the Higgs into four leptons. He also joined the effort to estimate the background in the results coming from the four-lepton decay channel.
In 2011, Anastopoulos became a CERN fellow, essentially a 2-year postdoc. He took on increasingly important roles. He became an electron expert, ensuring that the four-lepton group electron observations were made with high efficiency and were properly taken into account in the data analysis. One year before the discovery, he took on responsibility for background-signal calculations. When the ATLAS discovery paper came out on arXiv at the end of July, he was a corresponding editor.
To avoid bias when tweaking algorithms, the team adopted a strategy of looking at the region where they expected the Higgs to appear at only long, predetermined intervals. When they looked in late 2011, they saw something that looked like the Higgs, but they couldn't be sure because the statistics weren't strong enough yet. When they looked again several months later, confirmation came almost overnight. Anastopoulos was one of a handful of people who did the analysis that revealed the particle's signature in the four-lepton channel about 2 weeks before the July announcement. He was the one who presented the results to the whole Higgs search group. It was an intimidating challenge. “The critical part is not only convincing them that the particle is there, it’s convincing them that everything … was done right and all the plots are right.”
The month that followed was spent “working around the clock to … make every crosscheck, make sure that there was nothing wrong or we didn’t fool ourselves,” Anastopoulos says. The pressure came not just from the time crunch but also from the scrutiny of peers. It wasn’t until the 4 July announcement that he felt relaxed enough to experience the excitement of the discovery.
Now that the discovery is in the bag, Anastopoulos is studying the properties of the new particle in the four-lepton channel to find out whether this is the Higgs boson predicted by the standard model or something more exotic. In October, he was put in charge of coordinating the 102 people involved in this study. The greatest challenge, he says, is collaborating directly with so many people.
When Anastopoulos finishes next summer—his fellowship was extended by 6 months—he plans to apply for a 5-year position at CERN or a tenure-track job in another institution that would allow him to continue working with ATLAS.
His contribution to the discovery of the Higgs boson bodes well for his career. “My generation, we were the lucky ones. We just looked for it for 1 to 2 years and we found it. … We get more visibility and it makes it a little easier for us to find jobs.”
Cristina Botta, CMS
Cristina Botta joined the CMS experiment in 2007, when the detector was still being assembled and many scientists were working on software to prepare for the data acquisition phase. While she was still a master’s degree student at the University of Torino in Italy—and then, after graduating, during a 1-year stint supported by a grant from Italy's Piedmont region—Botta was involved in planning a strategy for analyzing the decay of the Higgs boson into four leptons.
After starting her Ph.D. at Torino, she worked on the detector itself, optimizing it for the detection of muons. In 2010, when CMS started collecting real data from collisions of sufficient energy to start the search, Botta returned to the analysis strategy that she had worked on before, developing it further and using it to search for the Higgs.
Botta finished her Ph.D. in December 2011 and became a CERN postdoctoral fellow this past March. “These were the most interesting months, from March to July,” she says. CMS stopped taking data on 18 June, and in the run-up to the 4 July announcement, “it went so fast and it went so well; it means that really 3000 people were each of them doing their little pieces and everything was working; this was incredible,” she says. Botta’s contribution was especially important because the four-lepton channel was one of the channels with the greatest sensitivity for spotting the Higgs. The day that all of the channels presented their final results to the whole CMS Higgs group, Botta gave the talk for the four-lepton channel team.
Being on the frontlines during the search for the Higgs boson required a great deal of work and resilience, Botta says. “If the data have a problem in the night, you have to be there and to solve it,” she says. She also had to adapt to a very competitive environment. There were about a hundred people in the CMS four-lepton group. In contrast to ATLAS, CMS was further divided into subgroups that competed to present the best analysis strategy. Even within her own subgroup, with the data completely open and no specific tasks assigned, all 30 members of the team competed to produce the best ideas and results. “It is competitive at different levels, but then at the end you have always a moment in which you feel like a group,” she says. When the announcement was made on 4 July, the Higgs group at CMS was “there all together to say to the world, ‘These were our best results,’ and to compare them with ATLAS.”
The analysis strategy that Botta started working on as a master's degree student turned out to have the highest sensitivity in the days leading up to the discovery, she says. “It’s not that you are the only one doing the major work, of course; it is a collaboration of 3000 people working on one experiment,” she says. “But still, you feel that you were there when they discovered it.”
Botta is now studying the properties of the Higgs, but she is looking forward to seeing the LHC upgraded to higher energies, which is expected to be completed in 2015. “We will move from analyzing data to again doing simulation and preparing ourselves for the new era,” she says.
Aaron Armbruster, ATLAS
Aaron Armbruster joined ATLAS in 2007, about 2 years before the LHC started generating data. He was in the final year of his bachelor's degree in physics at the University of Michigan, Ann Arbor, when he was initiated into the analysis of two possible Higgs decay routes. After graduating, he spent a year at CERN contributing to the construction of the ATLAS muon spectrometer.
In 2008, Armbruster came back to Ann Arbor to begin a Ph.D. in physics. In April 2010, just as the LHC was starting to produce high-energy collisions and real data to fuel the search for the Higgs, Armbruster moved back to CERN to pursue his research. He contributed to the analysis of the decay of the Higgs into two particles called W bosons.
In Geneva, he became a statistics expert, contributing to the statistical interpretation of the results and building a common statistics framework for analyses in the two–W boson decay channel. When scientists across all of the different channels pulled together their 2011 and 2012 results, Armbruster was put in charge of determining how statistically significant the signal was. His results showed that the chance that random backgrounds from run-of-the-mill particles would produce a spurious signal as big as the one they were seeing was about 1 in 3 million.
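Armbruster's one-in-three-million figure corresponds to roughly the five-sigma threshold that particle physicists conventionally require before claiming a discovery. As a minimal sketch of that conversion — assuming a simple one-sided Gaussian background model; the actual ATLAS statistical treatment is far more elaborate:

```python
from math import erf, sqrt

def one_sided_p(n_sigma):
    """Probability that a standard-normal background fluctuation
    exceeds n_sigma standard deviations (one-sided tail)."""
    return 0.5 * (1.0 - erf(n_sigma / sqrt(2.0)))

# Five sigma works out to roughly one chance in 3.5 million that
# background alone produced a signal this large.
p_five_sigma = one_sided_p(5.0)
```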
Armbruster says that the size of the collaboration presented some problems for the younger scientists. “The biggest challenges are really when you’re first starting out, because there are so many people and actually it’s hard to find your place in such a huge collaboration.” Young scientists, he says, can go unnoticed. But things got easier as time went on, with “a move from everybody trying to compete to do the same thing to everyone having specialized tasks,” he says.
As for the project itself, it has been “a roller coaster,” Armbruster says. “The ups usually happened when you saw something exciting in the data, something that could potentially be a signal” for the Higgs, he says. But, several times, promising signals disappeared as scientists did the crosschecks or the LHC produced more collisions. “If you collect more and more data you can see more clearly whether you have a signal or if this is just a statistical fluctuation above the background.”
In December 2011, CERN announced that both ATLAS and CMS had seen “[t]antalising hints” of a Higgs signal. But it wasn’t until late June—just a couple of weeks before the 2012 announcement—that one more data set convinced everyone at ATLAS that the signal was real, Armbruster says. “It was very exciting to see that, as then you know that it’s actually something physical in nature that you’re observing and not just some ghost that you’re chasing,” he says. But for him, the most rewarding time was when ATLAS and CMS shared their results with each other on the same day that they announced those results to the rest of the world. Armbruster continues, “We both saw basically the exact same thing. This is a huge, huge confirmation … that we actually did a good job and didn’t make any mistakes.”
Armbruster will finish his Ph.D. within a month. He plans to continue working on ATLAS as a postdoc, but he would like to explore new topics such as supersymmetry and dark matter. After the LHC upgrade, higher-energy collisions should greatly improve the study of the properties of the Higgs boson and may even allow researchers to “create heavier particles that might be just around the corner, that we just barely can’t create right now.”
What’s most fascinating is “the fact that we can make such statements that we do about the nature of the universe … at such small sizes and such high energies,” Armbruster says. “It’s very exciting to be part of this.” | <urn:uuid:e1cfc03e-b2c9-4b07-bae8-57f779fca74d> | CC-MAIN-2015-35 | http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_12_21/caredit.a1200139 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00103-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.977667 | 2,917 | 2.671875 | 3 |
Holistic Game Development with Unity: An All-in-One Guide to Implementing Game Mechanics, Art, Design, and Programming
By Penny de Byl
Focal Press. Copyright © 2012 Elsevier, Inc. All rights reserved.
Chapter One: The Art of Programming Mechanics
Everyone can be taught to sculpt: Michelangelo would have had to be taught how not to. So it is with the great programmers.
In 1979, art teacher Betty Edwards published the acclaimed Drawing on the Right Side of the Brain. The essence of the text taught readers to draw what they saw rather than what they thought they saw. The human brain is so adept at tasks such as pattern recognition that we internally symbolize practically everything we see and regurgitate these patterns when asked to draw them on paper. Children do this very well. The simplicity in children's drawing stems from their internal representation for an object. Ask them to draw a house and a dog and you'll get something you and they can recognize as a house and dog or, more accurately, the icon for a house and dog, but something far from what an actual house and dog look like. This is evident in the child's drawing in Figure 1.1. The title of the book, Drawing on the Right Side of the Brain, also suggests that the ability to draw should be summoned from the side of the brain traditionally associated with creativity and that most bad drawings could be blamed on the left.
Different intellectual capability is commonly attributed to either the left or the right hemispheres. The left side being responsible for the processing of language, mathematics, numbers, logic, and other such computational activities, whereas the right deals with shapes, patterns, spatial acuity, images, dreaming, and creative pursuits. From these beliefs, those who are adept at computer programming are classified as left-brained and artists as right-brained. The segregation of these abilities to either side of the brain is called lateralization. While lateralization has been generally accepted and even used to classify and separate students into learning style groups, it is a common misconception that intellectual functioning can be separated so clearly.
In fact, the clearly defined left and right brain functions are a neuromyth stemming from the overgeneralization and literal isolation of the brain hemispheres. While some functions tend to reside more in one side of the brain than the other, many tasks, to some degree, require both sides. For example, many numerical computation and language activities require both hemispheres. Furthermore, the side of the brain being utilized for specific tasks can vary among people. Studies have revealed that 97% of right-handed people use their left hemisphere for language and speech processing and 70% of left-handed people use their right hemisphere.
In short, simply classifying programmers as left brainers and artists as right brainers is a misnomer. This also leads to the disturbing misconception that programmers are poor at art skills and that artists would have difficulty understanding programming. Programming is so often generalized as a logical process and art as a creative process that some find it inconceivable that programmers could be effective as artists and vice versa.
When Betty Edwards suggests that people should use their right brain for drawing it is in concept, not physiology. The location of the neurons the reader is being asked to use to find their creative self is not relevant. What is important is that Dr. Edwards is asking us to see drawing in a different light—in a way we may not have considered before. Instead of drawing our internalized symbol of an object that has been stored away in the brain, she asks us to draw what we see. To forget what we think it looks like. In the end this symbolizes a switch in thinking away from logic and patterns to images and visual processing.
There is no doubt that some people are naturally better at programming and others at art. However, by taking Edwards' "anyone can draw" attitude, we can also say anyone can program. It just requires a little practice and a change of attitude.
1.2 Programming on the Right Side of the Brain
While it is true that pure logic is at the very heart of all computer programs, it still requires an enormous amount of creativity to order the logic into a program. The process is improved greatly when programmers can visualize the results of their code before it even runs. You may liken this to a scene from The Matrix where the characters look at screens of vertically flowing green numbers and text but can visualize the structure and goings on in a photorealistic, three-dimensional virtual reality. To become a good computer programmer you need to know the language of the code and be able to visualize how it is affecting the computer's memory and the results of running the program.
Umberto Eco, the creator of Opera Aperta, described the concept of art as mechanical relationships between features that can be reorganized to make a series of distinct works. This too is true of programming. The same lines of programming code can be reorganized to create many different programs. Nowhere is this shared art/programming characteristic more obvious than in fractals.
Fractals are shapes made up of smaller self-similar copies of themselves. The famous Mandelbrot set or Snowman is shown in Figure 1.2. The whole shape is made up of smaller versions of itself. As you look closer you will be able to spot tens or even hundreds of smaller snowman shapes within the larger image.
A fractal is constructed from a mathematical algorithm repeated over and over where the output is interpreted as a point and color on the computer screen. The Mandelbrot set comes from complex equations, but not all fractal algorithms require high-level mathematical knowledge to understand.
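The escape-time test that generates images like Figure 1.2 is only a few lines of code. Sketched here in Python for brevity (the book's own examples target Unity); the iteration cap of 50 and the viewing window mentioned in the comment are illustrative choices, not from the text:

```python
def mandelbrot_escape(c, max_iter=50):
    """Iterate z -> z*z + c from z = 0; return the step at which |z|
    exceeds 2 (the point 'escapes'), or max_iter if it never does.
    Points that never escape belong to the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Colouring each pixel by its escape count produces images like Figure 1.2:
# map each pixel (px, py) to a complex c in roughly [-2.5, 1] x [-1.25, 1.25],
# then shade the pixel by mandelbrot_escape(c).
```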
The Barnsley fern leaf is the epitome of both the creative side of programming and the algorithmic nature of art. Put simply, the algorithm takes a shape, any shape, and transforms it four times, as shown in Figure 1.3. It then takes the resulting shape and puts it through the same set of transformations. This can be repeated ad infinitum; however, around 10 iterations of this process give a good impression of the resulting image (see Figure 1.4).
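The four transformations of Figure 1.3 are, concretely, affine maps. The text describes applying all four to a whole shape each iteration; an equivalent and easier-to-code variant, the "chaos game," applies one randomly chosen map per point. A Python sketch using Barnsley's published coefficients (the function name and seed are illustrative):

```python
import random

# Barnsley's four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# each paired with its probability of being chosen. The second map draws
# the successively smaller copies; the others draw the stem and the two
# largest leaflets.
FERN_MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def fern_points(n, seed=0):
    """Generate n points of the fern with the chaos game: start at the
    origin and repeatedly apply one randomly chosen map to the point."""
    rng = random.Random(seed)
    weights = [m[6] for m in FERN_MAPS]
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f, _p = rng.choices(FERN_MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

Plotting the returned points reproduces the self-similar fern of Figure 1.4.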
Creating images with these types of algorithmic approaches is called procedural or dynamic generation. It is a common method for creating assets such as terrain, trees, and special effects in games. Although procedural generation can create game landscapes and other assets before a player starts playing, it comes into its own while the game is being played.
Programming code can access the assets in a game during run time. It can manipulate an asset based on player input. For example, placing a large hole in a wall after the player has blown it up is achieved with programming code. This can only be calculated at the time the player interacts with the game, as beforehand a programmer would have no idea where the player would be standing or in what direction he would shoot. The game Fracture by Day 1 Studios features dynamic ground terrains that lift up beneath objects when shot with a special weapon.
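Terrain-style procedural generation often builds on fractal ideas like those above. Here is a minimal 1D midpoint-displacement sketch in Python, purely illustrative (it is not how Fracture or any particular engine does it): halving the random displacement range at each level keeps the profile self-similar.

```python
import random

def midpoint_displacement(left, right, roughness, depth, rng=None):
    """Generate a 1D terrain height profile by recursively displacing
    midpoints. Each level inserts a jittered midpoint between every
    pair of neighbours, then halves the jitter range."""
    rng = rng or random.Random(0)
    heights = [left, right]
    spread = roughness
    for _ in range(depth):
        nxt = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            nxt += [a, mid]
        nxt.append(heights[-1])
        heights = nxt
        spread /= 2
    return heights
```

The endpoints stay fixed, so the same idea can stitch generated patches together seamlessly.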
1.3 Creating Art from the Left Side of the Brain
Most people know what they like and don't like when they see art. However, if you ask them why they like it they may not be able to put their thoughts into words. No doubt there are some people who are naturally gifted with the ability to draw and sculpt and some who are not. For the artistically challenged, however, hope is not lost. This is certainly Betty Edwards' stance.
A logical approach to the elements and principles of design reveals rules one can apply to create more appealing artwork. They are the mechanical relationships, alluded to by Umberto Eco, that can be used as building blocks to create works of art. These fundamentals are common threads found to run through all good artwork. They will not assist you in being creative and coming up with original art, but they will help in presentation and visual attractiveness.
The elements of design are the primary items that make up drawings, models, paintings, and design. They are point, line, shape, direction, size, texture, color, and hue. All visual artworks include one or more of these elements.
In the graphics of computer games, each of these elements is as important to the visual aspect of game assets as they are in drawings, painting, and sculptures. However, as each is being stored in computer memory and processed by mathematical algorithms, their treatment by the game artist differs.
All visual elements begin with a point. In drawing, it is the first mark put on paper. Because of the physical makeup of computer screens, it is also the fundamental building block of all digital images. Each point on an electronic screen is called a pixel. The number of pixels visible on a display is referred to as the resolution. For example, a resolution of 1024 × 768 is 1024 pixels wide and 768 pixels high.
Each pixel is referenced by its x and y Cartesian coordinates. Because pixels are discrete locations on a screen, these coordinates are always in whole numbers. The default coordinate system for a screen has the (0,0) pixel in the upper left-hand corner. A screen with 1024 × 768 resolution would have the (1023,767) pixel in the bottom right-hand corner. The highest value pixel has x and y values that are one minus the width and height, respectively, because the smallest pixel location is referenced as (0,0). It is also possible to change the default layout depending on the application being used such that the y values of the pixels are flipped with (0,0) being in the lower left-hand corner or even moved into the center of the screen.
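The alternative origins mentioned above differ only by a simple remapping. A Python sketch of the top-left-to-bottom-left flip, using the 1024 × 768 example from the text (the helper name is illustrative):

```python
def flip_y(x, y, height):
    """Convert a pixel coordinate from the default top-left origin
    to a bottom-left origin on a display of the given height."""
    return x, (height - 1) - y

# On a 1024 x 768 display the corner pixels trade places on the y axis:
# (0, 0) becomes (0, 767), and (1023, 767) becomes (1023, 0).
```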
On paper, a line is created by the stroke of a pen or brush. It can also define the boundary where two shapes meet. A line on a digital display is created by coloring pixels on the screen between two pixel coordinates. Given the points at the ends of a line, an algorithm calculates the pixel values that must be colored in to create a straight line. This isn't as straightforward as it sounds because the pixels can only have whole number coordinate values. The Bresenham line algorithm was developed by Jack E. Bresenham in 1962 to effectively calculate the best pixels to color in to give the appearance of a line. Therefore, the line that appears on a digital display can only ever be an approximation to the real line as shown in Figure 1.7.
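A common integer-only formulation of Bresenham's idea is sketched below in Python (the exact bookkeeping varies between presentations of the algorithm): an error accumulator decides, at each step, whether to advance along x, along y, or both, so only whole-number pixel coordinates are ever produced.

```python
def bresenham(x0, y0, x1, y1):
    """Return the list of pixel coordinates approximating the straight
    line from (x0, y0) to (x1, y1), endpoints included."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # combined error term for both axes
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy
    return points
```

The staircase of pixels it returns is the best whole-number approximation to the true line, as Figure 1.7 illustrates.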
A shape refers not only to primitive geometrics such as circles, squares, and triangles, but also to freeform and nonstandard formations. In computer graphics, polygons are treated as they are in geometry: a series of points called vertices connected by straight edges. By storing the coordinates of the vertices, the edges can be reconstructed using straight line algorithms. A circle is often represented as a regular polygon with many edges. As the number of edges increases, a regular polygon approaches the shape of a circle.
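That convergence of a regular polygon to a circle is easy to verify numerically. A Python sketch (vertex layout and radius are illustrative): the perimeter of an n-sided polygon inscribed in a circle approaches the circumference 2πr as n grows.

```python
import math

def regular_polygon(n, radius=1.0):
    """Vertices of a regular n-sided polygon centred on the origin."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def perimeter(vertices):
    """Sum of the straight edge lengths joining consecutive vertices."""
    return sum(math.dist(vertices[i], vertices[(i + 1) % len(vertices)])
               for i in range(len(vertices)))
```

With radius 1, a hexagon's perimeter is exactly 6, while a 360-sided polygon's is already within a ten-thousandth of 2π.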
Freeform objects involve the use of curves. To be stored and manipulated by the computer efficiently, these need to be stored in a mathematical format. Two common types of curves used include Bezier and nonuniform rational basis spline (NURBS).
A Bezier curve is constructed from a number of control points. The first and last points specify the start and end of the curve and the other points act as attractors, drawing the line toward them and forming a curve, as shown in Figure 1.8. A NURBS curve is similar to a Bezier curve in that it has a number of control points; however, the control points can be weighted such that some may attract more than others.
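Bezier evaluation is usually implemented with de Casteljau's algorithm, which makes the "attractor" behaviour described above visible: interior control points pull the curve toward themselves without the curve ever passing through them. A Python sketch (the control-point values are illustrative):

```python
def bezier_point(controls, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] with de
    Casteljau's algorithm: repeatedly blend neighbouring control
    points until a single point remains."""
    pts = list(controls)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# End points (0, 0) and (3, 0), with one attractor above the middle:
ctrl = [(0.0, 0.0), (1.5, 2.0), (3.0, 0.0)]
```

At t = 0.5 this example curve reaches y = 1.0, pulled toward but not through the control point at y = 2.0.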
In computer graphics, a polygon is the basic building block for objects, whether in 2D or 3D. A single polygon defines a flat surface onto which texture can be applied. The most efficient way to define a flat surface is through the use of three points; therefore, triangles are the polygon of choice for constructing models, although sometimes you will find square polygons used in some software packages. Fortunately for the artist, modeling software such as Autodesk's 3DS Studio Max and Blender do not require models to be handcrafted from triangles; instead they automatically construct any objects using triangles as a base as shown in Figure 1.9.
Excerpted from Holistic Game Development with Unity by Penny de Byl Copyright © 2012 by Elsevier, Inc.. Excerpted by permission of Focal Press. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
"The Novel as the Educator of the Imagination." by Miss May Rogers.
Publication: Eagle, Mary Kavanaugh Oldham, ed. The Congress of Women: Held in the Woman's Building, World's Columbian Exposition, Chicago, U. S. A., 1893. Chicago, Ill: Monarch Book Company, 1894. pp. 586-589.
Edgar Fawcett says:
"We, who write novels for existing time,
Should face our task with fortitude sublime.
Twice daily now we hear our critics mourn
The unpleasant fact that we were ever born."
MISS MAY ROGERS.
In our Dubuque library last year, out of a circulation of over twenty-five thousand books, over nineteen thousand were juveniles and fiction. The report of the Chicago public library for 1892 states that over forty-two per cent of the circulation was English prose fiction, and over twenty per cent was juvenile literature. The Nineteenth Century Magazine for June, 1893, states that the per cent of fiction in the Battersea free libraries of England was four-fifths of the circulation. But of a circulation of five millions in the Boston public library, extending over five years, four-fifths of the books were juvenile and fiction.
Novels are the amusement and refreshment of our practical, overworked, overwrought age. Even children tire of monotony and seek the fairies. Novels are read by those who read no other books, and they are also the recreation of scholars and thinkers. Charles Darwin said they rested him. As long as age cherishes tender memories, and as long as love is the dream of youth, romance will be the most fascinating literature. A description of all the novels now being read would be a mirror of the multiform modern mind. Any human interest is a legitimate theme for the novelist, and it is as useless to dogmatize about the sphere of the novel as it is useless to dogmatize about the sphere of woman. There are novels for those who admire philosophic analysis, and for those who want exciting adventures on land and on sea, and also for those who ask that their love stories shall give information about history, science, reform, theology and politics. Harriet Martineau wrote Political Economy in the story form, and I am surprised that there was not a tariff novel during the last campaign.
In this age of the telegram and the paragraph, the novelist who wishes to be read must be brief as well as brilliant. Tourgeneff's method was to condense and to concentrate. Guy De Maupassant made the short story popular in France by his genius in eliminating the superfluous. His thirteen short tales, published as the Odd Number, are masterpieces of concise but artistically adequate treatment. Our American novelists have been most artistic as writers of short stories, whether we judge the result by effectiveness of story telling, or keenness of character sketching or carefulness in literary construction. In the long list of our successful writers of short stories there has been no discrimination against our sex in the awards of honor. Mrs. Jewett's art is so finished that Howells compares her with Maupassant, to her advantage. I think Miss Woolsen's finest story is her short novel, "For the Major," which has a touch of ideal grace. New England has her Mary Wilkin, and we in Iowa are proud of our Octave Thanet, who spoke at the literary congress of the American flavor of our short stories. International novels have the charm of cosmopolitan culture, but they are not contributions to a distinctive national literature, which must be written from an American point of view about the characteristics of our people, with their local atmosphere. The late Sidney Lanier delivered a series of lectures on the development of the English novel at Johns Hopkins University in 1881. He believed that the novel, modern music and modern science are the simultaneous expressions of the growth of individuality in man. Richardson, the founder of the English novel, was born in 1689; the musician, Sebastian Bach, in 1685, and the scientist, Newton, in 1642. Thus being born in the same half century, he regards them as contemporary results of the Renaissance.
He argues that man's desire to have individual knowledge of his physical environments produced the scientist, man's desire to utter his individual emotions toward the Infinite gave us the modern art and artist of music; man's desire to know the life of his fellow-man resulted in the novel. The drama was inadequate for portrayal of the minute complexities of modern personalities. The novelist succeeded the chorus, and the novel was evolved out of the classic and Elizabethan dramas. Before the printing press the multitudes were entertained and instructed by the theater. The reading public of today studies the story of human life. With the progress of the democratic idea of the rights of man has grown a sense of the kinship of men. In England the novel of individual traits, of manners and domestic life, with an avowed or implied moral motive, began with Richardson's Pamela in 1740, and in this field of fiction the English novel is unrivaled. In his history of European morals, Mr. Lecky charges man's intolerance to feeble imagination, which prevents him from understanding people of a different religion, pursuit, age, country, or temperament from his own. He claims that men tortured in the past and persecute today because they are too unimaginative to be tolerant or just. What they can not realize they believe to be evil, and he says that this "power of realization forms the chief tie between our moral and intellectual natures." We think that only those who are intentionally cruel would continue to inflict pain if they knew the suffering they caused. He concludes that the "sensitiveness of a cultivated imagination" makes men humane and tolerant. Thus imaginative literature is a civilizer when it develops tolerance through sympathy.
The hesitancy of writers in other branches of literature to grant the importance of the novel is due to their failure to see that it is the popular educator of the imagination. George Eliot said: "If art does not enlarge man's sympathies, it does nothing morally, and the only effect I ardently long to produce by my writings is, that those who read them should be better able to imagine and feel the pains and joys of those who differ from themselves in everything but the broad fact of being struggling, erring human creatures."
What novelists have done to help mankind is incalculable. Imprisonment for debt is now so hateful to us that Dickens' "Little Dorrit" seems a story of a forgotten past. Charles Reade struck heavy blows at abuses in prisons, insane asylums and trade unions in his "Never Too Late to Mend," "Hard Cash," and "Put Yourself in His Place." The People's Palace in London is the result of Walter Besant's "All Sorts and Conditions of Men," and the sorrows of the poor and the oppressed everywhere are told in our novels. It is impossible to measure how much of the preparatory work of emancipation was due to and done by "Uncle Tom's Cabin."
As human nature is the inspiration of literature, characters of a novel must be natural to be of any literary value, and of this anyone can judge who has had the ordinary experience of life. There is so much fiction written only for sensational excitement, and there are tales of silly sentimentality which can justly be called trash. Mature, busy people often feel that it is a waste of time to read of phenomenally gifted heroes and supernaturally beautiful heroines who keep their lovers in awful suspense until the wedding bells of the last chapter. Novels devoted to expert testimony in the art of kissing are unnecessary, and it will always be an experimental science.
John Morley defines literature as the books "where moral truth and human passion are touched with a certain largeness, sanity and attraction of form." A novel has not sanity unless it is true to the probabilities of conduct and represents the passions of love in its ratio to the other interests of life. The "attraction of form" can not be imprisoned in a definition any more than a woman's charms can be described by an adjective. Its presence is the author's diploma of style, his degree of master in the service of beauty. The French are the successors of the Greeks in the arts, and their literary technique makes their fiction supreme in the "attraction of form" and in description of human passion, but it seldom has the largeness that considers responsibility as well as passion. A novel is written "with a certain largeness" when we are shown passion not only in its relation to individuals, but also to their social environment and to our universal humanity. This largeness was the greatness of George Eliot.
While foreign fiction may have an emotional and artistic fascination, we cherish our English novels for more reasons than those of entertainment. It is the history of the manners and customs and daily life of the English speaking people here and in the mother country. On its pages are recorded all our current thoughts and debates, and all our dreams and despairs. It tells of the happiness of love, and of the anguish of bereavement, of secret wrestling with temptation and of the weary questioning of the mysteries of life and death. It also reflects the moral force and philanthropy of our race, which is striving to make tomorrow nobler than today.
We are fortunate to live in this blessed modern age when electric science writes the minds of men, and when the spirit's subtle sympathy makes us one in heart. What Mrs. Browning says of the poet is true of the novelist:
"And triumph of the poet, who would say
A man's mere 'yes' a woman's common 'no,'
A little human hope of that or this,
And say the word so that it burns you through.
With special revelation speaks the heart
Of all the men and women in the world."
Miss May Rogers is a native of Dubuque, Iowa. Her parents were Thomas Rogers, of New York, a distinguished lawyer, scholar and orator, and Anna Burton Rogers, of Delaware. She was educated in the public schools and by private instructors in Greek, Latin and French. She has traveled in Europe and extensively in the United States, having lectured in New York City, New Orleans, Washington, Cheyenne, San Francisco and Des Moines. Miss Rogers is one of the Board of Directors of the General Federation of Women's Clubs and is a Daughter of the American Revolution. She was president of the Dubuque Ladies' Literary Association for several years. Miss Rogers is a descendant of Dutch and Huguenot ancestors, who emigrated, that they might enjoy religious liberty. Her literary works are the Waverly Dictionary of the characters in Scott's novels, newspaper editorials, lectures and reviews. Her profession is journalism and lecturing. Her postoffice address is No. 547 Locust Street, Dubuque, Iowa.
This chapter has been put on-line as part of the
BUILD-A-BOOK Initiative at the
Celebration of Women Writers.
Initial text entry and proof-reading of this chapter were the work of volunteer | <urn:uuid:197d6d7b-8510-438c-92f8-b2374223e884> | CC-MAIN-2015-35 | http://digital.library.upenn.edu/women/eagle/congress/rogers.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645281325.84/warc/CC-MAIN-20150827031441-00042-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.969137 | 2,316 | 2.59375 | 3 |
Advantages of Natural Gas
What Is Natural Gas?
Natural gas (NG) is a mixture of gases that was formed from fossil remains of plants and animals buried deep below the Earth’s surface; it is therefore a fossil fuel. All fuels exist in a liquid, gaseous, or solid state. Coal is a solid, while oil and gasoline are liquids. When a liquid reaches its boiling temperature, it becomes a gas. Although we use the word “gas” for gasoline, gasoline is not actually in the gas state but is a liquid. Natural gas, on the other hand, is truly a “gas.” Natural gas is composed primarily of methane, although it also contains ethane, propane, and traces of other gases. Depending on where it is extracted, it varies between 87% and 96% methane, with about 1.5% to 5% ethane and 0.1% to 1.5% propane. Methane itself is an odorless and colorless gas. But if you have smelled natural gas, you have noticed it has a nasty rotten-egg smell. Gas companies add a chemical called mercaptan to the gas to give it this unpleasant smell. The reason for adding this smelly chemical is our safety: breathing too much natural gas can be deadly, and the smell alerts us if there is a leak. There are a number of advantages of natural gas over other fossil fuels, as the following information points out.
Natural Gas is Better For, And Friendlier To Our Environment
One of the primary advantages of natural gas over other fossil fuels is that it is much friendlier to the environment: it emits fewer pollutants into the atmosphere when burned. In fact, NG is the cleanest-burning fossil fuel in use today. For the same amount of heat produced, natural gas emits 30% less carbon dioxide than burning oil and 45% less carbon dioxide than burning coal, thereby improving the quality of the air. The primary products that result from the combustion of natural gas are carbon dioxide and water vapor. This is exactly what we release when we breathe.
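The 30% and 45% figures can be sanity-checked against published CO2 emission factors per unit of heat. The factors below (kg CO2 per million Btu) are approximate EPA-style values supplied here for illustration; they are not from the original article:

```python
# Approximate CO2 emission factors, kg CO2 per million Btu of heat
# (roughly EPA values: natural gas ~53, fuel oil ~74, bituminous coal ~95).
FACTORS = {"natural_gas": 53.1, "fuel_oil": 74.2, "coal": 95.3}

def reduction_vs(fuel):
    """Fractional CO2 reduction from using natural gas instead of `fuel`
    for the same heat output."""
    return 1.0 - FACTORS["natural_gas"] / FACTORS[fuel]

print(f"vs oil:  {reduction_vs('fuel_oil'):.0%}")  # roughly 28%
print(f"vs coal: {reduction_vs('coal'):.0%}")      # roughly 44%
```

The computed reductions (about 28% versus oil and 44% versus coal) line up with the article's rounded 30% and 45% claims.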
Many of the pressing environmental problems we have been fighting are a result of the dirtier burning fossil fuels. Since natural gas is the cleanest burning of all the fossil fuels, there are many ways it can be used in our society to diminish the pollutants that are emitted into our atmosphere. The more we switch to these applications, the more we remove those harmful particulates from our environment.
The reason other fossil fuels release more harmful particles into the atmosphere is their molecular composition. If you use natural gas to cook, you will notice that when you burn it, unlike other petroleum products, it doesn’t produce any smoke. Coal and oil contain more complex molecules, which have a higher carbon ratio. They also have a higher content of nitrogen and sulfur. That’s why they release higher levels of harmful emissions into our environment. In addition, coal and fuel oil release particles of ash into the atmosphere. These are what contribute to pollution. On the other hand, natural gas releases very small amounts of sulfur dioxide and nitrogen oxides, and essentially no ash or particulate matter. It also releases much lower levels of carbon dioxide, carbon monoxide, and other hydrocarbons.
Here are some other environmental benefits of using natural gas. When producing electricity, natural gas powered plants use about 60% less water than coal plants and 75% less water than nuclear power plants for the same electrical output. Additionally, natural gas power plants require the least amount of land per megawatt of production versus renewable energy sources. Wind and solar both require twenty times more land to power the same number of homes as natural gas power plants.
Natural gas used for power production avoids many of the pitfalls facing wind, solar, nuclear, and biofuel power generation technologies. Those include visual impact, waste disposal, bird strikes and competing land uses.
In the production of electricity, natural gas responds more quickly than any other generation source to fluctuations in U.S. consumer electricity demand. In fact, natural gas powered generation reliably backs up wind-generated electricity when the wind doesn’t blow and solar-generated electricity when the sun doesn’t shine.
Natural gas burns very hot
Natural gas also gives off a lot of heat as it burns very hot. This property of natural gas makes it good for heating homes and for cooking. NG has a high heating value of 24,000 Btu per pound. As a result, more than 60 million, or about half of the homes in the United States are heated by natural gas. Here are some of the household applications of natural gas:
· Clothes Dryers
· Water Heaters
· Fireplace Logs
· Patio Heaters
· Pool and Spa Heaters
· Fire Pits
· Outdoor Lights
Natural gas is easier and safer to store
Natural gas is much safer to store than other fossil fuels. As a result, it is a very efficient source of energy for heating as well as for generating electricity. Natural gas can be stored in liquid form in above-ground tanks, but a majority of the reserves in the U.S. are stored underground. Most of the storage in the United States is in depleted natural gas or oil fields located near consumption centers, which are used because of their wide availability. Converting a field from production to storage works quite well because operators can take advantage of existing wells, gathering systems, and pipeline connections.
Natural gas is very reliable
This gas is also very reliable. Unlike electricity in many areas, natural gas is buried beneath the ground. Therefore when storms hit, the delivery of natural gas is seldom interrupted. That is especially important in extreme cold, where electrical power is often knocked out.
Natural gas is a cost-effective fuel
When gasoline reached well over $4.00 per gallon, natural gas was close to the $2.00 per equivalent gallon price. The law of supply and demand kicks in. With new technology, shale natural gas, which is very prevalent in the United States, has become a popular alternative. With the price so affordable, it pays for many trucking companies to convert their fleets over to natural gas from diesel. With auto makers teaming up with Westport engine manufacturing, we should see an increase in natural gas trucks for sale. Even taxi cab companies are converting their gasoline cars to natural gas.
There is an abundance of Natural Gas
We now have more than a 100 year supply of clean burning natural gas that we didn’t even know about just a few short years ago. Plus, most of the natural reserves of natural gas fields are still underutilized. With shale natural gas playing into the picture, our supply has become huge. It is estimated that the Marcellus Shale alone, which is in the Appalachian Basin, contains close to 50 trillion cubic feet of natural gas that is recoverable. The Marcellus Shale underlies a good portion of Pennsylvania, New York, West Virginia, Ohio and adjacent states. The Marcellus natural gas supply is also in close proximity to the high demand markets of New York, New Jersey and New England. There are a number of other natural gas reservoirs scattered around other parts of the country. These shale natural gas reservoirs are now all accessible through the technologies of horizontal drilling and hydraulic fracking, which is a safe method of extracting natural gas.
Domestic production of natural gas lessens our dependency on foreign oil
As the production of domestic fuel supplies increases, it loosens the noose that foreign oil has around our necks and makes us much less dependent upon them. The less dependent we are on imported fuel, the better off we are. By being more energy independent, it brings down the cost of foreign sources of energy. It also creates a lot more jobs in our own economy. God knows we need that.
Natural gas is a means for improved national security
You may wonder what national security has to do with the production of natural gas. Actually, it can have huge national security implications. When we become more and more dependent on foreign sources for our energy, we not only become a slave to the prices that are demanded, but if they choose to shut off our supply, it could cripple our power. The sooner we become energy independent the better. Not only is the supply of natural gas important in light of national security, but the production of domestic oil as well. Although this is only one of the many advantages of natural gas, it is (or should be) high on the priority list.
Economic Benefits of Natural Gas
The U.S. supply of natural gas is enhancing the global competitiveness of the United States. It is creating tens of thousands of well-paying jobs while expanding our nation’s capacity for generating a cleaner burning fuel and by generating affordable electric power. More than 85% of all new electrical generation built in the U.S. in the last decade uses natural gas to generate that electricity.
According to the Natural Gas Supply Association (NGSA), activities related to the development of natural gas contributed $385 billion to the U.S. economy in 2008. Between 2005 and 2010, the U.S. government received royalty payments by the natural gas industry in the amount of $4.4 billion.
In the middle of the economic downturn in America, many areas of the country have been revived by surging shale gas production. These areas include Pennsylvania, West Virginia, Montana, Arkansas, and North Dakota. According to IHS Global Insight, the natural gas industry directly employs 622,000 Americans, and another 2.2 million are employed in supporting industries.
As you can see, there are many advantages of natural gas. There are some challenges and some obstacles to natural gas moving forward. As with most areas of life, when there are advantages, there are also disadvantages. It’s always best to look at both sides before drawing any final conclusions. I encourage you to also read up on the disadvantages of natural gas. | <urn:uuid:b055ff3e-08f7-4710-9550-439e149bd1ac> | CC-MAIN-2015-35 | http://www.advantagesofnaturalgas.net/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645305536.82/warc/CC-MAIN-20150827031505-00277-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.942579 | 2,073 | 3.375 | 3 |
Contextual notes on Yeats' "Purgatory"
The play was first produced at the Abbey Theatre on August 19, 1938. In a letter of March 1938, Yeats wrote,
"I have a one-act play in my head, a scene of tragic intensity . . . . I am so afraid of that dream. My recent work has greater strangeness and I think greater intensity than anything I have done. I never remember the dream so deep."
In an interview published in the Irish Independent in August 1938, Yeats answered questions posed by an American Jesuit priest:
"Father Connolly said that my plot is perfectly clear but that he does not understand my meaning. My plot is my meaning. I think the dead suffer remorse and re-create their old lives just as I have described. There are medieval Japanese plays about it, and much in the folklore of all countries.
"In my play, a spirit suffers because of its share, when alive, in the destruction of an honoured house; that destruction is taking place all over Ireland today. Sometimes it is the result of poverty, but more often because a new individualistic generation has lost interest in the ancient sanctities.
"I know of old houses, old pictures, old furniture that have been sold without apparent regret. In some few cases a house has been destroyed by a mesalliance. I have founded my play on this exceptional case, partly because of my interest in certain problems of eugenics, partly because it enables me to depict more vividly than would otherwise be possible the tragedy of the house.
"In Germany there is special legislation to enable old families to go on living where their fathers lived. The problem is not Irish, but European, though it is perhaps more acute here than elsewhere."
"Purgatory," when it was first performed, had a substantial number of detractors, people who were offended by what seemed to them a lack of respect for religion. But the controversy was short-lived; Yeats wrote to a friend in September 1938 that "Most people seem to be on our side and the daily newspapers had leaders [i.e., leading articles] in our support."
The critic Richard Ellmann has noted the "clipped rhythm and intentionally awkward syntax which Auden had made available for confiscation" and the Auden-like abruptness of the minor characters' speeches.
Many of the ideas and images in the play had already been expressed, in various ways, in Yeats' poetry and other works. For example, he emphasizes elsewhere the momentous importance that one choice can make; he wrote in "If I were Four-and-Twenty" that "a single wrong choice may destroy a family, dissipating its tradition or biological force, and the great sculptors, painters, and poets, are there that instinct may find its lamp."
The idea of archetypes is also behind what's at work in this play. According to Yeats, there is an experience of "timeless individuality" that "contains archetypes of all possible existences whether of man or brute, and as it traverses its circle of allotted lives, now one, now another prevails. We may fail to express an archetype or alter it by reason, but all done from nature is its unfolding into time. Some other existence may take the place of Socrates, yet Socrates can never cease to exist . . . Plotinus said that we should not 'baulk at this limitlessness of the intellectual; it is an infinitude, having nothing to do with number or part'; yet it seems that it can at will re-enter number and part and thereby make itself apparent to our minds. If we accept this idea many strange or beautiful things become credible . . . All about us there seems to start up a precise inexplicable teeming life, and the earth becomes once more, not in rhetorical metaphor, but in reality, sacred."
At the end, the Old Man utters a prayer for release from his sins. It's an open question whether this prayer really constitutes a way out of the typically Yeatsian cyclic repetition. The Old Man says,
"But there's a problem: she must live
Through everything in exact detail,
Driven to it by remorse, and yet
Can she renew the sexual act
And find no pleasure in it, and if not,
If pleasure and remorse must both be there,
Which is the greater?"
The prayer may provide the only solution, however much it may contain of Yeats' own despair, which is evidenced in the following passage:
"Unless there is a change in the public mind, every rank above the lowest must degenerate, and as inferior men push up into its gaps, degenerate more and more quickly. The results are already visible in the degeneration of literature, newspapers, amusements, and, I am convinced, in benefactions like those of Lord Nuffield, a self-made man, which must gradually substitute applied science for ancient wisdom."
This sense of ominous uncertainty represents a move on Yeats' part away from earlier moments in which he had bolstered his pessimism with a belief that the worst could be overcome by a willed assertion of joy (as he suggests, for example, in the poem "Lapis Lazuli").
In "Purgatory" he wants his audience to realize the dangers of what he perceived as a degradation of the human stock and soul, to understand, and, perhaps, to try to stop the process by "a change in the public mind."
Source of the information above: A Commentary on the Collected Plays of W. B. Yeats, ed. A. Norman Jeffares and A. S. Knowland (Stanford: Stanford Univ. Press, 1973).
Note also what Yeats said to his friend the writer Dorothy Wellesley, as documented by her:
"He had been talking rather wildly about the after life. Finally I asked him: 'What do you believe happens to us immediately after death?' He replied: 'After a person dies he does not realize that he is dead.' I: 'In what state is he?' W.B.Y.: 'In some half-conscious state.' I said: 'Like the period between waking and sleeping?' W.B.Y.: 'Yes.' I: 'How long does this state last?' W.B.Y.: 'Perhaps some twenty years.' 'And after that' I asked, 'what happens next?' He replied: 'Again a period which is Purgatory. The length of that phase depends upon the sins of the man when upon this earth.' And then again I asked: 'And after that?' I do not remember his actual words, but he spoke of the return of the soul to God. I said: 'Well, it seems to me that you are hurrying us back to the great arms of the Roman Catholic Church.' He was of course an Irish Protestant. I was bold to ask him, but his only retort was his splendid laugh" (qtd. in Helen Vendler, Yeats' Vision and the Later Plays 195).
As Helen Vendler observes, there seems something comic about this exchange--the possibility that Yeats is just drawing the number 20 years out of a hat, plus Wellesley's own rather naive characterization of Yeats as "Irish Protestant" as if that could explain him! But what can explain his message in this play?
Eliot thought its "theology" was lacking. In an essay on Yeats, he said, "The play Purgatory is not very pleasant, either . . . I wish he had not given it this title, because I cannot accept a purgatory in which there is no hint, or at least no emphasis on Purgation." And elsewhere, Eliot wrote, "Mr. Yeats' 'supernatural world' was the wrong supernatural world. It was not a world of spiritual significance, not a world of real Good and Evil, of holiness or sin, but a highly sophisticated lower mythology" (qtd. in Vendler 196).
But not even the distinguished critic Eliot gets the last word, of course. Others have thought this play is successful. The levels on which it works include the way it serves as an allegory for the fallen state of the Anglo-Irish aristocracy, which (in Yeats' view) had become "contaminated" through its association with lower classes; and the way in which it seems to foreshadow the senseless "purgations" of World War II.
We can turn to one more comment by Yeats to see that he might have intended a link between purgation and the very act of creation. "This earth-resembling life is the creation of the image-making power of the mind, plucked naked from the body, and mainly of the images in the memory . . . Like the transgressions, all the pleasure and pain of sensible life awaken again and again, all our passionate events rush up about us and not as seeming imagination, for imagination is now the world." That last phrase, "imagination is now the world," might as well be describing the moment of artistic creation (see Vendler 197).
Finally some commentary from Harold Bloom, who is not a fan of the play.
"Yeats intended Purgatory to stand at the end of his last volume, which he knew would be published posthumously [the volume is called On the Boiler]. . . . Yeats himself insisted that the play expressed his own conviction about this world and the next. It is, then, the poet's deliberate testament, the work in which [he] passes a Last Judgment on himself. We turn to the play expecting to encounter the wisdom and the human powers developed through a lifetime of imaginative effort. What do we find?"
In short, Bloom finds a play that renders an "exercise in practical eugenics" (i.e., the father's murder of the son, which ends the faulty genetic line). "If the poet's conviction about this world is in the play, it would seem that the old wanderer acts for Yeats in preventing 'the multiplication of the uneducatable masses' [quoting from a prose passage in On the Boiler]. That leaves the poet's conviction about the next world, if a reader is still minded to seek enlightenment from this testament. The somewhat more aesthetic purgation outlined in [Yeats' big mystical work] A Vision has little to do with the notion of purgation in the play, and apologists for the play have been driven to strained allegories to justify the play's apparent conviction as to the next world. The next world, toward the end, looked to Yeats like a cyclic repetition of this one, and so the lustful begetting of the murderous wanderer is doomed always to be re-enacted, despite the wanderer's violence and his anguished prayer. Whether or not Yeats fully intended it, the closing prayer is simply inaccurate and becomes an irony, for the actual repetition in the play is not one of remorse, but of fierce pleasure, of lust fulfilled and yet again fulfilled. The old wanderer anticipates the irony, saying of his parents' repeated sexual act: 'If pleasure and remorse must both be there, / Which is the greater?' but he cannot answer the question, and neither can Yeats, whose confusion is in the play as much as his conviction is."
Bloom's major complaint is that "Yeats is not separate enough from the old man's rage to render the play's conclusion coherent. That hardly makes the play less powerful," he says, "but perhaps we ought to resent a work that has so palpable a design upon us. Eugenic tendentiousness is not a formula for great art, even in Yeats." | <urn:uuid:657206d9-939f-418e-98c8-4b894c3ea61a> | CC-MAIN-2015-35 | http://www.ibiblio.org/sally/Purgatory_notes.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.970969 | 2,438 | 2.796875 | 3 |
The Norse Discovery of America, by A.M. Reeves, N.L. Beamish and R.B. Anderson, at sacred-texts.com
IT will be remembered that a passage in the Book of Settlement [Landnamabok] recites the discovery, by one Ari Marsson, of a country lying westward from Ireland, called White-men's-land, or Ireland the Great. This White-men's-land is also mentioned in the Saga of Eric the Red, and in both places is assigned a location in the vicinity of Wineland the Good. Many writers have regarded this White-men's-land as identical with a strange country, the discovery of which is recounted in the Eyrbyggja Saga, having been led to this conclusion, apparently, from the fact that both unknown lands lay to the "westward," and that there is a certain remote resemblance between the brief particulars of the Eric's Saga and the more detailed narrative of Eyrbyggja.
It is related in the Eyrbyggja Saga that a certain Biorn Asbrandsson became involved in an intrigue with a married woman named Thurid, which resulted in his wounding the affronted husband and slaying two of the husband's friends, for which he was banished from Iceland for the term of three years. Biorn went abroad, led an adventurous life, and received the name of "kappi" [champion, hero] on account of his valorous deeds. He subsequently returned to Iceland, where he was afterwards known as the Broadwickers'-champion. He brought with him on his return not only increase of fame,
but the added graces of bearing due to his long fellowship with foreign chieftains, and he soon renewed his attentions to his former mistress. The husband, fearing to cope alone with so powerful a rival, invoked the aid of one skilled in the black art to raise a storm, which should overwhelm the object of his enmity. The hero, however, after three days of exposure to the preternaturally-agitated elements, returned exhausted, but in safety, to his home. The husband then prevailed upon his powerful brother-in-law, the godi Snorri, to come to his assistance, and as a result of Snorri's intervention, Biorn agreed to leave the country. He accordingly rode "south, to a ship in Laga-haven, in which he took passage that same summer, but they were rather late in putting to sea. They sailed away with a north-east wind, which prevailed far into the summer, but nothing was heard of this ship for a long time afterwards."
Further on in the same saga we read of the fortuitous discovery of this same Biorn by certain of his fellow-countrymen, and as the account of their strange meeting contains the sole description of this unknown land, it may best be given in the words of the saga. "It was in the latter days of Olaf the Saint that Gudleif engaged in a trading voyage westward to Dublin, and when he sailed from the west it was his intention to proceed to Iceland. He sailed to the westward of Ireland, and had easterly gales and winds from the northeast, and was driven far to the westward over the sea and toward the southwest, so that they had lost all track of land. The summer was then far spent, and they uttered many prayers that they
might be permitted to escape from the sea, and it befell thereupon that they became aware of land. It was a great country, but they did not know what country it was. Gudleif and his companions determined to sail to the land, for they were weary with battling with the tempestuous sea. They found a good harbour there, and they had been alongside the land but a short time when men came toward them. They did not recognize a single man, but it rather seemed to them that they were speaking Irish; soon so great a throng of men had drawn about them that they amounted to several hundreds. These people thereupon seized them all and bound them, and then drove them up upon the land. They were then taken to a meeting, at which their case was considered. It was their understanding that some [of their captors] wished them to be slain, while others would have them distributed among the people and thrown into bondage. While this was being argued they descried a body of men riding, and a banner was carried in their midst, from which they concluded that some manner of chieftain must be in the company; and when this band drew near they saw a tall and warlike man riding beneath the banner; he was far advanced in years, however, and his hair was white. All of the people assembled bowed before this man and received him as he had been their lord; they soon observed that all questions and matters for decision were submitted to him. This man then summoned Gudleif and his fellows, and when they came before him he addressed them in the Northern tongue [i. e., Icelandic], and asked them to what country they belonged. They
responded that they were, for the most part, Icelanders. This man asked which of them were the Icelanders. Gudleif then advanced before this man, and greeted him worthily, and he received his salutations graciously, and asks from what part of Iceland they came, and Gudleif replied that he came from Borgarfirth. He then enquired from what part of Borgarfirth he came, and Gudleif informs him. After this he asked particularly after every one of the leading men of Borgarfirth and Breidafirth, and in the course of the conversation he asked after Snorri Godi and Thurid, of Froda, his sister, and he enquired especially after all details concerning Froda, and particularly regarding the boy Kiartan, 1 who was then the master at Froda. The people of the country, on the other hand, demanded that some judgment should be reached concerning the ship's crew. After this the tall man left them, and called about him twelve of his men, and they sat together for a long time in consultation, after which they betook themselves to the [general] meeting. Thereupon the tall man said to Gudleif and his companions: 'We, the people of this country, have somewhat considered your case, and the inhabitants have given your affair into my care, and I will now give you permission to go whither ye list; and even though it may seem to you that the summer is far spent, still I would counsel you to leave here, for the people here are untrustworthy and hard to deal with, and have already formed the belief that their laws have been broken.' Gudleif replied: 'If it be vouchsafed us to reach our native land, what shall we say
concerning him who has granted us our freedom.' He answered: 'That I may not tell you, for I cannot bear that my relatives and foster-brothers should have such a voyage hither as ye would have had if ye had not had my aid; but now I am so advanced in years,' said he, 'that the hour may come at any time when age shall rise above my head; and even though I should live yet a little longer, still there are those here in the land who are more powerful than I who would offer little mercy to strangers, albeit these are not in this neighbourhood where ye have landed.' Afterward this man aided them in equipping their ship, and remained with them until there came a fair wind, which enabled them to put to sea. But before he and Gudleif parted, this man took a gold ring from his hand and handed it to Gudleif, and with it a goodly sword; and he then said to Gudleif: 'If it be granted thee to come again to thy father-land, then do thou give this sword to Kiartan, the master at Froda, and the ring to his mother.' Gudleif said: 'What shall I reply as to who sends these precious things?' He answered: 'Say that he sends them who was more of a friend of the mistress at Froda than of the Godi at Helgafell, her brother. But if any persons shall think they have discovered from this to whom these treasures belonged, give them my message, that I forbid any man to go in search of me, for it would be a most desperate undertaking, unless he should fare as successfully as ye have in finding a landing-place; for here is an extensive country with few harbours, and over all a disposition to deal harshly with strangers, unless it befall as it has in this case.' After
this they parted. Gudleif and his men put to sea, and arrived in Ireland late in the autumn, and passed the winter in Dublin; but in the summer they sailed to Iceland, and Gudleif delivered the treasures, and all men held of a verity that this man was Biorn Broadwickers'-champion; but people have no other proof of this, save these particulars, which have now been related."
It will be observed that the narrator of the saga does not in this incident once connect this unknown land with White-men's-land, nor does he offer any suggestion as to its situation. The work of identifying this strange country with White-men's-land, and so with Wineland the Good, has been entirely wrought by the modern commentator. If we accept as credible a meeting so remarkable as the one here described, if we disregard the statements of the narrative showing the existence of horses in this unknown land, which the theorist has not hesitated to do, and, finally, if we assume that there was at this time an Irish colony or one speaking a kindred tongue in North America, we may conclude that Biorn's adopted home was somewhere on the eastern North-American coast. If, however, we read the statements of the saga as we find them, they seem all to tend to deny this postulate, rather than to confirm it. The entire story has a decidedly fabulous appearance, and, as has been suggested by a learned editor of the saga, a romantic cast, which is not consonant with the character of the history in which it appears. A narrative, the truth of which the narrator himself tells us had not been ratified by collateral evidence, and whose details are so vague and indefinite, seems to afford
historical evidence of a character so equivocal that it may well be dismissed without further consideration.
Of an altogether different nature from the narrative of discovery above recited is the brief notice of the finding of a new land, set down in the Icelandic Annals toward the end of the thirteenth century. In the Annales regii, in the year 1285, the record reads: "Adalbrand and Thorvald, Helgi's sons, found New-land;" in the Annals of the Flatey Book, under the same year, "Land was found to the westward off Iceland;" and again in Gottskalk's Annals an entry exactly similar to that of the Flatey Book. In Hoyer's Annals the entry is of a different character: "Helgi's sons sailed into Greenland's uninhabited regions."
In the parchment manuscript AM. 415, 4to, written, probably, about the beginning of the fourteenth century, is a collection of annals called "Annales vetustissimi," and here, under the year 1285, is an entry similar to that of the Flatey Book: "Land found to the westward off Iceland." In the Skalholt Annals, on the other hand, the only corresponding entry against the year 1285 is: "Down-islands discovered."
It required but the similarity between the names Newland and Newfoundland to arouse the effort to identify the two countries; and the theory thus created was supposed to find confirmation in a passage in a copy of a certain document known as Bishop Gizur Einarsson's Register [brefa-bok], for the years 1540-47, which is contained in a paper manuscript of the seventeenth century, AM. 266, fol. This passage is as follows: "Wise men
have said that you must sail to the southwest from Krisuvik mountain to Newland." Krisuvik mountain is situated on the promontory of Reykianess, the southwestern extremity of Iceland, and, as has been recently pointed out, to sail the course suggested by Bishop Gizur would in all probability land the adventurous mariner in southeastern Greenland. The record of the Annals, however, is so explicit, that in determining the site of "Newland" we do not need to orient ourselves by extraneous evidence. We are informed, that, in 1285, Helgi's sons sailed into Greenland's "obygdir," the name by which the Greenland colonists were wont to designate the uninhabited east coast of Greenland; and as it is elsewhere distinctly stated that the "Newland," which these men discovered in the same year, lay to the "westward off Iceland," there can be little room for hesitancy in reaching the conclusion that "Newland" and the "Down-islands" all lie together, and are probably only different names for the same discovery. However this may be, it is at least manifest, from the record, that if Newland was not a part of the eastern coast of Greenland, there is nothing to indicate that it was anywhere in the region of Newfoundland.
A few years after this discovery is recorded, namely in 1289, we find the following statement in the Flatey Annals: "King Eric sends Rolf to Iceland to seek Newland;" and again in the next year: "Rolf travelled about Iceland soliciting men for a Newland voyage." No additional information has been preserved touching this enterprise, and it therefore seems probable that if the voyage
was actually undertaken, it was barren of results. The Flatey Annals note the death of Rolf, Land-Rolf, as he was called, in 1295, and as no subsequent seeker of Newland is named in Icelandic history, it may be assumed that the spirit of exploration died with him.
This brief record of the Annals is unquestionably historically accurate; moreover there may be somewhat of an historical foundation for the adventures of the Broadwickers'-champion recounted in the Eyrbyggja Saga; neither of these notices of discovery, however, appears to have any connection with the discovery of Wineland; they have been considered here chiefly because of the fact that they have been treated in the past as if they had a direct bearing upon the Wineland history.
The historical and quasi-historical material relating to the discovery of Wineland has now been presented. A few brief notices of Helluland, contained in the later Icelandic literature, remain for consideration. These notices necessarily partake of the character of the sagas in which they appear, and as these sagas are in a greater or less degree pure fictions, the references cannot be regarded as possessing much historical value.
First among these unhistorical sagas is the old mythical tale of Arrow-Odd, of which two recensions exist; the more recent and inferior version is that which contains the passages where Helluland is mentioned, as follows: "'But I will tell thee where Ogmund is; he is come into that firth which is called Skuggi, it is in Helluland's deserts . . . .; he has gone thither because he does not wish to meet thee; now thou mayest track him home, if
thou wishest, and see how it fares.' Odd said thus it should be. Thereupon they sail until they come into Greenland's sea, when they turn south and west around the land . . . They sail now until they come to Helluland, and lay their course into the Skuggi-firth. And when they had reached the land the father and son went ashore, and walked until they saw where there was a fortification, and it seemed to them to be very strongly built."
In the same category with Arrow-Odd's Saga may be placed two other mythical sagas, the Saga of Halfdan Eysteinsson, and the Saga of Halfdan Brana's-fostering; in the first of these the passage containing the mention of Helluland is as follows: "Raknar brought Helluland's deserts under his sway, and destroyed all the giants there." In the second of these last-mentioned sagas the hero is driven out of his course at sea, until he finally succeeds in beaching his ship upon "smooth sands" beside "high cliffs;" "there was much drift-wood on the sands and they set about building a hut, which was soon finished. Halfdan frequently ascended the glaciers, and some of the men bore him company . . . . The men asked Halfdan what country this could be. Halfdan replied that they must be come to Helluland's deserts."
Belonging to a class of fictitious sagas known as "landvættasogur" [stories of a country's guardian spirits], is the folk-tale of Bard the Snow-fell god. The first chapter of this tale begins: "There was a king named Dumb, who ruled over those gulfs, which extend northward around Helluland and are now called Dumb's sea." Subsequently we find brief mention of a king of Helluland, of
whom Gest, the son of the hero of the saga, says: "I have never seen him before, but I have been told by my relatives that the king was called Rakin, and from their account I believe I recognize him; he at one time ruled over Helluland and many other countries, and after he had long ruled these lands he caused himself to be buried alive, together with five hundred men, at Raknslodi; he murdered his father and mother, and many other people; it seems to me probable, from the reports of other people, that his burial-mound is northward in Helluland's deserts." Gest goes in quest of this mound, sails to Greenland's deserts, where, having traversed the lava-fields [!] for three days on foot, he at length discovers the burial-mound upon an island near the sea-coast; "some men say that this mound was situated to the northward off Helluland, but wherever it was, there were no settlements in the neighbourhood."
The brief extracts here quoted will suffice to indicate not only the fabulous character of the sagas in which they appear, but they serve further to show how completely the discoveries of Leif, and the exploration of Karlsefni had become distorted in the popular memory of the Icelanders at the time these tales were composed, which was probably in the thirteenth or fourteenth century. The Helluland of these stories is an unknown region, relegated, in the popular superstition, to the trackless wastes of northern Greenland.
111:1 This Kiartan was Thurid's son. | <urn:uuid:877550dd-6e2b-4fed-9a8e-70a16b1059b8> | CC-MAIN-2015-35 | http://sacred-texts.com/neu/nda/nda10.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167592.45/warc/CC-MAIN-20150827031247-00044-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.982974 | 3,998 | 2.609375 | 3 |
News & How-To
Formerly branded as GlobalGazette.ca
Articles, press releases, and how-to information for everyone interested in genealogy and history
Article Published July 31, 2001
Irish Land Records - Part 1
By: Kyle Betit
Landholding in Ireland
Many of our Irish ancestors were tenant farmers who leased or rented their land directly from a landowner or indirectly from a "middleman." Only a small percentage of people in Ireland owned their land outright (this was called holding your land "in fee").
There could be several layers of subleasing between the actual landowner and your ancestor. Changes over time in the nature of tenants' arrangements with their landlords were vitally important features of the lives of Irish tenant farmers. Land holding arrangements affected economic well being, the nature of their farming, inheritance and emigration patterns. One common type of lease with great potential for genealogical information was the "lease for lives." A "lease for lives" is in effect as long as the person(s) named in the lease are still living. As soon as all of the "lives" named in the lease have died, the lease ceases to be in effect. A lease could alternatively be granted for a set number of years, or a tenant could rent from year to year without holding a lease of any kind. A tenant could occupy land completely at the landlord's discretion. This was called holding land "at will."
The Penal Laws
1695 Anti-Catholic legislation began in Ireland.
1704 The Irish Parliament, exclusively Protestant, enacted a Bill "To Prevent the Further Growth of Popery" (2 Anne c. 6) which placed many restrictions on Catholics. It excluded them from Parliament, from the local government corporations, from the learned professions, from civil and military offices, and from being executors, or administrators, or guardians of property. It prevented Catholics from buying land and from leasing land for more than thirty-one years.
The act required the estate of a Catholic to be "gavelled" at his death, meaning divided among all his sons rather than inherited by one son (unless one son was a Protestant). The act allowed a son in a Catholic family to convert to Protestantism (the Established Church, called the Church of Ireland) and thereby take over the whole family property. Some of the Penal Laws such as those governing property holding were carefully followed by some people in some time periods and places in Ireland. However, many of the laws were largely ignored or circumvented.
1709 Another anti-popery Bill was enacted (8 Anne c. 3) which tightened the restrictions on property and other penalties against Catholics. Protestant "discoverers" were also allowed to obtain possession of Catholics' land if the law had been evaded.
1772 An Act of emancipation (11 & 12 Geo. III c. 21) allowed Catholics to reclaim and hold under lease for 61 years fifty acres of bog but it should not be within a mile of any city or market town.
1774 An Act (13 & 14 George III c. 35) was passed to permit the King's subjects of whatever religion to take an oath to testify their loyalty and allegiance to him to promote peace and industry in the kingdom.
1778 An Act (17 & 18 Geo. III c. 49) was passed repealing the provision in the 1704 Act (2 Anne c. 6) whereby a son could convert to the Church of Ireland and take over the family property. The 1778 Act relieved those Catholics who took the oath prescribed under the 1774 Act from certain restrictions contained in the 1704 and 1709 Acts. Catholics who took the oath were no longer limited to 31-year leases; they could hold leases for up to 999 years or five lives. A Catholic also no longer had to divide his estate among all his sons at his death.
1782 The Relief Act of 1782 (21 & 22 Geo. III c. 24) allowed Catholics who took the 1774 oath to purchase lands in fee, that is, outright ownership.
1793 A Relief Act (33 Geo. III c. 21) was passed giving Catholics the parliamentary and municipal franchise (vote) on the same basis as Protestants and admitting them to the university and to government offices. They were still excluded from sitting in Parliament and from the higher offices, but in other respects they were placed on a level with Protestants.
1829 The Catholic Relief Bill was passed. Catholics were admitted to Parliament and local government corporations; but they were still excluded from some of the higher offices. Further, the franchise was raised to ten pounds, so the forty-shilling freeholders were disfranchised (no longer allowed to vote).
Registry of Deeds
Beginning in 1708 land transactions in Ireland were registered with the Registry of Deeds in Dublin. Because registration was not mandatory, not every land transaction was registered. In the Registry of Deeds you can find deeds of sale, lease agreements, marriage settlements and wills. When a deed was registered in the Registry of Deeds it was not filed there; rather, it was returned to the party who delivered it for registration. What was filed in the Registry of Deeds was a "memorial" which is a synopsis of the deed.
Don't assume that just because your ancestor was not rich and prominent, no information about him or her can be found in the Registry of Deeds. One of the most valuable finds you can make is a deed with a list of the tenants on the land being sold or leased. It's just unfortunate that there aren't more such deeds. The Registry of Deeds was mainly a Protestant source during the eighteenth century. However, there were some Catholics in the deed books also. The Penal Laws did not prevent Catholics from retaining land they already owned, so some Catholics owned land all throughout the period of the eighteenth-century Penal Laws.
By the nineteenth century with religious freedom guaranteed in Ireland, records of persons of all religions are included as lessors and owners. However, even with emancipation, the majority of the population, Catholic and Protestant, was still landless. They were renting or leasing. That was all to change by the turn of the twentieth century when the government helped many tenant farmers purchase their farms from their landlords. The Land Purchase Acts set up a Land Commission to carry out this transfer.
You will find that there are actually two useful indexes to the Registry of Deeds in manuscript form, the Surname Index to grantors and the Lands Index arranged geographically. The Lands Index is an important source because all registered transactions for a particular townland can be accessed.
The huge collection of records of the Registry of Deeds from 1708-1929, and the corresponding Surname Index and Lands Index, are available on microfilm from the FHL. The Registry of Deeds on Henrietta Street in Dublin has books of memorials dating 1708 to present and microfilm copies dating 1930 to present.
A freeholder was a man who held his property either "in fee," which means outright ownership, or by a lease for one or more lives (such as the term of his life or the term of three lives named in the lease). A tenant who held land for a definite period such as 31 years or 300 years did not qualify as a freeholder. A person with a freehold of sufficient value, depending on the law at the time, could register to vote. Books recording freeholders who had registered to vote are known as freeholders registers, and they were generally arranged by the county and sometimes the barony. A freeholders register may record a range of information about each freeholder, typically including his name and the location and nature of his freehold.
Unfortunately, many original manuscript freeholders registers for the Irish counties were destroyed in the Public Record Office fire of 1922. However, some freeholders registers were published prior to the fire, such as in newspapers. The County Roscommon Family History Society has, for example, published some County Roscommon freeholders lists found in newspapers of the 1830s. Copies of the destroyed records did exist in some cases. In other cases, freeholders registers had been kept by private individuals, such as landowners. The NAI, the NLI, and the PRONI each have significant collections of freeholders records which they have built up over the years from the surviving material.
Kyle J. Betit's article "Freeholders, Freemen and Voting Registers" in The Irish At Home and Abroad lists known surviving freeholders records for every county in Ireland (You can purchase back issues of The Irish At Home and Abroad from Global Genealogy & History Store, Milton, Ontario or online at GlobalGenealogy.com). Other freeholders records not listed in this article may be found in some other inventories, such as (1) John Grenham's book Tracing Your Irish Ancestors, which lists many records at the NLI; (2) James G. Ryan's book Irish Records; and (3) the card catalogue in the Manuscript Reading Room of the NLI which lists the library's latest manuscript acquisitions.
Some common places to find surviving freeholders lists include landed estate papers and newspapers. Because landowners had an interest in knowing what voting freeholders lived on their estates, copies of freeholders lists are often found among the papers of landed estate owners.
If you want to find out how the family property in Ireland came into the ownership of a family member, you may need to consult records of the Land Commission. The Land Commission was responsible for making loans from public funds to tenants so they could buy their farms from their landlords. The commission operated according to the various Land Purchase Acts, 1881 to 1923. "LAP" in the Griffith's Valuation revision lists for your townland refers to the transfer of ownership to the tenant by "Land Purchase Act."
The NLI holds two card indexes to the Land Commission records, a "Topographical Index" arranged by county, barony, and landowner, and a "Names Index" arranged alphabetically by landowner. Each card in the "Names Index" gives the baronies in which the estate lay and the estate number. Using the estate number, you can consult bound volumes which give a summary description of the estate's documents.
The Land Commission is now located in the same building as the NAI on Bishop Street, Dublin. It has the records for the counties that are now in the Republic of Ireland. Its holdings are vast but difficult to access. You must call ahead for permission to access any of their records. In preparation for land being transferred to tenants, the commission created documents listing the tenants and their acreage and prepared maps showing the boundaries of farms in each townland in the estate. Once the land had been processed by the Land Commission, the tenant's deed and subsequent transactions relating to the property became the concern of the Land Registry.
The Land Commission records for the Northern Ireland counties were sent from the Land Commission to the PRONI after the political division of Ireland. You can access Land Commission records at the PRONI by using the Land Registry Archive inventory in the PRONI's Guide to Landed Estate Records. You may consult Ian Maxwell's book Tracing Your Ancestors in Northern Ireland for further discussion of the PRONI's Land Registry Archive.
Land Registry (Republic of Ireland)
The Land Registry was established in 1892 to provide a system of compulsory registration of land titles. When a title is registered in the Land Registry, the deeds are filed in the Registry and all relevant particulars concerning the property and its ownership are entered on registers called folios maintained in the Land Registry. Once under the jurisdiction of the Land Registry, records of a plot of land are no longer found in the Registry of Deeds. The Land Registry has maps that go with the folios. The Registration of Title Act, 1891, made registration of title compulsory in the case of all land bought under the Land Purchase Acts. This meant that all subsequent transactions affecting the land would have to be registered. The Land Registry is split into offices covering different counties:
Scenario 1: You don't use the family computer as much as the kids, but you trust them. One day you need to find that letter you typed to Aunt Margie last year. In searching your files, you find a folder full of nasty pictures! You confront your twelve-year-old, who honestly knows nothing about it. He has been using downloaded peer-to-peer (P2P) software to share music. He did not know how to configure this software, however, and he unwittingly opened up your entire hard drive to the world. Pornographers were using your computer to store and transfer files!
Scenario 2: Your daughter downloads TV shows off the Internet without paying for them. When you ask her about it, she says, "It's OK, so long as the show isn't being sold yet on DVD."
Scenario 3: Sometimes you see unsavory items pop up in your results when you are using a search engine. You realize that your children often use the same search engine for their own projects, and you worry about what they might see.
Kids Don't Get It
It will certainly come as no surprise to any parent that there are just as many ways to get into trouble on the Internet as in real life. The fact that startles many adults is the revelation that many kids don't make the connection between online actions and real-life consequences. Harris Interactive conducted a nationwide survey of kids at the start of this year and found that 92% of youth thought it was "always wrong" to take something from a store without paying and 85% thought it was always wrong to copy test answers, but only 60% thought it was always wrong to download music without paying. While 63% were worried about accidentally downloading a virus while illegally accessing files, only 38% were worried that the action itself was wrong.
It's not hard to understand why downloading illegally seems to a kid like an insignificant offense compared to stealing from a store. In a store you hold an actual product, which somebody obviously had to make. You can perceive that it must be worth something. In contrast, digital media costs nothing to duplicate hundreds of times, and it does not take up physical space. It's not easy to see that you are actually hurting anyone, and it is simple to imagine that nobody will ever catch you. Little Johnny and Kelly are especially at a disadvantage in grasping that illegal downloading is a crime, since kids' consciences often need prompting even for real-life stealing or lying.
Which is where parents come in. Some might prefer to keep their kids off the Internet entirely. However, that is not an option for many homeschoolers, since lots of us rely on the virtual world for our curriculum.
Sites that Can Help
Conveniently, plenty of resources exist to help parents explain Internet safety and ethics to their children. One of the best I've found is Net Family News (netfamilynews.org), which is an online newsletter on all topics safety- and ethics-related. It is excellent, but is obviously meant for parents, not kids, since the information covered is often disturbing. When I checked the site, their latest feature was on "How social influencing works," or the subtle ways in which people manipulate each other into making bad decisions. While this topic is directly related to Internet safety, it is also important knowledge for anyone who wants to function as an independent individual. The rest of the current issue has stories on everything from the growing popularity and risks of sites like MySpace to a French 18-year-old who lobbies to change copyright laws. If you want to stay educated on what is really happening in the virtual universe, this newsletter is for you.
For kids, check out the Cyber Tree House (cybertreehouse.com). The Business Software Alliance has created this friendly, Flash-based website with helpful games and links to teach kids how to conduct themselves on the Internet. It is geared toward the 9-12 age range, and can be used to reinforce instruction from parents. NetSmartz Kids (netsmartzkids.org) is another, similar website.
A number of these websites recommend that parents and children sign an "Internet use contract." You can find a sample on kids.getnetwise.org. These contracts contain provisions like, "I will never have a face-to-face meeting with someone I've met online." or, "I will never go into a new online area that is going to cost additional money without first asking permission from my parent or teacher." I think a contract is a good idea; it outlines exactly what you expect from your children, and it requires that they put their signature to the agreement. Once they've given their word, it's much harder for them to disobey. Also, it shows that you respect them as people who can stick to what they say.
Parental Control Software
Some parents may choose to use a "parental control utility," one of the programs that blocks kids from using certain software, conducting particular searches, or visiting smut websites. We haven't tried these ourselves, since we use mostly Macintosh machines and there just weren't any such blockers available for the Mac OS until a few months ago. In April, safeeyes.com released a Mac version of its popular parental control utility, which can operate cross-platform for households that use both Macs and PCs. Other options for PC include Safe Surf (safesurf.com), Net Nanny (netnanny.com) and Cyber Patrol (cyberpatrol.com). You can search in an extensive database of similar resources on kids.getnetwise.org to find one that fits your specific needs.
Right now, the U.S. government regulates Internet activity very little, which is the way most of us would prefer it to remain. As the history of education tells us, the easiest way for the federal government to gain control of any area of society is to claim that a danger exists for children. If we want to keep the Internet free and open, we must reduce the danger ourselves by education and self-government. The resources exist, if we make ethical Internet behavior a priority.
Copyrights and Wrongs
And now for some specifics. One year at Patrick Henry College, the college administration was forced to tell us students that we were not allowed to watch movies in our dorm lounges because they were "public spaces." We had been watching movies freely in our lounges for some years, so we didn't know what to think. Our dorms were our temporary homes, and we were watching with our friends. How was that different from inviting people over to our actual homes? As a member of the student Technology Commission, I trekked the awesome tangles of Title 17 of the U.S. Code, studied case history, and found that there are "public" and "private" spaces, as well as a nebulous category in between. College dorm lounges are usually considered "public," although the rooms themselves are "private." We had to buy a license to watch movies in our lounges.
This scenario is typical of federal copyright law. It is cumbersome. Since much of the Internet is new, case law hasn't caught up to every development. Therefore, people are often ready to rationalize their illegal actions. Myths abound, such as the idea that it is OK to download a song and "try it out" once to see if you want to buy it. This is false, as is the justification for downloading TV shows "so long as the show isn't being sold yet on DVD."
In fact, it should be unnecessary for the government to define and explain every possible type of copyright violation, since a few basic principles govern all of copyright law. First, if you create something, it's yours. If you don't create it, it's not yours. Second, if you purchase something created by another person, you are buying it for your own, personal enjoyment. You may not distribute it, either for free or for a price. Third, fair use doctrine allows you to use part of another person's work for the purpose of parody, news reports, or scholarly papers, provided that you give appropriate attribution. And fourth, if in doubt, ask the owner. If your conscience tells you that the author would probably disapprove of your action, you are most likely violating copyright law.
Peer-to-Peer Sharing Traps
Because of this, almost all peer-to-peer (P2P) sharing is illegal. For those who don't know much about P2P programs, individuals use them to share files directly with each other instead of uploading to a central network first. AOL Instant Messenger provides some P2P capabilities through the use of "get files," folders in which users put files they want others to be able to access. Get files are limited by their very nature, however, since you generally have to know a person's nickname to access his get file at all, and then you don't know what you'll find there. On the other hand, straight-up peer-to-peer software like BitTorrent allows you to search for specific files on everyone who is linked into the network. You don't have any idea from whom you are downloading or who is downloading from you.
At this time, music companies cannot sue the networks that provide P2P services, because the networks are not sharing the files themselves. Instead, various organizations are suing individuals who are downloading or sharing illegally. Any time you use a P2P program such as BitTorrent to download a file for which you did not pay the owner, you put yourself at risk for a lawsuit. You can read more about this and other issues at the United States Copyright Office website (copyright.gov).
Besides legal trouble, some P2P software can put your computer system at considerable risk. If a kid uses these programs without configuring them properly, he may unwittingly give over your entire Internet bandwidth to people downloading from the outside, thereby reducing all your own Internet activity to a crawl. Or he may render your whole system and all its software available to the outside world. Also, some people attach spyware or viruses onto a desirable item, so that even if you configure your P2P software correctly, a simple download can harm you. Spyware can send all your stored personal information to someone else, leading to identity theft or worse. Viruses can destroy your system. Most parents are not willing to take these risks, but lots of kids operate under the delusion that these things won't happen to them.
Despite common belief, it is not easy to wander into a dirty website by accident. You would have to follow a link without reading it, since the pornographers tend to make themselves as blatant as possible. Nonetheless, Internet smut of all sorts is only ever a few clicks away, and many otherwise worthy sites contain individual foul references. It has never been easier to give in to temptation. We must teach kids how to discern and avoid such sites and links.
Lately, the problem of finding unhealthy material on otherwise innocent sites has grown with the popularity of video-sharing websites such as MySpace, YouTube, and Google Video. Users upload homemade videos that run the whole gamut from unbelievably lame to hilariously funny to disturbingly explicit. These websites do their best to police for illegal or pornographic material, but some unavoidably slips through. Also, at least one of these websites merely flags smutty videos as "adult"-a flag that curious, unsupervised young people could choose to ignore. And when my mom used Google Video for the first time, to see a recommended juggling video, the site engine helpfully added links (to the right of the requested video) for "juggling," "Chris Bliss" the most popular juggler, and "sex," a category my mom most definitely has never requested! "Sex" might be a popular search term on Google Video, but by offering a link to sex-related videos the first time you visit their site, Google makes it shockingly easy for young people to venture into the world of sleaze. If your own kids use MySpace or one of the others, perhaps you should take a look yourself and see if they are being exposed to anything unhealthy.
Further, many sites geared to parents give bad advice on the topic of smut. One parenting website I read during my research recommended that parents take care not to overreact if they find a child accessing pornography every once in a while, because such curiosity is "natural." That may be true, in the same way that it is natural for an angry three-year-old to hit her friend in the head with a block, or for a first-time driver to crash his car. "Natural" does not always mean "desirable." Pornography can be a habit just as hard to lose and as spiritually damaging as drug addiction. Moreover, it is easier to hide. I would recommend that every family use a parental control program, if only to log the websites that children access. The knowledge that their parents will find out if they go anywhere inappropriate will eliminate temptation for most kids.
It is essential for homeschool parents to explain to their teenagers the true spiritual damage that pornography can cause to their future lives and families. Too often parents are uncomfortable with this topic, and so kids pick up their sexual knowledge by osmosis. In this world, osmosis is not enough. If teens understand how they really can hurt themselves, they will be better able to self-govern. See the two recommended books on page 22 for a good place to start.
Security vs. Freedom
The truly difficult question of Internet safety and ethics is the same as in the real world-how much freedom should kids have to thrive on their own or to learn from their own mistakes? In this messed-up world, how do we keep ourselves as "wise as serpents and as innocent as doves?"
I think the answer must be, "by moving slowly, with caution." No parents would send a fifteen-year-old with a new driver's permit on a two-hour road trip alone. Even if he has passed the test that shows he knows the rules of the road, he still must learn to apply them in actual driving experience before he can earn his license. For some teens this takes only a few weeks, but others need the full six months or even more. And once your young adult has his license, he will most likely still need governing rules. Some parents these days are even installing devices in their teens' cars to monitor how fast and far they drive. Why be so careful about driving? Because a mistake can physically cripple or kill.
Traveling the Internet may not be as dangerous as driving the highways, but the same principles apply-only you know when your children are ready to traverse it alone. And you know that at some point before they leave your roof they will have to be able to do so. Knowing this, why not start training now?
copyright.gov. U.S. government's official copyright office website. Forms, explanations, more.
cyberpatrol.com. Popular PC-based parental control software
cybertreehouse.com. Site to teach kids safe and proper net conduct
getnetwise.org. Site with an amazing number of helpful resources for parents, teachers, and kids, including "Internet use" contracts, detailed lists of kidsafe filters and browsers, and much more.
netfamilynews.org. Online safety and ethics newsletter for parents
netnanny.com. Popular PC-based parental control software
netsmartzkids.org. Site to teach kids net safety and ethics
safeeyes.com. A new parental control product that works on Macintoshes
Nikola Tesla - The Forgotten Father of Today & Tomorrow

Plasma International works with high frequencies and high voltages, and thus we frequently reference the theories of the original “mad scientist” Nikola Tesla and the thoughts of his best friend, Mark Twain.
The life and work of Nikola Tesla is of interest above all for his ingenuity and contribution to world science and engineering. Had the alternating electric current system been the only thing he ever invented, the name of Nikola Tesla would still remain permanently inscribed on the list of the most renowned people whose work has been of pivotal importance for the development of modern science and engineering. Moreover, knowing that Tesla invented or theoretically anticipated almost all the technical devices people are using today, with which he helped usher in the Second Industrial Revolution, his role appears greater still.

Tesla wrote more than 1800 patents, most now “missing”. See 135 Tesla patents.
Tesla gave us alternating current and the first hydro-electric dam, powered from Niagara Falls. When Nikola Tesla discovered the electron, he wrote to J.J. Thomson in 1891 saying his experiments proved the existence of charged particles ("small charged balls"). After Tesla died in 1943, the Supreme Court of the USA overturned Marconi's patent of modern radio in favour of Nikola Tesla.

"The fool can make things bigger, more complex, and more violent. It takes a touch of genius and a lot of courage to move in the opposite direction." That was a quote from Albert Einstein, pictured here with Tesla. Tesla had nothing but contempt for the "physics" of Einstein. He absolutely believed in the ether and the possibility of taking electricity out of this ether without splitting the atom and causing dangerous radiation. Tesla didn't think about splitting atoms to obtain enormous power in such a potentially hazardous manner. He knew that his system of wireless transmission harnessed to Niagara Falls was a safe template to be copied again and again to provide all the safe, clean power that was necessary to run the modern industrial world.

At the beginning of the war, the US government desperately searched for a way to detect German submarines. Thomas Edison was put in charge of the search, and when Tesla proposed the use of energy waves (what we know today as radar) to detect these ships, Edison rejected Tesla's idea as completely impractical.
A Few of Tesla's Inventions
1. Tesla Coil & auto ignition system
2. AC induction engine (no carbon brushes)
3. Solar powered engines
4. Transmitting Power without Wires (now called WiTricity)
5. Seeing by Telephone and wirelessly (TV & Radio)
6. A Means of Employing Electricity as a Fertiliser
7. Fluorescent Lighting & neon lights.
8. Specialized lighting and a precursor to the X-ray machine
9. Vertical Take Off and Landing (VTOL) aircraft
10. Terrestrial Stationary Waves
13. Valvular Conduit
14. Earthquake Machine
15. Magnifying transmitter
17. Death Rays
18. Thermo-Electric Power
19. X-Ray machine
21. Electrotherapeutics and Biotronics
22. Computing Logic Circuits/Remote Control/Communications
23. Bladeless Turbine
24. Solar Tower
First Hydro-Electric Powerhouse

At Niagara Falls, Tesla was the first to successfully harness the mechanical energy of flowing water, change it to electrical energy, and distribute it to distant homes and industries. His revolutionary model set the standard for hydroelectric power as we know it today. Since his childhood, Tesla had dreamed of harnessing the power of the great natural wonder, and in late 1893 his dream became a reality, when Westinghouse was awarded the contract to create the powerhouse. It was the most likely power source for Tesla's wirelessly powered car... think about that.
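The conversion the article describes follows the standard hydroelectric power relation P = ρ·g·Q·h·η (water density, gravity, flow rate, head, and turbine efficiency). A minimal sketch, with purely illustrative numbers (not Niagara's actual flow or head):

```python
# Rough estimate of power from falling water: P = rho * g * Q * h * eta.
# All example values below are assumptions for illustration only.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_watts(flow_m3_per_s: float, head_m: float,
                      efficiency: float = 0.9) -> float:
    """Electrical power (W) extracted from water falling through a given head."""
    return RHO_WATER * G * flow_m3_per_s * head_m * efficiency

# Example: 100 m^3/s falling 50 m, converted at 90% efficiency
p = hydro_power_watts(100.0, 50.0)
print(f"{p / 1e6:.1f} MW")  # 44.1 MW
```

Even modest flow and head figures yield tens of megawatts, which is why a single powerhouse could serve distant homes and industries.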
In the past NASA used a 12-mile-long wire; it charged freely from the potential electricity in the area above the magnetosphere. Tesla knew this, and NASA used his research to launch STS-75. The tether incident of STS-75, launched in 1996, proved that electricity can be produced in abundance for free; unexpectedly it produced many, many times more voltage than was originally expected and calculated. All this power was free energy, the technology theorized by the man of light himself, Nikola Tesla.
"All matter comes from a primary substance, the luminiferous ether," stated Nikola Tesla. He sensed the universe was "composed of a symphony of alternating currents with the harmonies played on a vast range of octaves." To explore the whole range of electrical vibration, he sensed, would bring him closer to an understanding of the cosmic symphony. Tesla understood that the cosmic symphony is resonance. Nothing exists in the Universe that does not have harmonic resonance.
Tesla Taps the Cosmos

Tesla’s patents in this direction are based on his alleged discovery that when cosmic rays or radiations are permitted to fall upon or impinge against an insulated conducting body P connected to one terminal of a condenser, such as C in Fig. 4, while the other terminal of the condenser is made by independent means to receive or carry away electricity, a current flows into the condenser so long as the insulated body P is exposed to such rays; so that an indefinite, yet measurable, accumulation of electrical energy in the condenser takes place. This energy, after a suitable time interval, during which the rays are allowed to act in the manner aforementioned, may manifest itself in a powerful discharge, which may be utilized for the operation or control of a mechanical or electrical device consisting of an instrument R, to be operated, and a circuit-controlling device d (Fig.
Tesla bases his theory on the fact that the earth is negatively charged with electricity, and he considers it to act as a vast reservoir of such a current. By the action of cosmic rays on the plate P there is an accumulation of electrical energy in the condenser C. A feeble current flows continuously into the condenser, and in a short time it becomes charged to a relatively high potential, even to the point of rupturing the dielectric. This accumulated charge can then, of course, be used to actuate any device desired.
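The accumulation described here is just constant-current charging of a capacitor: the voltage rises as V(t) = I·t/C until it reaches the dielectric's breakdown voltage. A small sketch with illustrative component values (the current, capacitance, and breakdown figures are assumptions, not from Tesla's patent):

```python
# A feeble constant current I charging a condenser C obeys V(t) = I * t / C,
# so the time to reach a given breakdown voltage is t = C * V / I.
# All numeric values here are illustrative assumptions.

def time_to_breakdown(i_amps: float, c_farads: float,
                      v_breakdown: float) -> float:
    """Seconds for a constant current to charge C up to v_breakdown."""
    return c_farads * v_breakdown / i_amps

# A 1 microamp current into a 100 nF condenser that discharges at 10 kV:
t = time_to_breakdown(1e-6, 100e-9, 10_000.0)
print(f"{t:.0f} s")  # 1000 s
```

This shows why even a "feeble" current can, after a suitable interval, produce a powerful discharge: the condenser integrates the current over time.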
An illustration of a proposed form of apparatus which may be used in carrying out his discovery is referred to in Fig. 4.
At the centre of Tesla’s letterhead was the antenna of Tesla's "World's radio station", which he constructed on Long Island in the vicinity of New York; it testifies to his farsightedness and ingenuity. His idea was that this station, built in 1900, should by remote wireless control transmit throughout the world not only news but music and photographs as well. However, that great plan could not be carried out: when it was realised that free, unmetered energy could be made available to everyone, Tesla’s funding was terminated and his tower was destroyed.

In 1960 the International Commission for Electrical Engineering, at its session in Philadelphia, decided that the unit of magnetic induction is to be universally called the “tesla”.
Electrotherapeutics - Nikola Tesla discovered that alternating currents of high frequency (10 kHz or greater) could pass over the body without harm. In fact, levels of electrical energy that would prove fatal at a reduced frequency could be tolerated when the frequency was above 10 kHz. During his lecture before the American Institute of Electrical Engineers (AIEE) at Columbia College on May 20, 1891, Tesla predicted that medical use would be made of this phenomenon. A year later, d'Arsonval independently reported similar observations on the physiological effects of high frequency currents before the Society of Biology in Paris. In early 1892, Tesla met d'Arsonval on a lecture tour of France, where Tesla was pleasantly surprised to find that d'Arsonval used his oscillators to investigate the physiological effects of high frequency currents.
It is clear from Nikola Tesla's lectures and publications beginning in 1891 that he was the first to discover that radio frequency (rf) currents could be employed safely for therapeutic benefits. Tesla also suggested that rf currents could be used for other medical purposes: the sterilization of wounds, as an anesthesia, for stimulation of the skin, and to produce surgical incisions. As Patton H. McGinley, Ph.D., of the Emory Cancer Clinic has stated, history has not been kind to Tesla in the sense that the credit for all of the pioneering work in the field of electrotherapy has gone almost exclusively to d'Arsonval.
Tesla was a pathfinder in rf communication and communication theory. In the early 1890s, Tesla entertained scientists and the general public alike with his demonstrations of high frequency, high voltage experiments. This type of electricity was virtually unheard of, indeed even unimaginable, before Tesla developed the Tesla coil and demonstrated it before the IEE at an 1891 lecture in London, England.

Tesla's experiments with high frequency, high voltage electricity continued throughout the decade. During this period, he invented several types of lights based on this unique power source. In fact, he utilized fluorescent lighting in his laboratory thirty years before it was to be in general use in industry. Perhaps it is because of these experiments that Tesla believed wireless power was possible!
In 1898, at Madison Square Garden, he publicly demonstrated a remote control submersible boat. This clearly established that Tesla was a man years, even decades, ahead of conventional science and technology! In this amazing feat of engineering, he incorporated the use of AND gates (logic circuits), digital communication, electromechanical interfacing (robotics), and radio, all of which were virtually undeveloped (and unimaginable) at the time! Despite the Madison Square demonstration, the Navy turned its back on Tesla's invention at the time because it was too advanced for them to comprehend.
Transmission of Power

Tesla considered his crowning achievement to be the wireless transmission of power at Colorado Springs in 1899. In 1900, upon his return to New York, Century Magazine published Tesla's article, The Problem of Increasing Human Energy, which was amply illustrated with photos from Tesla's Colorado Springs experiments.

Tesla's work in Colorado Springs allowed him to return to New York to pursue the next phase of the wireless technology development: the construction of a full scale transmitter at Wardenclyffe on Long Island. To do this required immense amounts of money, money which Tesla did not have at the time. To get the money, Tesla approached the one person in New York who would have the sums necessary: J. Pierpont Morgan.
In The Problem of Increasing Human Energy, Tesla laid out his vision for the evolution of power production and the furtherance of mankind. It is quite a remarkable philosophical work in that it gives us deep insight into Tesla's thought formation processes. Perhaps when J.P. Morgan read this fine essay, he realized how dangerous Tesla was to the status quo and decided to fund Tesla's work in order to control the direction that it took.

Unfortunately, Tesla's funds ran out halfway through the project and the Morgan interests refused to further fund his work. Tesla was forced into bankruptcy and his beloved Wardenclyffe tower was destroyed on the pretext of "national security!" Bankrupted and cut off from funds, Tesla nevertheless continued his work in a new field... mechanical engineering.
Means of Employing Electricity as a Fertiliser

Not the least ingenious of Tesla's great schemes was an invention to fertilise impoverished land by electricity. No longer would it be necessary for the farmer to spend half his year's receipts in purchasing fertilisers; he only had to buy an electric fertiliser machine of his own. Dumping a few loads of loose earth into the fertiliser machine, it comes out at the other end ready to be spread over the surface of the impoverished ground, where it will insure a luxuriant crop for the following season.
The explanation which Tesla gave of just why so simple a piece of work should be productive of such wonderful results is not difficult to comprehend. "Everyone knows," said Tesla, "that the constituent of a fertiliser which makes the ground productive is its nitrogen. Everybody knows also that nitrogen forms four-fifths of the volume of the atmosphere above that piece of unfertile land. This being the case, it occurred to me: 'Where is the sense in the farmer buying expensive nitrogen when he has it free of cost at his own door? All the agriculturist needs is some method by which he can separate some of this nitrogen from the atmosphere above the ground and place it on the surface.' And it was to discover this means that I set to work."
As far as the non-technical eye can perceive, the working model of the electric fertiliser consists of nothing but an upright copper cylinder with a removable top, with a spiral coil of wire running throughout the length of the cylinder. Through the bottom of the cylinder are two wires, which connect with a specially constructed dynamo. A quantity of loose earth, treated by a secret chemical preparation in liquid form, is shovelled into the cylinder, and a high frequency electric current is passed through the confined atmosphere; the oxygen and hydrogen are thus expelled, and the nitrogen which remains is absorbed into the loose earth. There is thus produced as strong a fertiliser for a nominal price at home, rather than purchased at a large cost miles and miles away.
In an effort to return to profitability, Tesla developed a new type of bladeless pump and turbine that would have reduced conventional pumps and turbines to the scrap heap. His initial work at the Waterside Power Station in New York indicated that his method could take advantage of the latent power of vaporization by using saturated steam. Later, he worked with Allis-Chalmers engineers in Milwaukee to develop the turbine. However, internal friction led to the disruption of the project and it was abandoned. Scientists today continue to scour through his notes, and many of his far-flung theories are just now being proven. For example, Tesla’s bladeless disk turbine engine, when coupled with modern materials, is proving to be the most efficient motor ever designed.
Tesla's 1901 patented experiments with cryogenic liquids and electricity provide the foundation for modern superconductors. He talked about experiments that suggested particles with fractional charges of an electron, something that scientists finally discovered in 1977: quarks!

Einstein turned the world upside down with his theory of relativity; the only one who opposed him was Tesla. According to Tesla, Einstein’s relativity wasn’t sufficiently relative. He proved to Einstein that he could create velocities much greater than the speed of light. He considered the constant c the basis, and not the fastest velocity in the universe.
There are three things to think about Tesla when talking about this particular project. First, we should think of Tesla every time we look at a microwave oven; the radiation frequency of the microwave oven and the concept of the microwave oven were Tesla's.

Second, it is a frequency transformer. Tesla, with the Tesla coil, changes one frequency to another frequency. What we are doing up there, we're taking a frequency at 5 megahertz which radiates in the ground, and we transform it into 1 hertz, 5 hertz, 10 hertz, or whatever it is. So we have really a frequency transformer similar to what Tesla was thinking. Third, and most important, once we create the waves they propagate exactly the way Tesla conceived it, through the earth-ionosphere waveguide.

Source: selections from an interview with Dr. Dennis Papadopoulos, Professor of Physics, University of Maryland, and Senior Science Advisor, H.A.A.R.P. (High Frequency Active Auroral Research Program)
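A Tesla coil is a resonant circuit: each side oscillates at a frequency set by its inductance L and capacitance C, f = 1 / (2π√(LC)). A minimal sketch of that relation, with component values chosen purely for illustration (not from any particular coil):

```python
# Resonant frequency of an LC circuit, the relation underlying Tesla coil
# operation: f = 1 / (2 * pi * sqrt(L * C)).
# The example component values below are illustrative assumptions.
import math

def resonant_frequency_hz(l_henries: float, c_farads: float) -> float:
    """Natural oscillation frequency of an ideal LC circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# A primary circuit with 10 microhenries and 10 nanofarads:
f = resonant_frequency_hz(10e-6, 10e-9)
print(f"{f / 1e3:.0f} kHz")  # ~503 kHz
```

The formula also shows why changing L or C retunes the coil: shrinking either raises the resonant frequency, which is the sense in which a coupled pair of such circuits acts as a frequency transformer.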
Tesla was one of the world's most original and greatest inventors and thinkers, but because he was so original and out of his time, his genius was mistaken for insanity and science fiction. Tesla technology is still promising, yet it continues to run up against a wall of "organized opposition". Tesla was a "true" inventor in that he did not merely improve on existing technology; instead he had a tendency to create entire new industries with his radical ideas. Although much of Tesla's work remains to be reconstructed, he will at least be an active topic of discussion well into the 21st century.
Tesla Patents »
There's something curious going on at the pick-your-own strawberry farm amid the bland expanse of tract homes and strip malls southwest of Miami. In row after row on the ten-acre property, the plants appear uniform, but in a far corner set off by a line of habanero chili vines, each strawberry plant has a slightly different color and growth pattern. This is a test plot where a stubborn University of Maryland horticulturist named Harry Jan Swartz is attempting to breed a strawberry unlike any tasted in the United States for more than a century. He's searching for what may be the most elusive prize in the highly competitive, secretive, $1.4 billion-a-year strawberry industry—marketable varieties with the flavor of Fragaria moschata, the musk strawberry, the most aromatic strawberry of all.
Native to the forests of central Europe, the musk strawberry is larger than fraises des bois, the tiny, fragrant, wild alpine strawberries beloved by backyard gardeners, and smaller than the common strawberry, the supermarket-friendly but often dull-tasting hybrid that dominates sales worldwide. The musk strawberry has mottled brownish red or rose-violet skin, and tender white flesh. Its hallmark is its peculiar floral, spicy aroma, different from and far more complex than the modern strawberry's, with hints of honey, musk and wine; a recent analysis by German flavor chemists detected notes of melon, raspberry, animal and cheese. Adored by some people, detested by others, the aroma is so powerful that a few ripe berries can perfume a room.
From the 16th to the mid-19th centuries, the musk strawberry—known as moschuserdbeere in Germany, hautbois in France and hautboy in England—was widely cultivated in Europe. In Jane Austen's Emma, guests at a garden party rave about it: "hautboy infinitely superior—no comparison—the others hardly eatable." But because growers in those days did not always understand the species' unusual pollination requirements, musk cultivations typically had such scanty yields they seemed virtually sterile. Thomas A. Knight, an eminent horticulturist and pioneering strawberry breeder, wrote in 1806: "If nature, in any instance, permits the existence of vegetable mules—but this I am not inclined to believe—these plants seem to be beings of that kind." Also, the berries are very soft, so they don't keep or travel well. By the early 20th century, musk varieties had mostly disappeared from commercial cultivation, replaced by firmer, higher yielding, self-pollinating modern strawberries.
But the legend of the musk strawberry persisted among a few scientists and fruit connoisseurs. Franklin D. Roosevelt, who became enamored of its musky flavor as a boy traveling in Germany, later asked his secretary of agriculture and vice president, Henry A. Wallace, to encourage government strawberry breeders to experiment with musk varieties at the Agriculture Department's breeding collection in Beltsville, Maryland. It was there, in the early 1980s, that the musk aroma captivated a young professor at the University of Maryland, in nearby College Park.
After years at the forefront of berry science, Swartz in 1998 launched an audacious private program to overcome the biological barriers that had thwarted breeders for centuries. "If I can grow a huge, firm fruit that's got the flavor of moschata," Swartz told me a few years ago, "then I can die in peace."
On this unusually chilly January dawn outside Miami, we're checking up on his dream at his test plot next to a weed-choked canal. Swartz, 55, is wearing a black polo shirt and chinos. He is shivering. He bends over and examines a plant, ruffling the leaves to expose the berries. He picks one, bites into it. "Ugh." He makes notes on a clipboard. He tries another, and wrinkles his nose. "That’s what I call a sick moschata." The fruit has some of the elements of musk flavor, he explains, but with other flavors missing or added, or out of balance, the overall effect is nastily deranged, like a symphony reduced to cacophony.
Before the day is done Swartz will have scoured the test patch to sample fruits from all 3,000 plants, which are seedlings grown from crosses made in his Maryland greenhouse. They belong to his third generation of crosses, all ultimately derived from wild strawberry hybrids devised by Canadian researchers.
Swartz keeps tasting, working his way down the seven rows of plants sticking out of the white-plastic covered ground. "Floor cleaner," he says of one. "Diesel." "Sweat socks." He is not discouraged—yet. For many years, until his knees gave out, Swartz was a marathon runner, and he's in this project for the long haul, working test fields from Miami to Montreal in his unlikely quest to discover a few perfect berries.
“You’ve got to kiss a lot of frogs in order to find a princess,” he says.
The modern cultivated strawberry is a relative newcomer, the result of chance crosses between two New World species, the Virginian and the Chilean, in European gardens starting about 1750. This "pineapple" strawberry, called F. x ananassa, inherited hardiness, sharp flavor and redness from the Virginian, and firmness and large fruit size from the Chilean. In the 19th century, the heyday of fruit connoisseurship, the best varieties of this new hybrid species (according to contemporary accounts) offered extraordinary richness and diversity of flavor, with examples evoking raspberry, apricot, cherry and currant.
Alas, no other fruit has been so radically transformed by industrial agriculture. Breeders over the decades have selected varieties for large size, high production, firmness, attractive color and resistance to pests and diseases; flavor has been secondary. Still, fresh strawberry consumption per capita has tripled in the past 30 years, to 5.3 pounds annually, and the United States is the world’s largest producer, with California dominating the market, accounting for 87 percent of the nation's crop.
What's missing most from commercial berries is fragrance, the original quality that gave the strawberry genus its name, Fragaria. To boost aroma, strawberry breeders, particularly in Europe, have long tried to cross alpine and musk varieties with cultivated ones, but with little success. Only in 1926 did scientists discover why the different species are not readily compatible: the wild and musk species have fewer sets of chromosomes than modern strawberries. As a result of this genetic mismatch, direct hybrids between these species typically produced few fruits, and these were often misshapen and had few seeds; the seeds in turn usually did not germinate, or produced short-lived plants.
Strawberry science took a big leap forward in Germany, starting in 1949, when Rudolf and Annelise Bauer treated young seedlings with colchicine, an alkaloid compound in meadow saffron, to increase the number of chromosomes in hybrids of alpine and common strawberries, producing new, genetically stable varieties. Over the years, some breeders have taken advantage of this method to create new hybrids, including a cultivar introduced last year in Japan that has large but soft pale pink fruit with a pronounced peach aroma. Such attempts have often run into dead ends, however, because the hybrids are not only soft but cannot be further crossed with high-performing modern varieties.
To be sure, there's still one place where the original musk strawberry survives in farm plantings, although on a very small scale: Tortona, between Genoa and Milan, where the Profumata di Tortona strawberry has been grown since the late 17th century. Cultivation peaked in the 1930s, and lingered into the 1960s, when the last field succumbed to urban development. Until a few years ago only a few very small plots existed in old-timers' gardens, but recently the municipal authorities, together with Slow Food, an organization devoted to preserving traditional foodways, started a program that has increased Profumata plantings to more than an acre, on nine farms. These pure musk berries are a luxurious delicacy, but they’re expensive to pick and very perishable—a prohibitive combination for commerce. In the United States, most growers would sooner raise wombats than fragile strawberries, no matter how highly flavored.
Swartz says he came to love strawberries as a child in the Buffalo, New York, gardens of his Polish-born grandparents. He majored in horticulture at Cornell, and after finishing his doctoral research in 1979 on apple dormancy, he started teaching at the University of Maryland and helped test experimental strawberry varieties with U.S. Department of Agriculture researchers Donald Scott, Gene Galletta and Arlen Draper—giants in the breeding of small fruits.
Swartz conducted trials for the 1981 release of Tristar, a small but highly flavored strawberry now revered by Northeastern foodies; it incorporates genes for extended fruiting from a wild berry of the Virginian species collected in Utah. But he chose to go his own way and concentrate on raspberries. Working with other breeders, and often using genes from exotic raspberry species, he has introduced eight raspberry varieties, of which several, such as Caroline and Josephine, proved quite successful.
Swartz, who is married to his college sweetheart, Claudia—she and their 23-year-old daughter, Lauren, have had raspberry varieties named after them—has been described by colleagues as a "workaholic," a "visionary" and a "lone wolf." For many years he participated in professional horticultural organizations, attending meetings and editing journals, but in 1996 he gave all that up to focus on fruit breeding. "I can't put up with a lot of academics," he says. To pursue opportunities as he saw fit, Swartz in 1995 formed a private company, Five Aces Breeding—so named, he says, because "we're trying to do the impossible."
Swartz is working on so many ventures that if he were younger, he says, he would be accused of having Attention Deficit Disorder. He's helping develop raspberries that lack anthocyanins and other phytochemicals, for medical researchers to use in clinical studies assessing the effectiveness of those compounds in fighting cancer. He's an owner of Ruby Mountain Nursery, which produces commercial strawberry plants in Colorado's San Luis Valley, possibly the highest—at an elevation of 7,600 feet—fruit-related business in the United States. He's got a long-term project to cross both raspberries and blackberries with cloudberry, a super-aromatic arctic relative of the raspberry. And he recently provided plants for a NASA contractor developing systems for growing strawberries on voyages to Mars.
His musk hybrid project relies on breakthroughs made by other scientists. In 1998, two Canadian researchers, J. Alan Sullivan and Bob Bors, allowed him to license their new strawberry hybrids, bred using colchicine, from a diverse range of wild species, including alpine and musk strawberries. (Sullivan and Bors, after years of experimentation, had created partially fertile musk hybrids with the requisite extra chromosomes.) Swartz's breeding strategies can be idiosyncratic. Like an athlete training at high altitude to boost his stamina, he deliberately chooses difficult growing environments (such as sultry Miami) for his test plots, so that successful varieties will be more likely to excel in more temperate commercial growing districts. His main challenge with the musk hybrids is to increase their size and firmness, so they can be picked and marketed economically. It's a trade-off. Strawberry plants produce limited amounts of photosynthates, which they use for high yield, firmness or sweetness. "You move one up, the others are going to move down," says Swartz, "and it's very rare that you can have all three qualities."
Walking the rows at his Miami test plot, Swartz shows me a puny, malformed fruit, which lacks seeds on one side. "That's what 99 percent of them used to look like a few generations ago," he says. "For years I'd be eating sterile, miserable things, nubbins with two or three seeds." The hormones produced by fertile seeds, he explained, are needed for proper development of the strawberry, which is actually a swollen receptacle, the end of the flower stalk. Still, he would grind up even the most unpromising fruits, take the few good seeds and grow them as parents for future generations.
Could he show me a large-fruited strawberry with full musk flavor? Through seven years of crossing the original Canadian hybrids with cultivated varieties, the musk genes have become increasingly diluted, and it has been hard to retain the sought-after aroma. Typically, only one in 1,000 seedlings offers it, and I've heard that he's nervous we might not find any that do.
But after an hour or so, he picks a medium-sized, conical berry and bites into it. "That's moschata!" From the same plant I choose a dead-ripe fruit. It has an almost mind-bogglingly powerful, primeval aroma. Swartz ties an orange ribbon around the plant, to mark it for use in future crosses, and beams like an alchemist who has found the philosopher's stone.
By late afternoon it's pleasantly balmy, but Swartz is wearing down. He says his knees ache. His fingers are stained winy red. "I'm starting to lose it, frankly," he says. "I've had too many strawberries." What would drive him to spend his own money and more than a decade tasting roughly 100,000 berries, many of them dreadful, with the prospects for reward uncertain? "It's just a stupid donkey attitude—I've got to do this or else there's no reason for me to do anything. I have the religion of moschata."
By the second morning of my Florida visit, Swartz has identified three musk hybrids with promising characteristics. From one plant, he clips runners and wraps them in moist paper towels; he'll take them back to his greenhouse in Maryland and propagate them into genetically identical offspring—clones. From another plant he plucks unopened flowers, pulls off the pollen-coated anthers and drops them into a bag, for direct use in pollinating other plants to make new crosses. "It's really cool," he says. "After seven years of hard work, I can actually eat this and show people—here's a large-sized fruit with this flavor."
This past spring, Swartz says he made further progress at a test plot in Virginia after he crossed a bland commercial strawberry with his hybrids and obtained more new plants with good moschata flavor. Swartz says he's about three or four years from developing a musk hybrid with commercially competitive yield, size and shelf life. Still, he may have a hard time bucking the American fruit marketing system's demand for varieties that appeal to the lowest common denominator of taste. But he has always been motivated less by financial gain than by curiosity, the promise of a bit of adventure—and a touch of obsession. "I really don't care if this works or not, it's just so much fun getting there," he says. "When it happens, it'll be, 'I've found the holy grail, now what do I do with it?'"
David Karp, a freelance writer and photographer specializing in fruit, is working on a book about fruit connoisseurship. | <urn:uuid:9893de1b-f2a6-45e0-8c24-a13856874fff> | CC-MAIN-2015-35 | http://www.smithsonianmag.com/science-nature/berried-treasure-120534521/?all | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00047-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.964263 | 3,287 | 2.671875 | 3 |
1998 IMF Survey Supplement on the Fund / September 1998
IMF Evolves in Response to Over Half a Century Of Challenge and Change
- July 1–22
- IMF and World Bank Articles of Agreement formulated at the International Monetary and Financial Conference, Bretton Woods, New Hampshire.
- December 27
- Articles of Agreement enter into force upon signature by 29 governments, representing 80 percent of original quotas.
Camille Gutt, from Belgium, served as the IMF's first Managing Director from 1946 to 1951.
- March 8–18
- Inaugural meeting of Board of Governors in Savannah, Georgia: by-laws adopted, agreement to locate IMF headquarters in Washington, first Executive Directors elected.
- May 6
- Twelve Executive Directors five appointed and seven elected hold inaugural meeting in Washington.
- September 27–October 5
- First Annual Meetings of Boards of Governors of IMF and World Bank in Washington.
- March 1
- IMF begins operations.
- May 8
- First drawing from IMF (by France).
Ivar Rooth, from Sweden, was Managing Director from 1951 to 1956.
- August 13–14
- Germany and Japan become members.
- October 1
- Executive Board approves proposals for standardized Stand-By Arrangements.
- January 5
- Executive Board adopts terms and conditions of General Arrangements to Borrow (GAB).
- February 27
- Compensatory Financing Facility created.
Per Jacobsson, from Sweden, was Managing Director from 1956 to 1963.
- September 29
- Board of Governors approves plan to establish special drawing rights (SDRs).
- June 25
- Buffer Stock Financing Facility established.
- July 28
- First Amendment to Articles of Agreement, establishing a facility based on the SDR, takes effect after acceptance by three-fifths of membership representing four-fifths of voting power.
- January 1
- First allocation of SDRs.
Pierre-Paul Schweitzer, from France, was Managing Director from 1963 to 1973.
- August 15
- United States informs IMF it will no longer freely buy and sell gold to settle international transactions. Par values and convertibility of the dollar—two main features of Bretton Woods system—cease to exist.
- December 18
- After four months of negotiating, Smithsonian Agreement provides for realignment of industrial country currencies and increase in price of gold. IMF establishes temporary regime of central rates and wider margins.
- July 26
- Board of Governors adopts resolution establishing a Committee on Reform of the International Monetary System, known as the Committee of 20.
H. Johannes Witteveen, from the Netherlands, was Managing Director from 1973 to 1978.
- March 19
- “Generalized floating” begins as European Community countries introduce joint float for their currencies against U.S. dollar.
- June 12–13
- Committee of 20 concludes work, agreeing on immediate program to help monetary system evolve. Executive Board establishes oil facility; adopts “Guidelines for the Management of Floating Exchange Rates” and new method of SDR valuation based on basket of 16 currencies.
- September 13
- IMF sets up Extended Fund Facility to give medium-term assistance to members with balance of payments problems owing to structural economic changes.
- October 3
- Interim Committee holds inaugural meeting, following its establishment on October 2.
- August 1
- Executive Board establishes a Subsidy Account, funded by contributions, to assist the most seriously affected members using the oil facility.
- January 7–8
- Interim Committee agrees on “interim reform” of monetary system, including amendment of Article IV and other issues.
- May 5
- Executive Board establishes a Trust Fund to provide balance of payments assistance to developing country members with profits from sale of gold. The Board decides on policies and procedures for selling gold.
- June 2
- IMF holds first gold auction under Interim Committee understandings on disposition of one-third of IMF gold holdings. Proceeds of sales to go to Trust Fund to benefit developing countries.
- February 4
- IMF makes first loan disbursements under Trust Fund.
- August 29
- Executive Board establishes Supplementary Financing Facility.
Jacques de Larosière, from France, was Managing Director from 1978 to 1987.
- April 1
- Second Amendment of Articles of Agreement enters into force, establishing right of members to adopt exchange rate arrangements of their choice.
- September 24
- Interim Committee approves 50 percent quota increase under Seventh Review, which, when accepted by all members, raises IMF general resources to SDR 58.6 billion; it also agrees on new allocations of SDR 4 billion each year for three years beginning January 1979.
- February 23
- Supplementary Financing Facility enters into force.
- April 25
- Interim Committee agrees IMF should be ready to play growing role in adjustment and financing of payments imbalances by providing assistance over longer periods and in larger amounts.
- September 17
- IMF decides to unify and simplify, as of January 1, 1981, currency baskets determining value and interest rate on SDR. Unified basket to be composed of currencies of five members with largest exports of goods and services during 1975–79—U.S. dollar, Deutsche mark, French franc, Japanese yen, and pound sterling.
- December 1
- IMF announces that 128 members have consented to quota increases under Seventh General Review, meeting the minimum participation requirement for quota increase, under which aggregate quotas would be raised to SDR 60 billion.
- January 1
- IMF begins to use simplified basket of five currencies to determine daily valuation of SDR.
- March 13
- IMF decides to institute policy of enlarged access to its resources following full commitment of resources from Supplementary Financing Facility and until Eighth General Review of Quotas takes effect.
- April 23
- IMF announces decisions to enhance SDR’s attractiveness as reserve asset. Measures include making interest rate more competitive and eliminating reconstitution requirement (allowing members to use SDRs permanently).
- May 7
- IMF Managing Director and Governor of Saudi Arabian Monetary Agency sign loan agreement allowing IMF to borrow up to SDR 8 billion to finance IMF’s policy of enlarged access, which thus becomes operative.
- May 13
- IMF reaches agreement in principle with central banks or official agencies of 13 industrial countries, under which they will make available SDR 1.1 billion over two years to help finance the IMF’s policy on enlarged access.
- May 21
- IMF extends financing to members encountering balance of payments difficulties produced by excesses in cost of cereal imports. Assistance integrated into IMF’s Compensatory Financing Facility.
- January 13
- Executive Board adopts guidelines for borrowing by IMF as important temporary measure, but member country quotas remain main source of IMF financing.
- August 13
- Mexico encounters serious problems servicing its foreign debt, marking onset of debt crisis. In following months, IMF supports major adjustment programs in Mexico and several other countries facing severe debt-servicing difficulties.
- Interim Committee agrees to increase IMF quotas under Eighth General Review. IMF Board of Governors adopts resolution on quota increase.
- November 30
- Increases in quotas under Eighth General Review take effect.
- December 30
- Ten participants in General Arrangements to Borrow (GAB) concur on plans to revise and enlarge the GAB.
- October 6–7
- Interim Committee agrees that approximately SDR 2.7 billion in Trust Fund reflows to become available during 1985–91 be used to provide concessional lending to low-income members.
- December 2
- IMF Managing Director and World Bank President express broad support for the debt initiative proposed by U.S. Treasury Secretary James A. Baker. It calls for comprehensive adjustment measures by debtors, increased and more effective structural lending by multilateral development banks, and expanded lending by commercial banks.
- March 27
- IMF establishes Structural Adjustment Facility (SAF) to provide balance of payments assistance on concessional terms to low-income developing countries.
- April 9–10
- Interim Committee calls for enhanced policy coordination to improve functioning of floating exchange rate system.
Michel Camdessus, from France, has been Managing Director since 1987.
- February 22
- Finance ministers of six major nations meet; IMF Managing Director participates. Ministers agree, in Louvre Accord, to intensify policy coordination and to cooperate closely to foster stability of exchange rates “around current levels.”
- December 29
- IMF establishes Enhanced Structural Adjustment Facility (ESAF) to provide resources to low-income members undertaking strong three-year macroeconomic and structural programs to improve their balance of payments and foster growth.
- August 23
- IMF Executive Board establishes Compensatory and Contingency Financing Facility to compensate members with shortfalls in export earnings because of circumstances beyond their control and to help maintain adjustment programs in the face of external shocks.
- September 25–26
- Interim Committee endorses intensified collaborative approach to arrears problem.
- April 3–4
- Interim Committee asks Executive Board to consider proposals for developing country debt relief, based in part on proposals by U.S. Treasury Secretary Nicholas F. Brady.
- May 23
- Executive Board adopts guidelines to deal with developing country debt problem. These include linking support for debt-reduction strategies to sustained medium-term adjustment programs with strong element of structural reform and access to IMF resources for debt or debt-service reduction.
- May 7–8
- Interim Committee agrees to 50 percent quota increase. Committee suggests Executive Board propose Third Amendment to Articles of Agreement, providing for suspension of voting and other membership rights for members that do not fulfill financial obligations to IMF. Committee also approves rights accumulation program, which permits members with protracted arrears to establish a track record on policies and payments performance and accumulate rights for future drawings.
- June 28
- Executive Board proposes increasing total IMF quotas from SDR 90.1 billion to SDR 135.2 billion under the Ninth General Review of Quotas.
- Executive Board approves temporary expansion of IMF facilities to support countries affected by Middle East crisis.
- October 5
- U.S.S.R. signs agreement with IMF providing for technical assistance, pending its application for full membership.
- Executive Board approves membership of many states of the former Soviet Union.
- August 5
- IMF approves SDR 719 million Stand-By Arrangement for Russia.
- Executive Board adopts Third Amendment of Articles of Agreement. Executive Board also determines that requirements for quota increases under Ninth General Review of Quotas have been met.
- April 16
- Executive Board approves creation of Systemic Transformation Facility (STF) to assist countries facing balance of payments difficulties arising from transformation from a planned to a market economy to be in place through 1994.
- May 13
- Kyrgyz Republic is first member to use STF.
- February 23
- Executive Board initiates operations under renewed and enlarged ESAF.
- IMF approves arrangements for 13 countries of the CFA franc zone, following January realignment of CFA franc.
- June 6
- IMF announces creation of three Deputy Managing Director posts.
- October 2
- Interim Committee adopts the Madrid Declaration, calling on industrial countries to sustain growth, reduce unemployment, and prevent a resurgence of inflation; developing countries to extend growth; and transition economies to pursue bold stabilization and reform efforts.
- February 1
- Executive Board approves a Stand-By Arrangement of SDR 12.1 billion for Mexico, the largest financial commitment by the IMF up to this time.
- March 26
- Executive Board approves an SDR 6.9 billion Extended Fund Facility for Russia, the largest EFF in IMF history.
- April 16
- IMF establishes voluntary Special Data Dissemination Standard for member countries having, or seeking, access to international capital markets. A General Data Dissemination System will be implemented later.
- Interim and Development Committees endorse joint initiative for heavily indebted poor countries (HIPCs).
- January 27
- Executive Board approves New Arrangements to Borrow (NAB) as the first and principal recourse in the event of a need to provide supplementary resources to the IMF.
- April 25
- Executive Board approves issuance of Public Information Notices following conclusion of members’ Article IV consultations with the IMF at the request of the member to make the IMF’s views known to the public.
- September 20
- Executive Board reaches agreement on proposal to amend Articles of Agreement that will allow all members to receive an equitable share of cumulative SDR allocations.
- December 4
- Executive Board approves a Stand-By Arrangement of SDR 15.5 billion for Korea, the largest financial commitment in IMF history.
- December 17
- In the wake of the financial crisis in Asia, the IMF establishes the Supplemental Reserve Facility (SRF) to help members cope with sudden and disruptive loss of market confidence. The SRF is activated the next day to support the Stand-By Arrangement for Korea.
- April 8
- Uganda becomes first country to receive debt relief (approximately $350 million in net-present-value terms) under HIPC, to which IMF will contribute about $160 million.
- July 20
- IMF activates GAB for first time for a nonparticipant and for the first time in 20 years to finance SDR 6.3 billion augmentation of Extended Arrangement for Russia. | <urn:uuid:cd517de2-850c-4890-9100-8e1d27d345fc> | CC-MAIN-2015-35 | http://www.imf.org/external/pubs/ft/survey/sup0998/14.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00046-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.931315 | 2,827 | 2.515625 | 3 |
When I saw the statue of Chester Nimitz by the USS Missouri, I knew I had to get a pic with it.
Chester Nimitz was a Fleet Admiral of the United States Navy. He held the dual command of Commander in Chief, United States Pacific Fleet for U.S. naval forces and Commander in Chief, Pacific Ocean Areas, for U.S. and Allied air, land, and sea forces during World War II.
When I was in the Navy I served aboard the USS Theodore Roosevelt, a Nimitz-class aircraft carrier, the class being named for the Admiral.
The “Punchbowl” was formed some 75,000 to 100,000 years ago during the Honolulu period of secondary volcanic activity. A crater resulted from the ejection of hot lava through cracks in the old coral reefs which, at the time, extended to the foot of the Koolau Mountain Range.
Although there are various translations of the Punchbowl’s Hawaiian name, “Puowaina,” the most common is “Hill of Sacrifice.” This translation closely relates to the history of the crater. The first known use was as an altar where Hawaiians offered human sacrifices to pagan gods and killed violators of the many taboos. Later, during the reign of Kamehameha the Great, a battery of two cannons was mounted at the rim of the crater to salute distinguished arrivals and signify important occasions. Early in the 1880s, leasehold land on the slopes of the Punchbowl opened for settlement and in the 1930s, the crater was used as a rifle range for the Hawaii National Guard. Toward the end of World War II, tunnels were dug through the rim of the crater for the placement of shore batteries to guard Honolulu Harbor and the south edge of Pearl Harbor.
During the late 1890s, a committee recommended that the Punchbowl become the site for a new cemetery to accommodate the growing population of Honolulu. The idea was rejected for fear of polluting the water supply and the emotional aversion to creating a city of the dead above a city of the living.
Fifty years later, Congress authorized a small appropriation to establish a national cemetery in Honolulu with two provisions: that the location be acceptable to the War Department, and that the site would be donated rather than purchased. In 1943, the governor of Hawaii offered the Punchbowl for this purpose. The $50,000 appropriation proved insufficient, however, and the project was deferred until after World War II. By 1947, Congress and veteran organizations placed a great deal of pressure on the military to find a permanent burial site in Hawaii for the remains of thousands of World War II servicemen on the island of Guam awaiting permanent burial. Subsequently, the Army again began planning the Punchbowl cemetery; in February 1948 Congress approved funding and construction began.
Prior to the opening of the cemetery for the recently deceased, the remains of soldiers from locations around the Pacific Theater—including Wake Island and Japanese POW camps—were transported to Hawaii for final interment. The first interment was made Jan. 4, 1949. The cemetery opened to the public on July 19, 1949, with services for five war dead: an unknown serviceman, two Marines, an Army lieutenant and one civilian—noted war correspondent Ernie Pyle. Initially, the graves at National Memorial Cemetery of the Pacific were marked with white wooden crosses and Stars of David—like the American cemeteries abroad—in preparation for the dedication ceremony on the fourth anniversary of V-J Day. Eventually, over 13,000 soldiers and sailors who died during World War II would be laid to rest in the Punchbowl.
One of the places we wanted to see because we have family listed on the Tablets of the Missing. It was sombering to see our family and name eteched with honor.
The map galleries extend from the right and left sides of the tower. Inscribed upon the frieze of the galleries are the names of places which attained notable significance in the proud record of our Armed Forces: PEARL HARBOR, WAKE, CORAL SEA, MIDWAY, ATTU, SOLOMONS, GILBERTS, MARSHALLS, MARIANAS, LEYTE, IWO JIMA, OKINAWA, TOKYO, AND KOREA. The orginal maps in the galleries are each ten feet high, were designed by Richard and Carlotta Gonzales Lahey of Vienna, Virginia from data prepared for that purpose by the American Battle Monuments Commission. They wre of scagliola, i.e. paintings on a special composition applied to Carrara marble surface and glazed. This material didn't far well due to Hawaii's climate and were replaced in 1968-1972. The new maps are of mosaic concrete and colored glass aggregate and designed by Mrs. Mary Morse Hamilton Jacobs of Glenelg, Maryland. Each of these maps descripes each of the battles during WWII.
They are fantastic in person and I feel it is an honor to finally see them in person.
Built in 1964 and located behind the Lady Columbia, is the non-sectarian chapel. The altar has the cross in the middle and flanked by the Star of David and the Buddhist Wheel of Righteousness Buddhists. The chapel altar, stairs, and floor are Verde Antico marble. The Latin Cross is displayed on Rojo Alicante marble. Sculptor Bruce Moore designed the glass cabochons that are within the art-deco-ish bronze altar, which are lit electrically. The cabochons on the chapel's windows and doors are beautiful lit by the sun.
She is a 30 foot female figure standing on the symbolized prow of a U.S. Navy carrier with a laurel branch in her left hand. The words below her were written by President Lincoln to a Mrs. Bixby, mother of five sons who had died in battle. "The Solemn Pride tht must be yours to have laid so costly a sacrifice upon the altar of freedom." She was designed by Bruce Moore of Washington D.C. Fillippo Cecchettio of Tivoli and Ugo Quaglieri of Rome, Italy carved the sculpture under the direction of Mr. Moore.
David Kalākaua was born on November 16, 1836. He succeeded to the throne on February 12, 1874, and ruled with his queen, Kapi‘olani. King Kalākaua was the catalyst for the revival and flowering of Hawaiian intellectual and artistic traditions that took place in the last quarter of the 19th century.
He was an accomplished musician and, among other chants and songs, composed he words of “Hawai‘i Pono’i,” now the State of Hawaii’s official anthem. His motto was “Ho‘oulu lāhui” (Let the Hawaiian race flourish). He was also a skilled sailor and loved the sea. ‘Iolani Palace, the only royal palace in the United States and one of Hawai‘i’s most famous landmark, was built during his reign.
Thoroughly Hawaiian but also cosmopolitan, he completed a tour around the world in 1881, including a visit to the United States in 1874, the first monarch in the world to have done so. His coronation took place on the grounds of ‘Iolani Palace on February 12, 1883. Kalākaua died on January 20, 1891. He was buried in the Royal Mausoleum in Nu‘uanu Valley on O‘ahu.
“Kukui ‘ā mau i ke awakea.” (The torch that continues to burn in daylight.) —Kalākaua family motto.
Plaque on the opposite side: David Laamea Kamanakapuu Mahinulani Naloiaehuokalani Lumialani Kalakaua, 1836–1891. This statue of King David Kalakaua (1836–1891) was commissioned by the Oahu Kanyaku Imin Centennial Committee on behalf of the Japanese-American community in 1985 in observance of the arrival of the first ship carrying 944 Kanyaku Imin, or government-contract immigrants, from Japan to Hawaii on February 8, 1885, to work on the sugar plantations.
King Kalakaua visited Japan in May, 1881, on his trip around the world and appealed to Emperor Meiji to send immigrants to Hawaii to relieve the shortage of laborers on sugar plantations. This resulted in the signing of the Japan-Hawaii Labor Convention. Japanese numbering 220,000 immigrated to Hawaii from 1885 to 1924 when the Oriental Exclusion Act was enacted by the congress of the United States.
The Japanese-Americans, who are descendants of these immigrants, have been successful in numerous fields and prospered here in Hawaii. The King is honored as the “Father of Japanese Immigration to Hawaii.” This statue is a symbol of appreciation and Aloha to King Kalakaua, a visionary monarch, for inviting their forebears to Hawaii.
Hawaii Visitors and Convention Bureau
Suite 801, Waikiki Business Plaza
2270 Kalakaua Avenue
Honolulu, HI 96815
We explored the USS Bowfin Submarine. It was really neat to see the actual size of the interior of a sub, especially if you like to watch the WWII movies. While on the topside, we saw a large Jellyfish next to the Bowfin, which was a real nice surprise. I could not imagine all the guys that they had aboard one of these vessels, and moving around during battles would invite some serious head trauma from hitting pipes, knobs, low ceilings, ect. The Bowfin was nicknamed the Avenger of Pearl Harbor, since it was commissioned on Dec 7th, 1942.
This is the actual submarine, U.S.S Bowfin "The Avenger of Pearl Harbor", that you get to walk through. It is amazing that this sub had 80 men onboard and the efficiences that they developed to adapt to confined quarters. I bet there were a lot of head injuries due to bumping their heads against pipes, railings, low ceilings and knobs. It is a very easy walk through that also includes a museum on the shore. Added bonus, we saw a nice sized Jelly Fish next to the sub that was pointed out to us by park personel.
To me it would be just another statue, one of many, hadn’t it have such an interesting “biography” which I learned from a guard at the Punchbowl cemetery.
“Did you know that this is not the original one, he said. This is a replica and the original one stands somewhere on the Big Island, at the birth place of the King. A high priest predicted that if the statue is not placed at the King’s birthplace it will never find peace. And he was right, because on its way from Paris, where it got its bronze color, the statue sank with the ship that was carrying it back to Hawaii.”
A new statue was created and placed at its present location, in front of the Oahu’s Court House.
Shortly, after the new statue’s inauguration, a sailor found the original one and sold it to King Kalakaua, who remembered the priest’s prediction and ordered to place it on the Big Island’s birth place of Kamehameha the 1st.
If you are visiting O'ahu for the first time, no trip would be complete without paying your respects to our fallen heroes at the U.S.S. Arizona Memorial. This holds a special place in the hearts of all Americans, but people from all over the world get the same sobering feeling when they visit as well. The tour includes a movie about the attack of Pearl Harbor on Dec. 7, 1941, which marked the United States' entry into World War II. Then a short ferry ride out to the memorial, where you can't help but feel emotions for what took place in this very spot over 60 years ago. Seeing the oil that still drifts to the surface of the water from the wreckage below gave me chills. A definate O'ahu must see.
THIS UNIQUE CEMETERY LOCATED IN PUNCHBOWL CRATER. ONCE AN ACTIVE VOLCANO ON OAHU, IS A MUST SEE.
OPENED IN 1949, IT NOW IS HOME TO OVER 13,OOO SERVICE MEN WHO GAVE THEIR LIFES IN WWII. IN ALL OVER 44 THOUSAND SERVICE MEN HAVE BEAN BURIED IN THE CRATER.
The cemetery is open daily.
September 30 thru March 1, from 8:00 a.m. until 5:30 p.m.
March 2 thru September 29, from 8:00 a.m. until 6:30 p.m.
On Memorial Day, the cemetery is open from 7:00 a.m. until 7:00 p.m
ALSO VISIT THE OFFICAL HOMEPAGE PUNCHBOWL MEMORIAL
This is looking towards downtown Honolulu from Punch Bowl lookout. Directly in the center of the picture you can see the State Capitol Building, and behind that is the Honolulu Harbor.
The beautiful thing in the right side of the picture is Nadine.
National Memorial Cemetery of the Pacific, is known to the locals as Punch Bowl. Here is a look towards Waikiki, and it's world famous landmark, Diamond Head Volcano. This picture was taken from Punch Bowls lookout.
The cemetery covers 116 acres and is located in the Puowaina Crater (an extinct volcano known as the Punchbowl). Construction began in 1948 and the first remains were interred on January 4, 1949. The remains of military personnel who fought in battles such as Guadalcanal, China, Burma, Saipan, Guam, Okinawa, Iwo Jima, Korea as well as prisoners of war from camps in Japan are buried here. Two examples include the remains of Ernie Pyle (WW II correspondent) and Sgt. Henry Hansen (one of the original flag raisers on Iwo Jima.
When visiting Pearl Harbour and the USS Arizona Memorial , don't miss going to the
Battleship Missouri Memorial right next door.
This is one of the most the most famous battleships and on its deck the instrument of Surrender was signed by theEmpire of Japan and the USA to end the war. | <urn:uuid:9ec4705b-4390-4afe-b0c8-c487a7cb336e> | CC-MAIN-2015-35 | http://www.virtualtourist.com/travel/North_America/United_States_of_America/Hawaii_State_of/Oahu/Things_To_Do-Oahu-Memorials-BR-1.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066017.21/warc/CC-MAIN-20150827025426-00220-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.964886 | 3,025 | 2.921875 | 3 |
Whether you’re raising chickens, ducks, geese or other fowl for meat or eggs, knowing how to incubate and hatch various species of fowl is an incredibly useful skill to have. There are many benefits to incubating your own eggs, including the fact that you can hatch a far greater number of chicks in an incubator than you can under even the best of hens.
Being able to hatch more birds is especially advantageous in a post-SHTF or TEOTWAWKI (The End Of The World As We Know It) situation because those chicks will be valuable, potentially worth many times their weight in gold, in trade and barter.
Even if you don’t intend to hatch a large number of birds, knowing how to incubate and hatch your own fowl is still good because you can’t always rely on a broody hen to do the job. Some hens may go broody but then give up after a week or two, leaving their clutch to die as a result.Then there is the fact that many popular breeds, especially those used for egg laying, have been selectively bred for decades specifically to get rid of broody behavior since it decreases egg production, which again results in a lack of broody hens to hatch the next generation.
To ensure that your eggs are fertile, your laying hens should be left to roam with a suitable rooster. Most roosters can handle 6 – 8 hens for reliable fertilization of the eggs, while some breeds (like the bantams) can handle upwards of 10 – 12 hens, so stock your flock accordingly.
If you’re collecting eggs over the course of several days, you can store them in a cool, dry place away from direct sunlight or heat. In nature, hens often let their eggs sit for upwards of 7 – 10 days while they lay a full clutch. The eggs will remain in a dormant state until the temperature rises to 99 – 101 degrees Fahrenheit, with the best viability being within 4 – 8 days of being laid.
Preparing Your Eggs
The first step in preparing your eggs for the incubator is to ensure that they are all clean. Bacteria on the outer shell of your eggs can permeate through the pores of the shell to contaminate the egg. Such contamination can result in the death of developing embryos. Worse still, contaminated eggs can actually incubate the bacteria, causing toxic gas to build up within the egg until it explodes, in which case your whole clutch can wind up contaminated.
The best way to avoid contaminating your eggs is to keep the nests where your hens are laying clean and fresh, so that the eggs need little or no extra cleaning when you take them for incubation. If you must use eggs that have been dirty, then wash them thoroughly with warm water. Don’t use cold water to wash your eggs since the difference in temperature can cause the contents of the egg to contract and suck bacteria through the pores of the eggshell as a result.
Once your eggs are clean, you’ll want to label them; many people use a marker to label their eggs, but if you’re concerned about the ink permeating the eggshell you can use a pencil instead. Numbering your eggs for easy identification is highly recommended, but at the very least you should mark one side so that you can tell which way to rotate the eggs, especially if you intend to rotate them by hand instead of using an automatic egg turner.
Incubation Temperature & Humidity Levels
The ideal temperature for incubation is a nice solid 100 degrees Fahrenheit, while relative humidity should generally be kept at about 50% for the first 18 days. During the final 2 – 3 days you will need to increase humidity levels to 60 – 70%.
To maintain the proper humidity levels, simply follow the instructions that came with your incubator. Please note, though, that when you need to add additional water to maintain the humidity, you should preheat the water to 100 degrees in order to avoid causing any ambient cooling from the cold water.
Slight variations in temperature, while tolerable to a certain extent, should be avoided when at all possible. A degree or two lower will extend incubation times and may result in birth defects, while eggs that cool to even 95 degrees run the risk of killing the embryo. Temperatures above 100 degrees are, again, likely to result in birth defects or the death of the developing embryos.
For the purpose of candling your eggs, or if you need to turn them by hand, the incubator should be opened for as little time as possible and as infrequently as possible. Maintaining a constant temperature inside the incubator is one of the chief reasons why an automatic egg turner is such a good idea.
Automatic egg turners also relieve you of the responsibility of turning your eggs 3 – 4 times a day, a fact that is crucially important and all too easy to forget. Eggs that are not properly turned run the risk of birth defects, spontaneous death, or the chick becoming stuck to the shell.
Candling is an old technique that you can use to determine whether the eggs you are incubating are developing properly or not. A small flashlight is ideal for this process, but you can use just about any sufficiently bright light.
You can candle eggs starting as soon as 24 hours after you have begun incubating them, but for the best results it’s generally good to wait until the third or fourth day at least, if not the sixth or seventh.
To candle your eggs, darken the room and simply take the egg to be candled gently in your hand. Holding the egg horizontally between the thumb and forefinger of one hand, bring the flashlight to press flush against the fat side of the egg.
The light will shine through the shell of the egg, rendering it translucent and allowing you to observe what’s happening inside. Eggs with white or light-colored shells are easier to candle than darker brown eggs.
As a point of reference, especially if you’re new to incubating fowl, you can also candle your eggs prior to putting them in the incubator. This will give you something to compare to when you candle them at 4 – 7 days. When you’re ready to candle the incubating eggs, you’ll be looking for a network of blood vessels; you may also be able to see a dark spot, roughly the side of a pencil eraser or smaller depending on how early you candle, which is the developing eye.
If you see no development, or you’re not sure, you can leave the eggs in the incubator a while longer; some eggs simply take longer to start developing, so you can make a note of suspect eggs and candle them again at 12 – 14 days incubation.
If you see a blood ring within the egg while candling at any stage of development, then you can remove and discard it, as a blood ring indicates that the embryo has died. A blood ring is pretty easy to identify, and often looks like someone drew a ring on the inside of the eggshell with a marker.
Pipping & Hatching
Starting on the 18th day, you should remove your eggs from the egg turner if you used one. At this stage they can be left simply laying on the floor of the incubator, as the chicks will begin rotating within their eggs to poke their beaks into the air pocket inside the egg.
By the 19th day you’ll probably be seeing the eggs move, and you may also hear peeping from within the eggs. Then the chicks will start pipping, with the earliest hatchers emerging on the 19th day and everyone else coming out between the 20th – 22nd day.
When a chick is ready to hatch, it will start to break out of its shell by first breaking a small hole in the shell. This is called pipping, and in most cases the chick will complete the process of hatching within 8 – 12 hours of initially pipping.
After pipping, the chick will drill through its eggshell in a horizontal circle before busting out. Once a chick has begun pipping, pay attention to how much time has passed; most chicks will get out of their eggs just fine on their own, but occasionally you may need to help a chick.
If a chick has pipped for more than 24 – 36 hours, it’s time to consider lending a hand. You must be extraordinarily careful if you assist a chick, since it is very easy to hurt them at this stage. When helping a chick hatch, pay close attention to the inner membrane and use plenty of warm water to keep the egg and membrane moist. If there is any bleeding, stop immediately and wrap the egg back up in a moist paper towel and return it to the incubator.
Unlike mammals, chickens do not have an umbilical cord, instead they are attached to the network of blood vessels that line the membrane of their egg. These blood vessels are the last thing to be absorbed before hatching, and if you assist a chick in hatching before it has fully absorbed the blood vessels you run the risk of causing it to bleed to death. A few drops of blood often accompany the hatch, but more than that becomes dangerous for the chick.
Once your chicks have successfully hatched, you can remove the egg shells from the incubator, but leave your chicks inside until they have thoroughly dried out and fluffed up. After that, you’ll move them to their nursery location where they should have an adequate heat lamp to keep them warm in the absence of a mother hen.
Don’t Count Your Chickens…
Until They Hatch
When it comes to incubating your own eggs, it’s easy to get excited about your upcoming hatch, especially if you’ve candled your eggs and can see the little chicks developing.
Still, resist the temptation to tally up your chicks because that good old phrase is around for a reason. Spontaneous death of the developing chick(s) can and does happen.
Usually, if it’s going to happen, it will happen early during incubation, or by the 12 – 14 day mark, but some chicks die just a day or two before hatching, or they simply don’t make it out of their shells once they start.
Sad though this can be, it’s one of those things that happens, and you should be prepared for the possibility. So abide by the wisdom passed down from generations before you, and don’t count those chickens until they hatch.
- a short youtube video showing how to candle a few different types of eggs
Video first seen on Charlie Trevino
- UC Davis Department of Animal Science – a useful page with photos of several different egg types during candling, and photos of various non-viable eggs, blood rings, etc.
This article has been written by Gaia Rady for Survivopedia.
|This article is sponsored by “Building Chicken Coops Guide” - Know the essential tips on building a predator-proof chicken coop to let the chickens lay the eggs safely.|
34,643 total views, 3 views today
- Composting And Handling Garbage After SHTF - November 11, 2013
- Life-Saving Skills To Develop Now For Survival - November 6, 2013
- DIY Projects You Can Start Now To Survive Later - November 4, 2013
- Winter Survival And Blizzard Prep Tips - October 30, 2013
- Relocating Before SHTF - October 28, 2013
- Meds To Stockpile For A Crisis - October 23, 2013
- 7 Vital Herbs From The Herbalist’s Garden - October 21, 2013
- Useful Skills And Items For Bartering After SHTF - October 16, 2013
- Best Fuels For Off-Grid Survival - October 14, 2013
- Bug Out Vehicles & Locations - October 9, 2013 | <urn:uuid:e64a83d1-dd10-4989-bc72-25116485aec2> | CC-MAIN-2015-35 | http://www.survivopedia.com/how-to-incubate-and-hatch-eggs/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00282-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.940565 | 2,477 | 3.0625 | 3 |
Shylock and History
by Jami Rogers
Towering over Shakespeare's romantic comedy The Merchant of Venice is the tragic figure of Shylock. Before we can begin to understand Shylock, though, we must understand the historical and dramatic influences under which Shakespeare wrote.
Although Shakespeare wrote possibly the most famous Jew in English literature, there were virtually no Jews in England during his lifetime. It isn't known whether Shakespeare would have come into contact with anyone who was Jewish. It would also be impossible to surmise how detailed his knowledge of the historical facts about Jews in England was, but fact and myth were certainly handed down through the ages, and it is safe to assume that he would have been aware of his country's historical folklore.
Jews in Early England: Assimilation to Expulsion
One of the first documented Jewish communities in England was recorded at Oxford in 1075. For more than a century, English Jews were not confined to ghettos, unlike many of their European counterparts. Eyewitness accounts report that Jews and non-Jews visited each other's houses, indicating that they lived side by side in relative harmony. Jews, however, were not citizens. They were viewed as outsiders, and were often barred from many professions because of their religion. Only Christians could belong to the artisan guilds -- the professional associations of the era -- and own land, which left Jews with few means of earning a living. Christians, however, could not lend money with interest, and many Jews earned a lucrative living as usurers. This profession was not a sure path to riches, as debts often had a way of going unpaid. The Jewish lender often had to become his own debt collector, and in trying to regain the debt owed to him, he frequently became the target of resentment. As usury was a profession comprised exclusively of Jews, religion eventually became the focus of much of this bad feeling.
In the late 12th century, preparation for the Third Crusade brought a heightened level of anti-Jewish sentiment. Anti-Semitic violence culminated in two massacres, one at the coronation of Richard I in 1189, when 30 Jews were killed, and the other in 1190 in the city of York, when 150 Jews were massacred. The Magna Carta, the basis for English constitutional law, is itself a testament to the growing unpopularity of Jewish money-lending activities. Two clauses in the 1215 document state that if a debtor dies before his debt is paid, neither his heir nor his widow will be responsible for repaying the debt.
Repressive measures against Jews continued to grow as the century wore on until finally, in 1275, they were forbidden to be money-lenders. Several more edicts against Jews were implemented at this time, including the taxation of any Jew over the age of 12 and the wearing of badges that identified people as Jewish. With the loss of their primary source of income, and thus their value to the King's coffers, Jews became expendable to the Crown and were expelled from England in 1290, not to be readmitted until 1655.
After the Expulsion, the English view of Jews began to be formed by several myths that grew in popularity through the centuries. The strongest of these myths was undoubtedly that of ritual murder (or "blood libel"), which remained in circulation in England long after the Jews had been expelled. There were several variations of this ritual murder legend, the most prevalent one being that of Jews kidnapping children at Easter and using them in ritual practices. It was also believed that adult Christians would be killed and their blood used for Passover ceremonies. Not one of these myths had any basis in fact; instead they stemmed from fear of an unknown culture and, yet, they were regarded as truth by many.
Jews in Elizabethan Society
The world in which Shakespeare lived was an exceedingly dangerous one. The threat of a civil war was never far away. When Elizabeth ascended the throne in 1558, she staved off the threat of rebellion by dealing ruthlessly with any hint of treason. Many of her enemies -- perceived or actual -- were beheaded.
Much of the plotting against Elizabeth I had its origins in the religious intolerance of the era, begun when Elizabeth's father, Henry VIII, broke with the Catholic Church. Desperate for a male heir to guarantee the Tudor succession, Henry was eager to divorce his then-wife and marry Anne Boleyn, whom he hoped would give him an heir. The Pope refused to grant him a divorce, and Henry VIII's solution to this conundrum was to break from the Catholic Church and create the Church of England, installing himself as head of a new Protestant religion.
A religious war for the soul of England then developed. By the time Elizabeth was Queen, the threat was not just internal but international, as the Pope and Catholic European countries plotted against the Protestant monarch in the hope of returning a Catholic to the throne of England. Elizabeth was forced into a series of reprisals against recusant Catholics (people who outwardly seemed to be Protestant, but who secretly practiced Catholicism), many of whom were murdered.
Elizabeth, however, was by no means the first Tudor monarch to engage in such actions. Her own father was guilty of similar intolerance, and her Catholic sister, Mary, had imprisoned Elizabeth herself in the Tower of London when it became clear that she was the focus of Protestant insurrections against Mary, then queen. She soon earned the epithet "Bloody Mary" because of her murderous actions against the Protestants. It would be no understatement to say that religion was serious business in Elizabethan England.
Where do the Jews fit in this climate of religious intolerance? Despite their expulsion 300 years earlier, small groups of Jews sought refuge in England from the Spanish Inquisition and were living quietly during Elizabeth's reign. These Jews, known as Marranos or Conversos, had converted from Judaism to Christianity; though they outwardly appeared to be Christians, many retained their Jewish heritage, even if they did not actively participate in Jewish religious practices. Another small group of Jews made its way to London in the 1500s and became musicians at the Court of King Henry VIII. Some scholars have even suggested that the "Dark Lady" of Shakespeare's sonnets was one Emilia Bassano, descended from these same musicians.
Most people of Jewish descent living in England in the 16th century were not persecuted by their Christian neighbors. But there was one notorious event which could hardly have escaped Shakespeare's notice. In 1593, a few years before The Merchant of Venice was written, Queen Elizabeth I's physician Roderigo Lopez was accused of trying to poison her. Lopez, allegedly in league with the King of Spain, was convicted of treason, hung, and drawn and quartered in 1594. His was a very public execution, and the fact that he was a Marrano led to an outbreak of anti-Jewish sentiment in the country. He was taunted by slurs on the scaffold as he died, still proclaiming his innocence. It was a clear but unfortunate sign that there was a latent anti-Semitism within the English public. Suspicion was not reserved for Jews alone, though. All foreigners were regarded with suspicion and distrust at this time, because they were seen as a threat to the security of the English nation. Anti-Spanish sentiment, for instance, was even more prevalent than anti-Semitism at the time of Lopez's death.
Shakespeare and Shylock
William Shakespeare, being a man of the theatre, would have been heavily influenced not only by history, but also by the theatre that had preceded him. He was also an exceptionally good businessman with a keen sense of what his audience wanted. Portrayals of Jews in drama were a long-standing tradition by the time Shakespeare wrote The Merchant of Venice. The Jew seems to have been the guy audiences loved to hate in medieval and Renaissance drama -- the equivalent to Americans' glee at watching the television exploits of the fictional J.R. Ewing two decades ago.
The roots of Shakespearean drama begin with mystery and miracle plays. During the Middle Ages, touring troupes primarily sponsored by the church performed the stories of the Old and New Testaments for a largely illiterate audience. Within these performances lurked the medieval dichotomy of feeling about the Jewish race that had dogged Christianity. On the one hand, Jewish patriarchs such as Moses were admired, while on the other Jews were often seen to be responsible for Christ's crucifixion.
With the coming of the Renaissance this strictly biblical, if somewhat biased, portrayal of Jews gave way to an overly melodramatic perception. Jews became the evil villains of Elizabethan drama. Frequently portrayed as Machiavellian or greedy or both, they were not complex characters. In fact, many of Shakespeare's contemporaries simply told a story, rather than added any psychological layers to characters and their motives. Even Christopher Marlowe, Shakespeare's greatest rival, fell into the one-dimensional trap in his play The Jew of Malta, written in 1589 -- nearly a decade before The Merchant of Venice. Both Barabas in The Jew of Malta and Shylock are money-lenders and they both have daughters who leave home with their father's money, but there the similarity ends. Barabas is an over-the-top villain who steals, cheats, and indulges in murder until he finally meets a gruesome end -- boiling in oil. Shakespeare's characterization of Shylock broke with theatrical tradition. Shylock is a complex man, whose every action can be understood and who, finally, elicits understanding from his audience.
Elements of all these influences -- historical, societal, and theatrical -- helped to mold Shylock's character. What we can draw from the play regarding Shakespeare's ideas about the Jewish people, however, is pure supposition. Shakespeare left no journals, no lifetime correspondence from which a biographer could draw a full picture of the author and his work. It is even questionable whether his plays would have survived had it not been for a band of actors pooling their memories seven years after his death to publish the First Folio.
Shylock began the play much as an Elizabethan audience would expect: He exhibited every sign of being the piece's villain. As the money-hungry Jewish usurer that had become a stock character in Elizabethan drama, Shylock made himself thoroughly unpleasant, with asides to the audience stating that he hated Antonio because Antonio was a Christian -- "but more" he continued, because he lent money without interest, thus competing with Shylock's business and threatening Shylock's sole means of supporting himself and his family.
In Shylock's final scene, Shakespeare had him act out another stereotype: a ritual murder. Of course, there is no mention in the play that Shylock would use Antonio's blood in any religious ritual. But the audience would have immediately associated the stage action with the myth. Shakespeare seemed to be giving his audience exactly what they expect from a stage Jew. In Portia, the audience got the means to stop the ritual murder because she would not let the Jew shed one drop of Christian blood. The text specifically says "Christian," reinforcing the "blood libel" legends.
While he perpetuated received notions of Jews, Shakespeare also did an extraordinary thing for an Elizabethan playwright: He created a Jewish character who was flawed, and human, and oppressed by the Christians surrounding him. The audience was told time and again of Shylock's encounters with Christians and how they spat upon him, called him nasty epithets, and spurned him. Shylock was the very picture of a man who suffered much at the hands of his fellow men and who had finally reached his breaking point. Growing scholarship points to the possibility that Shakespeare's family were themselves recusant Catholics, oppressed in Stratford and fallen from their high place in local society while Shakespeare was still a boy. If this is true, then perhaps Shylock's oppression was a metaphor for England's religious oppression during Shakespeare's lifetime. His forced conversion also fits with this notion, as it was not only Jews being forced to become Christians, but also Catholics forced to become Protestants and vice versa, depending on who was in control of the throne at the time. They had to convert or lose their lives. This theory is pure speculation, but it would hardly be the first time -- or the last -- that theatre was used to make a covert political statement.
Viewing the play through modern eyes, Shylock can be seen as both an Elizabethan stereotype and a fully drawn human being. Ironically, it is precisely because of the stereotypical elements in Shylock's character that many people argue against The Merchant of Venice, viewing it as an anti-Semitic work -- an understandable reaction in a post-Holocaust era. Shakespeare, however, did not write a one-dimensional villain, but a complex character who defies explanation and who will probably never be fully understood.
Jami Rogers received her training as a Shakespearean actress at the London Academy of Music and Dramatic Art. She has performed at the MacOwan Theatre in London, written a play performed by Boston Theatre Works, and has worked for Masterpiece Theatre and MYSTERY! since 1997.
Essays + Interviews:
An Interview with Trevor Nunn | Experiencing the National
Shylock and History | The Shortest Shakespeare
Essays + Interviews | Who Was Shakespeare?
Drama to Film | Story Synopsis | Will's Words | Who's Who
Teacher's Guide | The Forum | Links and Bibliography
Home | About The Series | The American Collection | The Archive
Schedule & Season | Feature Library | eNewsletter | Book Club
Learning Resources | Forum | Search | Shop | Feedback
Masterpiece is sponsored by: | <urn:uuid:d33cac07-2da3-4251-bf19-624d6be65802> | CC-MAIN-2015-35 | http://www.pbs.org/wgbh/masterpiece/merchant/ei_shylock.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00101-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.985852 | 2,833 | 3.90625 | 4 |
What Are Hearing Disabilities?1
Hearing loss affects a student’s ability to speak and to understand spoken language. A person with deafness (persons first language) cannot process any linguistic information, and a “hard of hearing” person can process some linguistic information. The Individuals with Disabilities Education Act (IDEA) offers the following definitions:
- A hearing impairment is a hearing loss, (STET)whether permanent or not, that affects a child’s educational performance. This definition includes children who have the capacity to receive some auditory stimuli, including speech and language; this capacity is known as residual hearing, which can be supported by the use of a hearing aid.
- Deafness is a severe hearing impairment that impedes the child’s processing of linguistic
- information through hearing, with or without amplification. A student with this condition cannot receive sound in all or most of its forms.
In other words, the student with a hearing impairment or hearing disability can respond to certain auditory stimuli, whereas the student who is deaf cannot process any information through hearing. Students anywhere along the hearing continuum will, of course, require appropriate accommodations. During the 1998–1999 school year, about 1.3 percent of the students who received special education services (and 0.1 percent of the overall school-age population) were classified as either hearing impaired or deaf (U.S. Department of Education, 2000).
Degrees of Hearing Loss
Hearing losses can be more precisely described in terms of the degree to which hearing (the ability to receive sound) is impaired. Sound is measured in two ways:
- Intensity (loudness) of the sound, measured in decibels (dB)
- Frequency (pitch) of the sound, measured in hertz (Hz)
A hearing disability can occur in one or both areas and may affect one or both ears (NICHCY, 2001a).
Figure 7.1 shows two audiograms (graphs of hearing ability) that compare a person who has normal hearing to a person with a typical hearing loss. The individual represented in audiogram B cannot hear high-frequency sounds without a hearing aid. This person has difficulty understanding what others say, because much speech information, particularly for consonants, is inaccessible.
We typically classify degrees of hearing disabilities as follows:
- Slight: 15–25 dB of hearing loss
- Mild: 20–40 dB of loss
- Moderate: 40–65 dB of loss
- Severe: 65–95 dB of loss
- Profound: more than 95 dB of loss
A child who cannot hear sounds at less than 90 decibels is considered deaf for the purposes of educational placement.
Types of Hearing Loss
Hearing loss is further categorized into four types:
- Conductive hearing loss is caused by disease or obstruction in the outer or middle ear. An individual with this condition can usually use a hearing aid.
- Sensorineural hearing loss is the result of damage to the delicate sensory hair cells of the inner ear.
- Mixed hearing loss combines both conductive and sensorineural losses, meaning that a problem exists in the outer or middle ear as well as in the inner ear.
- Central hearing loss results from damage to the central nervous system, either in the nerves that occupy the pathways to the brain or in the brain itself.
In planning instruction for a student with a hearing disability, you may benefit from knowing which type of loss is involved, so that you can determine what technological aids are in place and which might further your student’s educational goals. Technologies have developed to assist individuals with specific types of structural damage to the ear. For example, some types of hearing aids can help students with conductive hearing loss. Cochlear implants can assist persons with sensorineural damage.
Effects of Hearing Loss on Language Development
Hearing loss can be either prelinguistic—that is, it precedes a child’s language development—or postlinguistic, occurring after a child has acquired some degree of speech and language skill. Hearing disabilities can also be described as congenital or adventitious. Congenital means that the hearing loss is genetic or occurred at birth, and adventitious means that it occurred because of an accident or illness after birth.
Students who have lost their hearing postlinguistically or adventitiously may continue to use speech as a method of communication. Or they may use speech together with sign language or speech reading (lip reading). Students who lose their hearing before developing speech may not use speech at all, communicating solely with sign language. However, as with all other types of disability, no two students are alike. Trying to fit a student into a preconceived category will likely lead to embarrassment or frustration for you, the student, and the parents.
The long-term educational effects of a hearing loss can depend to a great extent on the age at which the loss occurred. Children with hearing and those with hearing losses (of normal or above average intelligence) follow the same pattern of cognitive development, including initial phases of language development such as babbling and the production of other sounds. Further development may, however, proceed at a different rate in children with hearing loss. Between the ages of one and three, the average child’s vocabulary jumps from 200 words to 900 words. This is when both hearing and nonhearing children make the greatest gains in language acquisition. A child who has not begun to build a vocabulary or to figure out the rules of grammar by age three can find these tasks extremely difficulty later on. That is why early intervention programs for students with hearing disabilities are extremely important. (See Principles & Practice: Early Intervention Programs.) It is critical, according to some experts (Solit, Taylor, & Bednarczyk, 1992), for children to experience rich language environments whether they can hear or not.
Educator who must decide whether to include a child with a hearing problem in the regular classroom follow the same general process as they do when evaluating inclusion for other exceptionalities. In order to provide the best learning environment for a child, teachers, parents, and related services specialists review the child’s particular needs before making any placement decision.
In the case of a child with a hearing problem, communication issues are the first addressed. A student who has learned speech reading and sign language in an early intervention program and is comfortable around hearing persons might be best placed in a regular classroom where he can focus on age-appropriate academic material. A student who is not comfortable with speech reading and lacks good communication skills might learn best in a self-contained classroom.
Although some students with hearing disabilities find inclusion easy, others struggle with loneliness in inclusive classrooms. They have difficulty communicating with peers and with hearing teachers and parents; often they feel “different.” As a teacher, be prepared to help the hearing-disabled student in your classroom who does experience feelings of isolation.
Many students succeed in the regular classroom once they have mastered communication techniques. However, some parents of children with hearing disabilities prefer that their child be placed with children who have similar issues and experiences. Many in the deaf community assert that deaf people have their own culture, with distinct folkways and a separate language (American Sign Language). These convictions should not be ignored (not the least of why is that it would be a form of audism). As is the case with all the laws relating to students with exceptionalities, parents have the final say in where their child will be placed.
Some children with hearing disabilities may attend a residential program. For example, the Arizona State School for the Deaf and Blind in Tucson offers both day school and residential programs. Because Arizona is a fairly large state with many rural areas, a residential program—rather than a long daily commute—might be the most logical choice for a child who is learning sign language or mobility skills. In fact, many families of children who are deaf or hard of hearing want their children to have a school experience with other students like them, rather than be included in a regular school where they might be the lone student with hearing disabilities.
In addition to school-year programs, students with hearing disabilities may participate in summer-school programs. The Texas School for the Deaf has summer sports camps, driver-education programs, communication skills workshops, and high school retreats.
Meeting the Needs of Students with Hearing Disabilities
Teachers who have students with hearing disabilities in a regular classroom find a challenge in achieving effective communication that assures the child complete access to an education. The challenge implies both a classroom environment and instructional techniques that have a strong visual orientation. Here are some suggestions to keep in mind:
- The teacher should refrain from speaking with his or her back to the students. This is of particular importance when a child is using speech reading.
- The student with a hearing disability should be able to see the teacher and peers from his or her vantage point in the classroom. During discussions, too, the student should be able to see the faces of all the other students (a circle can work nicely).
- If a sign-language interpreter is present, the lesson pace should allow the interpreter enough time to convey the information before the instructor moves on to the next point.
- The student should receive visual aids to reinforce the instructor’s verbal delivery of lessons. Copies of overhead transparency lecture notes, writing on the board, and written handouts of instructions can all reinforce learning. As a side note, giving all your students copies of a presentation outline can be helpful, especially for any student who has trouble taking notes or focusing on the important elements of a lecture or presentation.
Self-Esteem for Children with Hearing Disabilities
How children perceive and act towards other children is very important to young ones. Whether they are accepted by their friends or society is what gives a child good self-esteem. Most kids with hearing disabilities feel as though they are different and people perceive them differently and don't give them a fair chance. How can children learn and want to be in a classroom when they feel that the other kids don't like them because they are deaf or wear hearing aids. This article discusses the attitudes of the hearing impaired children.
Self-Esteem Ramifications of the Hearing Impaired in Schools
Facilitating Interaction with Other Students
Successful classroom discussions and socialization rely on helping your hearing students to understand how to communicate effectively and respectfully with a student who has a hearing disability. Be sensitive in planning discreet opportunities to convey this information to keep the student with special needs from feeling separated from the rest of the class. A few modifications to your basic classroom procedures can give the student with a hearing impairment an opportunity to be included more fully with his or her peers.
To make communication easier for the student with a hearing impairment, have students arranged in a circle or semicircle and remind them to speak one at a time. A simple strategy is to point to the person who will speak next and wait for the hearing-disabled student to locate the speaker. You might also pair each student with a partner or study buddy. Each person can count on her partner’s help to fill any gaps in class notes, clarify directions or assignments, or assist with class work. The buddy would not be responsible for taking care of the student with a hearing disability, but could provide support and act as a special contact in the classroom. Using a buddy system can also give the student with hearing loss an opportunity to share responsibilities and to feel that he or she can contribute to the learning of another.
As mentioned earlier, loneliness or lack of social interaction with the hearing world may be the most significant challenge a hearing-disabled student faces. As a teacher, you should make every effort to plan classroom activities that include the student to the greatest extent possible. For the most part, this student should be treated no differently than others; focus on the student’s challenges only when hearing is an integral part of your lesson. In those instances, your role is to provide assistance, restructuring, or other interventions necessary to help the student participate in the learning activity.
To learn about deaf persons learning through Deaf Theater, click here.
Working with Specialists
Your school may have a sign-language interpreter available to assist students with hearing disabilities. Some schools and school districts assign an interpreter to each student with a hearing disability. Others place several students with hearing difficulties in a classroom where the interpreter is working. A sign-language interpreter translates the spoken communication of the classroom into signs for the hearing-disabled student, and also voices (speaks aloud) the signs that the student is making.
Keep in mind these two points if you have a sign-language interpreter in your classroom:
- The interpreter’s job is to facilitate communication between the student with hearing problems and you and the other students, not to teach. Do not expect the interpreter to act as a teacher’s assistant or classroom aide.
- You and your students should be sure to address the student, not the interpreter, when talking or asking questions. Don’t ask the interpreter, “What did he just say (sign)?” Instead, tug on the interpreter’s sleeve and then turn to the student and ask, “What did you say?”
- Don’t be afraid of using words like say or hear when addressing a student with a hearing disability. These words are regular parts of our vocabulary, and the student knows what you mean.
Close communication with the school speech-language specialist can also furnish you with practical suggestions for modifications in the curriculum or its presentation. The speech-language specialist may also work individually with a particular student and have tips to share about past successes with that student. | <urn:uuid:f9d12618-bb8a-423c-b738-e7ffe0045540> | CC-MAIN-2015-35 | http://sped.wikidot.com/hearing-disabilities | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00102-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.956374 | 2,820 | 4.25 | 4 |
By Janet Klug
Perforations and the proper use of the device used to measure them, called a perforation gauge, often baffle beginner collectors and sometimes even those who are not so new to collecting.
Perforations have their own lexicon of terms that might puzzle a newcomer. So what exactly are perforations?
In 1840, Great Britain issued the world's first postage stamp, the Penny Black. The stamps were printed in sheets of 240. Each stamp on the sheet had to be cut from the sheet with scissors before it could be affixed to an envelope. This was time-consuming, and harried postal clerks frequently cut into the stamp design.
Stamps that are not perforated and have to be cut apart with scissors are called "imperforate."
Inventors began working on the problem of making stamps easier to use. By 1853, the British Post Office was using a perforating machine invented by Henry Archer.
The machine punched holes in the margins surrounding the stamp design. The holes weakened the stamp paper, allowing even tearing along the line of perforations. Scissors were no longer necessary for separating stamps.
Holes that are not cleanly perforated or not perforated at all because of broken or missing pins in the perforating equipment are called "blind perfs."
The bit of paper between two holes is called a "bridge." After stamps are torn apart, the protruding paper tips are known as perforation "teeth."
Perforating machines evolved and improved, but with the growing popularity of die-cut self-adhesive stamps, perforations are rapidly becoming a relic of a bygone era.
Knowing how to use a perforation gauge is an essential stamp-collecting skill.
Being able to correctly measure the gauge of perforations is crucial for collectors in many ways, including using a stamp catalog, purchasing stamps, correctly identifying stamps, and placing a stamp in its proper place on an album page.
Different gauges of perforation can mean the difference between a common, inexpensive stamp and a scarce, valuable stamp.
Many different varieties of perforation gauges are available. Although the gauges do not look the same, using any of them requires the same easy-to-learn techniques.
To use a perforation gauge, begin by picking up a stamp with stamp tongs. Slide the stamp up or down along the markings on the perforation gauge until the tips of the perforations exactly match the markings on the gauge. Once a match is made, check the corresponding number printed at the side. That figure indicates the number of perforations per 2 centimeters.
Some stamps have the same gauge of perforation on all four sides. This is expressed in catalogs with one number, as "Perf. 12" or "Perf. 10."
Stamps with compound perforations have different gauges of perforations on the sides than at the top and bottom.
Compound perforations are expressed by two numbers, such as "Perf. 12 x 12½." The first number refers to the horizontal perforations at the top and bottom of the stamp. The second number is the gauge of the vertical perforations at both sides.
Some stamps can gauge differently on all four sides. The 1906 issue of Bosnia and Herzegovina is an example.
Figure 1 shows three perforated Canadian 6¢ orange Queen Elizabeth II stamps from the Centennial definitive series.
This definitive series was in use from 1967 to 1972. Some stamps in the series were issued in several formats.
The stamps shown in Figure 1 all have the same color, design and denomination, but collectors consider them to be different stamps.
The stamp shown on the left was issued in a coil roll, a long strip of stamps rolled up for sale.
Coil stamps are generally sold in stamp-vending machines and are used in automatic stamp-affixing machines. Most (but not all) coil stamps are perforated on only two sides.
The 6¢ orange Queen Elizabeth II coil stamp shown in Figure 1 has horizontal perforations that measure gauge 10. The Scott catalog number for this stamp is 468A. The Scott Standard Postage Stamp Catalogue description for this stamp says "Perf. 10 Horiz."
Unlike the coil stamp, the stamp at the center of Figure 1 and the stamp on the right have perforations on all four sides. But even without using a perforation gauge to measure them, you can see that the stamp at the center does not have as many teeth as the one on the right.
Using a perforation gauge, I measured the stamp at the center as being perforated gauge 10 on all four sides.
The stamp on the right has compound perforations that measure gauge 12½ at the top and bottom and gauge 12 on both sides. The Scott catalog lists the center stamp as Scott 459. The stamp on the right is listed as 459b.
These stamps have different gauges of perforations because they were issued in different formats: as coil, sheet and booklet stamps.
Different gauges of perforation can also be the result of switching from one type of perforating machine to another during stamp production.
Many modern stamps are die cut. In this method, a matrix of sharp cutting edges, called a die, is used to cut completely through the paper between the stamps. Most die-cut stamps are self-adhesive and are held together by the backing paper until they are peeled from the pane and used.
When straightline die cuts are used, there is nothing to measure. Some die-cut stamps are also cut to irregular shapes with a combination of curved and straightline die cuts. But many stamps have serpentine die cuts that mimic perforations. Those serpentine edges can be measured with a perforation gauge the same way that perforations are measured.
In the early days of experimenting with stamp separation, some nations began using equipment that made small slits in the margins between the stamps. This type of separation is called "rouletting."
The big difference between rouletting and perforating is that no paper is removed during the rouletting process, while perforating punches out holes from the paper.
A Mozambique 1-centavo gray-green Coats of Arms and Allegory war tax stamp (Scott MR1) is shown in Figure 2. This stamp was valid for payment of postal or telegraph war tax, but the cancellation reveals that this one was used on a telegram.
The rouletting on this stamp is gauge 7. The stamp listed as Scott MR3 has the same design, color and denomination but has gauge 11 rouletting.
Shown in Figure 3 is an imperforate Obock 10-centime black and green Somali Warriors stamp (Scott 50), part of the imperforate issue of 1893-94.
Although the stamps had to be cut apart with scissors, they bear printed simulated perforations. I wonder what the reasoning behind this might have been, but it gives stamp collectors something cool to collect.
It is possible to enjoy stamp collecting without using a perforation gauge, but perforations and other forms of separation are an important part of the history of stamps. Learn the simple techniques. You will enjoy stamp collecting all the more for doing so.
blogOn June 28, 1914, by assassinating Archduke Franz Ferdinand of Austria, Yugoslav nationalist Gavrilo Princip with the squeeze of a trigger sparked would become to be known as “The Great War” and “The War to End All Wars.” Read More ›
blogEleanor Roosevelt said, “Great minds share ideas …,” and Linn’s is fortunate to have thoughtful leaders of the stamp hobby on its Editorial Advisory Board. Board members participated in a lively discussion of “The State of the Stamp Hobby” Aug. 21 at the American Philatelic Society Stampshow in Grand Rapids, Mich. Read More ›
August 19, 2015 01:58 PMIn an unusual development for our hobby, the Office of Inspector General of the United States Postal Service is blogging about stamp collecting. Read More ›
August 17, 2015 12:19 AMFrom 1967 to 2006, Royal Mail (Great Britain’s post office) advertised all new issues with posters displayed in post offices. Most of these posters had pictures of the stamps along with basic information such as the date of issue, instructions for first-day covers, etc. Some were a little more elaborate. Read More ›
Watch as Scott catalog senior editor Marty Marty Frankevicz discusses the controversy in Canada over increasing postage rates, the elimination of home mail delivery and the erecting of cluster boxes.
Watch as Linn’s associate editor Michael Baadke discusses happenings at the recent APS Stampshow from the show floor.
Watch as Linn's/Scott editorial director Donna Houseman discusses the early release of the new U.S. Elvis stamp, the possibility of a Peanuts stamp and Linn's at the upcoming APS Stampshow.
Watch as Linn’s Stamp News managing editor Chad Snee discusses highlights of Robert A. Siegel Auction Rarities Week sales in late June, and reports that the 49¢ price for a first-class United States stamp will remain in effect until April.
It is always a treat to get to see stamp dealers’ own collections.
In the recently concluded Linn’s United States Stamp Popularity Poll, the Circus Posters set of eight stamps was chosen as the overall favorite issue of 2014.
Dispersal of the splendid Daniel B. Curtis collection continued March 25, with Robert A. Siegel Auction Galleries gaveling items from United States back-of-the-book and possessions.
The 175th anniversary of the first postage stamp, Great Britain's Penny Black, is May 6, but the stamp was placed on sale May 1, 1840, for mailers to use beginning on May 6, the designated issue date. | <urn:uuid:a35f8de0-7d74-495b-be2c-a4b654481bc1> | CC-MAIN-2015-35 | http://www.linns.com/en/insights/stamp-collecting-basics/2007/november/stamps-with-teeth--perforations-need-not-perplex-collectors.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00339-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.953324 | 2,101 | 3.640625 | 4 |
Since 1991, the author has been engaged in the EDICT (Electronic DICTionary) project to develop a series of computer-based Japanese-English dictionaries, capable of being used both as traditional dictionaries and as semi-automated aids for reading Japanese text. The main EDICT glossary file now has over 95,000 entries, and has been joined by subject-specific files covering bio-medical terminology, legal terms, computing, telecommunications, business, etc., as well as a proper names file with 350,000 entries and a kanji database covering over 12,000 kanji. A variety of software packages have been released for use on a number of computer systems, and the files are used within several free or shareware Japanese word-processor systems. The files, which have also been used in a number of natural-language processing (NLP) and machine translation (MT) projects, are all available free of charge.
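The glossary files described above use a deliberately simple plain-text entry layout — headword, optional kana reading in square brackets, then slash-delimited glosses — which is part of what has made them easy to reuse in other software. A minimal parser, assuming that standard EDICT layout (the sample entries are constructed for illustration, not taken from the file), might look like:

```python
import re

def parse_edict_line(line):
    """Parse one EDICT-format entry: HEADWORD [READING] /gloss 1/gloss 2/.../
    Returns (headword, reading, [glosses]); reading is None when the
    headword is already in kana and no bracketed reading is given."""
    m = re.match(r'^(\S+)(?:\s+\[(\S+)\])?\s+/(.+)/\s*$', line)
    if m is None:
        raise ValueError("not an EDICT-format entry: %r" % line)
    headword, reading, body = m.groups()
    glosses = [g for g in body.split('/') if g]
    return headword, reading, glosses

print(parse_edict_line("工場 [こうじょう] /(n) factory/plant/mill/"))
print(parse_edict_line("コンピュータ /(n) computer/"))
```

Real entries carry additional markup (part-of-speech codes, alternative readings), so a production loader does more work, but this shows why the files index well.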
The development of the World-Wide Web as an information retrieval system on the Internet in 1993 opened the possibility of providing a comprehensive dictionary facility from a small number of servers. The facilities within the WWW to combine server-based software with text input from almost any browser have meant that an identical service can be provided regardless of the user's type of computer. Complex software distribution and installation are also avoided, and the central lexicographical databases can be expanded and the services enhanced without requiring software changes on the part of users.
A large number of WWW-based dictionaries have become available in the last decade, covering most of the world's major languages. Sites such as yourDictionary list several hundred such servers. Many dictionary servers emulate traditional published dictionaries in that entries can only be accessed using the specified head-words.
The first WWW-based dictionary using the EDICT files began operating in 1993, and since then approximately 10 different server systems have been developed to use these files. In addition there are several servers based on other Japanese dictionaries in electronic form, the most notable being the oddly-named Goo servers operated by NTT in Japan, and based on a set of dictionaries published by Sanseido.
This chapter describes the dictionary and related services provided by the author's WWWJDIC server which operates at Monash University, and from mirror servers in the USA, Canada, Poland, Germany and Japan. This server was initially designed to provide an integrated word and character dictionary, and related services such as text glossing. It has been extended to incorporate a number of additional facilities of assistance to students of Japanese.
2. Integration of Japanese Dictionaries
As with other languages, Japanese dictionaries, whether monolingual or bilingual make use of ordered head-words to assemble and organize entries. In Japanese dictionaries the head-words are written either in the hiragana and katakana syllabaries, or in the case of some dictionaries intended for non-native speakers, in romanized Japanese.
In addition, the use of kanji characters necessitates the use of special character dictionaries which contain information about each kanji, plus a selection of words using that kanji. These dictionaries are ordered using some identifiable aspect of the individual kanji, such as a radical component shape and the count of strokes. A student of Japanese using dictionaries to assist with reading a text will typically have to switch between the two forms of dictionary in order to determine the meaning of a new word. This is often found by students to be a time-consuming and frustrating task.
The availability of dictionaries in file form and a database of kanji information enables a Japanese dictionary package to integrate the two so that a user can move easily between the two. Also dictionary packages with appropriate indexing are not limited to a single headword per entry, as is the case with printed dictionaries. Thus an entry in a Japanese dictionary could be accessed by its pronunciation, its full written form using kanji, or potentially by any kanji used to write the word.
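The multiple access paths just described can be modelled as an inverted index in which each entry is registered under its reading, its full written form, and every individual kanji it contains. The following sketch uses two illustrative entries (the entry data is hand-built for the example):

```python
from collections import defaultdict

# Two illustrative entries: (written form, reading, gloss).
ENTRIES = [
    ("工場", "こうじょう", "factory; plant"),
    ("工事", "こうじ", "construction work"),
]

def build_index(entries):
    """Register each entry under its reading, its full written form,
    and every individual kanji used to write it."""
    index = defaultdict(set)
    for i, (written, reading, _gloss) in enumerate(entries):
        index[reading].add(i)
        index[written].add(i)
        for ch in written:          # one access path per character
            index[ch].add(i)
    return index

def lookup(index, entries, key):
    return [entries[i] for i in sorted(index.get(key, ()))]

idx = build_index(ENTRIES)
print(lookup(idx, ENTRIES, "工"))          # both entries share the kanji 工
print(lookup(idx, ENTRIES, "こうじょう"))   # lookup by pronunciation
```

A production index is of course disk-based and much larger, but the principle — one entry reachable through several keys — is the same.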
The author pioneered the integration of word and character Japanese electronic dictionaries with the release of the JDIC package for DOS computers in 1991. This was the beginning of a series of packages for a number of computer platforms which employed similar integration techniques.
The WWWJDIC server provides an integrated set of facilities: searching of the EDICT-project word dictionary files, kanji dictionary lookup via a variety of index methods, glossing of Japanese text, and, for each word entry, links to example sentences and generated verb conjugation tables.
Figure i: WWWJDIC result when searching for こうじょう.
By following the links at the end of the display of each entry, the user can carry out a number of additional searches, e.g. in another dictionary or via a search engine, or, as described below, view a selection of sample sentences using the word, or view a table of conjugations generated for each verb.
The ability to identify a kanji by a number of index methods is unique to dictionary software packages. Printed kanji dictionaries must use a primary indexing system for publication, and relegate other indices to appendices.
In addition to providing information about a kanji, the server enables linking at the character level to other WWW-based databases of Japanese and Chinese characters.
An additional educational feature of the server is the facility to
view a stroke-by-stroke animation of a kanji being written. Learning correct
stroke order is considered an important element of kanji acquisition, and many
instructional software packages include some support for this, often in the
form of video clips. The WWWJDIC server uses
animated images of approximately 2,000 kanji
constructed by the author from the diagrams in the Kodansha
"Kanji Learner's Dictionary" compiled by Jack Halpern. Mr Halpern kindly
permitted the digitized version of the diagrams to be converted and used
by the server.
Figure ii: Kanji dictionary display for 番.
Figure iii shows the 番 kanji partially-written in the animated form.
Figure iii: Animation of the writing of the Kanji 番.
4. Text Glossing
The ability to use dictionary files to gloss text is a powerful adjunct to computerized dictionaries. The files of the EDICT project have often been used for this purpose, with earlier examples including the author's JREADER program, Hatasa & Henstock's AutoGloss/J Package, Yamamoto's Mailgloss system, Kitamura & Tera's DLink system DLink system etc.
In carrying out a glossing of Japanese text, a degree of processing of the text must be carried out beforehand, in particular to segment the text into its lexemes and to convert the inflected forms of words into their dictionary forms. These tasks are non-trivial for Japanese text, and have led to the development of powerful morphological analysis software tools such as ChaSen and JUMAN. These tools are generally too large and slow to use within a WWW server, where a rapid response is essential.
With WWWJDIC a simpler approach to segmentation has been employed in which the text is scanned to identify in turn each sequence of characters beginning with either a katakana or a kanji. The dictionary is searched using each sequence as key, and if a match is made, the sequence is skipped and the scan continues. In addition, a small supplementary dictionary file of words and phrases typically written in hiragana is also used. Thus the dictionary file itself plays a major role in the segmentation of the text in parallel with the accumulation of the glosses. The technique cannot identify grammatical elements and some words written only in hiragana, however is it quite successful with gairaigo and words written using kanji.
A further element of preprocessing of text is required for inflected forms of words, as the dictionary files only carry the normal plain forms of verbs and adjectives. An inverse stemming technique previously employed in the author's JREADER program is used here, wherein each sequence which could potentially be an inflected verb or adjective, e.g. a kanji followed by two hiragana is treated as a potential case of an inflected word. Using a table of inflections, a list of potential dictionary form words is created and tested against the dictionary file. If a match is found, it is accepted as the appropriate gloss. The table of inflections has over 300 entries and is encoded with the type of inflection which is reported with the gloss. Although quite simple, this technique has been extensively tested with Japanese text and correctly identifies inflected forms in over 95% of cases. (In Figure iv this can be seen where 思います has been identified as an inflection of 思う.)
Figure iv: Example of the glossing of words in Japanese text.
When preparing glosses of words in text, it is appropriate to draw on as large as a lexicon as possible. For this reason, a combination of all the major files of the EDICT project is used, unlike the single word search function where users can select which glossary to use. This can introduce other problems as the inappropriate entry may be selected. For example, for the word 人々 the ひとびと entry must be selected, not the much less common にんにん. To facilitate this, a priority system is employed in which preference is given in turn to entries from:
5. Example Sentences
It is generally considered desirable for dictionaries, especially bilingual dictionaries used by students, to have representative clauses and sentences showing the usage of words. The EDICT file did not include such examples, and the compilation process using volunteers did not lend itself to the task of generating and including such examples.
In 2002 the author received a copy of a file of some 210,000 Japanese-English sentence pairs compiled by Professor Yasuhito Tanaka at Hyogo University and his students (see Pacling2001). The collection, which has been placed in the Public Domain, consists of material drawn largely from instructional texts. After editing to remove unsuitable and duplicated material, the remaining 180,000 sentence pairs were processed using the Chasen morphological analysis system to extract the Japanese words from each sentence, a process that identified approximately 20,000 unique words. The collection of sentences was then integrated with the WWWJDIC server so that a user can link to the example sentences and view examples of a word's usage.
Figure v shows some of the sample sentences available for the verb 食べる (to eat.)
Figure v: Example sentences using 食べる.
The collection of sentences still requires considerable editing to remove errors and duplicated sentences, as well as reducing the number of pairs to something more manageable. Initial feedback, however, is that it is proving a useful addition for students of Japanese.
6. Verb Conjugations
A further extension to WWWJDIC for language education purposes is the option to see a table of verb conjugations for almost all of the approximately 9,000 verbs in the main EDICT dictionary file. As most Japanese verbs are quite regular, it was originally thought that such an option would be of limited use, however the possibility received strong support from a sample of instructors and students.
The conjugation table is generated as required from a set of rules for each verb type, and relies on the verb classification being indicated in the dictionary file.
Figure vi: Verb Conjugation Table for 食べる.
7. Use of WWWJDIC by other systems
As well as the traditional user interface via a browser screen, another interface has been provided to enable other WWW-based systems make requests to the WWWJDIC system. An interesting example of this is the Japanese Text Initiative at the University of Virginia library. As part of this project, a "portal" system has been developed which allows individual words to be selected from texts and passed to WWWJDIC for display of the meanings, etc.
A further interesting application of WWWJDIC has been its use via the NTT "DoCoMo" WAP mobile telephones in Japan. The DoCoMo telephones have a small screen and a built-in "micro-browser" which enables access to WWW services via NTT's proxy servers. In order to make WWWJDIC services accessible to DoCoMo users, a special interface with a smaller screen usage and abbreviated dialogue has been provided. In addition, an option to operate using the "Shift-JIS" coding commonly employed in Japan has been added, as the DoCoMo browser does not support other standard encodings such as EUC.
The WWW, with its ability to associate central data files and server software, and be accessed flexibly by innumerable users, has opened the possibility of extensive sophisticated dictionary facilities being provided to many people at little cost. These facilities can extend beyond those of traditional paper dictionaries by providing additional services such as integrated kanji and text dictionaries, access using several different keys and automated glossing of text, as well as providing integrated educational tools such as linking to text examples and generation of sample verb conjugations.
A singular advantage of WWW-based approaches is they lend themselves to continual update and enhancement without having to burden users with new acquisitions and installations, or the developers with preparing and distributing new editions. The immediacy also serves to encourage feedback and suggestions from users, which ultimately can lead to a system better "in tune" with the user requirements than traditional publishing and production techniques can achieve.
At present many of the systems are experimental, however as more extended lexicons become available online, and as server and browser software become more advanced, the WWW is likely to play an increasingly important role in language study and multi-lingual communications.
1. These are all numeric codes based on the stroke-counts of identifiable portions of kanji. Halpern's SKIP (System of Kanji Indexing by Patterns) is used to order and index kanji in his New Japanese-English Character Dictionary (Kenkyusha, Tokyo 1990) and Kanji Learner's Dictionary (Kodansha, Tokyo 1998). De Roo's code is used in his "2001 Kanji" (Bonjinsha). The Four Corner code was developed by Wang Chen in 1928 and is widely used in Chinese and Japanese dictionaries. As an example, the kanji 村 has a SKIP of 2-4-3 indicating a vertical division into 4 and 3 stroke portions, a De Roo code of 1848 representing 木 (18) and 寸 (48), and a Four Corner code of 4490 because there is a 十 (4) at the top two corners and a 小 (9) at the bottom left.
Appendix: Technical Aspects of WWWJDIC
WWWJDIC operates as a CGI program running under the control of a WWW server. All the operational systems use the Apache server. The code is largely drawn from the author's XJDIC dictionary system for Unix/X11. In summary each dictionary file consists of a relatively simple text file which is searched using a form of binary search via an index file of sorted pointers to lexical tokens in the target file.
As the total set of dictionary and index files used by WWWJDIC amounts to approximately 80Mb, it is important that the searches be efficient, and that a minimal amount of time be spent loading software. Initially it was intended that the searching be carried out by a permanently-running daemon at the request of the transient CGI program instances. This could have been implemented relatively easily as the XJDIC system has an option for its dictionary search module to be daemon interacting with multiple user-interface client programs. In fact this proved not to be necessary for relatively efficient WWW operation, as the use of memory-mapped input/output has meant that the file system tends to keep the object code and pivot pages of the dictionary in disk cache to such an extent that there is little or no advantage in having a more complex client/daemon arrangement.
All the Japanese text in the files is handled internally in the EUC (Extended Unix Code) in which each character is typically encoded as a pair of bytes each with the MSB set to distinguish them from the normal ASCII characters. Most characters are from the JIS X 0208 set, which encodes 6,355 kanji, all the kana and a number of special characters. Most WWW browsers can display these characters once the appropriate fonts are installed. In addition there are some kanji from the supplementary JIS X 0212 set, which has a further 5,801 kanji. As few browsers can support these kanji, the server software provides bit-mapped image files. Normally the generated HTML delivered to the browsers is in EUC coding and is identified by an appropriate "charset" value in the header as recommended by the W3C. | <urn:uuid:3f8c0e7f-0d70-4f89-9009-a6885a9ed011> | CC-MAIN-2015-35 | http://www.csse.monash.edu.au/~jwb/wwwjdic_article2.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645305536.82/warc/CC-MAIN-20150827031505-00277-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.938586 | 3,491 | 2.859375 | 3 |
2. History and Development of Speech Synthesis
Artificial speech has been a dream of humankind for centuries. To understand how present systems work and how they have developed into their present form, a historical review may be useful. This chapter discusses the history of synthesized speech from the first mechanical efforts to the systems that form the basis for today's high-quality synthesizers. Some milestones in synthesis-related methods and techniques are also discussed briefly. For a more detailed description of the development and history of speech synthesis, see for example Klatt (1987), Schroeder (1993), and Flanagan (1972, 1973) and the references therein.
2.1 From Mechanical to Electrical Synthesis
The earliest efforts to produce synthetic speech were made over two hundred years ago (Flanagan 1972, Flanagan et al. 1973, Schroeder 1993). In St. Petersburg in 1779, the Russian professor Christian Kratzenstein explained the physiological differences between five long vowels (/a/, /e/, /i/, /o/, and /u/) and built an apparatus to produce them artificially. He constructed acoustic resonators similar to the human vocal tract and activated them with vibrating reeds, as in musical instruments. The basic structure of the resonators is shown in Figure 2.1. The sound /i/ was produced by blowing into the lower pipe without a reed, causing a flute-like sound.
Fig. 2.1. Kratzenstein's resonators (Schroeder 1993).
A few years later, in Vienna in 1791, Wolfgang von Kempelen introduced his "Acoustic-Mechanical Speech Machine", which was able to produce single sounds and some sound combinations (Klatt 1987, Schroeder 1993). In fact, Kempelen had started his work before Kratzenstein, in 1769, and after over 20 years of research he also published a book describing his studies on human speech production and the experiments with his speaking machine. The essential parts of the machine were a pressure chamber for the lungs, a vibrating reed to act as the vocal cords, and a leather tube for the vocal tract. By manipulating the shape of the leather tube he could produce different vowel sounds. Consonants were simulated by four separate constricted passages controlled by the fingers. For plosive sounds he also employed a model of the vocal tract that included a hinged tongue and movable lips. His studies led to the theory that the vocal tract, the cavity between the vocal cords and the lips, is the main site of acoustic articulation. Before von Kempelen's demonstrations, the larynx was generally considered the center of speech production. Kempelen also received some negative publicity: while working on his speaking machine he demonstrated a speaking chess-playing machine whose main mechanism was, unfortunately, a concealed, legless chess expert. His real speaking machine was therefore not taken as seriously as it should have been (Flanagan et al. 1973, Schroeder 1993).
In about the mid-1800s, Charles Wheatstone constructed his famous version of von Kempelen's speaking machine, shown in Figure 2.2. It was somewhat more complicated and was capable of producing vowels and most consonant sounds. Some sound combinations and even full words could also be produced. Vowels were produced with a vibrating reed and all passages closed. Resonances were effected by deforming the leather resonator, as in von Kempelen's machine. Consonants, including nasals, were produced by turbulent flow through a suitable passage with the reed turned off.
Fig. 2.2. Wheatstone's reconstruction of von Kempelen's speaking machine (Flanagan 1972).
The connection between a specific vowel sound and the geometry of the vocal tract was found by Willis in 1838 (Schroeder 1993). He synthesized different vowels with tube resonators resembling organ pipes. He also discovered that the vowel quality depended only on the length of the tube, not on its diameter.
In the late 1800s, Alexander Graham Bell and his father, inspired by Wheatstone's speaking machine, constructed a similar speaking machine. Bell also made some questionable experiments with his terrier: he put the dog between his legs, made it growl, and then modified its vocal tract by hand to produce speech-like sounds (Flanagan 1972, Schroeder 1993).
Research and experiments with mechanical and semi-electrical analogs of the vocal system continued until the 1960s, but with no remarkable success. The mechanical and semi-electrical experiments made by famous scientists, such as Hermann von Helmholtz and Charles Wheatstone, are well described in Flanagan (1972), Flanagan et al. (1973), and Schroeder (1993).
2.2 Development of Electrical Synthesizers
The first fully electrical synthesis device was introduced by Stewart in 1922 (Klatt 1987). The synthesizer had a buzzer as excitation and two resonant circuits to model the acoustic resonances of the vocal tract. The machine was able to generate single static vowel sounds with the two lowest formants, but no consonants or connected utterances. A similar synthesizer was made by Wagner (Flanagan 1972). The device consisted of four electrical resonators connected in parallel, excited by a buzz-like source; the outputs of the four resonators were combined in the proper amplitudes to produce vowel spectra. In 1932 the Japanese researchers Obata and Teshima discovered the third formant in vowels (Schroeder 1993). The first three formants are generally considered sufficient for intelligible synthetic speech.
The first device to be considered a speech synthesizer was the VODER (Voice Operating Demonstrator), introduced by Homer Dudley at the New York World's Fair in 1939 (Flanagan 1972, 1973, Klatt 1987). The VODER was inspired by the VOCODER (Voice Coder) developed at Bell Laboratories in the mid-thirties. The original VOCODER was a device for analyzing speech into slowly varying acoustic parameters that could then drive a synthesizer to reconstruct an approximation of the original speech signal. The VODER consisted of a wrist bar for selecting a voicing or noise source and a foot pedal to control the fundamental frequency. The source signal was routed through ten bandpass filters whose output levels were controlled by the fingers. It took considerable skill to play a sentence on the device. The speech quality and intelligibility were far from good, but the potential for producing artificial speech was well demonstrated. The speech quality of the VODER is demonstrated on the accompanying CD (track 01).
Fig. 2.3. The VODER speech synthesizer (Klatt 1987).
After the demonstration of the VODER, the scientific world became more and more interested in speech synthesis. It was finally shown that intelligible speech can be produced artificially. In fact, the basic structure and idea of the VODER is very similar to present systems based on the source-filter model of speech.
About a decade later, in 1951, Franklin Cooper and his associates developed the Pattern Playback synthesizer at the Haskins Laboratories (Klatt 1987, Flanagan et al. 1973). It reconverted recorded spectrogram patterns into sounds, either in original or modified form. The spectrogram patterns were recorded optically on a transparent belt (track 02).
The first formant synthesizer, PAT (Parametric Artificial Talker), was introduced by Walter Lawrence in 1953 (Klatt 1987). PAT consisted of three electronic formant resonators connected in parallel. The input signal was either a buzz or noise. A moving glass slide was used to convert painted patterns into six time functions controlling the three formant frequencies, voicing amplitude, fundamental frequency, and noise amplitude (track 03). At about the same time, Gunnar Fant introduced the first cascade formant synthesizer, OVE I (Orator Verbis Electris), which consisted of formant resonators connected in cascade (track 04). Ten years later, in 1962, Fant and Martony introduced the improved OVE II synthesizer, which consisted of separate parts to model the transfer function of the vocal tract for vowels, nasals, and obstruent consonants. Possible excitations were voicing, aspiration noise, and frication noise. The OVE projects were followed by OVE III and GLOVE at the Kungliga Tekniska Högskolan (KTH), Sweden, and the present commercial Infovox system is descended from these (Carlson et al. 1981, Barber et al. 1989, Karlsson et al. 1993).
The PAT and OVE synthesizers sparked a debate over whether the transfer function of the acoustic tube should be modeled in parallel or in cascade. John Holmes introduced his parallel formant synthesizer in 1972 after studying these synthesizers for a few years. He hand-tuned the synthesized sentence "I enjoy the simple life" (track 07) so well that the average listener could not tell the difference between the synthesized sentence and a natural one (Klatt 1987). About a year later he introduced a parallel formant synthesizer developed with the JSRU (Joint Speech Research Unit) (Holmes et al. 1990).
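As a rough modern illustration of the formant synthesis principle behind the cascade/parallel debate, each formant can be modeled as a second-order digital resonator, and the resonators combined in cascade (as in OVE) or in parallel (as in PAT). The sketch below is a minimal digital formulation, not a reconstruction of either machine; the formant frequencies, bandwidths, and impulse-train source are illustrative assumptions only.

```python
import math

def resonator(signal, f_formant, bandwidth, fs):
    """Second-order digital resonator realizing one formant peak."""
    c = -math.exp(-2.0 * math.pi * bandwidth / fs)
    b = 2.0 * math.exp(-math.pi * bandwidth / fs) * math.cos(2.0 * math.pi * f_formant / fs)
    a = 1.0 - b - c                    # normalize gain to 1 at DC
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = a * x + b * y1 + c * y2    # y[n] = a*x[n] + b*y[n-1] + c*y[n-2]
        out.append(y)
        y1, y2 = y, y1
    return out

def cascade_vowel(f0, formants, bandwidths, fs=8000, n=8000):
    """Impulse-train glottal source passed through formant resonators in cascade."""
    period = int(fs / f0)
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    y = source
    for f, bw in zip(formants, bandwidths):
        y = resonator(y, f, bw, fs)    # each resonator filters the previous output
    return y

# Roughly /a/-like first three formants (illustrative values only)
samples = cascade_vowel(f0=100, formants=[700, 1200, 2500], bandwidths=[80, 90, 120])
```

A parallel synthesizer would instead filter the same source through each resonator separately and sum the weighted outputs, which allows independent control of each formant amplitude — the property Holmes exploited.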
The first articulatory synthesizer was introduced in 1958 by George Rosen at the Massachusetts Institute of Technology, M.I.T. (Klatt 1987). The DAVO (Dynamic Analog of the VOcal tract) was controlled by a tape recording of control signals created by hand (track 11). In the mid-1960s, the first experiments with Linear Predictive Coding (LPC) were made (Schroeder 1993). Linear prediction was first used in low-cost systems, such as the TI Speak'n'Spell in 1980, and its quality was quite poor compared to present systems (track 13). However, with some modifications to the basic model, which are described later in Chapter 5, the method has been found very useful and is used in many present systems.
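The idea behind linear prediction can be summarized as follows: each speech sample is approximated as a linear combination of the previous p samples, so synthesis reduces to driving an all-pole filter with a simple excitation. The sketch below is the generic textbook formulation, not the implementation used in any of the chips mentioned above, and the coefficient values in the example are arbitrary.

```python
def lpc_synthesize(excitation, lpc_coeffs, gain=1.0):
    """All-pole LPC synthesis filter: s[n] = gain*e[n] + sum_k a_k * s[n-k]."""
    order = len(lpc_coeffs)
    history = [0.0] * order            # s[n-1] ... s[n-order]
    out = []
    for e in excitation:
        s = gain * e + sum(a * h for a, h in zip(lpc_coeffs, history))
        out.append(s)
        history = [s] + history[:-1]   # shift the sample history
    return out

# A single impulse through a first-order predictor decays geometrically.
impulse_response = lpc_synthesize([1.0, 0.0, 0.0, 0.0], lpc_coeffs=[0.5])
# impulse_response == [1.0, 0.5, 0.25, 0.125]
```

In a real system the coefficients are re-estimated for every short frame of speech, and the excitation switches between a periodic pulse train (voiced) and noise (unvoiced).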
The first full text-to-speech system for English was developed at the Electrotechnical Laboratory, Japan, in 1968 by Noriko Umeda and his colleagues (Klatt 1987). It was based on an articulatory model and included a syntactic analysis module with sophisticated heuristics. The speech was quite intelligible but monotonous and far from the quality of present systems (track 24).
In 1979 Allen, Hunnicutt, and Klatt demonstrated the MITalk laboratory text-to-speech system developed at M.I.T. (track 30). The system was later also used, with some modifications, in the Telesensory Systems Inc. (TSI) commercial TTS system (Klatt 1987, Allen et al. 1987). Two years later Dennis Klatt introduced his famous Klattalk system (track 33), which used a new sophisticated voicing source, described in more detail in Klatt (1987). The technology used in the MITalk and Klattalk systems forms the basis of many synthesis systems today, such as DECtalk (tracks 35-36) and Prose-2000 (track 32). For more detailed information on the MITalk and Klattalk systems, see for example Allen et al. (1987), Klatt (1982), or Bernstein et al. (1980).
The first reading aid with an optical scanner was introduced by Kurzweil in 1976. The Kurzweil Reading Machines for the Blind were capable of reading multifont printed text quite well (track 27). However, the system was far too expensive for average customers (the price was still over $30,000 about ten years ago), but it was used in libraries and service centers for visually impaired people (Klatt 1987).
In the late 1970s and early 1980s, a considerable number of commercial text-to-speech and speech synthesis products were introduced (Klatt 1987). The first integrated circuit for speech synthesis was probably the Votrax chip, which consisted of a cascade formant synthesizer and simple low-pass smoothing circuits. In 1978 Richard Gagnon introduced the inexpensive Votrax-based Type-n-Talk system (track 28). Two years later, in 1980, Texas Instruments introduced the Speak-n-Spell synthesizer, based on a low-cost linear predictive coding (LPC) synthesis chip (TMS-5100). It was used as an electronic reading aid for children and received considerable attention. In 1982 Street Electronics introduced the low-cost Echo diphone synthesizer (track 29), which was based on a newer version of the same chip as in the Speak-n-Spell (TMS-5220). At the same time, Speech Plus Inc. introduced the Prose-2000 text-to-speech system (track 32). A year later, the first commercial versions of the famous DECtalk (tracks 35-36) and Infovox SA-101 (track 31) synthesizers were introduced (Klatt 1987). Some milestones of speech synthesis development are shown in Figure 2.4.
Fig. 2.4. Some milestones in speech synthesis.
Modern speech synthesis technologies involve quite complicated and sophisticated methods and algorithms. One of the methods recently applied in speech synthesis is hidden Markov models (HMMs). HMMs have been applied to speech recognition since the late 1970s, and have been used in speech synthesis for about two decades. A hidden Markov model is a collection of states connected by transitions, with two sets of probabilities for each transition: a transition probability, which gives the probability of taking the transition, and an output probability density function (pdf), which defines the conditional probability of emitting each output symbol from a finite alphabet, given that the transition is taken (Lee 1989).
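The transition and output probabilities described above can be made concrete with the forward algorithm, which computes the total probability of an observation sequence under an HMM by summing over all possible state paths. The sketch below uses a discrete output alphabet for simplicity (speech systems typically use continuous output densities), and the two-state model and its probabilities are invented purely for illustration.

```python
def hmm_forward(observations, states, start_p, trans_p, emit_p):
    """Forward algorithm: P(observation sequence | model), summed over all state paths."""
    # alpha[s] = probability of the observations so far, ending in state s
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
                    * emit_p[s][obs]
                 for s in states}
    return sum(alpha.values())

# Toy two-state model: state A always emits 'x', state B always emits 'y'.
states = ("A", "B")
start_p = {"A": 1.0, "B": 0.0}
trans_p = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.5, "B": 0.5}}
emit_p = {"A": {"x": 1.0, "y": 0.0}, "B": {"x": 0.0, "y": 1.0}}

p = hmm_forward(["x", "y"], states, start_p, trans_p, emit_p)
# p == 0.5: start in A (emit 'x'), then move to B with probability 0.5 (emit 'y')
```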
Neural networks have been applied to speech synthesis for about ten years, and the latest results have been quite promising. However, the potential of neural networks has not been sufficiently explored. Like hidden Markov models, neural networks are also used successfully in speech recognition (Schroeder 1993).
2.3 History of Finnish Speech Synthesis
Although written Finnish corresponds well to its pronunciation and the text preprocessing scheme is quite simple, researchers had paid relatively little attention to Finnish TTS before the early 1970s. On the other hand, compared to English, the potential number of users and the market are quite small, and the development process is time-consuming and expensive. However, this potential is increasing with new multimedia and telecommunication applications.
The first proper speech synthesizer for Finnish, SYNTE2, was introduced in 1977 after five years of research at Tampere University of Technology (Karjalainen et al. 1980, Laine 1989). SYNTE2 was also among the first microprocessor-based synthesis systems and the first portable TTS system in the world. About five years later the improved SYNTE3 synthesizer was introduced, and it was the market leader in Finland for many years. In the 1980s, several other commercial systems for Finnish were introduced, for example Amertronics, Brother Caiku, Eke, Humanica, Seppo, and Task, all of which were based on the Votrax speech synthesis chip (Salmensaari 1989).
Of the present systems, two concatenation-based synthesizers, Mikropuhe and Sanosse, are probably the best-known products for Finnish. Mikropuhe has been developed by Timehouse Corporation during the last ten years. The first version produced only 8-bit sound from the PC's internal speaker; the latest version is much more sophisticated and is described more closely in Chapter 9. The Sanosse synthesizer has been developed during the last few years for educational purposes at the University of Turku, and the system has also been adopted by Sonera (formerly Telecom Finland) for its telecommunication applications. Some multilingual systems including Finnish have also been developed during recent decades; the best known is probably the Infovox synthesizer developed in Sweden. These three systems are perhaps the most dominant products in Finland today (Hakulinen 1998).
A unique collaborative project uses a variety of tools and technologies to connect the neuroscience community and promotes a “wisdom of the crowds” approach to solving challenging problems in brain research.
In November 2009, a team of researchers from the University of California at San Diego created the Whole Brain Catalog (http://hcp.lv/hyqaON), a ground-breaking, 3-D virtual environment aimed at connecting “members of the international neuroscience community to facilitate solutions for today’s intractable challenges in brain research through cooperation and crowd sourcing.”
Co-creators of the catalog, Mark Ellisman, PhD, Stephen Larson, and Maryann Martone, PhD, of the National Center for Microscopy and Imaging Research at UCSD, discuss the massive, open source, open-access database of brain imagery.
What is the Whole Brain Catalog?
It is one piece of an assembly of systems and software meant to make it easy for anyone interested in the brain to find their way into a treasure trove of information and to help assemble all information about the brain in a way that makes it assessable and freely available. The Whole Brain Catalog itself should be thought of as a 3-D window into a community-assembled asset pool of information. The information resides in databases that are seen through the catalog and connected by the Neuroscience Information Framework (http://hcp.lv/iivLtw
), which is like Wikipedia on steroids. With regards to how effectively the information on the brain is linked, you have data and search capabilities that bring all kinds of information that’s accruing from researchers around the world. Because most of the imagery that’s obtained is static, the system allows one to bring in animations that are the results of simulations using everything up to super computers to generate them.
The Whole Brain Catalog is the flagship product of the Whole Brain Project (http://hcp.lv/gQiEgp
). We like to think of it as Google Earth for the brain. There’s a whole community, computational neuroscience, which works on the problem of how to make predictive models of systems of neurons. Many of those models are generally developed with some relationship to the actual cells, but not all is done in a full anatomical context, like where exactly those cells should be in relation to the rest of the brain. With the Whole Brain Catalog, you can take either pre-assembled simulations or create new simulations and see, for example, neuron patterns firing.
It’s an idea that we’ve had around for some time—to create a way of representing the brain that capitalized on special maps using spatial navigation technologies that are frequently used in maps of the Earth or cities, and then to link that information over the open Internet. We previously worked on this for a project called the Smart Atlas (http://hcp.lv/fHFHhj
). The Whole Brain Catalog took that to a new level by basing the front end on an open-source game engine in an environment where you could recruit others to build components so that the work to build and populate it could be done openly through crowd-sourcing. In our field, there’s not a culture of sharing research data. We’re trying to use this project to make available as much data as possible and to encourage scientists to convert from holding data closely within their laboratory to making data available for reanalysis openly.
The Whole Brain Catalog is meant to be a platform for sharing fairly complicated types of data, unlike a wiki where somebody comes and writes some text. We do maintain the NeuroLex (http://hcp.lv/encgfB
), which is a wiki that includes all of the neuroscience concepts that are needed to describe this data. Scientists can use it to relay and describe data to each other. Unlike a traditional wiki, it’s more structured and amenable to computational processing. Plus, the Whole Brain Catalog is dealing with special graphics and 3-D representations and many, many different data types. It’s not as easy as going to a wiki page and opening a text editor.
Who contributes the images?
Basically, the entire neuroscience and computational neuroscience communities are potential contributors. So far, there are several thousand datasets in, so I can’t tell you where they all come from.
One example would be the core funding that we’ve received from the Waitt Foundation (http://hcp.lv/eFgp6D
). Ted Waitt, co-founder of Gateway Computers, saw what we were interested in doing, thought it would be a game-changer, and gave us the assets to build the first instance of this. Also, the International Retinal Research Foundation (http://hcp.lv/dLxwKB
) has given us money to add the eye and the optic nerve, with the idea that we would reach out to researchers working on glaucoma or other diseases that affect the eye and bring those data in. We’ve already successfully brought in—with Johns Hopkins University investigator Nicholas Marsh-Armstrong—datasets that will contribute significantly to our understanding of the degenerative process associated with glaucoma. Other researchers interested in glaucoma will be expected by the foundation to start contributing their data to this environment.
There are similar contributions for things that are not disease-related and extremely basic. For example, our work with the Gage Lab (http://hcp.lv/hgHngA
) looks at how the nervous system, in places where new cells are born in the adult, take those cells and allow them to wire up so that they contribute to some new functional capability.
There are researchers working on different key areas of the brain and the cortex, for example, trying to understand how the visual information is represented in patterns of wiring within the cerebral cortex. A group at Harvard is right now contributing probably the largest single dataset taken on this very large area and at very high resolution.
There are very few places where one can explore data across so many scales, because it’s very expensive to store date, especially when you want to access large data, but we’re trying to do so and provide efficient ways of letting those data sets be viewed on a thin client.
How many images and videos are contained within the catalog?
Around 3,700, but when you load it on your system, or when you open the Web version that is just launching, you actually don’t have all that data loaded into your system; you pick the pieces. Since the system looks across many data sources, as long as people link their information in from other repositories, the number can grow astoundingly, rapidly.
The Neuroscience Information Framework (NIF) project that Mark mentioned has really been charged with accounting for how many of these electronic research portals are scattered around the globe that would be relevant for neuroscience. The NIF currently provides access to 60, with about 30 million pieces of data, not all directly related to the brain, but to genetic pathways that are certainly active there. And the number of databases that are potentially there to link to the Whole Brain Catalog is well over 1,000.
The Whole Bain Catalog should cause, once people explore it more effectively, many of the sorts of datasets that the NIF project has made visible to be spatially integrated. Right now there hasn’t been sufficient work done to curate them into a small street corner of the brain. The tools of the catalog will allow you to grab the dataset and then morph it, warp it, and fit it. And then it becomes available to somebody else as something that’s been fitted in like a puzzle piece.
What’s the user experience like?
We have areas of the catalog that are marked, essentially like bookmarks on your browser, where we house a significant amount of data. You click on one of the bookmarks, and it takes you to a spot in the catalog. There’s also a browser that’s like the layers of Google Earth; as you launch and see the different datasets, you can see the different layers. There’s a layer for all the cells in the catalog, and there’s a way to see things that are smaller. When each of those items load up, you can double-click on them in the data browser, and it will take you directly to that spot in space where they’re located. Additionally, we have a search functionality that allows you to look for things that have been tagged inside the catalog. There’s currently a preview of our Web version that’s a completely browser-based tool. It previously required a download; we’ve worked hard to put that inside the browser.
How can practicing psychiatrists and neurologists benefit from the Whole Brain Catalog?
The Whole Brain Catalog started out with the rodent brain as it’s main center point, because rodents are heavily studied inside experimental neuroscience. Practicing psychiatrists and neurologists, I believe, are familiar with the anatomy of the rodent and do pay attention to the literature that exists inside studies that are done on animals. To the extent to which animal science is valuable to them in their understanding of the human brain and the way the human brain works, this a valuable resource for seeing what those structures actually look like when they’re inside the brain. We do aspire to include the human brain as part of the catalog, and I think that’s going to be a really exciting opportunity that crosses over even better with psychiatrists and neurologists.
To the extent that you’d think of the Whole Brain Catalog as a portal to vast amounts of neuroscience data, NIF covers all of neuroscience, and so it offers access to deep databases across the literature and to different registries. Coming from the Whole Brain Catalog, one can very easily end up inside the NIF offerings.
It’s much more focused and direct access to more relevant information than a patient or a practicing psychiatrist would obtain just going to Google or Bing, because the NIF has very neuroscience-smart ways of organizing what you see when you instigate some sort of a complex query.
We also try to go after the content that’s not really well indexed by search engines. In the NeuroLex wiki, we expose all of the concepts that neuroscientists use for search and use those inside the NIF search engine to help us grab information a little bit more effectively from these resources than otherwise is possible.
What have you been able to learn because of the existence of the Whole Brain Catalog?
Recently, we collaborated with a laboratory that’s doing basic science work on the sense of smell. When you take a whiff of a nice glass of wine, there are probably hundreds of molecules that act like keys connected to locks inside your nose that are little receptors that form a map in the first level from your sense of smell into the brain. That means that the same type of molecule will go to a specific cell in your brain, and that forms a map that tiles the space of sense of smells. What this research was interested in is whether that map goes one more level into the brain.
You can think of the brain in terms of sensory systems as different layers of input. The first layer is a map; now what does the second level look like? They used some very sophisticated imaging techniques with which they can make a single cell in the brain turn into a fluorescent green and look at it under the microscope and reconstruct the pathway of that cell.
They’ve used the Whole Brain Catalog to take the results of these imaging experiments, put them into 3 dimensions, and then manipulate them so they can essentially collect, across individual animals, a single picture of what these cells look like in 3 dimensional space. To do that, they have to take 3D neurons and put them into the same space. At that point, you can start to do computational analysis. They looked at different endpoints from these neurons to begin to understand overlap. They found that it’s a lot less obvious of a map at the next level. In fact, they don’t find the same type of regularity that we see below, but that’s important to know because it means that there’s some other function at work there.
Another example is the project we undertook because the International Retinal Research Foundation needed the addition of the retina and optic nerve to the Whole Brain Catalog, which lead us to understand a previously unknown mechanism by which non-neuronal cells in the optic nerve, astrocites, appear to take up the debris that accumulates in neurons during their life. Instead of those materials, which in the conventional view of how cells work would be handled by machinery in the cell body by the nucleus to digest and clear old proteins, it looks like these optic nerve cells have an accessory mechanism just behind the eye to clear that material, and that was not discovered until we took it upon ourselves to put data related to these experiments, and new kinds of data, and put them in the catalog. It shows the power of cooperative work, collaboration, and data sharing. | <urn:uuid:b2d65c87-d6e4-4720-8540-87ced96eb4e9> | CC-MAIN-2015-35 | http://www.hcplive.com/journals/mdng-Neurology/2011/february_2011/google_earth_for_the_brain | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645294817.54/warc/CC-MAIN-20150827031454-00338-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.944751 | 2,712 | 3.515625 | 4 |
From the Winter 1997 Issue, Volume Six, Number One:
Somme casualties being evacuated from Charing Cross Station
by J. Hodgson Lobley
War and the First Century of Heart Surgery
By Alan S. Coulson, MD, PhD, FACS and Michael E. Hanlon, Research Editor
In July 1896, Stephen Paget's classic textbook, SURGERY OF THE CHEST, was published. In this book, he declared the heart to be off-limits to surgeons. He wrote, "Surgery of the heart has probably reached the limits set by nature, no new methods and no new discovery can overcome the natural difficulties that attend a wound of the heart." On the Continent, Professor Billroth concurred: " A surgeon who tries to suture a heart wound deserves to lose the esteem of his colleagues." Paget and Billroth's pessimism not withstanding, Ludwig Rehn of Frankfurt - a former German hussar turned surgeon -- made the first successful suture of a human heart wound in September of the same year, 1896. This was the beginning of cardiac surgery -- exactly one century ago. That same period would see the two costliest wars in history fought. As human tragedies, they were unsurpassable; but as for medicine, especially the new field of heart surgery, they were a boon.
Less than 20 years of Paget and Billroth's cautions and Rehn's procedure, World War I began. With ill-informed commanders repeatedly relearning that bravery was no match for machine guns fired over open sites, there were unprecedented scenes of horror in the casualty-clearing stations. It is difficult to imagine the carnage. Men were driven insane by the sight and sound of it.
Most soldiers with heart wounds die on the battlefield. They perish from the immediate trauma, from shock -- the failure of the cardiovascular system to deliver sufficient blood flow --, or from the accumulation of blood in the pericardium. According to the British Official Medical History one typical Great War casualty clearing station saw only one patient among 123 with chest wounds who had survived with a missile in his heart. [Note 1.] But, even though this phenomena was a statistical rarity, the blood-letting on the Great War's battlefields produced a substantial number of patients with bullets and metallic fragments in their hearts who survived their initial injury. These wounded survivors demanded some attention because their prospects were still dismal:
"The practical significance of the retained foreign body is twofold, in the early stages as a cause of infection usually giving rise to pericarditis, in one of the cases recorded to an abscess in the wall of the ventricle; in the later stages as causing disturbances of the heart's action. It is noteworthy that in one of Sir Berkeley Moynihan's cases an abscess in connection with an obsolete infection was met with as late as two month after the entry of the missile." [Note 2.]
Although it was still thought by the medical establishment that nothing could be done, before its conclusion, World War I would forever change the attitude of physicians towards heart surgery. Dedicated and resolute surgeons working under desperate circumstances bucked conventional medical wisdom and found new ways to work successfully on or near the heart. The British Official History tersely describes the post-war shift in attitudes:
The feature of the surgery of the war as regards wounds of the heart is therefore a familiarity with conditions which were previously rare, and the evidence which is afforded that the treatment of injuries to heart has now become a definite and promising field for the surgeon. [Note 3.]
But how did this shift in attitudes come about? Once again the British Official History is instructive. It gives some excellent examples of the development of new procedures to drain the Pericardium, three new methods for repairing wounds to the heart and techniques for removing foreign bodies in it. [Note 4.]
A case study from the new methodology for repairing heart wounds demonstrates how progress was made in an ad hoc fashion. The Victoria Cross is the highest British award for bravery in the face of the enemy. Regimental surgeons were awarded Victoria Crosses on three occasions during the 1916 Battle of the Somme as many of them went into no man's land in close support of their men. In our opinion, some type of medal should also have gone to the English surgeon, Mr. George Grey Turner [British surgeons are addressed as "Mr." rather than "Dr."]. He made one of the earliest attempts to remove a bullet from a soldier's heart in the following spring after the Battle of Cambrai at a British base hospital. While the bullet was never removed, Grey Turner's surgical team saved the patient's life and advanced cardiac surgery.
Fired from 500 yards, a machine gun bullet went through the victim's left breast pocket of his tunic and through the left nipple into the heart. On x-rays, the surgeons could see the tip of the bullet moving around in the left ventricle. With no blood bank, no antibiotics, with just primitive ether anesthesia and poor lighting, Mr. Turner exposed the heart through the left chest. The spot where the bullet had entered the heart was marked by a depression surrounded by a roughened, whitish area.
At first, he could not find the bullet, even after probing with a needle. Finally, in desperation, he rotated the heart and palpated it; he could feel the bullet right in the middle of the heart, lodged in the septum. At that point, the surgical team got a fright because the heart totally stopped beating (pacemakers had not been invented yet). Somehow, they got the heart restarted and what happened next during the hour and three-quarter procedure was described in the BRITISH MEDICAL JOURNAL in a 1940 article:
"...the pericardium was sutured, a small drainage-tube being left in the lowest part of the sac and another laid along the outside near the suture line. The flap of chest was replaced and held in position with catgut sutures...the drains being brought through the centre of the lower oblique incision." [Note 5.]
After the Great War, Grey Turner had a distinguished career as a surgical innovator developing new procedures for abdominal and esophageal cancer. The soldier he had operated on in 1917 subsequently lived through 1940 when Mr. Turner reported the case. The soldier was quite well; just a little tired at times - more from his home front work for the Second World War than his wounds from the First!
During the inter-war years minor advances were made in France and the U.S. on opening up the mitral valve, but progress in heart surgery slowed. Peacetime had reduced the numbers of the most challenging kind of cardiac patient -- those with missiles like shrapnel, bullets and splinters lodged in the heart. The individuals with such difficulties suffered a high mortality rate from operations, so foreign bodies were not regularly removed simply because of their presence. Unfortunately however, even without discomfort or other symptoms, patients with fragments left in their hearts had up to a 25 percent chance of dying of infection or other complications. Another war, however, was coming and, by greatly expanding the number of casualties with wounds to the heart, would challenge the practitioners of cardiac surgery to greater innovations.
Traumatic shock was still a tremendous problem on both World War II battlefields and in surgical suites. Steven Johnson tells how a group of Thoracic surgeons at New York's Bellevue Hospital responded:
"[A] grant was approved and the Bellevue group began to study the physiologic mechanisms and treatment of shock. They found that a 40 to 50 percent loss of blood volume caused a profound reduction in cardiac output, venous return, and a peripheral blood flow. To replace this blood loss, whole blood was more effective than plasma, a finding demonstrated again and again on the battlefields. Their work...was described in forty-nine reports to the Committee on Medical Research and the Office of Scientific and Research Development and in several reports to medical journals.
Johnson goes on to discuss other results from this study that benefited heart surgery:
"The Bellevue shock study was [also] a milestone in the evolution of cardiac catheterization. It demonstrated that cardiac catheterization was a safe procedure that yielded much useful physiologic and clinical information...
[Another] important advance made during the war years in the field of cardiac catheterization was the development of accurate methods of measuring intracardiac pressure," [Note 6.]
But as in the First World War, the greatest breakthroughs came from dealing with the flood of casualties from the war's battlefields. July 1946 marks the date of a series of important publications by the U.S. surgeon, Dr. Dwight Harken. At the start of the Second World War surgeons had continued the conservative approach of not removing missiles from the hearts of patients who had survived the original trauma and were in a non-emergency state. Later, though, special thoracic surgery hospitals were established to treat casualties from D-Day. Dr. Harken was director of the Fifteenth Thoracic Center based at Cirencester in England and he knew that, if bullets or shell fragments were left in or near the heart, many patients would still die of sepsis or embolism. Harken and his team set out to remove as many missiles as possible using a variety of the latest surgical techniques:
"...Prior to operation, the position of the missile was pinpointed by fluoroscopy. At operation, the patient was induced by intravenous pentothal sodium anesthesia; intubated with a large-bore endotracheal tube; and maintained with nitrous oxide, ether, oxygen, and assisted respiration. To remove the missile, the heart was often split wide open, with tremendous blood loss. Rapid, massive, blood transfusions were needed to keep the patient alive. Whole blood was often administered, under pressure, at rates up to one and one-half liters per minute. Penicillin, which was just beginning to make an impact on thoracic surgery, was often given in 10,000 unit injections..." [Note 7.]
As a result, in the 10 months after D-Day, 134 operations were done to remove retained shell particles in and around the heart. Remarkably, Dr. Harken reported that there were no deaths among these patients. His wartime results inspired other surgeons to rethink surgical approaches to the heart. Following this pioneering boost to heart surgery, surgeons in peace time practice were encouraged to try to open up diseased mitral and pulmonary valves in the heart. In this way, with support from military medicine, heart surgery was truly established by the late 1940s. Many advances have been made since Grey Turner's heroic effort in the First World War. Dr. Harken's methods from the 15th Thoracic Center have been further improved upon, surgical suites are better illuminated, specially trained thoracic nurses help recover patients, and surgeons have the instruments designed by brilliant British surgeon and war-time consultant, Mr. Tudor Edwards. Most important, today's patients have blood banks and they are able to receive blood under pressure at the rate of six pints in a minute.
It took fifty years for surgeons to prove that Dr. Paget was wrong about operating on the heart and that the former soldier, Ludwig Rehn, had been on the right side of medical history. A large part of this was due to the pioneering efforts of war surgeons working under desperate circumstances. They brought about revolutionary changes in the approach to the heart. This was possibly one of the very few good things to come out of the suicidal conflicts that engulfed the world. We still have a tremendous debt to Mr. Turner, Dr. Harken and their colleagues.
Ibid.; pp 461.
Ibid.; pp 442.
Ibid.; pp 462-5.
BRITISH MEDICAL JOURNAL, 2:487-489, 1940.
Johnson, Stephen L., THE HISTORY OF CARDIAC SURGERY, Johns Hopkins Press, 1970, pp 132.
Ibid.; pp 11-12.
- MacPherson, Maj. Gen. Sir W.G. et. al., HISTORY OF THE GREAT WAR -- MEDICAL SERVICES SURGERY OF THE WAR, VOL. 1, H.M.S.O., 1922, p431.
Co-author Dr. Alan Coulson (second from right)
performing heart surgery in 1996.
Dr. Alan S. Coulson is a Cardiovascular and Thoracic surgeon practicing in Stockton, California. His hobby is the study of the First World War, particularly the Somme and Ypres. He and his wife Jan joined the Great War Society's 1991 Western Front tour. This is Dr. Coulson's second article for RELEVANCE. Michael Hanlon is Research Editor and frequent contributor to RELEVANCE. He will be leading battlefield tours of the Western Front and Italy in 1997. | <urn:uuid:0ff56831-c32c-4e5f-989b-48ec78f4243e> | CC-MAIN-2015-35 | http://www.worldwar1.com/tgws/rel009.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00045-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.966837 | 2,682 | 3.078125 | 3 |
Hausa Folk-Lore, by Maalam Shaihua, tr. by R. Sutherland Rattray, at sacred-texts.com
ON first proceeding to West Africa (the Gold Coast), and on commencing a study of the Hausa language, the compiler of this work was struck by the comparatively high standard of education found among the Hausa MAALAMAI or scribes. Arabic characters are used by them, as by the Swahili of East and Central Africa; but, whereas any natives met with there possessed but a very superficial knowledge of the Arabic language or writing, the Hausas could boast of a legal, historical, and religious literature, which was to be found preserved by manuscripts. The MAALAMAI were everywhere the most respected and honoured members of the community. It was disappointing, however, at any rate for one who wished to study Hausa, to find that all their manuscripts were written not only in Arabic characters, but also in that language. This appears to be universally the case, even in Nigeria. The use of Arabic to-day among the educated Hausas corresponds to that of French and Latin in England in the middle ages.
The writer's intention was, as soon as he had acquired a sound colloquial knowledge of the Hausa language, to collect some of their folk-lore and traditions, taking down such information as was required verbatim, and translating afterwards into English. This plan he had adopted when collecting his Chinyanja folk-lore.
The advantage of such a system is that the original text will help the student of the language to appreciate its structure and idioms, in a way that the best grammars could hardly do. The translator will also be bound down thereby. There will thus be no room for embellishments or errors creeping in, as is liable to be the case when the investigator has had to rely on the vagaries of his cook, 'boy,' or other interpreter for his information. It follows that such a collection will be of more value from the anthropological standpoint. Indeed, of late years many collections of native folk-lore compiled according to this method have been called into being by the demand created by this new science of anthropology.
As is to be expected, there are not many persons who have the fortune-or misfortune-to spend four or five preliminary years in acquiring a knowledge of the language of the people whose traditions they hope to study; yet such a probation is very necessary, if the collection is to be of any real value to the anthropologist.
Stories and traditions collected through the medium of an interpreter are amusing, and might prove of interest in the nursery (though much would have to be omitted or toned down, as savage folk-lore is often coarse and vulgar according to our notions, and hardly fit pour les juenes filles); but for the student of anthropology such collections cannot be considered to possess much value.
The anthropological theorist, who is probably some learned professor at one or other of our great Universities, where he made a life-study of primitive customs and beliefs, has, in most cases, to rely for his data on the field-worker. He needs to feel perfectly convinced that the information on which he is seeking to base some far-reaching generalization is absolutely correct; and this can hardly be the case, however skilled, conscientious, or well trained the field-worker may be, if the latter be wholly ignorant of the language of the people from whom he is collecting his information.
Now the literary skill of the Hausas, already referred to, led the writer to depart somewhat from the modus operandi employed in his Chinyanja folk-lore, the subject-matter of which was taken down from the lips of the raconteur. For the present work the services of a learned MAALAM, by name MAALAM Shaihu, were secured. He himself wrote down, or translated from manuscripts in Arabic, such information as was required. Much of the work contained in the present volumes involved, first, a translation from Arabic into Hausa, secondly, a transliteration of the Hausa writing, and thirdly, a translation into English from the Hausa.
During the writer's 'tours' of service in West Africa, as also during his furloughs in England, this MAALAM, who was entirely ignorant of English, made a collection of many hundreds of sheets of manuscripts (1907-11).
In the meantime the present writer was making a study of the Hausa language and script, by way of securing the key to their transliteration and translation. He was fortunate, in the course of his official duties, in being stationed for some time at YEGI on the VOLTA river. YEGI lies on the main caravan route between Nigeria and Ashanti. Each month thousands of Hausas from all parts of Nigeria cross the river here, going to and from Nigeria with kola or cattle. Such a position enables a student, even better perhaps than if he were resident in Hausaland, to get into touch with Hausas from all parts of Nigeria. It was thus possible to select such stories or traditions as seemed most generally and widely known, and therefore likely to be of historical value on account of their antiquity.
The Hausa given in the text is that of Kano or Sokoto, where by general consent the purest dialect is spoken.
The Hausa Manuscript. The writing is throughout clear, correct, and legible. It has been written with the aya between most of the words to facilitate easy reading. Some of the specimens of Hausa writing that have been reproduced from time to time are obviously the work of illiterate Hausas, or at best are very carelessly written manuscripts, and as such afford little criterion of the best work of these people. The hasty scrawls, which, it is true, form the larger part of the existing manuscripts, in which vowel-signs are missed out and words run together, often cannot be deciphered by the Hausas, and sometimes not even by the writers themselves, unless they know the context or subject by heart. Such manuscripts are therefore worthless for scientific purposes. They cannot, for instance, serve to disclose those nice points of grammatical construction which the perusal of a carefully written manuscript will reveal, though they can hardly be noted in the spoken language.
The Transliteration. This has been given, letter by letter, word for word, line by line. Thus it is easy for the student to follow the original on the page opposite.
The Translation. As literal a translation as is consistent with making the subject-matter at all readable has been given throughout. It is primarily as a text-book for students of the language that this work is intended, and for such a literal translation will be of most use. The author would crave the pardon of the general reader for the baldness and utter sacrifice of the English idiom which such a style of translation must necessarily involve. The latter may, however, find here and there a certain touch of 'local colour' in the phraseology, which may compensate for its other obvious defects.
The value of Hausa writings. Hitherto, perhaps, it has not usually been deemed essential to know much about Hausa writing. (A slight knowledge of it is necessary, it is true, for the higher standard Government examination.) This work attempts to go somewhat fully into the subject of the writing and the signs used, in order to assist the student who desires a knowledge of the writing that will enable him to decipher manuscripts as apart from the printed type. The writer is convinced that a thorough knowledge of Hausa writing is essential for any advanced study of the language. Thus he has so far been rewarded for the time spent in the minute perusal of the manuscripts comprising the Hausa portion of this book by the further elucidation or confirmation therein of grammatical structures not perhaps wholly accepted as proved, and by the discovery of some new idioms which, to the best of his knowledge, had apparently escaped the vigilance of previous writers on this subject, or else had taxed their powers of explanation.
The length of vowels, which is so distinctly shown in the written word, does not hitherto appear to have had that attention paid to it that it undoubtedly deserves. Yet the length of a vowel may alter the meaning of a word entirely, e.g. guuda, guda; suuna, suna; gadoo, gado, and so on. Indeed, an educated MAALAM would consider a word as wrongly spelt whenever a long vowel was written where it should have been short, or vice versa. In Hausa writing such an error would amount not merely to the dropping of an accent, as in English, but to the omission of a letter. Moreover such a slip may lead to serious confusion, since the tense of a verb, or even, as has been seen, the entire sense of a word, may depend on the length assigned to the vowel.
The author of Hausa Notes, perhaps the best treatise on the language yet written, remarks at some length on the apparent 'absurdity' of the want of any inflexion for the 1st, 2nd, and 3rd persons singular of the past tense, for the plural of which the well-known forms in ka exist, and thinks the forms for these persons are the same as those used for the aorist tense. Yet a perusal of almost any half-dozen pages of the present manuscript will reveal the hidden missing forms. Were the student to search for these by ear only, he might easily never discover them, as they are almost indistinguishable in the spoken word.
Again, the definite article, for many years conspicuous by its absence, will be met with repeatedly in these pages in the final nun, or ra, or the wasali or rufua bissa biiuu.

[1. First noted by Professor A. Mischlich.]
Enough has been said to show the value and importance of a close perusal of Hausa manuscripts; but emphasis must be laid on the fact that such writing must be the work of a learned MAALAM, or probably these very details, which are of such importance to the scientific investigator, will be omitted, either through carelessness or ignorance.
Proverbs. So far as possible, the endeavour has been made to omit such proverbs as have already been collected and published.
The Notes. The student is expected to be familiar with the well-known works on the Hausa language by Canon Robinson, Dr. Miller, and others; hence only such phrases, words, or grammatical points as are not considered in these works are noticed here.
Acknowledgments. The debt is vast which the student of any language owes to those who have by their labours reduced that language to a definite form. This makes it possible in a comparatively short time for him to master what it has cost the pioneers many years of ceaseless labour to create out of nothing. Availing himself of the fruits of their labour, he can thus move forward to fresh fields of research. Such is the debt that the writer owes to Canon Robinson, Dr. Miller, and others. His thanks are also due to his friend Mr. Diamond Jenness, of Balliol College, Oxford, for revising the English translation; to Mr. Henry Balfour, Curator of the Pitt-Rivers Museum, Oxford, for having had the photographs taken that appear in this work, and for his valuable notes on the same which are again published through the courtesy of the Royal Anthropological Institute; to Professor Margoliouth for having translated the Arabic lines which occur in the Hausa script; to Mr. R. R. Marett, of Exeter College, Oxford, Reader in Social Anthropology, his tutor, who by his wonderful enthusiasm and ability may be said to have organized a school of working anthropologists, building upon the noble foundations laid by Sir E. B. Tylor and Dr. Frazer; to the authorities of the Clarendon Press, who, besides dealing most generously with a work not likely to prove remunerative, have likewise laid the author under deep obligation by their friendly interest and advice.
Finally, the publication of this work has only been made possible by the generous grant from the Government of the Gold Coast, to whom, as also to the Secretary of State for the Colonies, on whose recommendation the grant was made, the writer has the honour to tender his sincerest thanks.
R. SUTHERLAND RATTRAY.
Sept. 8, 1911. | <urn:uuid:209cec75-9058-4b07-8c48-26317a13671e> | CC-MAIN-2015-35 | http://www.sacred-texts.com/afr/hausa/hau02.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645281325.84/warc/CC-MAIN-20150827031441-00044-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.973281 | 2,623 | 2.734375 | 3 |
In June 2000 scientists joined U.S. President Bill Clinton at the White House to unveil the Human Genome Project's "working draft" of the human genome—the full set of DNA that makes us human (quick human genetics overview).
As the tenth anniversary of that achievement approaches, scientists weigh in on the scientific discoveries the Human Genome Project enabled, as well as some hopes and predictions for future advances that could be made using the project's data.
BREAKTHROUGHS POWERED BY THE HUMAN GENOME PROJECT
1. Democratized Data
Coordinated by the U.S. Department of Energy and the National Institutes of Health (NIH), the Human Genome Project formally lasted from 1990 to 2003. The project helped pioneer the now common practice of making scientific data freely available online.
This open model of research has enabled researchers to make discoveries much more quickly than in the past, said Francis Collins, NIH director and former leader of the U.S.-government effort to sequence the human genome.
"For example, the search for the cystic fibrosis gene finally succeeded in 1989 after years of effort by my lab and several others, at an estimated cost of U.S. $50 million," Collins writes in an opinion piece published in this week's issue of the journal Nature.
"Such a project could now be accomplished in a few days by a good graduate student. ... ," he writes. All the budding geneticist needs, Collins says, is the Internet, some inexpensive chemicals, a thermal cycling machine to amplify specific DNA segments, and access to a DNA sequencer, which "reads" DNA via light signals.
2. Added DNA to Human-Origins Tool Kit
The Human Genome Project has proven to be a valuable new tool for studying human origins and the history of our species' migrations, said Mark McCarthy of the University of Oxford in the U.K., who studies the genetic causes of diabetes and obesity.
"We've learned how young a species we are and how similar so many of us are, particularly those populations that came out of Africa 70,000 years ago"—such as the ancestors of modern Europeans or East Asians or South Asians—McCarthy said.
The genetic data largely back up theories derived from archeological and linguistic studies, such as the idea that ancestors of many modern human populations originated in Africa, he added. (See "Massive Genetic Study Supports 'Out of Africa' Theory.")
Furthermore—by working under the assumption that the more closely related different human populations are to one another, the more similar their genomes will be—scientists have been able to roughly chart out the path that humanity took as it spread around the world.
(Explore an interactive atlas of the human journey based on genetics.)
3. Snipped Away at Diseases' Prehistoric Origins
The Human Genome Project set a foundation for later efforts such as the International HapMap Project, which aims to uncover single nucleotide polymorphisms, or SNPs ("snips").
SNPs are differences in the lettering of genes among members of the same species. The written language of DNA uses four "letters," or nucleotides: A, T, C, and G.
HapMap is a catalog of common SNPs that occur in human beings. SNPs that lie next to each other on a chromosome and are inherited together are called haplotypes; clusters of related haplotypes are called haplogroups.
SNPs can greatly influence our susceptibility to certain diseases, such as cancer, heart disease, and diabetes, scientists say. (Find out how an SNP-laden hairball helped reveal the face of a 4,000-year-old human.)
Geneticist Spencer Wells, who leads the National Geographic Society's Genographic Project, called HapMap "the biggest payoff of the Human Genome Project so far." (The National Geographic Society owns National Geographic News.)
HapMap "revealed the relatively high frequency of genetic diversity that exists across the entire genome. Using that data, scientists were able to start looking at disease associations at a genome-wide level," said Wells, who is also a National Geographic explorer-in-residence.
This is important because scientists are finding that many diseases have multiple gene influences.
"For the really interesting diseases, you've got a lot of genes that have relatively low effect" by themselves, Wells said.
(More on HapMap: "New DNA Mapping to Trace Genetic Ills.")
4. Found Lack of Junk in Our Genetic Trunk
Before the Human Genome Project, some scientists had estimated the known three billion or so DNA letters combined to form a hundred thousand or more genes.
"That seemed sensible, because we're such big, complicated organisms," said Christopher Wills, a biologist at the University of California, San Diego.
"But the amazing thing is that there are much fewer genes in the human genome than expected"—only about 20,000 to 25,000—"which means that each gene has to be very sophisticated in what it does," Wills said.
Because the number of DNA letters per gene is limited, the new, lower gene count made clear that about 98.5 percent of our DNA has nothing to do with genes—junk DNA, some called it.
But even junk DNA strands—long seen as useless or as relics of vestigial genes—are proving they hold a few gems.
"The part of [DNA] that doesn't code for proteins, which is about 98.5 percent of it, turns out to be much more rich in functional characteristics than I think a lot of people had imagined," NIH's Collins told National Geographic News.
"There doesn't seem to be much reason to use the word 'junk DNA' anymore," Collins added.
5. Supercharged Genetic Research
The Human Genome Project has helped foster the creation of newer, faster, and cheaper methods of gene sequencing, said George Church, who heads the Personal Genome Project at Harvard University.
That's because the rough draft of the human genome that resulted from the Human Genome Project serves as a reference against which the data from new sequencing methods can be compared.
"It's like doing a jigsaw puzzle," Church explained. "If you've got the final picture on the cover of the box, ... you can say, This little piece goes here."
PREDICTIONS FOR THE NEXT TEN YERS
1. Science Will Pinpoint What Makes Us Homo Sapiens
In the near future, scientists will be able to compare our genome against those of our evolutionary cousins, such as chimpanzees and Neanderthals, to get a clearer sense of which genes are involved in making us Homo sapiens, the University of California's Wills said.
"The thing I'm really looking forward to is finding out how we differ from our close relatives, what has driven us toward becoming human beings, and in particular, which genes are responsible for our astonishing talents," Wills said.
NIH's Collins called the recent success at partially sequencing Neanderthal DNA "fascinating."
"I think most people ten years ago would not think it would be possible to reconstruct an accurate rendition of a sequence of Neanderthals," Collins added, "and yet we're pretty close to that."
2. Gene Therapy Will Cure Diseases
Gene therapy—curing ailments by replacing faulty copies of genes with normal ones—will finally become a reality, likely within the next decade, the University of California's Wills said.
"The big problem has been, How do you get the genes to the cell?" he said.
Scientists have been using viruses to "infect" animals' DNA with new genes, Wills noted, "and that's dangerous.
"But I think a breakthrough is going to be happening fairly soon. When it does, it's going to be very exciting."
3. The Very Meaning of "Gene" Will Change
The traditional definition says a gene is a region of DNA that encodes for a protein.
But in recent years, scientists have discovered stretches of so-called junk DNA that don't make proteins but are nonetheless important.
For example, some regions of DNA appear to hold instructions for producing a DNA-like, but non-proteinaceous, molecule type called double-stranded RNA.
"These double-stranded RNAs"—part of the body's RNA interface or RNAi—"turn out to be very strong regulators of the way that genes function," Wills said. (Find out why the discovery of RNAi led to a Nobel Prize.)
Some double-stranded RNA, for example, can "silence" genes by preventing their protein products from being produced. They do this by binding to and blocking a messenger molecule in the protein-creation pathway, called messenger RNA.
Wills estimates that if bits of double-stranded RNA were counted as genes, they would double the estimated number of genes in the human genome.
"As far as I'm concerned, I'm happy to call them genes without worrying about semantics," he said.
NIH's Collins agreed. "I think we're at a bit of a semantic difficulty here, in terms of deciding what to consider a gene," he said. "Genes are units of inheritance that need not be thought of in such simplistic ways anymore."
4. Personal Genomes Will Spawn Made-to-Measure Drugs
Thanks to improving technology, within the next five years a person should be able to have his or her entire genome sequenced for about a thousand U.S. dollars, many experts say.
Soon after, that figure could drop as low as a hundred dollars, the Genographic Project's Wells said. "I could imagine a time, ten years from now, where it could get down that cheap."
NIH's Collins said the pace of technological innovation has been dizzying to watch.
"I thought we would get to this point, but I didn't think we would get here so quickly," he said.
The cost of sequencing a human genome "has come down by a factor of more than 10,000. That means DNA sequencing is moving forward more quickly than that classical example of exponential growth, which is Moore's law from computers." Moore's law speculates that the processing power of computer chips doubles every two years.
Collins envisions a day soon when everyone's genome will be sequenced and included as a routine part of their medical records.
By "knowing what you're at risk for and individualizing your preventative medicine plan," doctors will be better able to treat their patients, Collins said.
The era of personal genomes will also be a boon to pharmacogenomics, the science of tailoring drugs to an individual's genetic makeup.
5. Personality Will Move From Art to Science
As scientists learn to better understand the information contained in our genomes, they will get better at predicting how genes influence the development of physical and mental traits and even behaviors.
In the distant future it may be possible to look at the genome of a human—or a close human relative—and roughly deduce not only what she looked like, but, for example, how she acted.
"Will we ever be able to do it with complete confidence? I suspect not, and I rather hope not," the University of Oxford's McCarthy said.
"But I do suspect that by the time we've finished this journey that we've started on ... we'll be able to do better than we're doing at the moment." | <urn:uuid:520c495c-89ab-40b9-a589-8333f39c68b1> | CC-MAIN-2015-35 | http://news.nationalgeographic.com/news/human-genome-project-tenth-anniversary/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00105-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.959168 | 2,373 | 3.765625 | 4 |
Assessment of Chemical Exposures (ACE) Program
When chemical releases happen suddenly, ATSDR can provide local authorities with valuable help through the ACE program.
In June 2011, after a chemical incident at a poultry processing facility, 600 workers were potentially exposed to chlorine, 170 of whom needed transportation to five area hospitals for medical evaluation. The state Department of Health asked ATSDR to assist in the investigation of the incident, including an evaluation of the emergency response. An ACE team was deployed in response. The team discovered that the state Department of Emergency Management was not required to notify the health department when the incident occurred. This resulted in missed opportunities for assistance, such as coordinating with local hospitals regarding where patients were transported for care or providing treatment protocols for chlorine gas.
After the ACE investigation identified the issue, the Department of Emergency Management modified their procedure for notification of the state health department to include any incident involving a biological, chemical, radiological, or nuclear substance. About two weeks after the procedures were modified, two different ammonia releases occurred in the same city as the previous release. Because of the new procedures, emergency management officials immediately notified the health department of the incidents, resulting in a more coordinated and effective response.
What is ACE?
When toxic substance spills or chemical emergencies happen, ATSDR helps state and local health departments by providing ACE resources to perform a rapid epidemiologic assessment.
What resources does ACE provide?
ACE provides training on how to perform an epidemiologic assessment after a chemical incident. The ACE Toolkit is a helpful resource to assist local authorities in responding to or preparing for a chemical release. The toolkit contains materials that can quickly be modified to meet the needs of a local team performing an epidemiologic assessment, including:
- Consent forms
- Medical chart abstraction form
- Interviewer training manual
- Epi Info™7 databases to enter and analyze the data
When an incident occurs ACE provides technical assistance by forming a multi-disciplinary, often multi-agency, team to assist the state and local health department. Team members may assist from ATSDR headquarters in Atlanta, Georgia or deploy to the scene.
Other support the ACE team can provide is:
- GIS mapping and assistance with sample methodologies
- Clinical testing, if appropriate
- Liaising with other federal agencies
What happens during an ACE investigation?
ACE talks with incident responders and hospital staff that treated patients to understand
- what happened,
- who was exposed,
- steps taken to protect public health, such as an evacuation or shelter-in-place order,
- communication during the response, and
- lessons learned during the response.
ACE also interviews people who may have been exposed to collect detailed information on
- exposure history,
- symptoms experienced,
- who was exposed,
- health services used,
- needs resulting from the exposure,
- medical history,
- how people received information about the release, and
- health impacts on pets.
ACE typically reviews hospital medical charts and veterinary chart abstractions to learn more details about health effects experienced as a result of the release. ACE may also assist in collecting and analyzing clinical samples if a laboratory test is available to determine exposure to the substance. If testing is done, results are sent to participants to share with their physicians.
Why perform an ACE investigation?
State and local health departments can use information obtained from rapid assessments to
- assess impact of the release on individuals as well as the community,
- direct the public health response,
- target outreach to prevent similar incidents,
- assess the need to modify emergency response procedures, and
- identify a group of exposed people that may need to be followed-for long-term effects.
A body of data from multiple incidents can be used for education and training to prepare for future incidents.
What are some examples of ACE investigations?
The ACE team worked with the state or local health agency on the investigations and public health actions described below. Additionally, ATSDR has partnered with other public health and safety agencies, like the National Institute for Occupational Safety and Health (NIOSH) to work on chemical releases. The goals of each investigation were determined by the inviting agency. Each investigation involved multiple components, including interviewing responders and owners/managers of facilities, surveying exposed persons and staff at hospitals where patients were treated, and reviewing hospital charts for patients treated for chemical exposure.
Chlorine release at a metal recycling facility
Chlorine gas was released when a 1-ton, low-pressure tank was cut at metal recycling facility. Most workers and customers followed the planned evacuation route, exiting the facility through the main gate and meeting in an open field that was downwind from the tank. The ACE team, working with the state and local health department:
- Interviewed responders and facility owners, surveyed exposed persons, and the state partners abstracted hospital charts. A report of the investigation has been published.
- Prepared a chemical release alert [PDF - 618 KB] to send to metal recycling facilities throughout the state. The alert was also made available in Spanish [PDF - 426 KB]. Key messages included:
- Only accept containers that are cut open, dry, and without a valve or plug.
- Treat closed containers as potential hazardous waste.
- Develop and practice an evacuation plan. Train workers to stay upwind when evacuating for a chemical release.
- The state health department conducted follow-up interviews and medical record review of the affected workers 6 months after the incident, determining that some workers had ongoing respiratory and psychological symptoms. As a result of their findings, they provided technical assistance to the treating providers.
Ammonia release at a refrigeration facility
A pipe ruptured on the roof of a refrigeration facility, releasing anhydrous ammonia. A cloud of ammonia drifted over a canal behind the facility, exposing personnel on ships docked at the refrigeration facility and at a large facility across the canal where work was taking place outdoors. The ACE team, in conjunction with the local health department and the state’s CDC Career Epidemiology Field Officer:
- Interviewed personnel at the refrigeration facility, responders, and employees of a large facility that was downwind; surveyed exposed persons at the downwind facility; and reviewed hospital charts. County partners surveyed hospitals where patients were treated.
- Participated in a Hotwash (after action review) of the response to the incident and reported that there was a lack of notification of the people in the area of the release. The county later obtained a reverse 9-1-1 system to be able to call telephones belonging to residents and businesses in a defined geographic area and deliver recorded emergency notifications.
Chlorine release at a poultry processing facility
A worker accidently mixed sodium hypochlorite with an acid-containing disinfectant, releasing approximately 40 lbs of chlorine gas within the facility. Due to the air flow within the building, workers were exposed both at their work stations and in a major hallway used as an evacuation route. The ACE team, assisting the state and local health department:
- Partnered with NIOSH on the investigation. The NIOSH team performed a Health Hazard Evaluation at the facility and surveyed workers to learn their health effects.
- Interviewed responders, surveyed staff at hospitals where patients were treated, and reviewed hospital charts of patients treated for chlorine exposure.
- Determined that the existing emergency response protocols had an excessively high threshold for notification of the health department about chemical incidents. After the ACE investigation identified the issue, the notification protocol was modified to include health department notification of any incident involving a biological, chemical, radiological, or nuclear substance.
Vinyl chloride release from a train derailment
A tanker car punctured during a train derailment released approximately 24, 000 gallons of vinyl chloride on the edge of a small town. A shelter-in-place order was established for surrounding areas, then was lifted and reestablished repeatedly over four days, as vinyl chloride levels in the air fluctuated due to weather conditions. The ACE team, in partnership with the state and local health department:
- Surveyed community members who were potentially exposed, surveyed staff from hospitals where patients were treated for vinyl chloride exposure, surveyed staff from a facility whose only access road was blocked by the derailed train, and performed hospital chart abstractions. State partners mailed a survey to all households in the community.
- Partnered with a NIOSH team which interviewed representatives from responder groups and created a written survey for responders. A report of the NOISH investigation has been published.
- Answered responders’ questions during their meetings and collected information needed to address community concerns.
- Reports from investigations of the incident are available at:
4-Methylcyclohexanemethanol and propylene glycol phenyl ether contamination of a public water supply
A tank containing chemicals used in coal processing leaked into a river just upstream from the intake of the municipal water supply for approximately 300,000 people. A “Do not use” water order was issued for a nine-county area. The ACE team, working with the state health department:
- Performed hospital chart abstractions for patients treated for exposure to the contaminated water. Surveyed area hospitals to learn of their experiences with the “Do Not Use” water order. A review of disaster epidemiology capacity within the inviting agency was also performed.
- Used results from the hospital chart reviews for local outreach and education efforts in an effort to alleviate the public’s concerns about spill-related health effects. Findings of the hospital survey were used to provide information to hospitals planning for emergencies where their water supply is compromised. The disaster epidemiology capacity report was used to aid in planning for health department responses to future disasters.
How do we request ACE assistance or learn more about ACE?
ACE representatives can be contacted via phone and email. A representative can help local authorities determine what assistance is needed. If an ACE investigation—on-site field assistance—is appropriate, approval of the state epidemiologist must be secured. An ACE team can then be rapidly deployed to the field to provide assistance for up to 30 days. After leaving the field, the team continues working with the local authorities to analyze data and prepare reports.
ACE investigations will be carried out in the event of an acute chemical release of toxic substances, which can cause serious health effects. An event must involve:
- the release of a toxic substance at levels that may cause acute human health effects
- people who are exposed and experience acute health effects.
To request information or assistance, call the ACE program at 404-567-3256 or e-mail ATSDRACE@cdc.gov. You can also contact the CDC Emergency Operations Center 24/7 at 770-488-7100 and ask to speak with someone from the ACE team. ACE is part of the National Toxic Substance Incidents Program (NTSIP), a federal program at the Agency for Toxic Substances and Disease Registry (ATSDR).
- Page last reviewed: March 5, 2014
- Page last updated: July 23, 2015
- Content source: | <urn:uuid:bc35061f-6dee-4213-82e4-8712f3ee9bb0> | CC-MAIN-2015-35 | http://www.atsdr.cdc.gov/ntsip/ace.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645294817.54/warc/CC-MAIN-20150827031454-00335-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.955142 | 2,282 | 2.75 | 3 |
Introduction

U. S. Grant: American Hero, American Myth is about a true hero, celebrated for his strength, his resolve, and his ability to overcome severe obstacles, banishing the possibility of failure. Grant once wrote, "One of my superstitions had always been when I started to go any where, or do anything, not to turn back, or stop until the thing intended was accomplished." His feats attained mythic status and, like many national myths, contained elements of truth and exaggeration, accuracy and distortion. "As for Grant," a contemporary observed, "he was like Thor, the hammerer; striking blow after blow, intent on his purpose to beat his way through." Grant's reputation is inevitably entwined with that of the Civil War, the tragic American epic. Like the president he served, Grant stood firm in his faith in a future beyond the terrible bloody battlefields of war. Unlike the president he served, Grant survived the war to implement their shared vision of reunion and emancipation, in a country still riven by dangerous crises. Inevitably, the hero stumbled, the myth was tarnished. Even heroes have flaws, and Grant's heroism lay not in his moral perfectionism but in his resolute determination to defeat those who would split the Union. This book traces the shifting legacy of general and president Ulysses S. Grant, who emerged from obscurity to claim victory as the North's greatest military leader.

Grant's meteoric rise between 1861 and 1865 was not necessarily predicted by his first thirty-nine years. An undistinguished student in the West Point class of 1843, Grant gathered honors in the Mexican War but later resigned from the regular army in 1854 under questionable circumstances. He took up farming in Missouri, failing to achieve success in that occupation and then in a number of others as well. When Lincoln asked for volunteers in 1861, Grant was clerking in his father's leather goods store in Galena, Illinois.
He responded eagerly to his country's call and rapidly won fame in the Western Theater, scoring decisive and morale-raising victories at Fort Donelson, Shiloh, Vicksburg, and Chattanooga. Promoted to lieutenant general in early 1864, Grant assumed direction of the entire Union military effort in the last year and a half of the war. That spring, Grant and Confederate general Robert E. Lee waged titanic battles across the Virginia countryside, ending only when Grant crossed the James River and pinned Lee's army inside Petersburg. While Grant conducted the siege, his two principal lieutenants, Maj. Gens. William T. Sherman and Philip H. Sheridan, took the war to Georgia and Virginia's Shenandoah Valley, conquering territory, defeating Rebel armies, and destroying large swaths of the southern countryside. Their combined victories vindicated Grant's strategic vision and guaranteed Lincoln's reelection.

The Union's greatest military hero was praised for the magnanimous terms of surrender he offered, and Lee accepted, at Appomattox Court House on April 9, 1865. Shortly afterward, he became the first four-star general in U.S. history, remaining as head of the army until 1868, when he was elected to the first of two terms as president on the Republican ticket. Grant's political career proved troublesome. Most Americans believed him to be honest and well meaning, but his administration was plagued by corruption and bungling, with the fragile promise of emancipation diminished by a white South redeemed. Immediately after leaving office, Grant embarked on a triumphal world tour that lasted for two years. Returning to live in New York City, he lost his entire savings in a disastrous business venture. To earn money, he agreed to write about his wartime experiences for Century magazine. Grant's articles proved so popular that he decided to write his memoirs, just as he was diagnosed with inoperable throat cancer.
While sick with the cancer in 1884, he courageously completed The Personal Memoirs of U. S. Grant, which became a classic in American literature. Ulysses S. Grant died in 1885, the most famous of Americans both at home and abroad.

My project began with a question about Grant's life, and his death. Why did Grant's star shine so brightly for Americans of his own day, and why has it has been eclipsed so completely for Americans since at least the mid-twentieth century? Most Americans indisputably are ignorant of the extent of the once-powerful national legacy of Ulysses S. Grant. To recover that legacy, I advance two arguments. First, Ulysses S. Grant was a gigantic figure in the nineteenth century, and second, the memory of what he stood for—Union victory—was twisted, diminished, and then largely forgotten. Some may think that the first argument is axiomatic. It is not. The book explains how and why Ulysses S. Grant became the embodiment of the American nation in the decades after the Civil War, analyzing him as a symbol of national identity and memory, equal in stature to George Washington and Abraham Lincoln. More than a million and a half people watched his funeral procession in New York City on August 8, 1885, while the dedication of his massive tomb in Manhattan in 1897 drew a similar number. Even as the general was praised in lofty speeches at the end-of-the-century dedication, however, his reputation was subjected to a constant drumbeat of criticism from a small but influential group of ex-Confederate partisans; at the same time, eager reconciliationists from the North began to distort his legacy in pursuit of national unity.

Why is it important to recover the memory of Ulysses S. Grant as experienced by nineteenth-century Americans who forgave his transgressions in life and revered him in death? It is important because of the huge place that the Civil War still commands in American historical memory.
Both a blessing and a curse, the war bequeathed a rich and riveting story of valor and idealism but also a distressing bequest of destruction, bitter recrimination, and racism. Grant had essential roles to play in the great national drama—his generalship was a major reason why the North won the Civil War, and his presidency determined in large part the success or failure of Reconstruction. Depending on one's point of view, he was either the brilliant leading U.S. military commander or the mediocre general who won by brutal attrition alone, either the stalwart and honest president trying to implement the northern vision of the war or the imposer of hated "Republican Rule" on a helpless, defeated region. In the long run, the image of the brutal general and inept president lingers most powerfully.

In his own era, the passage of time and memory softened Grant's image, so much so that by the 1880s and 1890s he symbolized national reconciliation as well as embodying Union victory. Grant was not a foe of sectional harmony—his famous 1868 campaign slogan summed up his sentiments, "Let Us Have Peace." But it was never peace at any price. In Grant's mind, reconciliation and the "Union Cause" had to be founded on southern acceptance of the victor's terms. The premier goal of the Civil War was to preserve the American republic and, after 1863, to fight for freedom and the destruction of slavery. To Grant, those were noble ideals worth fighting for, dying for, and remembering in distinctive ways. Thus, his "version" of sectional harmony rejected, indeed found repugnant, the increasingly popular idea that the Union and Confederate causes were "separate but equal," or even worse, that the two were somehow morally equivalent.

The book is divided into six chapters, with an interlude bridging the two halves of the text, and a brief epilogue.
The first three chapters chronicle Grant's life and career, interweaving history, memory, and memorialization, introducing the man, the soldier, and the politician. Taken together, their purpose is to provide just enough of a background for understanding how and why Grant became a major American hero, and how and why Grant came to occupy such a huge place in American myth and memory. Reader, beware: my book employs the biographical method, but does not cover in depth Grant's military and presidential career. That is not its intent. For those who wish to pursue the details of Grant's life in full, I advise consulting one of the existing biographies or one or more of the specific, and numerous, studies of his career that have been published. Many of these works—indispensable to building my case for Grant's centrality in Civil War history and memory—are quoted in the text and cited in the footnotes. <p> Chapter 1 covers the years from Grant's birth to the eve of the Civil War. Here I draw attention to competing myths regarding Grant's early life—the one of unrelieved failure that made his later success inexplicable, and the one showing that Grant experienced the ordinary struggles of life, which many Americans could relate to, that produced in him a strong character and resilience that boded well for his future and the future of the country. Chapter 2 surveys the war years, 1861–65, ending with Appomattox. Examining the most unmartial of military heroes, this chapter explains the origins and flowering of Grant's fame and mythic status. It chronicles his rise from an unknown officer in the war's distant Western Theater to lieutenant general commanding the United States armies (he was the first officer to receive that rank since George Washington). Unlike the aristocratic Washington, Grant demonstrated the potential of the common man in the democratic, free-labor North. 
Unprepossessing in appearance and deliberately eschewing military grandeur, Grant in 1865 enjoyed wild popularity and wielded immense power. Huge crowds greeted his every appearance, and Republicans and Democrats both sought his approval. <p> What did he do with that power? Chapter 3 picks up Grant's story from Lincoln's assassination and carries it up through 1877. As military commander overseeing Reconstruction policy, and as two-term president, soldier-statesman Grant struggled to define, defend, and preserve Union victory over an utterly defeated and embittered southern white population, as well as establish and protect freedom for ex-slaves. Grant admitted his lack of expertise in the humdrum but important world of national political machinations. "He had a true political sense, for he could see big things and big ideas," wrote one historian, "but he possessed no political cunning, he could not see the littleness of the little men who surrounded him." His reputation suffered immense damage—some, but not all of it, deserved—from charges of policy failures, "cronyism," and abandonment of principle. An interlude offers a transition from Grant's life to his memorialization, focusing on his international tour, in which he symbolized for the world the powerful American nation that emerged from the Civil War. As a private citizen, Grant struggled to find a satisfactory place for himself and his family. <p> The three chapters that make up the second part of the text are the heart of the book, covering Grant's illness and death, the writing of his memoirs, his funeral, and the building of Grant's Tomb in New York City. Chapter 4 records the extraordinary national response to his agonizing death from throat cancer while struggling to complete his justly celebrated memoirs. <i>The Personal Memoirs of U. S. Grant</i> is a powerful example of an autobiography that swayed history, establishing its author as a principal architect in shaping the Civil War's historical memory. 
Chapter 5 reveals how both North and South seized on and singled out Grant's legacy as the magnanimous victor at Appomattox as <i>the</i> major theme of his commemoration. Here, Grant becomes a case study of the fascinating ways in which historical memory is shaped, and then reshaped, to suit current needs. Chapter 6 recounts the vigorous debate over Grant's monument and the proper way in which his memory should be honored. No Civil War monument was more spectacular or famous in 1897, and yet by the mid-twentieth century Grant's Tomb was a neglected site. A short epilogue sums up Grant's legacy in the twentieth century and the twenty-first. <p> This is the first scholarly work devoted to Grant's commemoration, adding a unique perspective to the existing literature. My primary research included reading scores of sermons, eulogies, memorial programs, newspapers, and pamphlets, in addition to letters, reports, diaries, memoirs, and scrapbooks; examining artifact collections and visual representations; and visiting Grant memorial sites. A few books and articles have focused on Grant's deathwatch, funeral, and monument, and will be cited accordingly. But those are pretty rare among Grant publications, virtually a cottage industry from the 1860s. As Grant emerged as a popular war hero, journalists scoured locales in Ohio, Kentucky, Missouri, Illinois (where he grew up and lived), and other states, interviewing family, friends, enemies, former teachers, soldiers, and current and former military colleagues. The insatiable search for Grant tidbits (fodder for friends and enemies, creating stories true and false) only intensified in the decades afterward, appearing in newspaper articles, forming the basis of campaign publications, providing color and content for hagiographies. 
More serious biographies by Hamlin Garland and Owen Wister appeared early in the twentieth century and were augmented later by scholarly studies published by Lloyd Lewis, Bruce Catton, William McFeely, and Brooks D. Simpson. According to the 420-page <i>Ulysses S. Grant: A Bibliography</i>, books and journal articles about his over-all military career, individual battles, or separate campaigns far outnumber biographies or political studies, confirming America's hunger for military history. Only a small part of the massive bibliography covers memory and memorialization, the major focus here. <p> The recent rise of "memory studies" exploring the gap between history and memory, which expose a manipulated, "invented" past, has been nothing short of a phenomenon. Cutting across disciplines, fields, centuries, and continents, scholars applying memory analysis have brought new insights to the ways in which the past has been used to justify present agendas, usually, but not always, servicing the needs of the nation-state. Traumatic events such as World Wars I and II, the Holocaust, Wounded Knee, and Gettysburg have been revisited using this method, illuminating the power of memory to create selective narratives that elevate some while leaving out others. The work of Maurice Halbwachs, Jacques Le Goff, and Pierre Nora on collective memory versus individual memory, and on the tendentious relationship between history and memory, informs my discussion of Grant on several levels. But I am even more indebted to scholars of American memory, such as Michael Kammen, John Bodnar, and David Blight, who have examined the different ways in which the American Civil War has been commemorated, and for whose benefit. Long before memory studies became the vogue in academic circles, the story of the Civil War haunted generations of ordinary citizens, intellectuals, writers, and historians. 
<p> Remarkably, the literature on Confederate identity and memory, especially on the continuing power of the Lost Cause, flourished, while similar studies for the Union Cause lagged. Recent publications have begun to correct the imbalance, and my book will be added to the list. The end of the war brought forth a new nationalism, sanctified by death and embraced by a majority of northerners and southern freed people, that made the Union Cause just as much the subject of myth and reverence as the Lost Cause. This has too often been overlooked in both recent academic literature (which finds fault with the powerful strain of American exceptionalism that characterized postwar nationalism) and in popular culture. Indeed, the moral seriousness and earnest patriotism that animated a sizeable portion of wartime northern society—soldiers and civilians alike—has seemingly been obliterated from current historical consciousness. So too has the immense prestige and respect once held by military heroes. "The generals stood as public symbols of the meaning of the conflict," wrote Philip S. Paludan. "They organized victory, shaping the choreography of the war, and no one more so than Grant." <p> <i>(Continues...)</i> <p> <p> <!-- copyright notice --> <br></pre> <blockquote><hr noshade size='1'><font size='-2'> Excerpted from <b>U. S. Grant</b> by <b>Joan Waugh</b> Copyright © 2009 by THE UNIVERSITY OF NORTH CAROLINA PRESS. Excerpted by permission of THE UNIVERSITY OF NORTH CAROLINA PRESS. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.<br>Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site. 
| <urn:uuid:904e97cc-651a-423f-a23a-90a70e7fcb15> | CC-MAIN-2015-35 | http://www.worldcat.org/wcpa/servlet/org.oclc.lac.ui.DialABookServlet?oclcnum=317929504 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00166-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.962188 | 3,545 | 2.640625 | 3 |
There is much confusion about vitamin D and vitamin D toxicity. I encourage you to take the quiz and even pass it along to your doctor, as very few U.S. physicians are aware of vitamin D's importance.
Winter is the time of year when most of us in the United States need to be very diligent about keeping our vitamin D levels within optimal levels. I recommend that most take a high-quality cod liver oil, which is an excellent source of vitamin D, regularly from fall until early spring. However, it is essential to understand that in order to know how much vitamin D you should be taking, you should get your blood level checked. If you use beneficial products like cod liver oil without doing blood tests for vitamin D levels, you should keep the dose at one to two teaspoons per day to prevent overdosing.
This is a major point: excess vitamin D will cause, not prevent, osteoporosis and hardening of your arteries. Please be very careful with cod liver oil. If you are unable to obtain vitamin D testing, then please do not exceed one to two teaspoons of cod liver oil. So please do yourself a favor--read the article on vitamin D testing and be sure to have your level measured. As I mentioned above, nearly all physicians are not aware how to have this checked and how to interpret the normal reference ranges, so I encourage you to print out the article on vitamin D testing not only for your own records but also for your doctor so he or she can become aware of this vitally important nutrient.
The Vitamin D Council, the non-profit group that contributed the excellent quiz below, is another great resource for vitamin D information. The Vitamin D Council is a group of citizens concerned about vitamin D deficiency and the diseases associated with that deficiency. I encourage you to check out their website and sign up for their informative newsletter. Their goal is an important one: to draw attention to the problem of vitamin D deficiency through the education of professionals, the media, government officials and average citizens.
By John Jacob Cannell, M.D.
Executive director of The Vitamin D Council
1. If an otherwise healthy adult tried to kill himself by taking an entire bottle (250 capsules) of 1,000 iu cholecalciferol, which of the following would happen?
a) The person would die within 24 hours from severe hypercalcemia and widespread calcinosis.
b) If the person received intensive treatment for hypercalcemia he may survive.
c) Hypercalcemia would be severe but require only supportive treatment.
d) Such doses are called "Stoss" therapy and are occasionally used therapeutically although they do not replicate normal physiology. As most Americans are vitamin D deficient, such a one-time dose would probably be a health benefit for the majority of Americans.
The correct answer is d. One of the most recent examples is the use of stoss therapy to reduce fracture rates in the elderly (100,000 IU of oral cholecalciferol every four months for five years) by Dr. Trivedi and colleagues (University of Cambridge School of Clinical Medicine) published in the British Medical Journal. How high do you think average 25-hydroxyvitamin D levels were in the subjects after they received 100,000 IU of cholecalciferol every four months for five years? Answer: about 29 ng/ml, still mildly deficient! (Source)
2. Acute poisoning leading to rapid death from ingestion of vitamin d capsules (successful suicide attempt),
a) Has frequently been reported in the literature.
b) Has occasionally been reported in the literature
c) Has never been reported in the literature.
The answer is C, as far as we know. If you know of a report of a successful suicide attempt, accidental death or murder from overdosing on vitamin D supplements, let us know. We do know of one interesting case that demonstrates the relative safety of vitamin D. Industrial strength crystalline vitamin D was added to table sugar, either by accident or on purpose. The two men poisoned were getting about 1,700,000 IU of cholecalciferol every day for seven months. Again, they were getting at least, 1,700,000 units [440 times the Institute of Medicine's toxicity warning (LOAEL)] every day for seven months! Both got very sick but recovered. (Source)
3. True of false: water has a higher (safer) therapeutic index (the median lethal dose divided by the median effective dose) than cholecalciferol?
c) About the same
The answer is b. Although exact human studies have never been done for obvious ethical reasons, water intoxication leading to hyponatremia, cerebral edema and occasional death is common in psychiatric populations and may become evident if one drank 80 glasses of water a day, instead of eight. Heaney, et al, recently showed healthy humans utilize about 4,000 IU of cholecalciferol a day, if they can get it. 40,000 IU a day is certainly not acutely toxic. In fact, some research reported that young white humans get up to 50,000 IU from one full body summer sun exposure. (Source)
4. If a person totally avoided the sun and regularly took two standard multivitamins a day for several years, each containing 400 iu of ergocalciferol, as his sole source of vitamin d, he would,
a) Rapidly become vitamin D toxic and require medical attention for symptoms of hypercalcemia.
b) Slowly become vitamin D toxic and eventually become symptomatic.
c) Slowly develop hypervitaminosis D but remain asymptomatic.
d) Obtain a healthful vitamin D blood level.
e) Inexorably become vitamin D deficient.
The answer is e. Two standard multivitamins contain 800 IU of ergocalciferol, equivalent to about 500 IU of cholecalciferol. If you totally avoided the sun, as many dermatologists routinely recommend with impunity (so far), one would have enough vitamin D to prevent rickets and osteomalacia but would still have a suboptimal 25-hydroxyvitamin D and thus be at risk to develop numerous other chronic inflammatory diseases, not just osteoporosis. For a review of such illnesses, see Zittermann. (Source)
The key is "totally avoided the sun." Remember, most people get 90 percent of their vitamin D requirement from very casual sun exposure, like the sunlight that strikes the uncovered and unsunblocked face, arms and hands when you walk to your car. Vitamin D production in the skin is that fast. Of course, some people follow their doctor's advice and take obsessive steps to prevent sunlight from ever striking their unprotected skin. A host of chronic inflammatory diseases may await the patients who follow such advice, just as trial lawyers may await the doctors that give it.
5. Of the three medications listed below, which is the safest in overdose?
a) Vitamin D (250 of the 1,000 IU capsules)
b) Aspirin (250 of the 325 mg tablets)
c) Tylenol (250 of the 500 mg tablets)
The answer is a. In fact 250,000 IU of vitamin D at one time is used as "stoss" therapy, especially in Europe. For a review of many such studies and the doses needed to achieve toxic 25-hydroxyvitamin D levels, see Vieth. (Source)
6. Which drug has the highest (safest) therapeutic index?
The answer is g. All of the medication listed except cholecalciferol have narrow therapeutic indices and can easily cause death in overdose. Such is not true for vitamin D and, because of the huge number of capsules needed, is not likely unless one has the industrial strength compound. See below for a sample calculation.
7. In 1997, adams and lee wrote a widely publicized paper about vitamin d toxicity in the annals of internal medicine. The adams and lee paper was accompanied by a stern editorial warning of the dangers of vitamin d written by marriott of the national institute of health. The three authors,
a) Correctly diagnosed all five of the patients
b) Were thanked by nationally acclaimed vitamin D scientists for their contributions to understanding vitamin D toxicity.
c) Showed frightening ignorance about vitamin D toxicity and appeared not to know the difference between the two standard deviation upper limit of a Gaussian distribution and levels known to reflect vitamin D toxicity.
The Adams and Lee paper and the editorial by Dr. Marriott are a continued embarrassment to the usually stellar Annals of Internal Medicine. However, the papers are instructive in that they remind us that otherwise educated and intelligent research physicians can confuse the two standard deviation upper limits of a Gaussian distribution with toxicity. For a more detailed critique, as well as several other problematic articles about vitamin D, see this link.
8. By sunbathing for a few minutes in the noonday summer sun, one can easily obtain five times the vitamin d toxicity warning (lowest observed adverse effects level or loael) of the institute of medicine's food and nutrition board.
The answer is a, at least for young whites. The IOM lists the Lowest Observed Adverse Effects Level (LOAEL) as 3800 IU for vitamin D. Studies show young whites can make between 10,000 to 25,000 IU in a single, relatively brief, sun exposure. Numerous factors affect the body's ability to make such high amounts of cholecalciferol, with age, race, latitude, clothing, season and sunblock being the main factors. (Source)
9. If humans are twice as sensitive as the most sensitive mammal tested (male rats), then a 110-pound human would have to injest 88,000 capsules (352 bottles containing 250 of the 1,000 iu capsules) of cholecalciferol in order to have a 50 percent chance of dying (ld50) from an acute overdose.
False, about 168 bottles would do it. The LD50 for male rats (the most sensitive mammal tested) is 42 mg/kg. If humans were twice as sensitive that would be an LD50 of 21mg/kg or 21,000 ug/kg or 1,050,000 ug for a 50 kg human which is 42,000,000 units or 42,000 capsules or 168 bottles of the 250 capsules of 1,000 IU cholecalciferol. [Dorman DC (1990) Toxicology of selected pesticides, drugs, and chemicals. Anticoagulant, cholecalciferol, and bromethalin-based rodenticides. Vet Clin North Am Small Anim Pract 20(2):339-352].
10) As most american blacks suffer from vitamin d deficiency, some black activists feel unwarranted fear and scare techniques about vitamin d toxicity may be racially motivated. That is, racists may be intentionally repeating and promulgating vitamin d toxicity scares in order to prevent relevant government agencies from dealing with the problem of widespread vitamin d deficiency in the black community.
True. The recent NIH conference on vitamin D was most interesting in this regard. Very few Blacks were attendees but several were helping with registration. As the conference progressed into the second day, Blacks helping with registration began to listen to the lectures and became increasingly angry as speaker after speaker pointed out how vitamin D deficiency adversely impacts the black community. One young black man told a sad story of how his infant son was recently diagnosed with rickets. Although the 1997 Food and Nutrition Board was an all-white board, most of the Blacks were angry that nothing is being done currently.
Certainly, it is true that one of the most effective ways to paralyze the government into continued inaction on the pandemic of vitamin D deficiency would be to raise false and frightening toxicity fears. However, remember that it is easy to suspect vast conspiracies, but in the end it is usually simple incompetence. That is certainly true of the mistakes I've made in my life.
11. In the most recent case of vitamin d toxicity described in the literature, a man recovered uneventfully after taking a health supplement every day for two years that contained 156,000 iu of cholecalciferol.
True. Actually, it is likely he took more than that. An industrial manufacturing error was implicated. Such reports help confirm what is known from animal data and that is that it takes a lot of vitamin D to hurt you, but it can be done. (Source)
12. One of the world's foremost authorities on vitamin d metabolism and physiology recently said, "worrying about vitamin d toxicity is like worrying about drowning when you are dying of thirst."
True. The quote is from one of the vitamin D scientists listed below. One of the problems is that there are so few vitamin D scientists in the world, that misconceptions, especially about toxicity, are the rule rather than the exception, even among medical researchers.
In 1999, Dr. Reinhold Vieth, perhaps the world's leading expert on vitamin D toxicity and metabolism, wrote a systematic and scholarly review of the world's literature debunking the hysteria surrounding fears of vitamin D toxicity. (Source)
Later, Vieth demonstrated the safety of daily dosing with 4,000 IU of cholecalciferol, a dose that exceeded the current toxicity warnings of the IOM's FNB. (Source)
Two years later, Heaney, et al, demonstrated the safety of doses up to 10,000 IU a day while also demonstrating for the first time that healthy humans utilize 3,000 to 5,000 IU of cholecalciferol a day (10 times the Institute of Medicine Food and Nutrition Board's current recommended Adequate Intake). What the human body does with such high amounts of cholecalciferol remains unknown, but we suspect Nature has a plan. (Source)
In a reply to critics of his paper, Vieth challenged anyone in the scientific community to present even a single case of vitamin D toxicity in adults from ingestion of up to 1,000 ug (40,000 IU) a day of cholecalciferol saying, "I welcome any discussion of evidence of harm with vitamin D3 (not D2) in adults at doses <1,000 ug/d." Vieth's challenge remains unanswered and his work remains unrefuted. (Source) | <urn:uuid:fe16395c-7e3b-4ac6-9639-f2e4c37da990> | CC-MAIN-2015-35 | http://articles.mercola.com/sites/articles/archive/2003/12/27/vitamin-d-quiz.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00338-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.939083 | 2,991 | 2.578125 | 3 |
These are the questions from the lectures on May 25 & 26 for the non-PLTL
students. I will be adding questions as we go through next week's lectures.
1. The evolution of a coelom allows for
A. bilateral symmetry to develop.
B. cephalization to occur.
C. the development of a hydrostatic skeleton.
D. the expansion of gonads.
E. an air-filled bladder.
2. A biologist discovered a new animal. Upon studying embryonic development, she
observed radial cleavage with the blastopore developing into an anus. This
animal was categorized as a
3. All of the following features are associated with bilateral symmetry in animals except
A. conferring anterior and posterior areas to the body.
B. allowing for greater efficiency in movement.
C. creating a body design of two mirror images.
D. allowing for efficiency in seeking food and mates.
E. being sessile.
4. Which of the following is not true about animals?
A. They constitute millions of species.
B. They are very diverse in form.
C. They were some of the first organisms on the earth.
D. They show great mobility.
E. They are found in every conceivable habitat.
5. The fate of the embryonic germ layers is:
A. The endoderm forms the gut, the ectoderm forms the epidermis and parts of the
nervous system, and the mesoderm forms muscles and most internal organs.
B. The endoderm forms the gut, the ectoderm forms the reproductive tract and
endocrine system, and the mesoderm forms muscles and most internal organs.
C. The endoderm forms the inner part of all internal organs, the mesoderm forms
the middle parts, and the ectoderm forms the outer coverings.
D. The layers are sequential structures that all disappear during development,
with the endoderm appearing first and then is replaced by the mesoderm, which in
turn is supplanted by the ectoderm.
E. The endoderm is an embryonic structure that disappears early, whereas the
ectoderm persists as the skin and the mesoderm as the internal organs.
6. Coelomates are:
A. two embryos that develop in the same coelom
B. animals that have a true coelom
C. animals that have two coeloms
D. animals that have no coelom
E. animals in which the coelom disappears at gastrulation
7. Ecdysis is:
A. development of the ectoderm layer in the gastrula
B. development of the epidermis from the ectoderm
C. disintegration of the ectoderm
E. the feeding tentacles of the Ecdysozoa
8. An acoelomate is an animal that:
A. has a coelom
B. has a pseudocoelom
C. has both a coelom and a pseudocoelom
D. has neither a coelom nor a pseudocoelom
E. has a coelom during early development but later loses it
9. Which of the phyla of animals has the greatest number of species?
10. Most animals undergo the following patterns of embryonic development.
11. In which part of a sponge's body does fertilization occur?
A. the ostium
B. the mesohyl
C. the spicule
D. the spongin
E. the amoebocyte
12. The Porifera and Cnidaria have superficially similar body plans. Indicate
which of the following does NOT describe equivalent features in the two groups.
A. spongocoel and gastrovascular cavity
B. mesohyl and mesoglea
C. osculum and mouth/anus
D. epithelial cells and epidermis
E. spongin and polyp
13. The protrusible rasping organ in a mollusk's mouth that is used to scrape
food from the substrate is:
A. a veliger
B. a visceral mass
C. composed of nacre
D. a radula
E. an ammonite
14. The change of a larval form of an insect to a different adult form is known as
B. incomplete metamorphosis
C. complete metamorphosis
15. Most species of tapeworms live in the
A. stomachs of vertebrates.
B. lungs of vertebrates.
C. livers of vertebrates.
D. intestines of vertebrates.
E. hearts of vertebrates.
16. Flukes are parasitic worms whose hosts during the larval stage are usually
A. aquatic insects.
B. cyprinid fishes.
D. free-living flatworms.
17. The phylum that includes snails, clams, oysters, and octopuses is the
18. The nitrogenous waste in mollusks is removed by
A. flame cells.
C. Malpighian tubules.
D. incurrent siphon.
19. Annelids possess all of the following except
A. muscles to swim, crawl, and burrow.
B. ganglia to respond to light and respond to other environmental cues.
C. circulatory, excretory, and neural elements in each segment.
D. setae in each segment.
E. adductor muscles.
20. The evolutionary innovation that first appeared in arthropods and is
characteristic of the most successful of all animal groups is that of
A. bilateral symmetry.
B. coelomic body architecture.
C. jointed appendages.
E. three primary types of tissues.
These are the questions from the May 31 lecture on Vertebrates and
21. It is generally accepted that the vertebrates that evolved
during the mid-Devonian period are the
A. sharks and bony fish.
22. Traditionally, amphibians were thought to have evolved from
A. ray-finned fish.
B. lobe-finned fish.
C. spiny fish.
D. skates and rays.
23. Mammals are thought to have evolved from
24. Apes and humans together make up a group called
C. Homo sapiens.
25. The lateral line is:
A. a lateral stripe on the side of male fish that aids in
B. a series of sensory organs that detects pressure waves in the water
C. the row of fins along the sides of eels
D. the row of pharyngeal slits along each side of the heads of
E. the row of small blocks of cartilage along each side of the
notochord of agnathan fishes
26. Evolutionarily, the jaws of vertebrates developed from:
A. the circular mouth of lampreys
B. the last pharyngeal arch of jawless fishes
C. the third pharyngeal arch of jawless fishes
D. the operculum
E. the claspers
27. Buccal pumping is:
A. pumping blood from the atria to the ventricle
B. pumping air into the lungs by raising the floor of the throat
C. pumping blood from the ventricle to the body
D. pumping wastes through the kidneys
E. pumping blood from the ventricle to the atria
28. Birds are different from all other living vertebrates because they
A. can fly
B. lack teeth
C. have feathers
D. are bipedal
E. All of the choices provided are correct.
29. The Order Primates had its origin in:
A. small arboreal monotremes
B. bipedal marsupials
C. small, arboreal, insect-eating mammals
D. bipedal ornithischian dinosaurs
E. theropods with hair
30. The human lineage began to diverge from those of other primates
A. 153 million years ago
B. 210 million years ago
C. 6 million years ago
D. 18 million years ago
E. 64 million years ago
These are the take-home questions for the non-PLTL students. They are taken from the Animal Form/Homeostasis lecture.
31. Bone cells can remain alive even though the extracellular
matrix becomes hardened with crystals of calcium phosphate. This
type of cell is also called a(n)
32. The glands of vertebrates are derived from ____________
B. simple stratified
C. stratified squamous
D. simple columnar
E. squamous keratinized
33. Which muscle contraction is involuntary?
C. cardiac and smooth
E. cardiac, smooth, and skeletal
34. Myelin sheaths are found along
D. cell bodies.
For #35, choose the letter of the best match from the following
definitions of negative feedback.
A. deviation from set point
B. causes changes to compensate for deviation
C. constantly monitors conditions
D. compares conditions to a set point
E. body temperature rises
36. Which of the major tissue types has shortening of cells (i.e.,
contraction) as its major function?
E. All of the choices are correct.
37. In an organism with either a closed or open circulatory system,
most of the body fluids are in the
A. intracellular fluid.
B. extracellular fluid.
C. interstitial fluid.
38. The release of factors by cells that influence the activity of
nearby cells is referred to as
A. autocrine signaling.
B. paracrine signaling.
C. pheromonal signaling.
D. electrical signaling.
E. exocrine signaling.
39. Which of the following is true of both neurotransmitters and hormones?
A. They are both involved in paracrine signaling.
B. They both interact with receptors inside or on the surface of cells.
C. They are both produced in glands.
D. They both travel through the bloodstream to target cells.
E. They both influence the activity of multiple organs.
40. Solutes move between body compartments by using
B. ATP-powered active transport.
D. facilitated diffusion.
E. Solutes use all of these mechanisms to move between
These are the take-home questions for the non-PLTL students.
They are taken from the portion of the Digestion lecture finished in class on
June 2. The material covered is highlighted in pink on the Review.
41. Saliva contains the hydrolytic enzyme, salivary amylase,
which initiates the breakdown of the polysaccharide _________
into the disaccharide, maltose.
42. In humans, and other vertebrates, the digestive system
A. a one-way tube with a separate mouth and anus and accessory
B. a two-way tube with a separate mouth and anus.
C. a tube with a single opening that serves as mouth and anus.
D. a two-way tube with a separate mouth and anus and accessory
E. a one-way tube with a separate mouth and anus and no
43. Gizzards designed to grind food are found in
44. The rhythmic contractions of the esophagus are called
A. esophageal contractions.
B. esophageal convulsions.
D. heartburn sensations.
45. Which of the following describes the sequential
processes occurring in the stomach?
I-Peristaltic waves of
contraction propel the food along the esophagus.
II-Gastric juices are
secreted with the arrival of food into the stomach.
III-The acidic chyme is
transferred through the pyloric sphincter.
pepsinogen into pepsin starting digestion of proteins into
A. IV, III, and II
B. II, III, and IV
C. I, II, and III
D. I, II, and IV
E. II, IV, and III
46. The first organ to receive the products of digestion after
absorption is the
47. The gallbladder secretes into the small intestine a fluid
that can make the fat partially water soluble. It is called
48. Excess blood glucose is removed by the liver to convert it
B. maltose and other disaccharides.
E. all of these.
49. A friend describes to you a choking event in her life. She
says that "the food went down the wrong pipe." She meant that
A. some food went down her esophagus.
B. some food was stuck in her pharynx.
C. some food went down her trachea.
D. some food lodged in one of her salivary gland ducts.
E. some food was stuck in her esophagus.
50. Which of the following is the most accurate statement
regarding the ingestion of two tablets of Rolaids?
A. The fluid in the stomach would begin reabsorbing.
B. The HCl production would decrease.
C. The pH of the stomach fluid would decrease.
D. The pH of the stomach fluid would increase. | <urn:uuid:30f98488-3e8f-4c3f-a534-ccf79747b569> | CC-MAIN-2015-35 | http://bioserv.fiu.edu/~walterm/gen_bio_II/sum11_exam2_nonpltl_questions.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00164-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.86277 | 2,852 | 3.96875 | 4 |
Branson, Missouri is known today as the "live music capital of the world" but it has a rich history dating back to its first days in the 1800's. Starting with a small store at a riverboat stop, the city now boasts over 40 theaters with 60,000 theater seats, over 70 live theater shows, over 200 lodging facilities with over 23,000 lodging rooms, 5,000 camping spaces, over 350 restaurants, three lakes, 9 golf courses, over 200 retail outlets, numerous attractions, caves to explore and year round activities and entertainment.
Here is a brief time capsule of Branson's history.
1837: Taney County was established with Forsyth, a popular and important river town, named as county seat.
1882: Rueben Branson opened a general store which became the post office and was listed as Branson, Missouri - obviously named after Rueben. During the 1880's & 90's one of the largest industries in the area was tomato canning.
1884: Settlers began to move to the Ozarks for the promise of free land and the area was homesteaded on 160 acre lots.
1894: William Henry Lynch bought a cave 6 miles outside of Branson. Later, the Marvel Cave would become a tourist attraction in the heart of Silver Dollar City.
1903: The men who founded Branson were planning an industrial center that would generate trainload after trainload of logs, lumber, and manufactured products for the world outside the Ozarks.
1904: A new bank, livery stable and hotel, and resorts began to spring up to accommodate travelers and fisherman.
1907: "The Shepherd of the Hills," a book written by Harold Bell Wright about this area of the Ozarks, was published and became a nationwide best seller. Overnight, tourists from across the country began coming to "Shepherd of the Hills Country" and tourism was born.
1912: This was a banner year for Branson with incorporation on April 1 with 1200 residents, and the idea of Branson as a resort began to take root. Major industry came to Branson in the form of The Winch Spoke Company, which built spokes and wagon parts, and American Pencil Company of New York established a logging factory in Branson. The business section of Branson burned in August of 1912 and was rebuilt. The Powersite Dam at Ozark Beach created Lake Taneycomo with its construction in 1912 and 1913.
1914: The women of Branson, many of whom were employed or helped operate family businesses, organized a Civic League. They began a decades long effort to beautify the streets, establish parks, and make life better in their community. This included a well-equipped municipal bathing beach and picnic grounds on Lake Taneycomo.
Post World War II: Many artists, craftsmen and retirees came to the area, along with returning servicemen and war industry workers. Branson proved to be the perfect spot for a growing hand-craft community.
1949: Hugo and Mary Herschend bought the Marvel Cave from Mr. Lynch's daughters and began square dances in the cave. Artist Steve Miller and businessman Joe Todd, with the help of local carpenters, created and constructed a huge lighted Adoration Scene on the bluff of Mount Branson overlooking the downtown and Lake Taneycomo. The crèche's figures, up to 28 feet tall, were lighted on the first day of December in front of thousands of awe-struck visitors, beginning a Branson tradition.
1953: With more people coming for the lighting of the Adoration Scene each year, the Chamber of Commerce included with the lighting of the scene the Adoration Parade, adding to the long history of other Branson parades. Today it draws crowds as large as 30,000 people.
1959: The first show in Branson, The Baldknobbers Hillbilly Jamboree Show, opened, taking the name of their show from a vigilante group of the Civil War Era which roamed the area making their own justice.
1960: "Shepherd of the Hills" opened its Old Mill Theater and Silver Dollar City opened its doors for the first time as a theme park. The Presley Family began a music show in the Underground Theatre, now known as Talking Rocks Cavern near Branson West. Just as tourism began to increase rapidly in the area, the Missouri Pacific canceled its service on the White River Line. With so many visitors now arriving by automobile travel often slowed to a crawl on the 75 mile winding route between Springfield and Branson. So, dynamite crews and massive earth moving equipment blasted a new road through our limestone hills, shortening the route to 40 miles.
1963: Table Rock Dam was completed and the area's largest man-made lake, Table Rock Lake, was formed.
1964: The Baldknobbers music show moves into a downtown Branson theater.
1967: The Presley family opened the first theater on "the Strip," Hwy. 76.
1968: The Baldknobbers moved to a theater on Hwy. 76. The movement to Hwy. 76 had begun and the first two shows were followed closely by the Plummer Family Music Show on West Hwy. 76.
1974: The Foggy River Boys, who had been performing since 1971 at a theater in Kimberling City, moved to Hwy. 76. Mutton Hollow Entertainment Park opens. A four lane by-pass was completed in the mid-1970's routing traffic away from Branson's congested downtown district, creating interchanges at Hwy. 76 and Hwy. 248, and a new bridge across Lake Taneycomo. At that time, businesses were just beginning to develop along W. Hwy. 76 with only a few scattered shops and music shows. Today the number of theaters top 40 and there are over 70 live theater shows.
1981: The Wilkerson Brothers Theater, Hee Haw Theater and Starlite Theater are completed.
1983: While tourism remained steady throughout the 1970's and 1980's, 1983 marked the start of a tremendous boom. The Swiss Villa with 7,500 seats opens. The Lowe Family moves to "The Strip." The Roy Clark Celebrity Theater, The Thunderbird Theater, and the Echo Hollow Amphitheater at Silver Dollar City open.
1984: The Braschlers Music Show opens in the old Lowe's Theater. Musicland USA opens with the Lester Family and The Sons of the Pioneers opens at Lowe's theater
1985: The Braschlers Music Show moves to Musicland USA, The Hee Haw Theater becomes Country Music World and the Sons of the Pioneers join the Foggy River Boys.
1986: The Texans join Bob Mabe and open the Texans/Bob-O-Links Music Show. The Ozark Mountain Amphitheater opens with 8,500 seats.
1987: "Box Car Willie" becomes the first celebrity entertainer to perform on a permanent schedule in his own theater. Campbell's Ozark Country Jubilee and the 76 Music Hall open their doors.
1988: The first Ozark Mountain Christmas is held and The Factory Merchants Mall opens.
1989: Inspiration Tower opens at Shepherd of the Hills. Shoji Tabuchi opens a music show. Christy Lane buys the Starlite Theater and Danny Davis & the Nashville Brass perform at Country Music World.
1990: Shoji Tabuchi moves to Shepherd of the Hills Expressway. Mel Tillis moves to Branson and starts a music show. Mickey Gilley starts a new theater.
1991: National news organizations "discover" Branson. In August of 1991 "Time" magazine published a story about their "discovery" of Branson and the interest by that media giant was followed closely by coverage in "People," "The Los Angeles Times" and the "Wall Street Journal." "60 Minutes" put the television spotlight on this small town in the Ozarks that had more theater seats than Broadway and a host of impressive names headlining its then 22 theaters.
1991: Shepherd of the Hills and Ray Stevens start the Ray Stevens Theater. Moe Bandy opens the American Theater and Buck Trent opens a dinner theater.
1992: Mel Tillis and Andy Williams each open their own theaters. Willie Nelson plays at the Ozark Theater and Jim Stafford starts performing at Stars of the Ozark Theater. Kenny Rogers and Silver Dollar City start the Grand Palace. The Osmonds and Jennifer Wilson come to town.
1993: Pump Boys and Dinettes, John Davidson, Tony Orlando, Bobby Vinton, Five Star Theater, Yakov Smirnoff, IMAX, Branson Scenic Railways and Wayne Newton all start shows and open venues in Branson.
1994: The Polynesian Princess sets sail on Table Rock Lake. Charley Pride, The Welk Resort & Champagne Theater, $25,000 Game Show, Will Rogers Follies, Radio City Rockettes and Country Tonite all open shows.
1995: The Dixie Stampede and The Showboat Branson Belle open their new dinner theaters.
1997: Shepherd of the Hills becomes America's most performed outdoor drama with it 5,000th show.
1999: Grand Palace opened its doors to feature a host of legendary stars appearing for select dates. The Oak Ridge Boys, Tony Bennett, Charlie Pride, LeeAnn Rymes and more.
2000: Silver Dollar City opened its Red-Gold Heritage Hall entertainment facility. The name, "Red-Gold," refers to the time of the huge tomato harvests here in the Ozarks. At that time the canned, train-load shipments were referred to as being "Red Gold."
2000: Branson Creek Golf Club opened. It is a Tom Fazio designed 18 hole championship course.
2001: Silver Dollar City opened Wildfire, another massive roller-coaster thrill ride.
2001: The Duttons purchased and renamed the Box Car Willie Theater. It became the Dutton Family Theater. They also began building lodging and shopping facilities there.
2002: The Kirkwood Motel was completely rebuilt and renovated into Music City, a combination theater and motel. The Haygoods were the opening act there.
2002: The Promise Theater was renovated into the White House Theater and opened.
2003: The Herschends and Silver Dollar City opened their $40,000,000 Celebration City theme park on the West edge of Branson. It was formerly Branson USA, opened in 1999 by the Bob Wehr family.
2003: The Highway 13 Bypass, connecting Branson West to 160, north of Reeds Spring and The Junction, was completed June 3rd, 2003, at a cost of 19 million dollars. It was started in February of 1999. This removed all through traffic from Reeds Spring and allowed rapid travel both north and south.
2004: Ground breaking ceremonies for the Branson Landing project began. The City of Branson would build a new bridge over Roark Creek, with a circle-drive and a new highway to serve the project.
2004: In Branson, there were a number of items on the lakefront built during the Great Depression, during the 1930's, under the auspices of the Works Projects Administration (the WPA). The last of these remaining were the well-worn and beloved old concrete "steps" down by the waterfront. These were removed without ceremony when The Landing was under construction. Also, the "Liberty Tree," the ancient oak believed to be young when "The Liberty Bell" was first rung in Philadelphia in 1776, diseased and probably dangerous, was removed to clear the way for The Landing. The many tree pieces were stored outdoors south of Lampe, Mo.
2004: The Missouri Highway Department began rebuilding all of Highway 65, south of Branson, to the Arkansas line. It became a divided, 4-lane expressway, with overpasses and cloverleaves. It was completed and opening ceremonies held in November of 2006.
2004: The north phase of The Ozark Mountain High Road was completed April 3rd, at a cost of 48 million dollars. It was started in September of 1994. The Herschends were powerfully behind getting the Missouri Department of Transportation (MoDot) to start the project. Box Car Willie was one of the many Branson folks who tried to get it stopped, fearing it would divert all of the potential Branson traffic directly to SDC. A south phase, continuing on from 76 to 65 Highway, crossing Taneycomo just below the Table Rock Dam, will be started one day soon.
2005: Ground breaking for the 220,000 square foot Branson Convention Center and the 12-story Branson Hilton Convention Center Hotel, with 553 rooms. They are to open in 2007.
2005: New building construction in Branson achieved $173.5 million. This broke the record $119.5 million, achieved in 1993, at the height of the "theater and motel building boom."
2005: The Branson Mill, a historically themed shopping center, opened on Gretna Road.
2005: The Shang-Hai Theater on Highway 165 opens.
2005: The RecPlex, Branson's 44,000 square foot recreation center, opened on Branson Hills Parkway, near the location of the future Target and Home Depot, far to the north on Highway 65.
2005: Branson opened a unique public services building overlooking Highway 65 at the 76 junction. It is a one-million gallon, concrete water tower. But the "stand" for the 114' tall tower houses 5 stories and 5,600 square feet of office space where the entire Public Works Administrative offices are housed.
2005: At the end of the year, Branson boasted of: 47 theaters with 57,623 seats and more than 100 different shows, 205 motel/hotels with 17,904 rooms (with another 4,000 rooms nearby), and 410 restaurants with 35,266 seats. That's more shows and seating than Las Vegas or Broadway! Wow!
2005: By the end of 2005, Skaggs Hospital had caused 21 satellite facilities to be built around the area and now employed some 150 physicians in all its facilities. Skaggs was joined in the lakes region by a Springfield's Cox Hospital satellite, together with a major satellite clinic and helicopter facility built by St. Johns Hospital, also centered in Springfield.
2005: The Keeter Charcoal business financed the rebuilding of the "Main Hunting and Fishing Lodge" at the College of the Ozarks to replace the aging welcome center and dining facilities there. The original Main building was brought from the 1904 World's Fair and erected at Point Lookout. It became a sportsmen's club for many years. It was finally sold to the "School of the Ozarks" for $15,000, and became their first building. That valuable, historic building later burned to the ground. /p>
2006: The giant new stores, Target and Home Depot, opened in the new 141 acre Branson Hills Plaza area, way north of the Downtown area, on 65 Highway. This will be a three phase development starting with 300,000 square feet of lease space.
2006: The Titanic, The Legend Continues, opened in April on the 76 Strip. It is the largest museum quality production in the world. Regis Philben hosted the grand opening. The outside of the museum/theater is made to look exactly like the front half of the Titanic, complete with the "iceberg," and some of the "Atlantic Ocean," but it is 1?2 the size of the original. Still very BIG!
2006: Opening of the 300 million dollar Branson Landing project. It covers 95 acres on 1.5 miles of Lake Taneycomo waterfront. It has over 100 shops occupying 1.2 million square feet. It is anchored by Belk Department Store with 68M square feet and Bass Pro with 60M square feet of space. It also has luxury condos and a 243 room Hilton full-service hotel, as well as restaurants, cafes and kiosks. It has 3500 parking spaces. The public square can accommodate 5,000 people for festivals and so on.
2006: The Herschends opened the Butterfly Palace on the west edge of Branson's 76 Strip.
2006: Dick Clark's Bandstand Theater and 1957's Auto Show opened across from Dolly Parton's Dixie Stampede on the 76 Strip.
2006: Work started on the huge highway bypass to take through-traffic around the rapidly growing Branson West. It will be a limited access, 4-lane expressway with one traffic signal at the junction with 76 Highway through Branson West, going to Branson. It will be completed by MoDot in 2008, at a cost of 23 million dollars.
2007: The new, state-of-the-art Branson Convention Center opens in Branson.
2008: In May, Sight & Sound's brand new, state-of-the-art, 2000-seat theatre opened its doors to the public
2009: The $155 million Branson Airport opens East of Highway 65 and just North of the Missouri-Arkansas border. May 11, 2009 - Branson Airport's first commercial flight, Sun Country Airlines flight SY509 from Minneapolis/St. Paul, touched down at 9:00 AM followed at 11:55 AM by AirTran Airway's flight 1582 from Milwaukee.
Every year holds new surprises for the residents and guests of Branson, as familiar faces thrill and entertain our audiences while talented new ones continue to join the ranks of Branson veterans. The changing seasons bring a panorama of the mountains, valleys, and lakes of our beautiful Ozarks. The scenic beauty of the lakes, fabulous fishing and water sports keep the outdoor enthusiasts busy. The list of challenging world-class golf courses continue to grow along with the number of outlet stores (somewhere over 200) and other one-of-a-kind shops. Daily life in Branson is history in the making.
©2015 Branson's Best Reservations
|Design by WebWorks Website Design| | <urn:uuid:326d4712-ad3d-4320-a9f3-4702ba86aa83> | CC-MAIN-2015-35 | http://www.branson-mo-vacations.com/history.asp | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645323734.78/warc/CC-MAIN-20150827031523-00160-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.952868 | 3,726 | 2.53125 | 3 |
Updated by Norris Chambers
The history of White Settlement dates back to the earliest days of the Texas Republic. Soon after his election to the presidency of the Republic of Texas in September of 1836, Sam Houston attempted to increase land values by encouraging immigration to Texas. A General Land Office was established in 1837 for the purpose of granting large tracts of land to those who would homestead it. The "Homestead Law" was passed to guarantee that the land could not be taken away from the settlers for any reason other than defaulting on the terms of the acquisition.
One of the earliest men to take advantage of the liberalized land policy was Logan Vandiver, who received a "headright certificate" dated February 16, 1838, to a 1476 acre tract just west of the Trinity river where the present city of White Settlement is located. The area was heavily populated by Indians, and in 1840, across and east of the Trinity river, Bird's Fort was built. This stockade was about twenty miles east of the settlers on the west side of the Trinity and afforded them little, or no, protection. In September of 1843 a treaty was signed at Bird's Fort by representatives of the Republic of Texas and the Indian tribes. This opened the door for more settlers to claim the fertile plains of the "grand prairie" in what is now Tarrant and Parker Counties.
Apparently neither whites nor Indians were too eager to observe the terms of the treaty. Other tribes, not included in the original agreement, moved into the area. After many pleas from the settlers, a small fort was established on the bluff overlooking the junction of the Clear Fork and West Fork of the Trinity. The camp was started on June 5, 1849 and on November 14, 1849, the War Department officially named it Fort Worth. On December 20, 1849, the creation of Tarrant County from the northern portion of Navarro County was signed into law by Gov. George T. Wood and was named in honor of Gen. Edward H. Tarrant, a veteran Indian fighter and a representative from Navarro and Limestone Counties.
In the 1840's there were seven Indian Villages in the vicinity of what later became Fort Worth. There was one non-Indian settlement west of the Fort, and it was known as White Settlement because no Indians lived there. Some of the pioneers jokingly said that "No Indians lived here because they couldn't stand the Rowlands!" The Rowland family was highly respected, and the remarks were made in "fun" only. A street in western White Settlement is named "Rowland".
Pioneers from Tennessee and Kentucky came to Texas in search of a better way of life for themselves and their families. They were willing to take their chances with Indian uprisings and other hardships that were common in this area during those times. Texas was thought of as the "Land of Promise" where settlers could buy land for fifty cents an acre, and those who were willing to settle on land without deeds were given preemptive rights to buy 320 acres at that price.
One such settler was John Press Farmer, who, with his wife and daughter, was living in a tent on the new site. A native of Tennessee, Farmer had sampled East Texas before moving westward.
He and his wife had cut some timber and their home was almost complete when they sighted a band of Indians. The Farmers fled on horseback, and when they returned their house was a mass of charred rubble.
Life was not easy for these early settlers. The pioneers produced a sturdy type of citizenship that the people of this area are proud to honor. White Settlement can be rightfully proud of its first citizens.
White Settlement became a trading outpost on which comparatively peaceful Indians came to rely because of the honesty of the white settlers and the goods they dealt in. Here the migrating pioneers from the east found a fine rich country carved out of homesteads among the Indians, and others called their area the "white settlement." The road leading to Fort Worth was called White Settlement Road. It extended on west into Parker County and on to Weatherford.
In 1854, a well-equipped ten wagon train with a number of residents from Kentucky, leaving crowded conditions and what they believed to be exhausted land, headed west with Texas as their destination. The new arrivals hoped to get a fresh start. They settled west of Fort Worth in a community that had come to be known as White Settlement. Some of the planters brought slaves with them. Some settled on land pre-empted from the state and grazed herds of fine cattle along the banks of a creek known as Farmers Branch. Early settlers streamed in and made their living from the rich land. Cabins were built near a branch or creek, as this was the source of pure, clear water. The bottom lands were rich and fertile, and the virgin land yielded bumper crops. The settlers caught fish and trapped deer, wild turkeys and prairie chickens for food.
As the pioneers continued to move westward, bringing their families to the area, the need for a school came about in the early 1860's. It was a small, one room log cabin, which stood where the runway of Carswell Air Force Base, now known as the Naval Air Station Joint Reserve Base, is now located. It was known as Pecan Grove. It also served as a central gathering place for the community.
Like other settlements in those early days, the religious needs of the people were soon provided. Soon after the one room log school house was built, the Baptist Church was organized and on February 8, 1868 became known as the New Prospect Baptist Church.
During the early days, as other pioneers migrated to this part of Texas, the area of White Settlement became a prosperous farming community. This way of life continued through the reconstruction period, and the growth of White Settlement was steady. The community went through World War I and the depression of the 1930's. The depression did not begin to ease until the threat of war became a reality, and the awful struggle had already begun in Europe. As the United States began preparing for the inevitable war, a boom of prosperity came to White Settlement, whose population was about 500 at the beginning of the growth period.
An aircraft factory was constructed on its northeast boundary, and an air field on the east side.
The City of White Settlement was incorporated in 1941, and later expanded its boundaries by consolidating with Worth Hills, an area on the south side of the city.
In a matter of months, White Settlement's population grew to 10,000. Most of the new residents were employed at the new Consolidated Aircraft Corp. plant and as civilian employees at the new Army Air Field. Many families of servicemen stationed at the air base also lived in White Settlement. The school district doubled in January of 1943 and the homes increased from 200 to 1200. The government constructed thousands of apartments for defense workers. The largest group, located on the east and west side of Cherry Lane, was known as Liberator Village. The name "Liberator" was taken from the bomber that was being built at the aircraft plant. The B-24's were known as the Liberator bombers. Another group of smaller, two-story apartments was located at the north end of Cherry Lane at Clifford, and was called "Victory Apartments."
As housing developments progressed, streets were named for early pioneers. Cherry Lane, Dale Lane, Farmer Road, Grants Lane, Harwell Street, Normandale Blvd., Redford Lane, Rowland Street, Smith Street, Tinsley Lane and Mirike Drive as well as many more, such as Carlos, Richard, Downe, Pemberton, etc. These pioneers were well respected throughout the area and were proud of White Settlement.
The building boom after the war continued, as the aircraft plant continued to operate. Many of the houses in White Settlement were built entirely by the owners, as the pioneer cabins were. Developers such as Curby Mirike made available thousands of building lots for $15.00 up on terms of $10.00 down and $10.00 a month. Several builders would construct a shell house, which included the four walls, doors and windows, and a roof, for prices ranging from $1500.00 to about $3000.00 for one with inside walls and sheet rock installed. The wiring and plumbing were done by the owner as he could afford it. There were no sewer lines, no gas lines, no phone service and only unpaved streets for access.
City services were practically non-existent. A private water company provided water from several wells. What street maintenance there was came from the district's County Commissioner.
Bryan Henderson was the commissioner during this period, and he provided graveled driveways, graveled streets in many areas, civic work for the schools, churches and city as well as many other organizations. There is a ball field in the present Central Park named in his honor.
The city did not provide any enforcement of building codes, and every home owner built his house according to his own ideas and his available capital. In most cases, the capital was small, and the house building progressed a few boards per payday. A few developments provided sewer lines that connected with the Fort Worth system. These lines, when accessible, were soon overloaded and worked only in dry weather.
In the early 1950's Lone Star Gas provided service to all homes. This replaced butane and propane tanks and simplified heating and cooking. The telephone company introduced phone service, although for several months only 8-party service was available. This was not a good service, but soon 4-party was offered and eventually single party lines were available. The City of White Settlement, under the leadership of Curby Mirike, formed a volunteer fire department and built a new city hall where Farmers Branch crosses Meadow Park Drive. The city purchased the private water company and provided an improved service to the customers. There had been no garbage service, and every resident disposed of his trash by his own method. This was usually burning in a barrel and dumping at the side of a road. The city established a garbage pickup service, and this did much to clean up the city.
With these improvements came taxes. There had been no city tax, and many residents did not even know they lived in a city. For the first time the people of White Settlement were unhappy with their government, and many protest meetings were held about the new taxes. School taxes were paid with the county bills, and did not emerge as an irritant until the school district became independent in the late 50's and began sending tax bills. The average tax payer in the early 50's paid less than $30.00 total taxes.
But responsible citizens realized that with progress must come financing, and bond issues were passed in both the city and school district to assure continued improvement. The government offered grants for street improvements, code enforcement and education. The school district received hundreds of thousands of dollars for the construction and operation of a school system.
White Settlement's school system emerged as one of the finest in the state. The high school was named for a pioneer school teacher and later superintendent, Mr. C. F. Brewer, who served the system for many years. Brewer High School has excelled in both academics and athletics and has offered an enriched curriculum to its students in most modern trades. Its students have repeatedly attended colleges and universities and excelled in advanced studies.
At the same time, the city has progressed to a strong, well financed government. A City Manager, Mayor and Council have operated the city's facilities so that White Settlement can point to a city park and recreation program that is second to none in the area, a fine library with many books and services available, a Senior Citizen center that is recognized by most as the best in the Metroplex, and streets that are all paved, well maintained, upgraded and regularly improved. The water system is state approved and is available to the entire city. Large sewer mains have provided sewer service to homes that had struggled with septic tanks for years.
From the early days of one constable, White Settlement has established one of the finest police forces in the state with a new headquarters and updated equipment. Police protection is provided at the enviable rate of more than one patrol car per square mile. This type of protection is difficult to provide in larger cities.
With strict code enforcement and continual upgrading of ordinances, the quality of construction in White Settlement is equivalent to that of other contemporary cities. With the replacement of the old Village Apartments by modern housing and continual code enforcement on older structures, the architecture of White Settlement is as good as, or even better than, that of other Metroplex communities.
The building boom of the eighties gave White Settlement many large retail outlets such as Sam's Wholesale Club, Wal-Mart, K-Mart, Home Depot, Payless Cashways, Texas Motors and others. A city sales tax of 1 percent provides over $100,000 per month in revenue. The property tax rate is only $.50 per hundred dollar valuation in 1991.
Large and small churches of just about every denomination are located in the city. The original Baptist church has grown into the First Baptist Church of White Settlement. Wesley Methodist, Bethany Christian, Las Vegas Trail Church of Christ, West Freeway Church of Christ, Normandale, Terrace Acres, Cherry Lane, Corina Drive, Tower and many other Baptist churches have impressive congregations and installations. Other denominations, including Catholic and Lutheran, also have a strong presence in the community, offering schooling as well as religious instruction.
Three motels in the city provide lodging for tourists and visitors to the area. The revenue generated by the room tax funds an active Chamber of Commerce, which has been working in White Settlement on a volunteer basis since the early 50's. The city now provides an office for the Chamber.
A freeway loop around the Fort Worth area, Loop 820, is the border for White Settlement on the west and south. This road was completed in the late 1970's with the dedication of the bridge across Lake Worth on March 31, 1978. The road from the bridge south through White Settlement was named the Jim Wright Freeway, in honor of Jim Wright, who represented Fort Worth and White Settlement so well for many years. Much of the progress and growth of White Settlement can be attributed to Mr. Wright, who became Speaker of the House in Washington. This loop opens White Settlement to the entire Metroplex, and makes it an attractive place to locate any enterprise. The continual growth of the city is assured by its location and progressive government.
And to preserve the city's and area's heritage, the White Settlement Historical Society was established in the seventies. This group meets on a regular basis, and has restored the old Allen log cabin. The land for the display was donated by Frances and Sheila Allen, descendants of the Allen pioneers, and is located on the old Allen land on Las Vegas Trail. Another long-term project of the society has been the improvement and maintenance of early cemeteries, such as the Thompson Cemetery and the Judd Street Cemetery. A marker has been placed at the old Jud Rowland Spring, an early site on Farmers Branch providing pure, cool water for community residents and campers. It is said that settlers going and coming from Fort Worth made this their camping spot. It was also a favorite Indian campsite at times.
A museum has been established by the society to preserve items relating to the pioneer and ensuing periods in the history of the White Settlement area. The City of White Settlement provides and maintains a building for this purpose. The museum was established in 1991, and it is hoped that it will grow and become an institution that the area can cherish and enjoy for many years in the future.
If the city can continue to prosper and improve in the next fifty years as it has in the last fifty, it will indeed be a city that future citizens of White Settlement can be proud of, as the settlers of the 1800's were at that time.
The ability to probe your health at the genetic level is bringing about a fundamental shift in healthcare. What issues should you consider before you take the plunge?
Your young child has a mystery illness that a string of specialists can't explain and you wonder if some kind of genetic problem might be to blame. Or perhaps your mother died from an inherited form of bowel cancer and you want to know if you share the genetic flaw behind her disease. Or maybe a cancerous tumour has been found in your own body, and probing its genetic subtype will help doctors work out how best to treat it.
Welcome to the world of genetic testing, a field that's rapidly changing the face of medicine. The latest figures show more than 400 different tests are available in Australia and their popularity is soaring. With 60 per cent of us expected to develop a genetic disorder by the age of 60, it could help to know about this new testing phenomenon and how it could affect you.
Genetic testing in Australia: some key facts
Most common categories of testing:
Screening: 40 per cent
This involves testing of an unaffected person who is not recognised as being at increased risk of carrying a genetic mutation, e.g. screening of newborn babies for cystic fibrosis, which is routine in most Australian states.
Diagnostic: 28 per cent
This involves the testing of an affected person, including an unborn baby, to determine the genetic basis of disease.
Somatic: 8 per cent
This is testing for specific non-inherited gene forms, such as the particular form of a gene expressed in someone's cancer tissue.
Family: 5 per cent
This is testing of an unaffected person, including an unborn baby, at increased risk of carrying a gene mutation on the basis of family history. Usually the mutation has already been identified in another family member.
Other: 1 per cent.
Unknown: 18 per cent.
Source: Report of the Australian Genetic Testing Survey, 2006 (released March 2009).
Genetic testing involves taking some of a person's DNA, generally from a swab of saliva or a sample of blood. The sample is then searched for signs of mutations or irregularities in the DNA code. But diseases are complex and the test results can be difficult to interpret.
In the case of some diseases, such as Huntington's disease and cystic fibrosis, the cause is almost 100 per cent genetic. Testing for these kinds of illnesses has been around for some years now and is able to provide a clear diagnosis.
But other diseases, such as cardiovascular disease and diabetes, are far more complex and require multiple genetic markers to be present. These diseases are also affected by environmental factors such as diet and lifestyle. Predictive genetic testing, which is a relatively recent practice, can give you some idea of your risk of developing these types of diseases.
When performed appropriately, genetic testing is "very powerful and useful", says Ron Trent, a professor of molecular genetics at the University of Sydney who has been involved in the field for more than 20 years. "But if not used appropriately, it is a waste of time."
If you think you might benefit from a test, you'll need advice on which test (if any) is appropriate and where to get it. Your GP may be able to help or you can go to a specialist centre.
"Within our public hospital system we have lots of clinical genetic units," Trent says. "They are a very good first point of call. In the clinical genetics units, the counsellors and the clinical geneticists can assess the risk and can refer the patient on for testing or refer them onto another specialist."
Availability and cost of genetic tests: useful things to know
The particular tests offered to you may vary depending on where in Australia you live and your doctor's level of knowledge.
The cost to you may also vary widely depending on whether the test is covered by Medicare, whether you are requesting the test as a private patient of a doctor, or if at a public clinic, the budget of that clinic. In 2006, only five of the 437 tests available were covered by Medicare.
Most tests offered through public clinics are wholly or partly subsidised; however, there may be long waiting periods (up to 12 months) for an appointment.
The best way to find out about options regarding the availability and cost of a particular test is to contact a clinical genetics service (for a listing of the services in each state visit the Centre for Genetics Education website).
Dianne Petrie was forced to consider genetic testing when she was looking for answers as to why her 15-month-old daughter Natasha was ill.
"From birth we thought there was something wrong. She was jaundiced, she didn't feed, she cried all the time, she'd projectile vomit, she was constipated and she would sweat… She also had a heart condition."
When Dianne was finally advised to take Natasha to a clinical geneticist, they suspected Williams Syndrome, a rare condition characterised by distinctive physical features and behaviours. A genetic test on the baby's blood confirmed the hunch was correct.
Seventeen years have passed since that day. In that time, Dianne has gone on to found the Williams Syndrome Group and is now the director of the Association of Genetic Support of Australasia (AGSA), which provides support and information to individuals and families affected by a genetic condition.
"Research has shown that people who do join a support group cope far better," Petrie says.
While finding out what was wrong with her daughter gave her a direction to seek treatment, the knowledge was a double-edged sword.
"It was also quite horrible too, because you hope that all your fears are not true and then you realise that they are confirmed."
But not everyone welcomes the insight from a genetic test, even when it could forewarn them of a challenging and painful illness.
When Jillian Critchley became pregnant to her husband Peter, who has the inherited condition Charcot-Marie-Tooth (CMT), the couple deliberately chose not to have prenatal genetic testing, even though they knew there was a one in two chance of the problem gene being inherited. Charcot-Marie-Tooth affects movement and balance.
"The tests could increase my risk of miscarriage and anyway, we knew we would not terminate the pregnancy," says Jillian Critchley.
Instead the couple waited until their daughter Matilda was 12 months old before testing eventually confirmed a positive diagnosis. But they had no regrets, even when the same circumstances were repeated with the birth of their second daughter, Eleanor.
"I realised had I listened to others and terminated my babies just because they had CMT, then I would most likely not be a mother," Jillian says. "And I cannot imagine life without our two beautiful girls."
(For more about Jillian's experience read Your Stories: Charcot-Marie-Tooth and my family.)
But the dilemmas posed by testing can be especially profound when the results are not definitive. Unlike diagnostic tests, predictive tests merely confirm the presence of particular genetic 'markers' that increase your risk of an illness. A negative result in a predictive test is no guarantee either because environmental factors like stress, diet and lifestyle can be enough to trigger some diseases on their own.
Such ambiguities can make it difficult to consider the implications of a result. For example, a positive result for mutations (or mistakes) in some breast cancer marker genes known as BRCA1 and BRCA2 can mean your risk of developing breast cancer is as high as 85 per cent. Should you have a mastectomy to prevent a cancer arising? Or if you choose to do nothing, will the awareness of your risk become an unbearable burden?
Knowing you carry a mutated gene linked to a disease may also have ramifications for other family members. Would they want you to tell them? And how would they react if they found out they too carried the gene? There may be issues of guilt, blame and secrecy.
Alex Wilde is a PhD candidate at the School of Psychiatry at the University of New South Wales and is conducting research into public attitudes towards genetic testing for mental illnesses.
Currently genetic testing for mental illnesses is not available, though progress is being made in research. But Wilde warns it could have powerful consequences.
"You don't want people going off and injuring themselves or ending their life because they think they are at risk. Or, not quite as extreme, people could stigmatise themselves if they find they are at risk. Or they might get discriminated against if they apply for insurance."
Nonetheless, predictive tests for diseases, mental or physical, could have enormous potential to improve our lives.
Says Ron Trent: "There are a lot of genetic illnesses that don't have treatment but the information that becomes available sometimes can help people make decisions about whether they'll have children, whether they want prenatal testing and there are also situations where you can look at preventative measures."
If you're considering a genetic test and might need to apply for life insurance products (such as cover for death, trauma or loss of income) think carefully about the implications first.
Insurers will not demand you have a genetic test if you take out a new policy or change an existing one. But like other factors affecting your risk assessment, you must tell them about any family history or result of any genetic test that you, or other family members, have had. If you don't, you may invalidate your policy contract. However, once your insurance is in place, you don't have to disclose the results of any subsequent tests.
There's some evidence that the insurance industry has not always treated genetic information appropriately. A recent study, published in the journal Genetics in Medicine, documented 11 cases of proven genetic discrimination in Australia, and most of them related to life insurance products, particularly those that cover the onset of illness and its impact on income. (Health insurance is not risk rated, so the same potential for discrimination does not apply.) In several cases insurance was denied and in others, the exclusions imposed on applicants were found to be unreasonably broad given the specific nature of their genetic defects. The authors say this proves the need for tighter regulation and more input from scientific experts.
If you feel you've been treated unfairly, contact the insurance company's internal complaints office. Other anti-discrimination tribunals such as the Australian Human Rights Commission may also deal with genetic discrimination.
When undergoing a genetic test, most people in Australia attend specialist clinics where the results are fully explained and counselling is offered. But now tests are available over the internet that bypass the medical profession. And Trent advises consumers to beware.
"The rationale behind these companies, if you read their websites, is to empower individuals to take care of themselves… But the other rationale is to cut out the middle party, like the doctor, and so you can have a genetic test which is your information, your test, no one knows about it," he says.
"What the companies don't say is that these tests aren't easy to interpret. These tests may not be appropriate for your circumstance and who is going to help you when you don't understand what the test means?"
The industry is not currently regulated and concerns have been raised over some of the claims being made by various companies.
Says Alex Wilde: "Genetic tests marketed online for about US $1000 as 'whole genome scans' claim to look at more than 500,000 points on the DNA to find out people's susceptibility to a range of diseases. Simply following a healthy diet, taking regular exercise and doing other healthy things may be just as useful." | <urn:uuid:fe542b47-2b5a-48a7-a1c9-cfb6d0a998cb> | CC-MAIN-2015-35 | http://www.abc.net.au/health/features/stories/2009/07/15/2624556.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315227.83/warc/CC-MAIN-20150827031515-00220-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.968959 | 2,385 | 3.015625 | 3 |
This piece was written while the author was completing a Master of Arts degree in Peace Studies at the Joan B. Kroc Institute for International Peace Studies at the University of Notre Dame.
It is a shame that in the twenty-first century, a century heralded by great advances in technology and economic development, drought and famine still persist in some parts of the world. The end of the Cold War raised the hopes of many people that the world's political and economic system would change for the better, following the narrowing of the ideological differences that had so polarized the world. It was hoped that humanity would be better off, as everyone benefited from a new era of world peace and economic development.
However, this has not been the case. On a daily basis, the international mass media is awash with reports of various conflicts across the world. At the same time, there is much news coverage of drought situations in various parts of the globe.
In the Horn of Africa especially, drought is part and parcel of daily life. It is so common that in many African societies the drought season marks an important part of the annual calendar. In a recent BBC report, the UN expressed fears that "the world is in danger of allowing a drought in East Africa to become a humanitarian catastrophe". At around the same time, I came across a news headline that read, "Kenya drought worsens conflict." These headlines made me think more deeply about the two issues: if conflict and drought are the scourge of our modern world, it is appropriate to question their symbiotic relationship. If they are related, how do they influence each other? Is drought a cause of conflict, or is conflict a cause of drought? Will drought always trigger conflict? Will conflict exacerbate drought? (Conflict cannot change weather patterns, but it can affect agricultural practices, land use, and other social factors that intensify the effects of diminished rainfall, particularly by causing famine.)
This paper will show the relationship between drought, famine, and conflict. Drought is mainly a natural phenomenon that affects parts of the world. Some areas of the world with strong economies and viable political structures have successfully responded to the advent of drought by adjusting water storage, allocation, and usage patterns, while other parts of the world have dismally failed to do so. Africa is an example of an area that suffers from recurring drought and desertification. Short-lived droughts are seldom dangerous, but sequential drought years are. Though sequential droughts are common in the Horn of Africa, people there have not successfully responded to them; rather, they have been devastated. Is this because almost all of the recent droughts and famines in the Horn of Africa region have occurred in situations of armed conflict? A relationship seems likely.
In this paper, I argue that drought is a contributing factor to conflict and conflict exacerbates drought, making famine more likely. Therefore, drought, conflict, and famine are inextricably linked, with each acting as a catalyst to the other. The situation in the Horn of Africa will be a showcase to support the thesis.
Drought is a period of aridity, particularly when protracted, that causes widespread harm to crops or prevents their successful growth. Insufficient rainfall and unfavorable weather conditions are natural causes of drought. Environmental degradation caused by the overuse of farmland and by deforestation (the cutting of trees for household and other purposes) aggravates drought. People's lack of capacity to respond to natural disasters, and inefficient or absent early warning systems, also worsen the effects of drought.
Famine is often associated with drought. Different scholars have given various definitions of famine. Amartya Sen defines famine as "unequal distribution of food supply." His argument is that famine is not a shortage of aggregate food supply, but the inability of individuals to afford available food. In this sense, a good harvest throughout a year does not guarantee that there will be no famine. In some instances, for example, governments have manipulated the food supply for political reasons, using food as a weapon.
De Waal has another definition of famine that is drawn from a local perception of famine in Western Sudan. Famine, he says, is "a disruption of life, involving hunger and destitution and sometimes, [but] not always death." De Waal thus distinguishes between the European perception of famine (starvation to death) and the African view, in which famine involves hunger and destitution but not necessarily death. Other scholars relate behavioral changes to famine. Edkins, for example, states that famine is "a socio-economic process that causes the accelerated destitution of the most vulnerable and marginal groups in society." Trying to bring all these ideas together, I suggest for this paper a working definition of famine as a phenomenon in which a large percentage of a population is so undernourished that death by starvation becomes very common.
Drought, prolonged conflict and regional instability, mismanagement of food supplies, and policy failures are all causes of famine. In many cases, these conditions exist together, each reinforcing the others.
Drought is one of the causes of conflict. Many areas affected by drought are arid and semi-arid. Under normal circumstances, these areas are low in resources and under substantial ecological pressure. When drought occurs in such areas, the living conditions of local people become very difficult: the land yields no crops and water is insufficient even for human consumption. People compete for the meager available resources. Pastoral communities are an example of this. Pastoralists depend on their livestock (camels, cattle, sheep, and goats) and move from place to place with their herds in search of usable pasture and water. During drought, their movement increases. Sometimes, different pastoral groups move to the same place and want to use the same scarce resources, which causes conflict between the communities. There is a history of pastoral communities fighting over scarce resources in the southern parts of Ethiopia, northern Kenya, and parts of Somalia and the Sudan. Most of these conflicts were manageable and tended to be resolved by elders through traditional conflict resolution mechanisms on an ad hoc basis. However, such conflicts are exacerbated, and become more difficult to resolve, when drought occurs.
The present conflict in Turkana in northern Kenya is a case in point. The region is badly affected by drought. According to a recent World Food Program report, 3.5 million people are currently affected (WFP). People there are fighting for scarce resources. Oxfam, which has a food program in the region, told the BBC that the drought had worsened the conflict there. People are dying from starvation, and they are also dying from conflict as they fight for water and food. Families are losing their livestock, which is their main source of livelihood. Subsequently, drought-affected people migrate into other parts of the country. This spreads the pressure on resources and results in conflict spreading into other areas as well. In addition, nomadic groups take their cattle to farmlands in search of pasture. Often there is conflict between farmers and cattle herders, a situation that is still occurring in northern Kenya and southern Ethiopia.
Similarly, when the State of Somalia collapsed in the early 1990s, the country was also suffering from drought and human-caused famine. Rival pastoral clans who had been deprived of development investment invaded the fertile Juba River farming area. Many farmers were caught unprepared, and they bore the brunt of the fighting.
The availability of small arms and light weapons along border areas where pastoral communities reside also contributes greatly to conflict. Arms ownership is regarded as necessary for the protection of one's community and livelihood in such areas, as they are situated in remote regions, far from the protection of regular state security. But the prevalence of arms also means the prevalence of armed conflict.
The response of the central government to a drought-affected region determines, to some extent, when and where conflict breaks out. Delays in aid often create a feeling of alienation and marginalization among the affected groups. These communities may form factions and rebel groups to express their frustration with the central government. In such contexts, conflict erupts among the rebel groups and between the rebels and the government in power.
For example, drought-caused famine was part of the cause of the Sudanese conflict. The Khartoum government was silent when the southern part of Sudan was hit by drought and famine. This angered the southern people and strengthened their opposition to the Khartoum government. Similarly, the Ethiopian revolution of 1974, which replaced the country's authoritarian rule, was hastened by the monarchy's clumsy handling of famine in the northern part of the country. Likewise, while 8 million Ethiopian people were at risk from drought in 2000, Ethiopia and Eritrea were waging war. According to one BBC report, "War and drought are the two words forever associated with the Horn of Africa." This suggests that drought and conflict reinforce each other, two sides of the same coin.
By the same token, conflict is a contributing factor to drought-led famine. A government that engages in armed conflict has high military expenditure, and shifting scarce resources to the military budget diverts them from a country's critical development needs. When the government's full attention is on the conflict, it cannot pursue drought-relief, social, or development programs. In addition, the government usually spends all available resources on the conflict, which also prevents it from addressing the economic needs of its people. Such a situation leads to famine. Poor communities are especially exposed to drought and famine, since they lack the capacity to respond to natural disasters. Furthermore, when either the government or the rebels recruit soldiers, they take productive labor from individual households.
Landmines are an additional serious problem that has a profound impact on health, the economy, and the environment. In many war-torn countries these weapons have been scattered in farm fields, on roads, and even around schools and health centers. According to Adopt a Minefield, a UK-based organization, more than 80% of landmine casualties are civilians. Every day women and children are killed or injured by landmines during and after violent conflicts. Besides causing death and injury, landmines prevent people from using their farmland and block the roads needed to fetch water. They also cause village markets to close and communication between villages to stop. Therefore, people either starve to death or wait for relief aid. But aid is also hampered, or blocked entirely, by mines in the roads. The Horn of Africa countries are infested with landmines. For example, a UN Mine Action Center survey indicated that the rural and nomadic people of Ethiopia and Eritrea are highly affected by landmines and unexploded ordnance left from Eritrea's long struggle for independence, Ethiopia's conflicts with neighboring countries, and the recent conflict between Ethiopia and Eritrea. The report states that there are around 6,295 victims of mine accidents in those two countries. Making the land safe and available for farming and grazing is even more challenging. This is yet another way in which armed conflict intensifies the effects of drought and causes famine.
Running away from conflict and persecution, leaving their homes and land, many people become refugees in neighboring countries. According to a 2004 UNHCR report, the total number of refugees in the world reached 9.2 million. Food aid, health care, and human rights protection are the basic needs of refugees. Often it is beyond the capacity of host countries to provide such assistance; it is challenging even for humanitarian organizations and the UNHCR. Hence, people in refugee settlement areas are exceptionally susceptible to famine. Relief aid is sometimes looted by rival groups, which makes humanitarian assistance additionally difficult. For instance, in the early 1990s in Somalia, fighting and looting made providing humanitarian assistance very difficult. As a result, many people who were unable to obtain aid died of famine.
Attempts to Address the Problem:
If drought contributes to conflict and conflict has the potential to cause famine, what attempts have been made to address the problem? What mechanisms have been developed? How are such efforts integrated with peacebuilding?
One response to the problem of drought and famine was IGAD, the Intergovernmental Authority on Development. IGAD is a regional grouping of the Horn and Eastern African countries of Djibouti, Eritrea, Ethiopia, Kenya, Somalia, Sudan and Uganda, with its head office in Djibouti. It was established in 1986, originally as the Intergovernmental Authority on Drought and Development (IGADD), by the heads of the member states with a narrow mandate to address the severe drought and other natural disasters that caused widespread famine in the region. Initially, as a result of its limited role and focused program area, the organization did not address conflict and related issues. In addition, organizational and structural problems made it ineffective.
Yet the many conflicts in the region made efforts to address the problems of drought and famine more difficult. Internal conflicts in Sudan, the secession of Eritrea from Ethiopia, the civil war that led to the collapse of Somalia, and other conflicts along border areas among neighboring countries all contributed to suffering and famine. Establishing an organization that could address the conflicts of the region was vital. Although the former IGAD served as a forum for states to discuss issues related to drought, no state dared to raise the question of resolving conflicts or differences.
In 1995 the heads of member States and governments decided to rejuvenate the organization into a regional political, economic, security, trade and development entity. At a regional summit in 1996, council ministers endorsed a plan to enhance regional cooperation in the areas of conflict prevention, management and resolution, humanitarian affairs, food security, environmental protection, and economic cooperation and integration. The organization's name officially changed to IGAD.
The presence of actual and potentially threatening inter- and intrastate, communal, and clan-based conflicts in the region was the main reason that member states expanded IGAD's vision and mission. IGAD has been successful in mediating the Sudanese conflict, which resulted in the signing of a Comprehensive Peace Agreement in 2005 between the Sudanese government and the Sudan People's Liberation Army (SPLA). In addition, IGAD was involved in initiating the Somalia peace talks, which later provided a framework for a five-year transitional government in Somalia.
The Conflict Early Warning and Response (CEWARN) Mechanism was born out of the new IGAD in 2002. Its objectives are to:
- to support member states in preventing cross-border pastoral conflicts,
- to enable local communities to play an important role in preventing violent conflicts,
- to enable the IGAD secretariat to pursue conflict prevention initiatives, and
- to provide members with technical and financial support (IGAD).
So far through CEWARN, IGAD is working on capacity-building and awareness about early warning signs of conflict.
Since 2004, IGAD has run a project that targets the pastoral communities of southwestern Ethiopia, northwestern Kenya, southeastern Sudan, and northeastern Uganda, an area known as the Karamoja Cluster. Armed conflict in the cluster is increasing tremendously. According to baseline reports of CEWARN, adverse climatic conditions have been aggravated by violent incidents in the area. Further, the crisis has caused unusual migratory movements of people and ongoing competition for scarce resources (CEWARN). Interestingly, CEWARN notes that some peace initiatives are underway. However, none has succeeded, and the conflict has renewed. Apart from analyzing the Karamoja conflict, CEWARN has made recommendations to responsible bodies, including the local communities and the respective governments. The CEWARN project again illustrates the argument that there is a direct relationship between drought and conflict, and that it is impossible to solve one problem without addressing both.
Drought, famine, and conflict are highly interlinked. None of the problems can be solved without addressing the others.
Key aspects of drought response include:
Build an Early Warning System.
Developing a strong early warning system for drought and desertification is crucial. It should be adopted at local, national, and regional levels.
Strengthen Intergovernmental Cooperation
States should strength cooperation among neighboring countries to combat drought and prevent conflicts. Furthermore, building networks and collaboration with various actors in the area helps to tackle problems of drought and conflict. For instance, the UN Convention to Combat Desertification has recommended research on "drought and desertification, identifying causal factors both natural and human, addressing specific needs of local populations and enhancing local knowledge, skills, and know how." This, they say, is an important area of collaboration.
Add Greater Capacity and Preparation to Traditional Mechanisms
Building the capacity and preparation of traditional mechanisms for combating drought is a third key factor. Some of the traditional mechanisms are collecting/harvesting rainwater in man-made ponds, diversifying grazing lands, and planting trees such as Cassava that adopt to dry climates. In addition, strengthening and empowering traditional conflict resolving mechanism contributes to building relationship among and across communities, which diminishes the frequency and intensity of armed conflict, and encourages cooperative solutions to other problems-for instance, drought and famine.
"UN Warns World on Africa Drought." BBC News. 23 Feb 2006. http://news.bbc.co.uk/2/hi/africa/4744812. stm.
"Kenya Drought 'worsens conflict'." BBC News. 6 Feb 2006. http://news.bbc.co.uk/1/hi/world/afr ica/4684766.stm .
Keen, David. The Benefits of Famine: A Political Economy of Famine and Relief in Southwestern Sudan, 1983-1989. New Jersey: Princeton University Press, 1994. 4.
Edkins, Jenny. Whose Hunger?: Concepts of Famine, Practices of Aid. Minneapolis: University of Minnesota Press, 2000. 20.
Ibid. 20.
Keen. 228.
von Braun, Joachim, Teklu Tesfaye, and Webb Patrick. Famine in Africa: Causes, Responses and Prevention. Baltimore, MD: The John Hopkins University Press, 1999. 18.
Biles, Peter. "War and Drought in the Horn." BBC News. 24 May 2000. http://news.bbc.co.uk/1/hi/world/afri ca/762235.stm.< /p>
UNHRC. "Refugees by the Numbers (2005 Edition)." 8 Jul 2005. http://www.unhcr .org/cgi-bin/texis/vtx/basics/opendoc.htm?tbl=BASICS&id=3b028097c#Refugees.
- Intergovernmental Authority on Development (IGAD)
- The Conflict Early Warning and Response (CEWARN) Mechanism in the Intergovernmental Authority on Development (IGAD)
- United Nation Convention to Combat Desertification
- http://www.un ccd.int/convention/text/c onvention.php?annexNo=-3#art17
- "The UN Mine Action Service 2006", United Nation Mine Action Center
- Available at: http://www.mineaction.org/
- World Food Program. "WFP Warns Kenya Drought Will Lead to Human Tragedy." 5 Mar, 2006. Available at: http://www.wfp.org/english/? ModuleID=137&Key=1987. [Accessed 28-02-06] | <urn:uuid:13dc0815-f290-4f98-a1b3-9a67c5f1b6b6> | CC-MAIN-2015-35 | http://www.beyondintractability.org/casestudy/mekonnen-drought | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.25/warc/CC-MAIN-20150827025424-00282-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.950726 | 3,963 | 2.875 | 3 |
The Lower School
Waldorf education is noted for the depth and breadth of its curriculum, which follows the developmental stages of childhood and mirrors the inner transformation of the child from year to year. Thus the child’s educational experience is relevant and satisfying. The curriculum awakens in the child an appreciation and respect for cultural origins and historical foundations. This fosters a sense of world citizenship and belonging in the child. The Waldorf approach to the sciences, as to most other aspects of the curriculum, is experiential and encourages a true interest in and love for nature and scientific inquiry. The arts are integrated into all aspects of the curriculum and develop in the children imagination, creativity, joy in learning, self-discipline, confidence, and skills that will enrich them individually and in community for their whole lives. Read more
At the Honolulu Waldorf School, every morning begins with the Main Lesson. For the first two hours of the day, each class studies an academic subject (math, science, social studies, or language arts, for example) in blocks of about four weeks. This concentrated period allows the children and their teacher to immerse themselves in the material: There is time for projects, individual and group work, artistic activities, field trips, and deep exploration and discussion. Each child produces a book which summarizes the topic, and expresses the major ideas or rules through their written word and illustrations. The result is the student’s own text book of the subject.
The art of storytelling is a major component of the Waldorf curriculum, and each grade is associated with a particular story theme. Beginning with fairy tales and folk tales in first grade, and moving all the way up to powerful biographies of significant historical and modern individuals in eighth grade, stories relate the curriculum content to the human experience, thus giving the student a vivid and personal relationship to the lessons. Through stories, the student’s feeling life is touched, the memory is stimulated, and the will to act is aroused.
The Main Lesson is taught by the Class Teacher. The Class Teacher remains with the class for a number of years. Some teachers start with the class in the First Grade and may continue with it through the Eighth Grade; sometimes a class teacher may remain with a group until fifth or sixth grade when another teacher steps in to carry the group to the end of elementary school. The Class Teacher also teaches the class one or more lessons during the school day and ends each day with the class.
During the rest of the day, the children study and participate in many other subjects. Japanese and Spanish are each taught five times a week in alternating blocks. In seventh and eighth grade the students choose one language, which they focus on exclusively. Periods of singing or chorus, painting, games or physical education, eurythmy, instrumental music, handwork, and, for the older grades, supplemental classes in mathematics and language arts weave rhythmically through the week. These subjects are usually taught by Special Teachers. Special events and excursions as well as many festivals and celebrations are sprinkled throughout the year. Every class performs a grade appropriate play which deepens their connection to a topic studied during the year, music classes perform for holidays and at a school concerts, and all students participate in the May Day, Michaelmas, and Winter festivals. Our classes take advantage of artistic and cultural activities presented by community groups and take excursions around the island to visit farms or sites that expand their experience of a subject and connect them to their island home. Many visitors come to our school to share their expertise in a craft or with a subject. Overnight class trips begin in Grade Four with a visit to a local farm and expand to meet the needs of the growing child.
The grade one through eight curriculum presents a unified whole, covering the stages of development of the six/seven year old through the fourteen year old. Each year gives a foundation for the next, and builds on what came before. The fulfillment of the Lower School curriculum, however, truly comes to fruition in the High School, where the work of the Lower School is transformed through thinking, reason, and the qualities of intellect that are developing in the High School student.
The overview of each grade presented here is instructive but not definitive: Each year the curriculum for a particular grade will be very similar but the ways in which it is presented will vary from class to class. Close
Entering the first grade is a major transition in the life of the child. The first grader leaves the homelike world of the Early Childhood Department, and enters the more structured environment of the grades classroom. Fairy tales, folktales, and nature stories constitute the basic story material of First Grade and these stories provide food for the soul of the child and nurture the capacity to learn in a way that is joyful and enriching. This power of imagination develops the ability to think creatively. The excitement of learning to write, read, and do math is supported through movement, music, and art.
Form Drawing awakens the children to spatial relationships and helps develop the coordination of eye and hand, as well as laying a foundation for reading and writing
Japanese and Spanish language and culture are introduced through songs, verses, stories, finger plays, and games.
Weekly lessons in modeling and painting develop an artistic aesthetic and build the powers of creativity.
Knitting is learned, children learn to create lovely toys while knitting trains the brain
Eurythmy class supports reading through stories and letter sounds, and, along with Games classes, supports the growing social awareness of each child.
Second graders at ease with their school routine and with their teachers, and they are eager to learn. Animal fables and stories and legends of saints and saintly people constitute the story material of the Waldorf Second Grade. The legends exemplify all that is highest and most noble in human striving, while the fables amusingly recount the less desirable qualities of the human being, and these two perspectives mirror the polarities that the second grade child experiences. The students are continuing on their path of joyfully learning to read and further developing their skills in the four arithmetic processes of addition, subtraction, multiplication, and division.
After learning to play wooden pentatonic flutes in first grade, the second graders further develop their music skills and in handwork class crochet beautiful cases for their flutes.
Foreign language classes engage all the senses as the children taste the foods, sing the songs, recite the poetry, and play the active games of each culture.
In form drawing, students continue to master forms and shapes, and through drawing mirrored forms, clearly distinguish left and right, up and down, strengthening sensory integration.
Third graders are experiencing an awakening self-awareness, which is reflected in the story curriculum of the year, the ancient Hebrew stories in the Old Testament. These stories are filled with archetypal descriptions of human beings facing good and evil, being “cast out of paradise,” and taking hold of the world and making it fruitful and abundant. The practical activities inThird Grade, which include gardening and farming, house building, and measurement. In math, regrouping in addition and subtraction is mastered, and long multiplication and division are introduced. Telling time and measurement are important main lesson blocks. The students become more masterful writers, and work with grammar and punctuation.
Adding to the knitting stitch learned in first grade, third graders learn to perl and create hand puppets.
Mirrored forms in four quadrants are the challenging subject in form drawing.
Singing and flute playing continue, with the introduction of call and response songs and rounds.
All students participate in strings class, learning violin, viola, cello, or bass.
Water color painting takes on more form, painting various motifs from the curriculum.
The fourth grade students wish to further explore their world, and geography is an important subject introduced at this point, as it builds a sense of place and belonging. Local geography is introduced first, and the students become familiar with the topography and history of their school neighborhood, then the city, island, and state of Hawai‛i. The students draw their own maps and model relief maps with paper mache, beeswax, or clay. Fractions are introduced in math. Grammar studies continue, with further work with parts of speech and a focus on tenses. Animals and their relationship to human beings are the focus of science for the year. The story curriculum includes legends and stories of Old Hawai‛i, and Norse Mythology, stories of courage, strength, and cunning, which support the developmental needs of the fourth grade student.
Complex form drawing includes woven knots and braids from the Old Norse tradition and forms based on fractions.
After working mostly orally in the first three grades, fourth graders are introduced to writing and reading in foreign language classes.
Art classes focus on main lesson subjects, and the students paint and model a variety of animals from their zoology lessons.
Building on rounds in third grade, fourth graders begin part singing, and become more proficient on their stringed instruments.
In movement classes, students continue to play group games and work with fundamental skills to build a foundation for games and sports. Circus arts are part of the curriculum, including juggling, stilt-walking, and unicycle riding.
The Fifth Grade is a time when many children seem to strike a healthy balance between early childhood and approaching adolescence. It is appropriate, therefore, that the Waldorf curriculum for Grade Five seems aglow with balance and harmony. Fifth grade is a year of wonderful balance, where the children are finding themselves as individuals, yet still engaged in the heart of childhood. The "heart" of the curriculum, therefore, is the study of the balanced and harmonious world of the ancient Greeks, culminating in the students' participation in a Greek pentathlon, where they travel to Maui, and with other Hawai‘i Waldorf schools, re-enact the ancient games of foot races, long jump, javelin and discus throwing, and wrestling.
The beginning of the study of World History.
Natural Science is approached this year through Botany, with an emphasis on the plant kingdom's relationships to the human being.
In Mathematics, the class moves from fractions to decimal fractions. Reciprocals, averages, and the metric system are introduced.
In Form Drawing, the children work with motifs from the ancient cultures they are studying, and also work with more advanced freehand geometrical construction. | <urn:uuid:68de3568-b91f-4ba3-adbd-17b0fe2e985d> | CC-MAIN-2015-35 | http://honoluluwaldorf.org/lower-school.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00284-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.953742 | 2,158 | 3.375 | 3 |
An officer from a central statistical office gathers data from a village community
One of FAO's major roles is to collect, process, analyze and redistribute statistics. One of its major tasks, therefore, is the development of national and regional capacity to collect and report on statistics (through direct advice as requested) as well as to promote improved approaches and techniques for data collection. However, a major problem in many cases is that statistical systems established at high costs deteriorate with time. Degradation occurs, inter alia, through piecemeal modifications to the system, adding features, functionality and requirements, recruiting new staff and new computer systems, extending the geographical coverage, changing administrative supervision, etc. Changes in the fishery system itself aggravate this "evolutionary" problem.
The production of an annual statistical report generally falls under the responsibility of the government office in charge of fisheries (e.g. a Fisheries Department or Ministry). Often, a national fishery research institute may be deeply involved in collecting statistics, particularly if it belongs to the ministry responsible for fisheries. The institute will also use the statistics for stock assessments, elaboration of scientific advice and analyses of management performance, ensuring some sort of quality control. Other fisheries-related data (e.g. data on demographics, infrastructure, imports-exports, prices, encroachments) may be collected by other departments or ministries such as the Coast Guards (Navy), the meteorological office, the Treasury or the private sector (e.g. development banks), creating a need for coordination or integration of data systems.
Need for basic fishery statistics
Collection of basic data on fishers, catches, fishing effort, prices, values and other related information such as size at capture and length frequencies, is fundamental for most activities related to policy, planning and management of fisheries and aquaculture.
Occasional censuses (frame surveys) and sample-based fishery surveys conducted on a regular basis ought to be viewed not as an end in themselves but as an important source of fishery information for a wide range of activities such as:
The quality of the data collected impacts directly on the quality of the analyses made on them including the indicators, stock assessments and forecasts elaborated using them. In addition, the quality of the national data collection system will influence the ultimate quality of the data collected and compiled in other statistical systems, at regional level (i.e. in regional fishery commissions) or global level (e.g. at FAO).
Human and financial resources
Availability of sufficient and adequate human resources is often one of the major constraints in the implementation of medium and large-scale fishery surveys operated on a regular basis, particularly in cases of fishery administrations with limited budget allocated for data collection programmes. The backbone of a data collection programme is the team of field data recorders and supervisors. Well-trained and motivated data recorders is the principal concern of fishery statistical units since it is through them that primary information flows from fishing sites and markets up to the offices responsible for data processing and analysis. Consequently the quality and utility of the produced statistics is a direct function of the effectiveness and timeliness of field operations.
Mobility of data recorders - the ability to visit as many locations as possible during an allocated time - also affects the quantity of collected data as well as their representativeness. Lack of transportation means reduced statistical coverage and increases the risk for biased data, since data collection is then always conducted at the same few locations. As well, mobility is dependent on supervisory functions, the lack of which would leave data recorders on their own and without supervision and guidance.
Mobilization of human resources not necessarily regular staff of fishery administrations is often a good approach for obtaining information that does not require highly skilled personnel. For instance, data on the level of activity of fishing units (a basic parameter in estimating fishing effort), is sometimes obtained directly from the fishers themselves; students or scouts living in fishing sites can also act as data recorders thus significantly increasing the sample size at a reasonably low cost.
When established, statistical systems should be optimized in order to efficiently use the scarce resources available. Optimization is a constant concern because, without special attention, performance of statistical systems tends to degrade with time.
Experience shows that, in general, design and implementation of sample-based fishery surveys make little or no use of statistical indicators concerning sampling requirements that would guarantee an acceptable level of reliability for the estimated population parameters. On several occasions, statistical developers tended to "over-sample", despite the fact that a priori guidance on sample size requirements was feasible in most cases. Lack of well-defined and cost-effective sampling schemes tends to increase the size and complexity of field operations without the visible benefits in accuracy which, in turn, directly impacts on the logistics aspects of data collection and data management procedures.
Excessive stratification of the target population without due consideration to its impact on sampling requirements can be another limiting factor. Refined stratification certainly improves the homogeneity aspects of a population but has serious impact on sampling effort and survey cost. This factor is at times overlooked by statistical developers who continue to apply old sampling schemes proportionally to the size of newly created stratified populations, maintaining the same total number of samples collected over the reference period. According to basic sampling theory, this approach is not appropriate and safe sample sizes ought to be reviewed and adjusted after stratification.
Lack of sufficient and appropriate data processing tools and methods are often a negative factor in the operations of fishery statistical programmes. With the proliferation of microcomputers and increased computer literacy among data producers and users, computer systems (of varying sophistication and power) have, in fact, become an inseparable component of fishery statistical systems but, with few exceptions, they tend to be fragmented, inflexible and heavily centralized. Lack of flexibility and robustness implies frequent interventions to the software thus increasing the chances of undetected programming faults, while the lack of a modular system structure creates processing bottlenecks and deprives decentralized offices from the benefit of locally processing and analyzing their own data.
One of the key concerns in data collection is the sustainability of the adopted system, in terms of human, financial and computational resources as well as the ability to adapt to changing needs. This implies first of all that the data produced are effectively used, both for planning and management, creating a demand to "stimulate" their collection and elaboration and guarantee their long-term relevance. Sustainability also implies that an adequate compromise is found between precision and costs; theory and pragmatism; sophistication and simplicity; performance and resilience. The system should focus on the most important data. Data collection needs to be as "painless" as possible. Although in most fishery surveys the estimation process is fairly simple from the mathematical standpoint, there are other statistical and non-statistical aspects requiring well-defined, robust, modular and flexible computer systems. Samples collected from the field would have to be classified and stored. Estimation of population parameters from the collected data ought to be as automated as possible to avoid lengthy, risky and routinely performed manual computations. Finally, automatic preparation of statistical reports, indicators and diagnostics is essential in identifying problem areas and deciding as to the type of corrective action required.
Standardized computer tools
Most data collection systems have many common characteristics, irrespective of their environment and individual methodological and operational aspects. Starting from this concept FAO's Fishery Information, Data and Statistics Unit has developed a family of standardized statistical approaches which have been compiled into software (Artfish) aimed at facilitating the design and implementation of shore-based fishery surveys on fish production and values.
Despite common needs and problems, most countries have a statistical system with a long history including an administrative structure, trained staff, fixed formal requirements, where restructuring requires a customized approach taking this history into account. This approach, as applied by FAO, is comprehensive, demand-oriented and institution-centred. It addresses both the needs of:
Its implementation requires:
It also requires a range of expertise including statistical systems design, fisheries assessment and management, fishery statistics, statistical analysis and economics, use of and training on statistical packages and data presentation, computer programming and software development, information technology and connectivity, regional legal instruments and requirements. After the rehabilitation, close interaction with the staff must be maintained to provide assistance, including through remote online support.
The outputs of the process usually include: a computerised fishery census (and a national fishing register), a computerised methodology to support the management of fishing licences and related authorizations and certificates, fleet monitoring tools, a computerized statistical year-book, routines for the production of periodic electronic or paper reports, a catch and effort assessment survey as well as capacity building components.
FAO has already developed a set of computer applications with an open architecture enabling fast, stable and error-free generation of national standalone application (derived systems) incorporated into wider systems, or enabling the incorporation of other or new applications.
Training in statistics at all levels (data producers and users) and on computers (data operators and analysts) is of primary importance for adequate monitoring, co-ordination, corrective action and adaptation, as well as evaluation, in field and office operations. Training in survey design provided by FAO during the last few years has been conducted mainly through the Artplan module of the Artfish software. Artplan operates with empirical parameters and makes maximum use of existing knowledge regarding fishing operations and patterns. Its functions include:
Storage and processing
Collected data need to be safely stored and efficiently processed to generate the required estimates and reports. The Artbasic module of Artfish can be used for the storage and processing of basic data on catches and values and fishing effort. It operates on standard classifications, frame survey data and samples on catches, fishing effort, prices and values. Results consist of monthly estimates of fish production and values by species within the logical context of a calendar month (or a sub-period), a stratum and a specific boat/gear type. All Artbasic estimates are presented with the associated statistical diagnostics (such as variability explained in both space and time, expected accuracy level). Artbasic is usually operated on a decentralized basis thus offering research and reporting services nearer to the data sources. Artfish can also be used for reporting through its Artser module in which estimates from Artbasic are automatically gathered and formatted under an integrated database structure that allows for flexible and user-friendly data screening and extraction, data grouping, reporting and plotting.
To help meet these data needs, FAO has been assisting countries in upgrading their data collection, processing and reporting capabilities. Technical assistance at national and regional level is a significant component of the work programme of FAO's technical units responsible for fishery statistical development and involves both normative and field programme activities. Outputs of the normative activities include technical documents on statistical methodology and guidelines for data collection, while field programme activities involve project formulation and implementation, technical backstopping and organization of training courses and workshops. | <urn:uuid:1017a37e-ec54-45df-afc3-be1eb61b1e61> | CC-MAIN-2015-35 | http://www.fao.org/fishery/topic/13410/en | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315227.83/warc/CC-MAIN-20150827031515-00207-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.92977 | 2,243 | 2.9375 | 3 |
Gabriele D'Annunzio (Gaetano Rapagnetta)
Gabriele d'Annunzio (12 March 1863 – 1 March 1938) was an Italian poet, writer, novelist, dramatist, daredevil, and dandy who went on to play a controversial role in politics as an agitator and mentor of Benito Mussolini, and as a figurehead of the Italian Fascist movement.
Gabriele d'Annunzio was of Dalmatian extraction. He was born Gaetano Rapagnetta on 12 March 1863 in Pescara (Abruzzi). His father's original name was Francesco Paolo Rapagnetta. At the age of 13 Francesco had been adopted by his uncle, Antonio D'Annunzio, and legally added 'D'Annunzio' to his own name. He inherited his uncle's wealth, became a landowner and dealer in wine and agricultural products, and later became mayor of the town. In 1858 Francesco married Luisa De Benedictis, and they had three daughters and two sons. At the time of Gaetano's birth, his father was at sea on a ship named Irene.
As a child, Gaetano's family nicknamed him "Gabriele" after the archangel whom they believed he resembled, and he adopted it. Later on, he also legally added 'D'Annunzio' to his own name.
Besides his cherubic looks, Gabriele's precocious talent was recognized early in life. He was sent to school at the Liceo Cicognini in Prato, Tuscany, one of the best schools in Italy at that time. He published his first poetry while still at school, at the age of sixteen, with a small volume of verses called Primo Vere (1879), inspired by Giosuè Carducci's Odi barbare. In it, side by side with some almost brutal imitations of Lorenzo Stecchetti, the then-fashionable poet of Postuma, were some translations from the Latin, distinguished by such agile grace that Giuseppe Chiarini, on reading them, brought the unknown youth before the public in an enthusiastic article.
In 1881 he entered the University of Rome La Sapienza, where he became a member of various literary groups, including Cronaca Bizantina and wrote articles and criticism for local newspapers, particularly Fanfulla della Domenica, Capitan Francassa, and Cronaca Bizantina. Here he published Canto novo (1882), Terra vergine (1882), L'intermezzo di rime (1883), Il libro delle vergini (1884) and the greater part of the short stories that were afterwards collected under the general title of San Pantaleone (1886). Canto novo contains poems full of pulsating youth and the promise of power, some descriptive of the sea and some of the Abruzzi landscape, commented on and completed in prose by Terra vergine, the latter a collection of short stories dealing in radiant language with the peasant life of the author's native province. Intermezzo di rime is the beginning of d'Annunzio's second and characteristic manner.
Social and Financial Affairs
When D'Annunzio's father refused to give his blessing to his son's intention to marry his first love, Giselda Zucconi, D'Annunzio broke with him. It is also generally agreed that in Il trionfo della morte (The Triumph of Death) D'Annunzio portrayed his father as the incurable womanizer he was in real life, but the son soon followed in his father's footsteps. In 1883 D'Annunzio married Maria Hardouin di Gallese, a duke's daughter, and they had three sons, but at the same time he carried on a string of infidelities. Notably, in 1886 he began an affair with Barbara Leoni which lasted a few years; then came a long liaison with the Countess Gravina Auguissola. The marriage ended in 1891.
During those years, D'Annunzio produced much hack work in order to support his expensive lifestyle and his titled wife. He joined the staff of the Tribuna, working under the pseudonym "Duca Minimo". To this period belongs his Il libro d'Isotta (1886), a love poem. In it are found most of the germs of his future work, just as in Intermezzo melico and in certain ballads and sonnets we find descriptions and emotions which later went to form the aesthetic contents of his first novel, Il piacere (1889), translated into English as The Child of Pleasure.
In 1891 came L'innocente (The Intruder), admirably translated into French by Georges Herelle; the translation brought its author to the notice of foreign critics. In 1979, it was the basis for a film directed by Luchino Visconti and starring the Istrian-born soft-porn star Laura Antonelli.
As soon as his marriage ended, D'Annunzio moved to Naples in 1891 with his painter friend Francesco Paolo, and went to work on the local newspaper, Il corriere di Napoli. In Naples he met princess Maria Anguissola, princess Gravina who abandoned her husband to live with the poet, and gave birth to his daughter. During this period, he wrote Elegie romane (1892) and Giovanni Episcopo (1892). His poetic work of this period is represented by Il Poema Paradisiaco (1893), the Odi navali (1893).
Again constrained by economic difficulties, he left Naples with Maria Gravina and their daughter in 1893 and moved to Abruzzo as guests of Michetti. A second child with the princess was born. The following year, after a journey to the Aegean islands, Gabriele began an affair with the famed actress Eleonora Duse which became a cause célèbre. That same year he wrote Il trionfo della morte (The Triumph of Death, 1894) with his already mentioned ironic portrayal of his father.
The writings that followed included Le vergini delle rocce (1896) and Il sogno di un mattino di primavera (1897). In 1898, he wrote Sogno di un pomeriggio d'autunno,and La città morta (1898), written for Sarah Bernhardt (?). In 1988, a presentation of this play starred the famed Istrian-born Alida Valli. D'Annunzio then wrote several plays for Duse: La gioconda (1899), and Francesca da Rimini (1901). Also in 1898 was published the serialized version of L'Innocente in Il corriere di Napoli.
La gioconda, was a fiasco. Nevertheless, Leonardo da Vinci's Mona Lisa - or La Gioconda as it was called in Italy - remained D'Annunzio's obsession. He had already composed a poem on the mysteriously smiling woman om 1889, and republished its shortened version in Il Giornale d'Iltalia after the painting was stolen in 1911. "Ne la bocca era il sorriso / fulgidissimo e crudele / che il divino Leonardo / perseguì / ne le sue tele." Later D'Annunzio claimed that he had seen the painting before it was smuggled to Italy and wrote a treatment for a film, 'The Man who stole the Gioconda'.
His Città Morta (1898), written for Sarah Bernhardt, which is certainly among the most daring and original of modern tragedies, and the only one which by its unity, persistent purpose, and sense of fate seems to continue in a measure the traditions of the Greek theatre.
Politics and Exile
In 1897 d'Annunzio was elected to the Chamber of Deputies for a three-year term, aligning himself in the beginning with the extreme right but moving then to the left. IN 1898 he ended his relationship with Maria Gravina, attempte dto write civic poetry, and the succeeding year he wrote La gloria (1899), an attempt at contemporary political tragedy which met with no success. That same year, D'Annunzio moved to Settignano (Firenze) and lived in the 16th century Tuscan villa of Gamberana called "La Capponcina”. He was defeated in the elections of 1900, but continued living over his income.
The end of the tempestuous relationship between D'Annunzio and Duse finally came in 1910 when another lover entered his life, the Marchioness Alessandra di Rudini-Carolotti. That same year, he also had to flee his creditors again. He sold "La Capponcina" and moved to near Cap Ferret in the Bordeaux region of France, where he found another lover, Romaine Brooks, a rich American painter. She found him a villa for him at Arcachon and, it it said, she also covered a great part of his expenses. He lived a life of luxury there, frequented the best salons, surrounded by admirers and lovers, and beginning a new career writing in French..
His most famous work of the period is Le martyre de Saint Sébastien (The Martyrdom of Saint Sebastian, 1911), a play that he wrote in verse for Ida Rubinstein (1885-1960), a female dancer with whom he likewise had an affair. In the play, she played the leading male role of St. Sebastian. The French composer Claude Debussy set incidental music to the play.
In its premiere, the writer Marcel Proust considered Ida's legs to be the most interesting thing about the event. The work was not successful as a play, but it it still being performed, however, because of the celebrated music. It has also been recorded in adapted versions several times, notably by Pierre Monteux (in French), Leonard Bernstein (sung in French, acted in English), and Michael Tilson Thomas (in French).
Return to Italy
When World War I broke out in Europe in August 1914, D'Annunzia was still in France. With his marked talent for self-publicity, he campaigned for Italy's entry into the war on the side of the Entente Powers. Following almost a year of an official policy of neutrality, Italy finally entered the war on 23 May 1915. By this time, D'Annunzio had returned home and promptly enlisted with the cavalry before commanding a torpedo boat.
A man of remarkable energy and continuous enthusiasm (generally self-directed), D'Annunzio achieved further celebrity as a fighter pilot, gaining a reputation as a war hero when he accidentally lost an eye during a bad landing in 1916. In February 1918 he took part in a daring, if militarily irrelevant, raid on the harbour of Bakar (known in Italy as La beffa di Buccari, lit. the Bakar Mockery), helping raise the spirits of the Italian public, still battered by the Caporetto disaster. On August 9, 1918, as commander of the 87th fighter squadron "La Serenissima", he organized one of the great feats of the war, leading nine planes in a 700 mile round trip to drop propaganda leaflets on Vienna. The leaftlets were penned by himself.
Following the armistice D'Annunzio resumed his aggressive pre-war nationalist stance. He charged that the Italian government (led by Vittorio Orlando) had not done enough to achieve Italy's just deserts at the Paris Peace Conference where Italy claimed the port of Fiume on the grounds of self-determination. Little aroused the indignation of so many Italians as much as the question of Fiume. The United States, Britain and France argued that Fiume be included in Yugoslavia and occupied the port. In January 1919, D'Annunzio suggested that the much-disputed port of Fiume should be simply confiscated by the Italians, along with Dalmatia.
In 1897 D'Annunzio was elected to parliament for a three-year term, aligning himself the beginning with extreme right but moving then left. established reputation not only arts world also as political agitator. Consistent his championing of Italian nationalism; he particularly vocal insistence that Italy's "lost" territory on Adriatic be reclaimed.On September 11, 1919, D'Annunzio wrote the following letter to Mussolini:
The next day, D'Annunzio marched from Rome to Fiume at the head of a thousand black shirted legionaries (Italian mutineers). The Allied troops withdrew and D’Annunzio, who announced his intention of remaining in the city until it was annexed by Italy, assumed control of the port city as the ‘Commandante’.
D'Annunzio ignored the Treaty of Rapallo and in his anger declared war on Italy itself. He then coauthored a constitution, the Charter of Carnaro, with national syndicalist Alceste de Ambris, the leader of a group of Italian seamen who had mutinied and then given their vessel to the service of D'Annunzio. De Ambris provided the legal and political framework, to which d'Annunzio added his skills as a poet. The constitution established a corporatist state, with nine corporations to represent the different sectors of the economy (workers, employers, professionals), and a tenth (d'Annunzio's invention) to represent the "superior" human beings (heroes, poets, prophets, supermen). The Cartarta also declared that music was the fundamental principle of the state.
While wildly popular with the general populace at home it nevertheless proved a heavy embarrassment to the Italian government. In the face of official opposition D'Annunzio nevertheless managed to hold on to Fiume, only to finally surrender the city in December 1920 after a bombardment of his headquarters by the Italian navy. He thus was forcibly ejected in January 1921.
After the Fiume incident, d'Annunzio retired to his home on Lake Garda and spent his latter years writing and campaigning. Although d'Annunzio had a strong influence on the ideology of Benito Mussolini, he never became directly involved in Fascist government politics in Italy.
Nonetheless, D'Annunzio is often seen as a precursor of the ideals and techniques of Italian Fascism. His own explicit political ideals emerged in Fiume, and Mussolini imitated and learned from d'Annunzio''s method of government there - the economics of the corporate state, stage tricks, large emotive nationalistic public rituals, the Roman salute, rhetorical questions to the crowd, blackshirted followers, the Arditi, with their disciplined, bestial responses and strong-arm repression of dissent. D'Annunzio advocated an expansionist Italian foreign policy and applauded the invasion of Ethiopia. He is also attributed to having originated the practice of forcibly dosing opponents with large amounts of castor oil to humiliate, disable or kill them. This practice became a common tool of Mussolini's blackshirts.
In 1924, Mussolini anointed D'Annunzio Prince of Monte Nevoso and in 1937 he was made a president of the Italian Royal Academy. On March 1, 1938, D'Annunzio passed away. Official and/or sources that are sympathetic to D'Annunzio state simply that he died of a cerebral hemorrhage or stroke at his home in Gardone Riviera. Other sources, including from purported eye-witnesses, indicate that he did not die of natural causes but was murdered. There are two versions of that claim, one is that Mussolini had him killed for having voiced opposition to Mussolini's alliance with Hitler. In any case, he was given a state funeral by Mussolini and interred at Il Vittoriale degli Italiani.
D'Annunzio was a prolific writer. At the height of his success, he was celebrated for the originality, power and decadence of his writing. Although his work had immense impact across Europe, and influenced generations of Italian writers, his fin de siècle works are now little known, and his literary reputation has always been clouded by his Fascist associations. Indeed, even before his Fascist period, he had his strong detractors. An 1898 New York Times review of his novel The Intruder referred to him as "evil", "entirely selfish and corrupt". Three weeks into its December 1901 run at the Teatro Constanzi in Rome, his tragedy Francesca da Rimini has banned by the censor on grounds of morality.
The 1911 Encyclopædia Britannica wrote of him:
In Italy some of his poetic works remain popular, most notably his poem "La pioggia nel pineto" (The Rain in the Pinewood), which exemplifies his linguistic virtuosity as well as the sensuosness of his poetry.
References and further reading: | <urn:uuid:d7c12930-21dc-430a-83eb-89d823324868> | CC-MAIN-2015-35 | http://www.istrianet.org/istria/history/1800-present/dannunzio/index.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645348533.67/warc/CC-MAIN-20150827031548-00334-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.974242 | 3,638 | 2.75 | 3 |
The cranberry, Vaccinium macrocarpa, is one of only three cultivated fruits native to North America. The story of this perennial vine began as the glaciers retreated about fifteen thousand years ago. Through the centuries the cranberry has provided critical sustenance for humans, on land, at sea, and in times of war. It was even offered in a diplomatic gesture to King Charles II in 1677. Today, it is a powerful tool in the fight against various forms of cancer. Author Susan Playfair interviewed scientists studying the health benefits of cranberries, growers in several states, geneticists mapping the cranberry genome, a plant biologist who provided her with the first regression analysis of cranberry flowering times, and a migrant beekeeper to weave together the history and culture of the cranberry and assess the possible effects of climate change on this North American resource. America’s Founding Fruit will be available for purchase and signing after this October 20 lecture at the Arnold Arboretum beginning at 7 pm in the Hunnewell Building. Fee $5 Arboretum members, $10 nonmembers. Register online at https://my.arboretum.harvard.edu/Info.aspx?EventID=1.
Esther Klahne guides you in capturing autumn fruits’ shapes and highlights as well as achieving a balanced composition. Working on vellum, choose your subject from an assortment of acorn or chestnut specimens, either your own or those provided. This Class, sponsored by the Friends of Wellesley College Botanic Gardens, will take place on four successive Saturdays, November 1 – 22, from 9:30 – 12:30, on campus. The cost is $200 for Friends, $250 for non-members. Register by calling 781-283-3094 or email firstname.lastname@example.org. The tuition includes the cost of the provided vellum.
Tower Hill Botanic Garden, 11 French Drive in Boylston, presents An Autumn Fable: The Art of Don Carney & John Ross of PATCH NYC, which will be on view October 4 – October 26.
The internationally known artists and designers will debut their latest collection, inspired by Tower Hill Botanic Garden and installed throughout the galleries. Their work has been featured in The New York Times and Martha Stewart Living. PATCH NYC has also produced special collections for Barnes & Noble, West Elm, and Target.
For more information visit www.towerhillbg.org.
Join the Harvard Museum of Natural History at 2 pm on Saturday, October 25 for a screening of The Lost Bird Project, a film that honors five extinct North American birds: the Labrador Duck, the Great Auk, the Heath Hen, the Carolina Parakeet, and the Passenger Pigeon. Directed by Deborah Dickson, the film follows sculptor Todd McGrain as he sets out to create large bronze memorials to these lost birds and to install them in the locations where they were last seen in the wild. A discussion with McGrain and Andy Stern, the executive producer of the film, will follow the screening. A book about the project will also be available for purchase at the museum store. Free with museum admission.
Haller Hall, enter at 26 Oxford Street. Free event parking available at the 52 Oxford Street Garage.
Making your own fruit preserves requires a delicate balance of science, seasonal produce, and a touch of creative flair. Learn about what differentiates preserves, jellies, jams, and butters in this October 19 Cambridge Center for Adult Education Class, held from 1 – 4 at 56 Brattle Street in Cambridge. Snack on homemade shortbread jam cookies before rolling up your sleeves to process varietal apples such as Braeburn and Gala to make delicious rosemary-apple jelly and classic apple butter, then leave with a jar of your homemade goods to enjoy at home! $80 tuition. Image from www.bbcgoodfood.com. Register online at http://www.ccae.org/catalog/detail.php?id=573158.
“Make the cut” with Arboretum Head Arborist John DelRosso on Sunday, October 19 from 9 – 12 at Peters Hill Gate at the Arnold Arboretum in this practical workshop. John will quickly review basic chainsaw operation and safety. He will then demonstrate sawing techniques and guide you in felling and cutting using practice logs in the Arboretum’s wood recycling area. Bring your saw, if you have one. If you don’t own a saw but intend to purchase one, wait until you’ve attended the class to learn which styles and features are best for your size and needs. Participants must bring safety goggles, gloves, and ear protection. Dress for the outdoors and bring a snack and beverage. Registrants must sign an Assumption of Risk and Release to participate. Fee $45 Arboretum member, $55 nonmember. Register at https://my.arboretum.harvard.edu/Info.aspx?EventID=1.
Highlighting the intricate beauty of plants and nature, Josh Falk’s Small Worlds: Through a Small Glass Window is an ongoing macro-photo series shot with the intent of not only showcasing the subtleties of what we often take for granted in
nature, but to also create new abstract landscapes through manipulation of depth of focus and segmentation of the larger picture. As if the photos themselves and their glass-like finish are windows into a brief moment of time, Falk invites the viewer to look out, or perhaps in, to a new and reimagined world of nature and its complex beauty. The Arnold Arboretum will host the opening reception for this show in the Hunnewell Building on Saturday, October 25, from 1 – 3, and the exhibit will remain on view through February 3, 2015. For more information visit www.arboretum.harvard.edu.
Celebrate autumn with popular family activities, continuous live entertainment, 113 food and craft vendors, educational workshops, a farmers market, silent auction and numerous tag sales, a spectacular plant and bulb sale, Hall of Pumpkins and a Haunted House. The Berkshire Botanical Gardens holds its annual Harvest Fair on Saturday and Sunday, October 11 and 12, from 10 – 5, at the intersection of Routes 102 and 183 in Stockbridge. Adults $5, children under 12 free. For more information visit www.berkshirebotanical.org.
Pumpkin patches provide the base for this Boston Center for Adult Education class offering dedicated to the gorgeous gourds known for their sweet, autumnal aromatics. We’ll bypass the traditional pumpkin custard pie and head into unchartered pumpkin territory as we explore breads, delicate pastries, and cupcakes based on this bright orange seasonal vegetable-masked-as-fruit. Sweetening it up with cinnamon and making it more savory with salts, you’ll have a cornucopia of pumpkin feasting in this hands-on class. This course, taught by Dustin Rennells on Monday, October 20, from 6 – 9 at the BCAE building, 122 Arlington Street in Boston, will include alcohol and nuts in various recipes, please plan accordingly. Tuition is $40 for general public, $34 for BCAE members. Image from www.pinchmysalt.com. Register online at http://www.bcae.org/index.cfm?method=ClassInfo.ClassInformation&int_class_id=11601&int_category_id=2&int_sub_category_id=5&int_catalog_id=0.
The Boston Poultry Exposition is America’s first and oldest poultry show, having begun in 1849. It is held on the first Saturday and Sunday in November each year at the Four Winds Farm, 31 Ennis Road in North Oxford, Massachusetts. The show will be open to the public on Saturday, November 1 from 12 noon to 6 pm, and on Sunday from 9 – 11. Entry forms are available for download at www.bostonpoultryexpo.com (entries postmarked after October 17 will be charged a double entry fee.)
Some of the catagories are Champion Large Fowl, Champion Bantam, Champion Duck, Champion Goose, Champion Turkey, Champion Guinea, and Champion Pigeon. There is a junior show and a raffle as well. For more information email Stephen Blash at email@example.com, or call 508-987-8029. Photo by Charlie Sutherland from www.poultryshowcentral.com.
Spring and summer flowers produce a bounty of wild fall fruits that we will discover in this hands-on workshop on fruit form, function, and diversity, taught by Judith Sumner at Garden in the Woods, Framingham, on Sunday, October 19, from 10 – 4. We will study the significance of fruits in the flowering plant life cycle and then examine and dissect diverse fruit types, from capsules and follicles to pomes and drupes. You will learn fruit terminology and practice constructing and using dichotomous keys to sort out the remarkable variety of fruits produced by flowering plants. We will look at seed-dispersal mechanisms, the connection between fruit and seed forms, and strategies for seed dispersal. You are encouraged to bring fruit specimens from your own gardens for dissection and identification. Pack a bag lunch. $80 for NEWFS members, $96 for nonmembers. Image of serviceberry from Christian Science Monitor. Register at http://www.newfs.org/learn/our-programs/wild-fruits.
Nuts and seeds form the basic foundation of many Middle Eastern recipes – both savory and sweet. On Monday, October 20, from 6:30 – 9, Sofra Pastry Chef Emily Weber will illustrate how to use nuts more fully in your pastry dishes by experimenting with nut flour. Learn how to make Persian Love Cake: a moist, gluten-free Almond Rose Cake; spice up your house parties with Sweet and Smoky Pecans, and wow your dinner guests with a Sofra favorite – Black Walnut Baklava. Emily will finish off the evening with Fig and Almond Biscuits. Registration: $85. Picture from www.images.meredith.com. Class takes place at Sofra Bakery, 1 Belmont Street in Cambridge, and you may register on line through Eventbrite at http://www.eventbrite.com/e/are-you-nuts-tickets-12225684351.
Save your root vegetables throughout the winter with homemade sauerkraut and pickles! Learn how to pickle anything from beets to turnips and turn your cabbage into delicious sauerkraut. Pre-registration required for this free Boston Natural Areas Network class, to be held Saturday, October 18 from 9:30 – 11:30 at the Future Chefs Office and Teaching Kitchen, 560 Albany Street in Boston. Contact BNAN at 617-542-7696 or email firstname.lastname@example.org. Image from www.garlicfarm.ca.
Many North American native plants have been selected and cleverly repackaged for use in ornamental gardens. Often given charming names like ‘Pow-Wow Wild Berry,’ ‘Chocolate,’ or ‘Running Tapestry,’ they are barely recognizable as the cousins of the plants so familiar in the nearby wild landscape. These glamorous cousins can be lovely indeed, and in some cases are particularly well-suited for use in modern gardens. But sometimes the native plant, neither repackaged nor altered, is the perfect choice. Join Joann Vieira on Saturday, October 11, from 10 – 12 at Tower Hill Botanic Garden, 11 French Drive, Boylston, for a presentation and garden walk to look at North American plants used in formal garden settings and cultivated using sustainable gardening practices. Co-sponsored with the New England Wild Flower Society. $20 for members of sponsoring organizations, $30 for nonmembers. Image of ‘Pow-Wow Wild Berry’ echinacea from White Flower Farm. Register at http://www.newfs.org/learn/our-programs/wild-plants-in-the-not-so-wild-garden.
With the guidance of bestselling cookbook author Cathy Walthers and the stunning photography of Alison Shaw, every home cook can explore the multitude of ways this most healthy of foods can be made into delectable and satisfying meals. From Baked Eggs Over Kale in the morning to kale snacks and appetizers, salads, soups, side dishes and main courses like Pork Braised with Kale and Cider for dinner, Kale, Glorious Kale will be your complete guide to the greatest of green vegetables.
Catherine Walthers is an award-winning journalist and food writer. She has worked for the past 15 years as a private chef and cooking instructor in the Boston area and on Martha’s Vineyard. She is food editor of Martha’s Vineyard Magazine, and the author of Raising the Salad Bar, as well as co-author of Greens, Glorious Greens.
This event takes place at Kickstand Cafe, 594 Massachusetts Avenue in Arlington. Porter Square Books in Cambridge is delighted to partner with Kickstand, cousin to Cafe Zing here in the store. Watch for more PSB at the ‘Stand events in 2015!
Dr. Paul K. Barten, Professor and Honors Program Director, Department of Environmental Conservation at the University of Massachusetts, Amherst will speak on Thursday, October 16, from 7 – 8:30 in the Hunnewell Building of the Arnold Arboretum on the topic of The Origins and Legacy of the Catskill Forest Preserve. The Catskill Forest Preserve was established in 1885 and protected as “wild forest, forever” with an 1894 amendment to New York’s Constitution. This designation represented a major change in public opinion and political will as well as an early success for the fledgling conservation movement. The landscape paintings of Thomas Cole, Frederic Church, and other Hudson River School artists, the stirring fiction of Washington Irving and James Fenimore Cooper, and the writings of George Perkins Marsh and John Burroughs had a dramatic and formative influence on societal values and attitudes. This opened a new era in which the damage to forest ecosystems by tanbark peelers, “cut and run” loggers, and market hunters could no longer be reconciled with the “the greatest good of the greatest number in the long run” and a thriving tourism industry. The presentation will conclude with some thoughts on where we appear to be as a nation on the forest preservation—conservation—utilization spectrum in the 21st century. Fee $5 Arboretum member, $10 nonmember. Thomas Cole painting of Catskill Creek from www.images.fineartamerica.com. Register online at https://my.arboretum.harvard.edu/Info.aspx?EventID=1.
The Arnold Arboretum crabapple collection has long been recognized for its importance to the horticultural and scientific worlds. Because of the Arboretum’s many introductions and broad distribution of both cultivars and previously undiscovered Malus species from wild origin, it has been hailed as the “ ‘Mother Arboretum’ for flowering crabapples” (Fiala 1994). This collection remains popular with Arboretum visitors, especially during spring bloom and fall fruit display. Join the Arboretum on Sunday, October 19 from 1 – 3 in the crabapple collection on Peters Hill to enjoy a fall afternoon amid this historic collection. Activities will include a tour of the collection by our curatorial staff focusing on Arboretum-bred hybrid introductions, and information about pruning techniques and timing. For more information visit www.arboretum.harvard.edu.
Restoring the Beauty and Function of Residential Landscapes is the title of this year’s Ecological Landscaping Alliance Season’s End Summit, to be held Wednesday, November 12, 2014 at the Crane Estate, 290 Argilla Road, Ipswich, Massachusetts.
$85.00 ELA Member – $110 Non-Member, including Lunch and Networking with Colleagues
Space is limited – Register today! – See more at: http://www.ecolandscaping.org/event/11509/#sthash.Gtq3gges.dpuf
Featuring leading landscape experts who will share their expertise and landscape restoration projects that demonstrate:
Reestablishing healthy soil and healthy plant communities
Addressing diminished garden performance
Restoring ecological function and landscape aesthetics
The morning presentations will feature case studies representing the beautiful as well as practical aspects of restoration. The afternoon will include a panel discussion on invasive plant control, a tour of the Crane Estate restoration project, and an inspiring wrap-up presentation.
This educational event will give landscape professionals an opportunity to gather at the end of the season to review and reflect on the season; learn from respected industry leaders; network with other like-minded professionals; and get inspired for the next year – all around the topic of restoration.
Innovative new technologies may enable scientists to manipulate ancient and modern DNA to safeguard ecosystems from invasive organisms, help species recover their genetic diversity, and address issues of climate change. However, as geneticist George Church, Robert Winthrop Professor of Genetics at Harvard Medical School, will discuss, while resurrecting mammoths could help maintain the Arctic permafrost, such developments require thoughtful consideration of complex system interactions and potential unintended consequences. This Harvard Museum of Natural History program will take place Wednesday, October 15, beginning at 6 pm in the Geological Lecture Hall at 24 Oxford Street in Cambridge. Free and open to the public. Free event parking available at the 52 Oxford Street Garage.
Attend the opening reception on Saturday, October 11, from 1 – 3 of a temporary outdoor sculpture exhibition sponsored in part by United South End Settlements and the Boston Arts Commission, in coordination with Boston Parks and Recreation Department. On view through October 24, 2014, the exhibition is set in Franklin Square Park, 1536 Washington Street, Boston, in the South End.
These artworks will serve to engage the public in considering the relationship between art and the environment. | <urn:uuid:c6139b42-25b3-4c36-a877-3c8e4892aa1d> | CC-MAIN-2015-35 | http://www.gardenclubbackbay.org/page/35/?rss=true&action=product_list | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00044-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.917654 | 3,828 | 2.953125 | 3 |
Edward Kasner's parents, Fanny Ritterman and Bernard Kasner, were born in Austria in 1843 and 1841 respectively. They emigrated to the United States in the late 1860s, settling in New York. They had eight children, Edward being the sixth child with older siblings Annie, Marcus, Alexander, Lottie and Adolf. Edward's :-
... affection and loyalty [for his family] over the years formed one of the cornerstones of his character.
He was brought up in New York where he attended Public School No 2 in lower Manhattan (close to his home) :-
Finding elementary school arithmetic an insufficient challenge to his ability, the boy Kasner obtained an algebra text and would often occupy himself during the arithmetic lesson by solving simultaneous quadratics on his slate ...
He completed this stage of his education in 1891 at the age of thirteen. He received the gold medal from the elementary school for the best overall performance as well as numerous other top awards for excellence. Then, in 1891, he entered the College of the City of New York. This school, founded as the Free Academy of the City of New York in 1847, provided both high school and degree level education. Accepting children of immigrants and of the poor solely on academic merit, it offered free education in its buildings at Lexington Avenue and 23rd Street in New York City. Kasner was awarded a B.S. in 1896 having studied a range of subjects including mathematics, astronomy, logic, physics, and political science. He won prizes in all of these subjects and received the gold medal in mathematics.
Having completed his first degree, Kasner went to Columbia University for his graduate studies. He was awarded a Master's degree in 1897 and continued to work towards his doctorate advised by Frank Nelson Cole, who had been appointed to a professorship at Columbia in 1895. At this time it was unusual for an American to study for a doctorate in the United States - Germany was the preferred place for Americans to go for graduate studies. Cole, who had studied for several years with Felix Klein in Leipzig, was able to use this experience in advising Kasner who was awarded his Ph.D. in 1899 for his thesis The Invariant Theory of the Inversion Group: Geometry upon a Quadric Surface. He became one of the first people to be awarded a Ph.D. in mathematics by Columbia University. The thesis was published in the Transactions of the American Mathematical Society in 1900.
Although he had been awarded a doctorate without making the usual visit to Germany, Kasner did not totally break with tradition for he spent the year 1899-1900 in Göttingen attending courses by Felix Klein and David Hilbert. He returned to New York after the year abroad and was appointed as a mathematics tutor at Barnard College, which had been affiliated to Columbia University from its founding in 1889. In 1904 he was invited to address the International Congress of Arts and Sciences meeting at St Louis. This Congress was organised as part of the Louisiana Purchase Exposition (St Louis World's Fair), held to celebrate the 100th anniversary of the Louisiana Purchase. Kasner gave the lecture Present Problems of Geometry and was delighted to have Henri Poincaré, another of the speakers at the Congress, in the audience. He became an Instructor in Mathematics at Barnard College in 1905, and was promoted to adjunct professor in the following year. He held this post until 1910 when he was appointed as a full professor at Columbia University. In 1937 he was named Robert Adrain Professor of Mathematics at Columbia. He held this Chair until he retired in 1949 when he was named Adrain Professor Emeritus.
Jesse Douglas, Kasner's most famous student, undertook research at Columbia between 1916 and 1920. He summarises Kasner's main research contributions as follows :-
Kasner's principal mathematical contributions were in the field of geometry, chiefly differential geometry. His basic mental outlook and equipment, imaginative, intuitive, concrete, visual, fitted him most naturally for work in this branch of mathematics. He observed that many propositions in mathematical physics, in analysis, in algebra, had an essence that was purely geometrical, and could be stated entirely in terms of spatial relations without reference to other concepts peculiar to the special subject, such as force, mass, time, or number. His program was then to distill this geometric essence, and to analyze it from various points of view, particularly with regard to any interesting problems that might be suggested in this way. A rough subdivision of Kasner's scientific career might be made into four periods according to his dominant interest at the time: Differential-Geometric Aspects of Dynamics (1905-1920), Geometric Aspects of the Einstein Theory of Relativity (1920-1927), Polygenic Functions (1927-1940), Horn Angles (1940-1955).
In Douglas relates an interesting account of Saturday afternoon excursions Kasner organised for his research students:-
We ... would meet Kasner at the Fort Lee ferry and take the ride to the Jersey shore, then the trolley car to the top of the Palisades. On the way to the woods we might stop to buy some cookies; these were to consume with the tea which Kasner brewed in a pot, dug up from its hiding place under a log, where it had been left at the end of the previous excursion. It was amusing that Kasner could direct us unerringly to this cache in spite of the absence of any markers that we could notice. Water was obtained by Kasner from a nearby stream, and heated over a fire built from dead branches and leaves which we had gathered. Around this we would then sit through the afternoon, conversing about our teaching at Columbia, about mathematical research, and about random topics of interest. As the sun faded and darkness began to fall, Kasner would arise at the psychological moment signalled by a halt in the conversation, and carefully extinguish the fire with some water kept in reserve - we then knew it was time to start for home.
Kasner is best remembered today for the term 'googol' and for his remarkable book Mathematics and the Imagination (1940) co-authored with James R Newman. We quote Kasner's description of where the term googol came from as given in Mathematics and the Imagination:-
Words of wisdom are spoken by children as least as often by scientists. The name "googol" was invented by a child (Dr Kasner's nine-year-old nephew) who was asked to think up a name for a very big number, namely, 1 with a hundred zeros after it. He was very certain that this number was not infinite, and therefore equally certain that it had to have a name. At the same time that he suggested "googol" he gave a name for a still larger number: "Googolplex." A googolplex is much larger than a googol, but is still finite, as the inventor of the name was quick to point out. It was suggested that a googolplex should be 1, followed by writing zeros until you get tired. This is a description of what would happen if one actually tried to write a googolplex, but different people get tired at different times and it would never do to have Carnera a better mathematician than Dr Einstein, simply because he had more endurance. The googolplex then, is a specific finite number, with so many zeros after the 1 that the number is a googol. A googolplex is much bigger than a googol. You will get some idea of the size of this very large but finite number from the fact that there would not be enough room to write it, if you went to the farthest star, touring all the nebulae and putting down zeros every inch of the way.
The web search engine, perhaps the most well-known web page, called Google is simply a misspelling of googol.
In Mathematics and the Imagination the authors aim:-
... to show by its very diversity something of the character of mathematics, of its bold, untrammeled spirit, of how, as both an art and a science, it has continued to lead the creative faculties beyond even imagination and intuition.
Herbert Turnbull gives an indication of the contents of the book in his review :-
It is packed with information and deals with all the elementary ingredients of mathematics, number, shape, line, series, pattern, infinity, paradox, chance, change, with copious and entertaining visual aids: and the treatment is refreshing. The book is at once a collection of mathematical puzzles and surprises, a pleasantly written history of mathematical thought, and a real attempt to bring before the interested but unprofessional reader the characteristic findings of mathematicians.
I B Cohen, like all the reviewers referenced below, is full of praise for the book and writes:-
... it is the best account of modern mathematics that we have. It is written in a graceful style, combining clarity of exposition with good humour
G W Dunnington realised a few months after publication of the book that it was a big hit:-
This volume has already taken its place among the current best sellers. Apparently it has succeeded in communicating to the layman something of the pleasure experienced by the creative mathematician in difficult problem solving.
E T Bell writes a year after the book was published:-
This is a highly successful popular presentation of certain ideas of modern mathematics. It departs from the usual trite popularizations of mathematics ... we trust that it will have a long and prosperous career. It is something new, of its own kind, in the popularization of mathematics.
Indeed Bell was right and the book continues to be popular more than 70 years after its first publication.
We should note Kasner's talents as a teacher at all levels. He was often invited to talk about mathematics in elementary schools and high schools. He was able to get across deep mathematical ideas with a careful selection of problems which brought out the flavour of mathematical research to children at an early stage in their education. As a university lecturer to undergraduates he excelled, again getting an intuitive understanding across without undue rigour. He often claimed "Rigour is for dull people" but as Douglas points out :-
... he was not in any sense careless in his scientific work or its presentation; he simply believed in such a degree of accuracy as was sufficient for the clear comprehension of the theory or problem at hand. Anything more than this he regarded as affectation, positively harmful in its tendency to block the road to that clear insight into essentials, and view of related problems, which he considered paramount.
For his graduate students, who included Joseph Ritt, Jesse Douglas and John De Cicco, Kasner put on his famous Seminar in Differential Geometry :-
I think it may truly be said that nearly every mathematician who arose in the New York area during the first half of this century - whether his main interest was differential geometry or not - attended this seminar at some time during his course of study, and derived from it illumination and inspiration.
Kasner received many honours. He was vice-president of the American Association for the Advancement of Science in 1906, and vice-president of the American Mathematical Society in 1906. The greatest honour he received was election to the National Academy of Sciences in 1917. The University of Columbia celebrated its bicentenary in 1954 and, as part of these celebrations, they gave Kasner an honorary doctorate on 31 October during :-
... a most impressive ceremony in the Cathedral of St John the Divine, near the campus.
By this time he was already seriously ill, having suffered a stroke in July 1951. Following receiving his honorary degree, his health steadily declined and he died six months later.
Douglas ends his biography of Kasner by pointing out that he:-
... began his mathematical career at the turn of the century, America occupied a minor position in world mathematics. If today this position is patently a leading one, then some significant portion of the credit must be assigned to the work and influence of Edward Kasner.
Article by: J J O'Connor and E F Robertson | <urn:uuid:c33f5da9-2dc0-48e8-8ef4-7c357cacf7d1> | CC-MAIN-2015-35 | http://www-history.mcs.st-andrews.ac.uk/Biographies/Kasner.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00276-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.981634 | 2,522 | 2.953125 | 3 |
1 - A Boy With An Idea
Men who do great things are men
we all like to read about. This
is the story of Christopher Columbus, the man who discovered
America. He lived four hundred years ago. When he was
boy he lived in Genoa. It was a beautiful city in the
northwestern part of the country called Italy. The mountains
behind it; the sea was in front of it, and it was so
place that the people who lived there called it "Genoa
Superb." Christopher Columbus was born in this beautiful
Genoa in the year 1446, at number 27 Ponticello Street.
He was a
bright little fellow with a fresh-looking face, a clear
golden hair. His father's name was Domenico Columbus;
his mother's name was Susanna. His father was a wool-comber.
cleaned and straightened out the snarled-up wool that
from the sheep so as to make it ready to be woven into
Christopher helped his father do this when he grew strong
but he went to school, too, and learned to read and write
draw maps and charts. These charts were maps of the sea,
the sailors where they could steer without running on the
and sand, and how to sail safely from one country to another.
This world was not as big then as it is now--or, should
people did not know it was as big. Most of the lands that
Columbus had studied about in school, and most of the people
had heard about, were in Europe and parts of Asia and Africa.
city of Genoa where Columbus lived was a very busy and
rich city. It was on the Mediterranean Sea, and many of
people who lived there were sailors who went in their ships
voyages to distant lands. They sailed to other places on
Mediterranean Sea, which is a very large body of water,
and to England, to France, to Norway, and even as far away
cold northern island of Iceland. This was thought to be
The time in which Columbus lived was not as nice a time
this in which you live. People were alwaysquarreling and
about one thing or another, and the sailors who belonged
country would try to catch and steal the ships or the things
belonged to the sailors or the storekeepers of another
This is what we call piracy, and a pirate, you know, is
to be a very wicked man.
But when Columbus lived, men did not think it was so very
to be a sort of half-way pirate, although they did know
would be killed if they were caught. So almost every sailor
about half pirate. Every boy who lived near the seashore
the ships and the sailors, felt as though he would like
away to far-off lands and see all the strange sights and
the brave things that the sailors told about. Many of them
said they would like to be pirates and fight with other
and show how strong and brave and plucky they could be.
Columbus was one of these. He was what is called an adventurous
boy. He did not like to stay quietly at home with his father
comb out the tangled wool. He thought it would be much
sail away to sea and be a brave captain or a rich merchant.
When he was about fourteen years old he really did go
There was a captain of a sailing vessel that sometimes
Genoa who had the same last name--Columbus. He was no relation,
but the little Christopher somehow got acquainted with
the wharves of Genoa. Perhaps he had run on errands for
helped him with some of the sea-charts he knew so well
draw. At any rate he sailed away with this Captain Columbus
his cabin boy, and went to the wars with him and had quite
exciting life for a boy.
Sailors are very fond of telling big stories about their
adventures or about far-off lands and countries. Columbus,
listened to many of these sea-stories, and heard many wonderful
things about a very rich land away to the East that folks
If you look in your geographies you will not find any
on the map as Cathay, but you will find China, and that
men in the time of Columbus called Cathay. They told very
stories about this far-off Eastern land. They said its
lived in golden houses, that they were covered with pearls
diamonds, and that everybody there was so rich that money
plentiful as the stones in the street.
This, of course, made the sailors and storekeepers, who
pirate, very anxious to go to Cathay and get some of the
jewels and spices and splendor for themselves. But Cathay
miles and miles away from Italy and Spain and France and
It was away across the deserts and mountains and seas and
and they had to give it up because they could not sail
At last a man whose name was Marco Polo, and who was a
and famous traveler, really did go there, in spite of all
trouble it took. And when he got back his stories were
surprising that men were all the more anxious to find a
sail in their ships to Cathay and see it for themselves.
But of course they could not
sail over the deserts and mountains,
and they were very much troubled because they had to give
idea, until the son of the king of Portugal, named Prince
said he believed that ships could sail around Africa and
to India or "the Indies" as they called that
land, and finally to
Just look at your map again and see what a long, long
would be to sail from Spain and around Africa to India,
Japan. It is such a long sail that, as you know, the Suez
was dug some twenty years ago so that ships could sail
the Mediterranean Sea and out into the Indian Ocean, and
to go away around Africa.
But when Columbus was a boy it
was even worse than now, for no
one really knew how long Africa was, or whether ships really
could sail around it. But Prince Henry said he knew they
and he sent out ships to try. He died before his Portuguese
sailors, Bartholomew Diaz, in 1493, and Vasco de Gama,
at last did sail around it and got as far as "the
So while Prince Henry was trying to see whether ships
around Africa and reach Cathay in that way, the boy Columbus
listening to the stories the sailors told and was wondering
whether some other and easier way to Cathay might not be
When he was at school he had
studied about a certain man named
Pythagoras, who had lived in Greece thousands of years
was born, and who had said that the earth was round "like
or an orange."
As Columbus grew older and made
maps and studied the sea, and read books
and listened to what other people said, he began to believe
man named Pythagoras might be right, and that the earth
though everybody declared it was flat. If it is round ,
to himself, "what is the use of trying to sail around
to get to Cathay? Why not just sail west from Italy or
keep going right around the world until you strike Cathay?
I believe it could be done," said Columbus.
By this time Columbus was a man.
He was thirty years old and was
a great sailor. He had been captain of a number of vessels;
had sailed north and south and east; he knew all about
a ship and
all about the sea. But, though he was so good a sailor,
said that he believed the earth was round, everybody laughed
him and said that he was crazy. "Why, how can the
round?" they cried. "The water would all spill
out if it were,
and the men who live on the other side would all be standing
their heads with their feet waving in the air." And
laughed all the harder.
But Columbus did not think it was anything to laugh at.
believed it so strongly, and felt so sure that he was right,
he set to work to find some king or prince or great lord
him have ships and sailors and money enough to try to find
to Cathay by sailing out into the West and across the Atlantic
Now this Atlantic Ocean, the
western waves of which break upon
our rocks and beaches, was thought in Columbus's day to
dreadful place. People called it the Sea of Darkness, because
they did not know what was on the other side of it, or
dangers lay beyond that distant blue rim where the sky
seem to meet, and which we call the horizon. They thought
ocean stretched to the end of a flat world, straight away
sort of "jumping-off place," and that in this
off place were giants and goblins and dragons and monsters
all sorts of terrible things that would catch the ships
destroy them and the sailors.
So when Columbus said that he wanted to sail away toward
dreadful jumping-off place, the people said that he was
than crazy. They said he was a wicked man and ought to
But they could not frighten Columbus. He kept on trying.
from place to place trying to get the ships and sailors
and was bound to have. As you will see in the next chapter,
tried to get help wherever he thought it could be had.
the people of his own home, the city of Genoa, where he
and played when a boy; he asked the people of the beautiful
that is built in the sea--Venice; he tried the king of
the king of England, the king of France the king and queen
Spain. But for a long time nobody cared to listen to such
and foolish and dangerous plan--to go to Cathay by the
way of the
Sea of Darkness and the Jumping-off place. You would never
there alive, they said.
And so Columbus waited. And his hair grew white while
though he was not yet an old man. He had thought and worked
hoped so much that he began to look like an old man when
forty years old. But still he would never say that perhaps
wrong, after all. He said he knew he was right, and that
he should find the Indies and sail to Cathay. | <urn:uuid:3bd8c974-e100-43f4-bafc-cb1f0b57aa8e> | CC-MAIN-2015-35 | http://www.apples4theteacher.com/holidays/columbus-day/true-story/chapter1.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00101-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.984062 | 2,223 | 3.4375 | 3 |
Maybe Miriam Reuter was in the perfect position to ask Curious City this question:
How has Chicago's coastline changed over the decades?
Take, for example, the beaches where she used to walk her dog in the Rogers Park neighborhood: Were those natural? “I’d be interested to know if it was man-made or originally there,” she told us. “The lake and Chicago are inseparable.”
The short answer to Miriam’s question is that Chicago’s coastline has changed a lot since the city was first incorporated in 1833 — so much so that in many places the city’s founders would likely find today’s coastline unrecognizable. And, it makes sense when you consider what the precious lakefront has been subjected to: it’s alternately been dredged and filled; it's been built up and torn down, only to be built up again; and caught in a conflagration, only to be graced by a World’s Fair that showed off a century of progress. The lakefront is, and has always been, a canvas onto which the city has projected varying and competing images of itself.
“Natural” vs. “Man-made”
Miriam's follow-up question got at how natural the lakefront appears to be, and many of our sources addressed this. Yes, the lakefront may look natural, but the truth is that it's taken a lot of work to get this way. Here's how Blair Kamin, the Chicago Tribune's Pulitzer Prize-winning architecture critic, put it to me: “It’s disguised with trees and shrubs and grass and beaches to make it look like it’s been there from the beginning of time, but in fact, it’s very much a man-made creation.”
It’s still possible to glimpse a few spots of “natural” lakefront — there are parts of Jackson Park, Rainbow Beach and some street-end beaches on the North Side that still resemble a natural state — but according to Chicago Park District historian Julia Bachrach, none is untouched. A pair of photos from the Library of Congress taken around the turn of the last century, for example, shows beaches at Jackson Park and Fullerton Avenue covered with paving stones — the preferred treatment for beaches at a time when the lake was filled with sewage and few people swam.
“If you find little pieces of actual beaches, it’s going to be the exception more than the rule,” Bachrach said. “It’s all completely recreated.”
Even the ambulating curve of the lakefront’s footprint is largely a human construction. Chicago’s original coastline lay much farther west in many places, including downtown, where Michigan Avenue once abutted the lake. Successive waves of landfill pushed the lakefront east over the course of the last 180-some years. Bachrach estimates that more than 2,000 acres of land along the lake were built this way. And even that characterization, she said in a recent conversation, is likely conservative.
“I went through and added up acreage I absolutely know is landfill. Lincoln Park is over 1,200 acres; more than 1,000 acres of that is landfill,” Bachrach said, ticking down her list. “Northerly Island is all completely landfill. Burnham Park is all completely landfill — the whole park.”
You can argue that building out the shore with landfill was a necessity, dictated by the lake itself. “The city from the beginning had so many problems with lakeshore erosion that this reengineering was being done first and foremost to protect the edge of the city — not just the parkland but the actual built city,” Bachrach explained. Before it became home to the city’s business district, Michigan Avenue was home to the city’s finest mansions.
“They were thinking, well, all these wealthy people are going to have to move inland,” said Bachrach. “The erosion was eating away at the actual street where people lived.”
At that time, the city often followed nature’s lead when creating new land: The first Chicagoans watched as Lake Michigan’s currents deposited mounds of sand and silt in the harbor, north of the mouth of the Chicago River. An 1858 map provided by the Newberry shows how the precise location of the coastline shifted over time as a result. Chicago’s most marginalized and enterprising citizens settled in that unincorporated territory, which became known as “The Sands.” In Forever Open, Clear and Free, a definitive history of Chicago’s lakefront and the struggle over its development, Lois Wille described the Sands as “a nest of cheap lodging houses, bordellos, saloons and gambling dens.”
In 1857 the city raided the Sands, evicted its residents and burned the shantytown to the ground. A little more than 30 years later, Cap’n Streeter’s boat ran aground on the same sandy patch. He encouraged contractors to build nearly 200 acres of landfill there, which he claimed and named after himself. Eventually Streeter was run out of town, too, but the Streeterville neighborhood remains.
One story that the pictures and maps tell is that, when we radically transformed the lakefront we imposed changing views of what's natural: Just compare Frederick Law Olmsted’s graceful, meandering landscapes to the more precise and highly-organized Beaux Arts beauty favored by Daniel Burnham. Even today's wildlife habitats and so-called “natural areas” are created with a contemporary understanding of the term.
Mitch Murdock manages the Chicago Park District’s “natural areas,” including roughly a dozen on the lakefront: the Jarvis Bird Sanctuary near Montrose Harbor, for example; the dune habitat at Osterman Beach; the Wooded Island and Bob-O-Link Meadow in Jackson Park.
“Speaking from a strictly ecological perspective, nature oftentimes is a random smattering of everything,” Murdock said recently. “We want things to be a little more designed.”
The dunes, for example, are carefully groomed to encourage the development of native plants, including the marram grasses that help hold sand in place. All the while, crews routinely purge invasive species, such as the highly adaptable cottonwood tree, which would overrun the landscape if nature were actually left to its own devices. Murdock’s staff grooms trails to keep them both accessible and aesthetically pleasing (at least to us humans), and his department deliberately designs wooded areas by planting shrub masses close together and placing trees at a variety of heights. The goal: attracting migratory birds and birdwatchers alike.
And although Murdock’s team tries to recreate the kind of landscapes that once existed on Chicago’s coast, he acknowledges they favor certain settings over others. He outlined the various kinds of landscapes that existed near the lake before human intervention: dunes, prairies and savannahs, but also wetlands with their inlets, fens and bogs.
“We can’t do dunes everywhere,” he said. “It’s impossible to do a perfect, idealistic version of ecological restoration all along the landscape.”
That is, wooded bird sanctuaries may be popular, but is anyone clamoring for bogs? Not so much.
‘Open, clear and free’ . . . Forever?
No answer to Miriam's question about how the lakefront's changed would be complete without mentioning the conflicts between preservation and development, which have ranged from tussles to bare-knuckled legal and political brawls. The history's a good reminder. Current-day Chicagoans sometimes claim that the coastline has always been undeveloped, that we didn’t make the same mistakes that other cities did, namely allowing industry to crowd its coastline.
“Chicago has rightly bragged about its lakefront as a single urban achievement by Daniel Burnham, A. Montgomery Ward and others,” said Blair Kamin. But that’s not the whole truth. The reality of the lakefront, Kamin said, “is actually very different when you look at it closely from the rhetoric of the perfect lakefront.”
Lois Wille, herself a Pulitzer Prize winner and the former editor of the Tribune’s editorial page, penned her definitive overview of the lakefront’s history in 1972. As she put it in a recent conversation, it was a time of a dozen small plans “to mess up the lakefront park,” including proposals to build a highway exchange over the Ohio Street beach and a concrete band shell in Grant Park — ideas that pitted city officials against preservationists and fans of green space.
But disagreements about lakefront development predate any particular political fight that today's Chicagoans can actually remember; such fights have been around since the city's inception.
“Cities always have a shortage of land so the ground becomes contested — a locus of different priorities,” Kamin said. “It’s not surprising that you get entirely different visions for what would happen there.”
And different factions were ascendant at different points in time: The name of Wille’s book comes from an inscription written on a map in 1836. Three commissioners appointed to sell undeveloped parcels of land to fund the construction of the Illinois and Michigan canal opted not to sell the stretch of lakefront between Madison and 12th Street. (You can see a similar real estate map here.)
“It was a narrow strip; during bad storms, the lake rushed up to the muddy street,” Wille wrote in her account of the deal. “Yet if the canal commissioners had chosen to sell it, the lakefront land probably would have brought a top price from shipping and packing concerns.”
Instead, the lakefront was set aside for public grounds, and the words by which that land was preserved — “forever open, clear and free” — have served as the rallying cry of preservationists since they were written — even if they only applied to that small stretch of land and the record of success is mixed.
One of Chicago’s biggest early compromises with industry was the deal the city reached with the Illinois Central Railroad Company in the early 1850s, the effects of which you can clearly see in the photographs and maps we gathered. To protect the city from the aforementioned erosion, the railroad offered to build a protective breakwater in the lake in exchange for development rights along the lakefront. The company subsequently built tracks on a 300-foot-wide strip along the lake between Randolph and 22nd Street, and created a rail yard on another 73,000 sq. ft. of land downtown.
The ICR's plan proved controversial: Wille details how the deal was approved by the city council but vetoed by Chicago’s mayor. The council overrode his veto but reached a compromise of sorts. The ICR could build its tracks, but it could only do so on trestles out on the lake.
That one decision has resonated ever since. The railroads “have been a blessing and problem,” said Wille. “We’ve been coping with it for 150 years.” Chicago became the nation’s transportation hub and then a commercial powerhouse; the city would not have become as important as it did without the railroad. But gradually the city has had to build around — and even over — the ICR tracks.
The space between the rail trestles and the shoreline became a disgusting lagoon, filled with trash, debris and animal carcasses that floated down the river from the stockyards. After the Great Fire in 1871, rubble from the ruins was used to fill the lagoon, creating land now occupied by Grant Park. In 1912 the ICR depressed its tracks below ground and agreed to electrify its line to ease up on air pollution. And in the early 2000s the city covered a portion of the ICR’s rail yard by building Millennium Park.
When I asked Wille if she thought the city had made a devil’s bargain with the ICR, she was surprisingly magnanimous for such an avid preservationist.
“How could they have known? They didn’t know that strip of water would become a lakefront park,” she said. “I would forgive them that more than some other mistakes. Little by little that mistake is being corrected.”
The social equation
So, Miriam's question's complicated for sure, but it's only partly answered by an account of how preservationists butted heads with the captains of industry ... or even how we've applied different concepts of what's natural as we remade the landscape. This is Chicago, after all, and no answer would be complete without pointing out the obvious: Some residents got what they wanted, and others didn't. Any snapshot of the lakefront — or even the experience of the lakefront, like Miriam's own — is necessarily a snapshot of the city's politics.
According to Wille, the outcome of proposed lakefront developments largely depended on the strength of the opposition. When City Hall's opponents were strong and well-organized, they won. Take a preservation drive that took place in 1965. In that incident, a group of protesters successfully stopped the city from extending and straightening Lake Shore Drive through the middle of Jackson Park by chaining themselves to trees.
In other cases when opponents were scarce or weak, the council and the mayor were unchecked. That allowed City Hall to, say, turn Burnham’s Northerly Island into an airport and then give the airfield's operators a 50-year lease. Other examples include the construction of a convention center (twice, actually, as today's McCormick Place replaced the one that burned down in 1967), as well as two water treatment facilities on the lake — one north of Navy Pier and another at 79th Street.
Politics and clout also led to the lakefront’s unequal development and maintenance, and as you'd suspect, preservation victories were not spread evenly across the board.
“You notice who won?” Wille asked, listing the battles she saw fought in the ‘60s. “White neighborhoods. The protesters in Hyde Park were mainly white — not all, but mainly. The protesters in Lincoln Park [where the city had also tried to move Lake Shore Drive] were white and influential. The protesters at Oak Street Beach were white and influential. So it was money and whiteness that carried the day.”
Kamin detailed such inequalities in his six-part series for the Chicago Tribune, which landed him a Pulitzer Prize for criticism. The series, completed in 1998, examined what he saw as the shamefully unbalanced allocation of resources between the North and South Side, white and black, rich and poor.
“On the South Side, particularly in the area south of McCormick Place down to the Museum of Science and Industry, you had a narrow strip of park land that was reached by ugly, rusting, outdated pedestrian bridges that discouraged people from walking or biking,” Kamin told me recently. “Once they got there the land was pockmarked with parking lots. The shoreline itself looked like it had been bombed because the revetments that were to protect the lakefront from the pounding of Lake Michigan were all broken up. So it was an absolute disaster.”
The next chapters?
Much has changed, of course, since Kamin dissected the inequalities between the north and south lakefronts and since Wille watched the city fight some of its most serious battles over open space; Burnham Park was beefed up, for starters, and Meigs Field was destroyed.
But the struggle over Chicago’s lakefront will continue. “It will go on for decades and centuries,” Kamin said.
In answering Miriam's question so far, we've only brought up what's actually changed, but there was a lot more going on behind the scenes, and there's never a dearth of ideas about what to do next with the lakefront. Just consider the number of fantastical plans that never came to be: Burnham’s plan for five islands instead of just one; Richard J. Daley’s plan to build four peninsulas and a 20-mile island off the coast (ridiculed by the press and his opponents as “islands in the sky”); a subsequent plan by Daley's son to build a grand stadium for the Bears right on the lakefront; the Santiago Calatrava-designed pedestrian bridge at Queen’s Landing; and the never-built monument to Burnham proposed for the Museum Campus.
More recently, there's been talk of extending Lake Shore Drive north, visions of acquiring four additional miles of private land north and south of the city’s current holdings, and more concrete plans by the current administration to build new natural habitats on the site that Richard M. Daley bulldozed X’s into during the middle of the night.
All this can give you pause. The stretch of beach that inspired Miriam's question may seem pretty solid and it may indeed remain as-is, at least for the rest of her lifetime, if not her dog's. It can seem solid, that is, until you consider what's already happened to Chicago's lakefront — even in the span of the city's short history.
Correction: An earlier version of this story incorrectly listed the number of articles in Blair Kamin's Pulitzer Prize-winning series on Chicago's lakefront. The correct number is six.
WBEZ and Curious City would like to thank The Newberry for making its historic Chicago maps available for this story. Specifically we would like to thank the following library staff: Jim Akerman, Director of the Hermon Dunlap Smith Center for the History of Cartography and Curator of Maps; John Powell, Digital Imaging Services Manager; Catherine Gass, Photographer; Patrick Morris, Map Cataloger and Reference Librarian; Kelly McGrath, Director of Communications and Marketing. The Newberry is a partner of Chicago Public Media. | <urn:uuid:bc74533b-f7a5-4066-9ca5-7b2f288564fd> | CC-MAIN-2015-35 | http://www.wbez.org/print/104328 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064445.47/warc/CC-MAIN-20150827025424-00339-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.966986 | 3,855 | 2.890625 | 3 |