According to the Compressed Air and Gas Institute (CAGI) and the International Organization for Standardization (ISO), the three major contaminants in compressed air are solid particles, water, and oil. CAGI promotes proper use of air compressors with various educational tools, while ISO 8573 is directed at the very specific areas of compressed air purity and test methods, which this article will address. Microorganisms are also considered a major contaminant by CAGI, but will not be discussed in this article.
ISO and CAGI
Compressed Air Best Practices® (CABP) Magazine and the Compressed Air and Gas Institute (CAGI) cooperate to provide readers with educational materials, updates on standards and information on other CAGI initiatives. CABP recently caught up with Rick Stasyshan, Technical Consultant for the Compressed Air and Gas Institute (CAGI) and with Ian MacLeod, from CAGI member-company Ingersoll Rand to discuss the topic of motors on centrifugal air compressors.
NFPA 99 Medical Air
In the U.S., as an example, the NFPA has taken the view that if your compressor draws in good, clean ambient air, the air stays clean through the compressor and is then dried and filtered, so by the time you deliver it to the patient it will be entirely satisfactory. After all, when you went into the hospital that’s what you were breathing, and when you leave you will breathe it again!
After almost three and a half years of development work the Canadian Standards Association C837-16 document “Monitoring and Energy Performance of Compressed Air Systems” has finally been published and is available for download. The work in writing the document was done by a CSA Technical Subcommittee made up of personnel from power utilities and government organizations, compressed air manufacturers and end users from both USA and Canada, with the committee activities facilitated and coordinated by the CSA Group (see list of committee members).
Food Grade Air
Compressed air is a critical utility widely used throughout the food industry. Being aware of the composition of compressed air used in your plant is key to avoiding product contamination. Your task is to assess the activities and operations that can harm a product, the extent to which a product can be harmed, and how likely it is that product harm will occur. Assessing product contamination is a multi-step process in which you must identify the important risks, prioritize them for management, and take reasonable steps to remove or reduce the chance of harm to the product, and, in particular, serious harm to the consumer.
Compressed Air Best Practices® (CABP) Magazine and the Compressed Air and Gas Institute (CAGI) cooperate to provide readers with educational materials, updates on standards and information on other CAGI initiatives. CABP recently caught up with Rick Stasyshan, Technical Director for the Compressed Air and Gas Institute (CAGI) to provide readers with some insights into the benefits of CAGI’s Verified Performance Program for refrigerated compressed air dryers.
Health and safety issues are a major concern in the food industry. Not only can contaminated food products endanger consumers, but they also can cause significant damage to a company’s reputation and bottom line. Contamination can come from many sources—industrial lubricants among them. With the abundance of lubricated machinery used in the food industry, lubricant dripping from a chain or escaping through a leak in a component can prove catastrophic. Even with the most prudent maintenance and operating procedures, along with a strict HACCP (hazard analysis and critical control points) plan, contamination may still occur.
Any modern food manufacturing facility employs compressed air extensively in the plant. As common as it is, the potential hazards associated with this powerful utility are not always obvious or apparent. Food hygiene legislation to protect the consumer places the duty of care on the food manufacturer. For this reason, many companies devise their own internal air quality standards based upon what they think or have been told are “best practices.” This is no wonder, as the published collections of Good Manufacturing Practices (GMPs) that relate to compressed air are nebulous and difficult to wade through.
Compressed Air Best Practices® Magazine and the Compressed Air and Gas Institute have been cooperating on educating readers on the design, features, and benefits of centrifugal compressor systems. As part of this series, Compressed Air Best Practices® (CABP) Magazine recently caught up with Rick Stasyshan, Compressed Air and Gas Institute’s (CAGI) Technical Consultant, and Ian MacLeod of CAGI member company, Ingersoll Rand. During our discussion, we reviewed some of the things readers should consider when installing a centrifugal compressor system.
ISO 22000 is a food and beverage (F&B) specific derivative of ISO 9001, a family of standards from the International Organization for Standardization that details the requirements of a quality management system. It is a quality certification that can be applied to any organization in the food chain — from packaging machine manufacturers to the actual food processing facilities.
Compressed air is used in more than 70 percent of all manufacturing activities, ranging from highly critical applications that may impact product quality to general “shop” uses. When compressed air is used in the production of pharmaceuticals, food, beverages, medical devices, and other products, there seems to be confusion on what testing needs to be performed.
Compressed Air Best Practices® (CABP) Magazine recently spoke with Rick Stasyshan, Compressed Air and Gas Institute’s (CAGI) Technical Consultant, and Mr. Neil Breedlove of CAGI's Centrifugal Compressor Section and member company, Atlas Copco Compressors, about centrifugal air compressors. Specifically, the discussion outlined how various inlet conditions can impact the performance of centrifugal air compressors.
Organizations across the world are gaining control of their energy spending by measuring and managing their utilities. In doing so, they may be using standards such as ISO 50001:2011 (energy management systems — requirements with guidance for use) to help set up an energy management system (EnMS) that will improve their energy performance. This improved performance might lower energy bills, making products more affordable in the marketplace and improving an organization’s carbon footprint.
Compressed air is the most common utility used in a typical industrial facility. It encompasses most operating aspects of the plant. The compressed air system can end up being the most expensive utility due to the focus that if production is running - then leave the system alone. Processes and machines are added and as long as the compressor can handle the increasing load - all is good. This brings us to our subject matter. The plant adds a process, a specialty coating line, requiring respirator protection. The plant determines supplied air respirators are the best choice. They want to be responsible and do the right thing so they start by reviewing what OSHA has to say on the subject. | <urn:uuid:241152f1-3b8b-4a57-8168-b71faae3ba11> | {
"dump": "CC-MAIN-2016-44",
"url": "http://www.airbestpractices.com/standards",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.54/warc/CC-MAIN-20161020183841-00243-ip-10-171-6-4.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9346670508384705,
"token_count": 1448,
"score": 2.78125,
"int_score": 3
} |
Description of Historic Place
Office of the Prime Minister and Privy Council National Historic Site of Canada stands within Confederation Square National Historic Site of Canada, located on Wellington Street in downtown Ottawa, Ontario. Prominently situated opposite Parliament Buildings National Historic Site of Canada, it is one of the finest federal examples of a Second Empire style office building. Of robust appearance, this four-storey building features a limestone exterior, pavilion massing, round-arched windows and a copper mansard roof, complemented by a rich decorative vocabulary. The building is well known due to its current use as the Prime Minister’s Office and the Privy Council Office. Official recognition refers to the building on its footprint at the time of designation.
The Office of the Prime Minister and Privy Council was designated a national historic site of Canada in 1977 because:
- it emphasizes the importance of the Department of Public Works’ architecture;
- constructed to the designs of Thomas Fuller, it provided accommodation for an expanding civil service;
- this impressive structure is a modified version of the Second Empire style.
Constructed between 1883 and 1889, the Office of the Prime Minister and Privy Council is one of the best surviving examples of the work of Thomas Fuller, Chief Architect of the Department of Public Works from 1881 to 1896. During his tenure as Chief Architect, Fuller supervised the construction of over 140 buildings across Canada and was responsible for designing buildings in smaller urban centres that came to symbolize the federal government. Fuller’s attention to architectural details and his interest in creating a distinguished collection of federal buildings through the use of superior materials and craftsmanship is evident in the design and construction of the Office of the Prime Minister and Privy Council.
The Office of the Prime Minister and Privy Council was the first purpose-built departmental building erected by the federal government outside the boundaries of Parliament Hill. The original Centre Block and two departmental buildings on Parliament Hill were designed to house all of the legislative and civil service functions of the United Province of Canada (present day Ontario and Quebec). After Confederation in 1867, the number of Members of Parliament, Senators and clerical staff increased substantially. In addition, the 1870 transfer of the Northwest Territories to the newly formed Dominion facilitated the rapid growth in the size and responsibility of the Departments of the Interior and of Indian Affairs. By 1880, the lack of office space on Parliament Hill became a major problem for legislators and civil servants. In 1883, the decision was made to construct a new building (the Office of the Prime Minister and Privy Council) on purchased land, rather than to expand the West Block on Parliament Hill.
Upon its completion in 1889, the building was named for Sir Hector Langevin, a Father of Confederation and Minister of Public Works during the building’s construction. The building originally housed the departments of Agriculture, Interior, Indian Affairs and the Post Office. The Department of Indian Affairs continued to occupy it until 1965. Between 1975 and 1977 the building was renovated to house the Prime Minister’s Office and the Privy Council Office.
The Office of the Prime Minister and Privy Council is a late example of the use of the Second Empire style in government buildings. The building features a mansard roof punctuated by dormers, as well as numerous Romanesque Revival references that steer its design away from French models towards North American ones. The Office of the Prime Minister and Privy Council is one of the few surviving examples of a building constructed in this style by the Department of Public Works.
Sources: Historic Sites and Monuments Board of Canada, Minutes, November 1977; Plaque Text, 1991.
Key elements that contribute to the heritage character of the site include:
- its prominent siting on the corner of Wellington and Elgin streets in downtown Ottawa, Ontario;
- its spatial and historical relationship with Parliament Buildings and Confederation Square National Historic Sites of Canada;
- its rectangular massing and symmetrical façades articulated with slightly projecting centre and end pavilions;
- the end façades, which continue the vocabulary of the front façade, but which are asymmetrical, due in part to the irregularities of the site;
- its Second Empire style, evident in its high mansard roof punctuated by one- and two-storey dormers, which emphasize the three-dimensional quality of the silhouette;
- its Romanesque revival styling, evident in the arcading of the windows on the second and third storeys, the extensive use of round-arched windows, the twinning, tripling and quadrupling of windows, and the polychromatic stonework created by the use of polished granite and ochre-coloured sandstone;
- its fine stonework, including olive-coloured sandstone facing, polished granite for the colonettes, carved stone cornice brackets, bas-reliefs, horizontal banding and cornices, rounded corners and deep-channel ground-floor masonry;
- the high quality of craftsmanship evident in the masonry and metalwork, particularly the decorative stone carving, metal balconies, and elaborate copper-sheathed roof;
- the main entrance, that is deeply recessed and framed with panelled pilasters;
- the fine interior finishes, notably the elaborate staircase, quality windows and door trim, iron railings and polished granite columns;
- its internal axial organization with offices, common rooms and boardrooms located on either side of spacious hallways;
- the viewscapes across Wellington Street to Parliament Buildings National Historic Site of Canada and across Elgin Street to the additional components of Confederation Square National Historic Sites of Canada. | <urn:uuid:54003568-43b2-4d38-abc8-a9999337c835> | {
"dump": "CC-MAIN-2021-31",
"url": "https://www.historicplaces.ca/en/rep-reg/place-lieu.aspx?id=14127&pid=0",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00143.warc.gz",
"language": "en",
"language_score": 0.9500420689582825,
"token_count": 1156,
"score": 3.25,
"int_score": 3
} |
How homograph attacks can present a spoofed, malicious link, and a case where a secure connection doesn’t guarantee a safe site.
While trying to catch up with a huge backlog of recent email and potential blog topics, I came across this article that Graham Cluley posted a few days ago on Internationalized Domain Name (IDN) homograph attacks, the kind of spoofing attack where a site address looks legitimate but is not what it seems because one or more characters have been deceptively substituted (a technique very commonly used in phishing). The example central to Graham’s article is the work of security researcher Paul Moore, who registered the domain IIoydsbank.co.uk and then invited Twitter users to spot the difference. Just to make things more interesting, Moore bought a TLS certificate from Cloudflare for his site so that it showed the green padlock that is meant to reassure us that all is secure.
Clearly we shouldn’t overestimate the security conferred by that little graphic and the magic ‘https’ at the start of URLs. Apart from the fact that phishing sites and pop-ups have often made use of fake padlock graphics, the fact that traffic is encrypted doesn’t mean that the traffic can’t be malicious. However, Lloyds’ customers are not in danger from IIoydsbank.co.uk, since Moore has apparently transferred the domain to the real Lloyds.
In fact, this is a pretty basic example of a long-known attack. Sorry, but I’m going to quote myself (from an earlier article here), starting with a ‘simplistic example’
‘… like IIoydsbank.com, where I’ve substituted a capital I [i] for each of the two Ls at the beginning.’ A common variation today is to use a homoglyph: in the Unicode character set there are many characters that look to the casual eye (at least in some fonts) very much like others, but are for purposes of identifying a web address completely different.
(It seems to have become a convention in security that we can’t look at fake bank sites without mentioning Lloyds, in much the same way that we can’t talk about public key cryptography without using Alice and Bob as examples.)
To take another example from the same blog that I expanded on in a later article for ITSecurity, in the right font, a lower case ‘o’ is indistinguishable from an omicron. The example below is a screenshot from Notepad, where I used an omicron instead of an ‘o’ in one of the two versions of welivesecurity.com.
The first one is the ‘fake’, in case you were wondering.
Since I don’t think there’s such a thing as a .com domain where the middle character is an omicron, that might not be the best example. But I suspect that there are plenty of financial institutions whose names include a spoofable ‘o’, and there are plenty of other usable homographs: fortunately, they work better in some fonts than others, so in some contexts they are much easier to distinguish.
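If you want to see for yourself that two such characters really are different, comparing their code points makes it obvious. The snippet below is only an illustration (T-SQL is used here simply because its UNICODE() function makes the comparison easy to run; any Unicode-aware language will show the same thing):
-- A Latin lowercase 'o' and a Greek omicron look alike in many fonts,
-- but they are distinct characters with different code points.
SELECT UNICODE(N'o') AS latin_o, -- returns 111 (U+006F)
UNICODE(N'ο') AS greek_omicron, -- returns 959 (U+03BF)
CASE WHEN N'o' = N'ο' COLLATE Latin1_General_BIN
THEN 'identical' ELSE 'different' END AS comparison
Note, too, that internationalized domain names are converted to Punycode (the omicron version of a name would begin with 'xn--'), which is one of the signals browsers and registrars can inspect before deciding whether to display the Unicode form.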
Only when I came upon Graham’s article as cited above did I realize that he’d actually looked at this issue in a WeLiveSecurity article before I did, else I’d have drawn attention to it in my own more recent articles, as it’s a pretty neat summary.
However, as Martijn Grooten pointed out in a Virus Bulletin article last year:
In practice, hardly any homoglyph attacks have been seen in the wild, despite the technology having been widely implemented.
But that doesn’t mean there isn’t an issue here, that it doesn’t happen, or that it couldn’t happen more often. And while Paul Moore is far from the first person to raise the topic, he’s certainly brought it to the attention of a good few more potential victims.
Fortunately, you can reduce the risk drastically and painlessly. Don’t be too eager to trust links in email and other messages, or on web sites where you didn’t expect to find yourself. If your bank needs you to log in somewhere, use links and pages you already know to be kosher. There are many ways in which to disguise a malicious link as a legitimate address: this is just one of them.
It has to be said that some service providers are sufficiently aware of the issue to implement countermeasures. Google, for instance, though it has supported non-Latin characters in Gmail since summer 2014, has also taken advantage of the Unicode Consortium’s Identifier Profiles for Restriction-Level Detection as a guide for rejecting ‘suspicious combinations’ of characters. However, even implementation of the strictest level of restriction available, ASCII-Only, would not have triggered an automatic alarm in this case, since there is no restriction on mixed case that I’m aware of. On the other hand, as Graham pointed out, the certificate provider that supplied Paul Moore with a TLS certificate, could at least have looked more carefully at an application from a site whose name included the word ‘bank’. Perhaps DNS registration services could take the same precaution: that doesn’t seem too hard to automate. | <urn:uuid:8e48f3b4-d39c-4470-a541-e658ce4fd002> | {
"dump": "CC-MAIN-2020-40",
"url": "https://www.welivesecurity.com/2015/07/14/spoofed-urls-homograph-attacks-revisited/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00496.warc.gz",
"language": "en",
"language_score": 0.9532849788665771,
"token_count": 1131,
"score": 2.71875,
"int_score": 3
} |
In Brussels’s fight against extreme pollution, the city is taking some of the most radical action plotted by a Western capital so far. After a slate of drastic new measures were approved this month, the city’s plan to fight especially poor air quality includes some standard fare, like temporarily making public transit free, but also some last-resort measures that could effectively place the city on lockdown when the air gets especially dangerous.
The moves agreed by the Belgian Capital Region are as follows. If levels of fine particulate matter in the atmosphere stay high (above 50 micrograms per cubic meter) for over 48 hours, the city will make all public transit and bikesharing free. Speed limits would be slashed, and wood burning for any home that possesses an alternative heat source would be banned.
These moves are decisive, but not unprecedented—Paris and Madrid have similar emergency measures in their arsenal. But Brussels would be prepared to go much further. If pollution peaks persisted or worsened, the heating of office buildings would be banned, a ruling that could be followed by the ultimate measure of a complete ban on all non-electric, non-emergency vehicles circulating. If the city pulled out all these stops, it could effectively grind to a halt.
These measures are only planned to target true emergencies, not ongoing problems. The regional environmental agency recommends that particulate levels remain below 20 micrograms per cubic meter, meaning pollution would have to reach two and a half times the level the agency deems desirable before emergency measures kick in.
They are, however, only the most striking of a general push to wean the city of its car dependency. On January 1, the capital region introduced a Low Emissions Zone that covers the entire capital region minus the ring road. This zone effectively bans the most polluting diesel vehicles (those with emissions at Euro 1 standard, built before 1997, or no standard at all). Owners of these vehicles can drive into the city on a maximum of 8 days annually, but only by purchasing a €35 ($43) daily pass that renders them prohibitively expensive. Each year, the Low Emission Zone’s restrictions will tighten. By 2025, only drivers of the cleanest category of diesel car, and the four cleanest categories of gas-powered cars will be allowed in.
This staggered implementation means Brussels will have to wait a long time for air quality improvements—there aren’t that many diesel vehicles built before 1997 on the road—but at least it’s moving in the right direction. With a congestion charge also being considered, Brussels’s Mobility Minister Pascal Smet has made it clear that the moves are purposefully intended to dissuade people from driving. “If you drive less than 10,000 km per year, it’s not worth buying and owning a car and you are better off sharing it with others,” he told the Brussels Times. “On average, cars are parked 95 percent of the time in Brussels.”
There’s a decisiveness to Brussels’s push for cleaner air that, at least seen from afar, is impressive. The problems they are designed to alleviate, however, are grave, and some form of action is desperately needed. The city’s air quality is appalling—the worst of any Western European capital, and comfortably surpassing larger cities such as London, Paris, and Rome in its high levels of carcinogenic particulates. The source of these problems is not hard to find. Diesel fuel has long dominated Belgium’s vehicle fleet, falling from a remarkable 78.9 percent of all cars to a still huge 51.8 percent share in 2017. It is only this week that diesel prices have in some places climbed above gas prices, finally removing the fiscal incentive for generally more polluting cars.
But diesel use isn’t the only culprit. By European standards, Brussels remains a very car-reliant city, with over 50 percent of commuters using cars for at least part of their journey—almost double the rate in Paris—partly because public transit coverage in the outskirts is patchy. The city’s bike lane network is reasonably good, but cycling and walking rates are still low: Just 6 percent of all journeys take place by bike or on foot. The result of all this is legendary traffic jams that pump the city full of harmful pollutants.
There’s been public pressure on the state to change the situation for some time. In November, a coalition of 100 doctors highlighted the city’s pollution problem in an open letter, noting that poor air quality killed an estimated 632 people prematurely across Belgium every year. The city’s residents have been getting restive, too. Earlier this month, a demonstration saw city statues being draped with protective masks, as if to spare their poor bronze and marble lungs.
Tellingly, the Capital Region’s politicians are sympathetic, but have struggled for collective action. Smet, the mobility minister, actually applauded the doctors’ intervention, but in an urban region where much power remains vested in 19 municipalities, brokering collective action can be laborious. This is a city, after all, with six separate police forces, and where a parking permit can be valid for one side of the street straddling a municipal border, but not the other. The Capital Region also struggles with the limits of its remit. An officially bilingual territory squeezed between the French-speaking Wallonia Region and Dutch-speaking Flanders, its leaders have strongly criticized plans to expand the city’s beltway. Because the highway lies just outside the Capital Region in Flanders, however, those protests are so far failing to change the road plans.
The poor quality of Brussels’s air has another ironic twist to it. Across the E.U., it is the Brussels-located European Commission that is charged with reproving states who fail in their mutually agreed air quality targets. Just last month, environment ministers from nine E.U. states were summoned to Brussels to account for the insufficiency of their policies for combating air pollution. The European Commission itself is in no way directly responsible for managing Brussels’s air quality and transit, but it’s at least a little awkward to summon Europe’s environmental shirkers to a city that itself is often heavy-breathing under a pall of toxic filth.
Brussels’s new emergency measures won’t flush the Capital Region’s skies clean in a hurry. But at least they’re a sign of a genuine will to clear the air at Europe’s heart.
Powered by WPeMatico | <urn:uuid:d86f7fe8-5ca1-4e01-879f-30564a2d0b30> | {
"dump": "CC-MAIN-2022-21",
"url": "https://smartcities.org/index.php/brussels-makes-an-extreme-plan-to-fight-pollution-emergencies/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00427.warc.gz",
"language": "en",
"language_score": 0.9429129958152771,
"token_count": 1386,
"score": 2.6875,
"int_score": 3
} |
Guest blog by Caoileann Murphy
Caoileann qualified as a dietitian and now works as a Postdoctoral Research Fellow in UCD in the area of nutrition and sarcopenia with an emphasis on personalized approaches to effective treatment and prevention. Caoileann has authored a number of scientific research papers, reviews and book chapters and is a recipient of the TOPMed10 Marie Curie Fellowship.
Although my Grandmother Rosemary (aged 81 years) and I (aged 30 years) have never gone head-to-head in an arm wrestling competition, tests conducted as part of a Healthy Aging Research Study in UCD show that I have more muscle and am considerably stronger than my Grandmother, despite the fact that we are almost exactly the same height.
This is not a surprise.
Beginning around our fifties we start to lose approximately 0.5 – 1% of our muscle every year. Strength is lost even faster, at a rate of about 2 – 4% per year1. The loss of muscle mass and function with age (called sarcopenia) is a big problem because it makes daily activities like walking, lifting and getting out of chairs more difficult, and it increases the risk of physical disability, falls and hospitalisation2.
fig 1. My Grandmother Rosemary performing leg strength testing in UCD as part of the NUTRIMAL Nutrition and Healthy Aging Study.
Why does sarcopenia occur?
There are numerous, inter-connected factors that likely contribute to the loss of muscle mass and function with age, some key ones include:
Reduced physical activity
A major cause of sarcopenia is the simple fact that, as we age, we tend to do less physical activity. Even short periods (2-3 weeks) of reduced daily steps lead to declines in muscle mass in older adults3. These periods of reduced activity can occur relatively often (e.g. due to injury/a cold/minor illness, the beast from the East!) and are difficult for older adults to fully recover from.
Poor nutrition
Compared to younger adults, older adults require more protein in their diets (found in foods like milk, yogurt, fish, eggs, meat, beans and nuts), and not eating enough protein can contribute to muscle loss. Other contributors include not eating enough food overall (for example, due to poor appetite) and vitamin D deficiency4.
Imbalance between muscle building and muscle breakdown
Our muscles are constantly undergoing cycles of building and breakdown. When rates of muscle building and muscle breakdown are equal, which is the case in younger adults, muscle mass remains stable. In older adults, however, rates of muscle building are blunted, especially in response to “muscle building triggers” like eating protein-rich foods5 or performing exercise6. This means that the balance between building and breakdown is tipped towards less building and more breakdown.
Loss of nerve cells
An important cause of strength loss is that, as we age, there is a decline in the number of nerve cells that carry messages to our muscles to tell them to contract1.
Hormone changes and inflammation
The decline in certain hormones (such as testosterone) and the slight elevation in inflammation in the body as we age are thought to play a role in age-related muscle loss1.
Top tips to slow muscle and strength loss with age
- Keep active! Physical activity, especially resistance exercise (weight lifting) boosts rates of muscle building and enhances muscle mass and strength even in frail older adults in their 90’s7!
- Eat a healthy, balanced diet and consume protein-rich foods at each meal8.
Fig 2. These images show the inside of the thigh. The dark colour represents muscle and the light colour shows the fat both surrounding and inside the muscle2.
1 Mitchell, W. K. et al. Sarcopenia, dynapenia, and the impact of advancing age on human skeletal muscle size and strength; a quantitative review. Front. Physiol. 3, 260, doi:10.3389/fphys.2012.00260 (2012).
2 McLeod, M., Breen, L., Hamilton, D. L. & Philp, A. Live strong and prosper: the importance of skeletal muscle strength for healthy ageing. Biogerontology 17, 497-510, doi:10.1007/s10522-015-9631-7 (2016).
3 Breen, L. et al. Two weeks of reduced activity decreases leg lean mass and induces “anabolic resistance” of myofibrillar protein synthesis in healthy elderly. J. Clin. Endocrinol. Metab. 98, 2604-2612, doi:10.1210/jc.2013-1502 (2013).
4 Robinson, S. M. et al. Does nutrition play a role in the prevention and management of sarcopenia? Clin. Nutr., doi:10.1016/j.clnu.2017.08.016 (2017).
5 Moore, D. R. et al. Protein ingestion to stimulate myofibrillar protein synthesis requires greater relative protein intakes in healthy older versus younger men. J. Gerontol. A Biol. Sci. Med. Sci. 70, 57-62, doi:10.1093/gerona/glu103 (2015).
6 Kumar, V. et al. Age-related differences in the dose-response relationship of muscle protein synthesis to resistance exercise in young and old men. J. Physiol. 587, 211-217, doi:10.1113/jphysiol.2008.164483 (2009).
7 Fiatarone, M. A. et al. High-intensity strength training in nonagenarians. Effects on skeletal muscle. JAMA 263, 3029-3034 (1990).
8 Murphy, C. H., Hector, A. J. & Phillips, S. M. Considerations for protein intake in managing weight loss in athletes. European journal of sport science 15, 21-28, doi:10.1080/17461391.2014.936325 (2015). | <urn:uuid:d09b1770-5c03-404a-afe1-cdbb191e6c28> | {
"dump": "CC-MAIN-2018-22",
"url": "http://afgdp.ie/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866938.68/warc/CC-MAIN-20180525024404-20180525044404-00194.warc.gz",
"language": "en",
"language_score": 0.9088150858879089,
"token_count": 1271,
"score": 2.75,
"int_score": 3
} |
New study shows cost of extractions for under fives under general anaesthetic could be 8 times more expensive than preventive programmes
Over 10,000 children having teeth extracted under GA per annum
A leading public service dentist has called for the urgent introduction of a properly funded oral health programme for pre-school children and the restoration of regular school screenings for primary school children nationwide.
The call follows the publication of a new study which shows that a considerable number of children under five require extractions under general anaesthetic and that some children are having as many as nine teeth extracted.
The study found that the cost of treatment - €819 per patient - could be as much as eight times the cost of a preventive/oral health promotion programme for the same group.
The study of 347 preschool children in Cork was published in the latest edition of Journal of the Irish Dental Association.
Dr Michaela Dalton, President of the HSE Dental Surgeons group, said the findings showed that prevention is not just a much better option for patients; it is also much more cost effective.
“The first and most important point to make is that too many children in Ireland are having teeth extracted under general anaesthetic. We believe the number is well over 10,000 every year. This study shows that the problem starts at a very young age for many children and that economically-disadvantaged children are at a greater risk of requiring treatment. The study also found that children, who underwent extractions under GA at an early age, demonstrated poor oral health into adolescence.”
The Irish Dental Association is calling on the Minister for Health Simon Harris to introduce preventive programmes targeting preschool-aged children to tackle the high levels of dental caries within this age group and to provide a comprehensive preventive dental health programme for every child under 12 as promised in the Programme for Government.
“Having teeth extracted under GA is a very traumatic event for a young person. In the vast majority of cases it is preventable. The HSE often says it cannot introduce programmes due to lack of funds, but this study shows clearly that prevention is far more cost effective than treatment.”
“While the problem begins in pre-school, it doesn’t end there. In many parts of the country school screenings which should happen in 2nd, 4th and 6th classes are just not happening due to chronic staff shortages in the Public Dental Service. In 2015, 16,000 who were due to be screened did not receive an examination. Unless the staff shortages are addressed as a matter of urgency our young people and other vulnerable groups will continue to suffer the consequences of this neglect” Dr Dalton concluded. | <urn:uuid:784c9f00-a671-4f0d-a5a6-5d66ac758d32> | {
"dump": "CC-MAIN-2017-30",
"url": "https://www.dentist.ie/latest-news/leading-public-service-dentist-calls-for-introduction-of-oral-health-programme-for-pre-school-children.6932.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423769.10/warc/CC-MAIN-20170721102310-20170721122310-00441.warc.gz",
"language": "en",
"language_score": 0.9702637791633606,
"token_count": 545,
"score": 2.890625,
"int_score": 3
} |
General Tips on Optimizing SQL Server Indexes
SQL Server 2000 offers a function called CHECKSUM. The main purpose of this function is to create what are called hash indices. A hash index is an index built on a column that stores the checksum of the data found in another column in the table. The CHECKSUM function takes data from another column and creates a checksum value. In other words, the CHECKSUM function is used to create a mostly unique value that represents other data in your table. In most cases, the CHECKSUM value will be much smaller than the actual value. For the most part, checksum values are unique, but this is not guaranteed: it is possible for two slightly different values to produce the same CHECKSUM value.
Here’s how this works using a music database example. Say we have a song with the title “My Best Friend is a Mule from Missouri”. As you can see, this is a rather long value, and adding an index to the song title column would make for a very wide index. But in this same table, we can add a CHECKSUM column that takes the title of the song and creates a checksum based on it. In this case, the checksum would be 1866876339. The CHECKSUM function always works the same, so if you perform the CHECKSUM function on the same value many different times, you would always get the same result.
So how does the CHECKSUM help us? The advantage of the CHECKSUM function is that instead of creating a wide index by using the song title column, we create an index on the CHECKSUM column instead. “That’s fine and dandy, but I thought you wanted to search by the song’s title? How can anybody ever hope to remember a checksum value in order to perform a search?”
Here’s how. Take a moment to review this code:
SELECT title, artist, composer
FROM songs -- assuming the song data is stored in a table named songs
WHERE title = 'My Best Friend is a Mule from Missouri'
AND checksum_title = CHECKSUM('My Best Friend is a Mule from Missouri')
In this example, it appears that we are asking the same question twice, and in a sense, we are. The reason we have to do this is because there may be checksum values that are identical, even though the names of the songs are different. Remember, unique checksum values are not guaranteed.
Here’s how the query works. When the Query Optimizer examines the WHERE clause, it determines that there is an index on the checksum_title column. And because the checksum_title column is highly selective (minimal duplicate values) the Query Optimizer decides to use the index. In addition, the Query Optimizer is able to perform the CHECKSUM function, converting the song’s title into a checksum value and using it to locate the matching records in the index. Because an index is used, SQL Server can quickly locate the rows that match the second part of the WHERE clause. Once the rows have been narrowed down by the index, then all that has to be done is to compare these matching rows to the first part of the WHERE clause, which will take very little time.
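If you want to try this yourself, the setup might look something like the following. The table and column names are illustrative only (the article does not spell out a schema), and in practice you would also need to keep checksum_title in step with title, for example by computing it in your insert and update code:
-- Illustrative schema: a songs table with a stored checksum of each title
ALTER TABLE songs ADD checksum_title INT
GO
-- Populate the checksum column from the existing titles
UPDATE songs SET checksum_title = CHECKSUM(title)
GO
-- Index the narrow integer column instead of the wide title column
CREATE NONCLUSTERED INDEX ix_songs_checksum_title ON songs (checksum_title)
GO
An alternative is to define checksum_title as a computed column, an approach discussed later in this article.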
This may seem a lot of work to shorten the width of an index, but in some cases, this extra work will pay off in better performance in the long run.
Because of the nature of this tip, I suggest you experiment using this method, along with trying the more conventional method of creating an index on the title column itself. Since there are so many variables to consider, it is tough to know which method is better in your particular situation unless you give them both a try. [2000, 2005] Updated 10-4-2004
Some queries can be very complex, involving many tables, joins, and other conditions. I have seen some queries run over 1000 lines of code (I didn’t write them). This can make them difficult to analyze in order to identify what indexes might be used to help the query perform better.
For example, perhaps you want to create a covering index for the query and you need to identify the columns to include in the covering index. Or, perhaps you want to identify those columns that are used in joins in order to check to see that you have indexes on those columns used in the joins in order to maximize performance.
To make complex queries easier to analyze, consider breaking them down into their smaller constituent parts. One way to do this is to simply create lists of the key components of the query, such as:
- List all of the columns that are to be returned
- List all of the columns that are used in the WHERE clause
- List all of the columns used in the JOINs (if applicable)
- List all the tables used in JOINs (if applicable)
Once you have the above information organized in this easy-to-comprehend form, it is must easier to identify those columns that could potentially make use of indexes when executed. [6.5, 7.0, 2000, 2005] Updated 10-4-2004
Queries that include either the DISTINCT or the GROUP BY clauses can be optimized by including appropriate indexes. Any of the following indexing strategies can be used:
- Include a covering, non-clustered index (covering the appropriate columns) of the DISTINCT or GROUP BY clauses.
- Include a clustered index on the columns in the GROUP BY clause.
- Include a clustered index on the columns found in the SELECT clause.
Adding appropriate indexes to queries that include DISTINCT or GROUP BY is most important for those queries that run often. If a query is rarely run, then adding an index may cause more performance problems than it helps. [7.0, 2000, 2005] Updated 10-4-2004
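As a sketch of the first strategy, suppose (hypothetically) that you frequently group an orders table by customer: a composite non-clustered index on the grouping column and the aggregated column lets SQL Server answer the query from the index alone, without touching the underlying table pages:
-- Hypothetical example: cover a GROUP BY query with a composite index
CREATE NONCLUSTERED INDEX ix_orders_customer_total ON orders (customer_id, order_total)
GO
-- This query can now be satisfied entirely from the index
SELECT customer_id, SUM(order_total) AS total_spent
FROM orders
GROUP BY customer_id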
Computed columns in SQL Server 2000 can be indexed if they meet all of the following criteria:
- The computed column’s expression is deterministic. This means that the computed value must always be the same given the same inputs.
- The ANSI_NULLS connection-level setting was ON when the table was created.
- TEXT, NTEXT, or IMAGE data types are not used in the computed column.
- The physical connection used to create the index, and all connections used to INSERT, UPDATE, or DELETE rows in the table, must have these SET options properly configured: ANSI_NULLS = ON, ANSI_PADDING = ON, ANSI_WARNINGS = ON, ARITHABORT = ON, CONCAT_NULL_YIELDS_NULL = ON, QUOTED_IDENTIFIER = ON, NUMERIC_ROUNDABORT = OFF.
If you create a clustered index on a computed column, the computed values are stored in the table, just like with any clustered index. If you create a non-clustered index, the computed value is stored in the index, not in the actual table.
While adding an index to a computed column is possible, it is rarely advisable. The biggest problem with doing so is that if the computed column changes, then the index (clustered or non-clustered) has to also be updated, which contributes to overhead. If there are many computed values changing, this overhead can significantly hurt performance.
The most common reason you might consider adding an index to a computed column is if you are using the CHECKSUM() function on a large character column in order to reduce the size of an index. By using the CHECKSUM() of a large character column, and indexing it instead of the large character column itself, the size of the index can be reduced, helping to save space and boost overall performance. Updated 10-4-2004
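A minimal sketch of that approach, again with purely illustrative table and column names, might look like this (keeping in mind the SET option requirements listed above):
-- Computed column holding the checksum of a wide character column
ALTER TABLE documents ADD title_checksum AS CHECKSUM(doc_title)
GO
-- Index the computed column; the checksum values are stored in the index
CREATE NONCLUSTERED INDEX ix_documents_title_checksum ON documents (title_checksum)
GO
-- Queries should still compare the original column as well, since
-- CHECKSUM values are not guaranteed to be unique
SELECT doc_id
FROM documents
WHERE doc_title = 'Some long document title'
AND title_checksum = CHECKSUM('Some long document title')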
Many databases experience both OLTP and OLAP queries. As you probably already know, it is nearly impossible to optimize the indexing of a database that handles both types of queries. This is because in order for OLTP queries to be fast, there should not be so many indexes as to hinder INSERT, UPDATE, or DELETE operations. And for OLAP queries to be fast, there should be as many indexes as needed to speed SELECT queries.
While there are many options for dealing with this dilemma, one option that may work for some people is a strategy where OLAP queries are mostly (if not all) run during off hours (assuming the database has any off hours), taking advantage of indexes that are added each night before the OLAP queries begin and then dropped once the OLAP queries are complete. This way, the indexes needed for fast-performing OLAP queries will minimally interfere with OLTP transactions (especially during busy times).
As you can imagine, this strategy can take a lot of planning and work, but in some cases, it can offer the best performance for databases that experience both OLTP and OLAP queries. Because it is hard to guess if this strategy will work for you, you will want to test it before putting it into production. [6.5, 7.0, 2000, 2005] Updated 10-4-2004
Be aware that the MIN() or MAX() functions can take advantage of appropriate indexes. If you find that you are using these functions often, and your current query is not taking advantage of current indexes to speed up these functions, consider adding appropriate indexes. [6.5, 7.0, 2000] Updated 10-4-2004
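For example (the table and column names here are hypothetical), an index on the aggregated column lets the optimizer resolve MIN() and MAX() with a quick seek to one end of the index instead of scanning the table:
CREATE NONCLUSTERED INDEX ix_orders_order_date ON orders (order_date)
GO
-- Both of these can be resolved from the ends of the index
SELECT MIN(order_date) AS first_order FROM orders
SELECT MAX(order_date) AS last_order FROM orders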
If you know that a particular column will be subject to many sorts, consider adding a unique index to that column. This is because unique columns generally sort faster in SQL Server than if there are duplicate column data present. [6.5, 7.0, 2000, 2005] Updated 10-4-2004
Whenever you upgrade software that affects SQL Server, get a new Profiler trace and run it against the Index Tuning Wizard or Database Engine Tuning Advisor to catch any obvious missing indexes that may be needed as a result of the upgrade. Application software that is updated often changes the Transact-SQL code (and stored procedures, if used) that accesses SQL Server. In many cases, the vendor supplying the upgraded code may not have taken into account how index use might have changed after the upgrade was made. You may be surprised what you find. [6.5, 7.0, 2000, 2005] Added 4-19-2005
DELETE operations can sometimes be time- and space-consuming. In some environments you might be able to increase the performance of this operation by using TRUNCATE instead of DELETE. TRUNCATE will almost instantly be executed. However, TRUNCATE will not work when there are Foreign Key references present for that table. A workaround is to DROP the constraints before firing the TRUNCATE. Here’s a generic script that will drop all existing Foreign Key constraints on a specific table:
CREATE TABLE dropping_constraints (cmd VARCHAR(8000))
INSERT INTO dropping_constraints
SELECT 'ALTER TABLE [' + t2.TABLE_NAME + '] DROP CONSTRAINT ' + t1.CONSTRAINT_NAME
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS t1
INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_TABLE_USAGE t2
ON t1.CONSTRAINT_NAME = t2.CONSTRAINT_NAME
WHERE t2.TABLE_NAME = 'your_table_name' -- placeholder: the table whose FK constraints you want to drop
DECLARE @stmt VARCHAR(8000)
DECLARE @rowcnt INT
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
WHILE @rowcnt <> 0
BEGIN
EXEC (@stmt)
SET @stmt = 'DELETE FROM dropping_constraints WHERE cmd = ' + QUOTENAME(@stmt, '''')
EXEC (@stmt)
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
END
DROP TABLE dropping_constraints
This can also be extended to drop all FK constraints in the current database. To achieve this, just comment out the WHERE clause. [6.5, 7.0, 2000] Added 4-19-2005 | <urn:uuid:934ace00-eb5b-42d2-8f0c-3e564267fec3> | {
"dump": "CC-MAIN-2014-35",
"url": "http://www.sql-server-performance.com/2007/optimizing-indexes-general/3/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834663.62/warc/CC-MAIN-20140820021354-00120-ip-10-180-136-8.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8928527235984802,
"token_count": 2504,
"score": 3.171875,
"int_score": 3
} |
Problems can begin as early as when the child is born. Birth trauma and stress from a difficult labor, the use of forceps, or a Caesarean section delivery can injure your baby's spine. This can cause a vertebral subluxation which restricts your baby's joint motion.
As children begin to walk, they take hundreds of falls. In the early years they are always falling off bikes, out of trees, etc. All of these small traumas add up and eventually start to affect the spine.
How many of you have taken young children to have their teeth or their eyes checked? Most of you probably have. Now how many of you have taken your child to a chiropractor to have his or her spine checked? If you never have, I wonder why not? People will think nothing of having braces put on their kid's teeth and spending thousands of dollars just so they have straight teeth. But they will never have their spine checked to see if it is straight.
The research regarding the effect of chiropractic care on kids is amazing. A 1989 study which compared 200 children who had seen pediatricians with 200 children who had been under chiropractic care showed that the health of the children who had seen chiropractors was notably superior to that of children brought up under standard medical care. The chiropractic children, for example, had fewer ear infections, allergies, and cases of tonsillitis, and therefore required less medical care.
It has been proven that adjusting the spine to relieve nerve pressure has a direct effect on the immune system. It also makes sense that a better functioning nervous system makes
for a better functioning body as a whole. | <urn:uuid:b27b14e3-c2bb-43b2-a99c-0d28354dc5d5> | {
"dump": "CC-MAIN-2020-10",
"url": "http://drscottcady.com/ChiroKids.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143373.18/warc/CC-MAIN-20200217205657-20200217235657-00008.warc.gz",
"language": "en",
"language_score": 0.9566781520843506,
"token_count": 366,
"score": 2.640625,
"int_score": 3
} |
Gifted and Talented Supervisor
Can my child be both gifted and underachiever?
What are some Causes?
What can I do?
CAN MY CHILD BE BOTH GIFTED AND AN UNDERACHIEVER IN SCHOOL?
Frequently Asked Questions of the National Association for Gifted Children
Many intellectually and creatively gifted children do not achieve to their abilities in school. Although parents and teachers are typically aware of how bright these children are, they are puzzled by students' lack of motivation and productivity. Furthermore, as school performance declines, even parents and teachers begin to wonder whether the students are as capable as test scores and earlier performance indicated. Frequently, the children themselves lose confidence in their ability to perform in school.
What are some signs of academic underachievement?
These young people will become immersed in learning of their choice, will read continuously, or escape to computers rather than complete school assignments. They may be active but selective learners and refuse to do required school work.
When should underachievement be considered a problem?
Bright kids should not be expected to receive "A" grades in everything. In fact, students who complete almost all their work perfectly may not be sufficiently challenged. All students should be expected to have strengths and weaknesses, as well as subjects they find more and less interesting. Underachievement should be considered a problem if it is severe (achievement well below grade level), is long standing (occurring over more than one school year), or is causing the student distress.
NAGC thanks Sylvia Rimm, Ph.D., for developing this FAQ brochure. Dr. Rimm is a psychologist, clinical professor at Case Western Reserve University School of Medicine, and Director of the Family Achievement Clinic in Cleveland, Ohio.
Copyright 2008 National Association for Gifted Children
What causes gifted children to underachieve?
Underachievement has complex causes, so it is important not to over-simplify the problem. Gifted children may not themselves understand why they are underachieving. Usually school and home causes combine to set the pattern in motion.
Possible School Issues
Possible Home Issues
Many children do overcome their underachievement; others continue similar patterns throughout adult life. If the pattern has continued for more than one school year, it is important to get help. It is easier to change a pattern if you identify it early. Following are some suggestions for getting help: | <urn:uuid:d339e191-fec7-44ff-aa61-83198e83bf40> | {
"dump": "CC-MAIN-2019-22",
"url": "https://district.ops.org/DEPARTMENTS/CurriculumandInstructionSupport/GiftedandTalented/SpecialTopics/Underachievement.aspx",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257553.63/warc/CC-MAIN-20190524064354-20190524090354-00044.warc.gz",
"language": "en",
"language_score": 0.9380257725715637,
"token_count": 524,
"score": 2.765625,
"int_score": 3
} |
[Read more session reports and live updates from the EuroDig 2019]
The session followed the themes raised in the UN Convention of the Rights of the Child (UNCRC). It addressed the question of children’s rights online, their awareness of these rights, and what it means to exercise these rights within the Internet.
Ms Veronica Stefan (Co-ordinator and Founder, Digital Citizens Romania) began by stating that adults should teach children about their rights online, because the digital world is still shaped by adults. On the other hand, adults are not necessarily experts on this subject. Stefan underlined that while digital rights belong to everyone nominally, youth and children are most often excluded from the debates on Internet governance. She stressed that while social media is a very powerful tool for campaigning and advocacy, a more structured approach to participation in the digital era is needed; and it should be based on policies, meetings with decision makers, associations, etc. Stefan also highlighted the issue of access to rights rather than awareness of rights.
Mr João Pedro Martins (Youth Ambassador, Better Internet for Kids) stated that children often do not know whether their rights are being violated or not. Martins underlined that the perspective of youth is often given in a raw form, and their messages have to be translated in order to be integrated into legislation or considered in the application of initiatives. He noted that it is difficult for parents to present their children with solutions to online challenges, as often they do not know what mechanisms exist to protect children. He also noted that children should know who has the responsibility to act in situations when their rights are infringed.
Ms Simone van der Hof (Professor of Law and Information Society, University of Leiden) pointed out that the legal language used to discuss a matter related to children’s rights online does not resonate with children themselves. She also underlined that it is necessary to understand what children are doing online in order to engage with them about their online activities. Van der Hof also pointed out the issue of the digital divide – there are children that use the Internet for entertainment and socialising, children that use it to further their opportunities, as well as children with problems and vulnerable children. Children are not a homogeneous group and discussions should reflect this.
Ms Eva Lievens (Assistant Professor of Law & Technology, University of Ghent) stated that children are able to recognise what rights they have. She stated that even though UNCRC is 30 years old in 2019, it is a solid instrument whose articles can be reinterpreted in order to ensure the rights enshrined in the Convention are protected in the digital world. She stated that normative ways of involving youth and children exist and that participation by children and youth need to be taken seriously by the policymakers at all levels and by different actors. Lievens highlighted the Council of Europe (CoE) Guidelines to respect, protect and fulfil the rights of the child in the digital environment, an effort of 47 member states of the CoE to adopt a similar approach to these issues. According to Lievens, when it comes to data protection, the responsibility for practices that might actually be harmful or intrusive to children's lives should be on data controllers.
Mr Charalampos Kyristis (Member of the Organising Team, YouthDIG) opined that most adults do not know their own rights, which brings forth the question of who will explain to the children what their rights are. He also stated that more initiatives that support youth participation, such as YouthDIG or SEEDIG Youth School, need to be established. Kyristis pointed out that more digital education on all levels – including parents and teachers – is needed. He also opined that children and youth are afraid to stand up for themselves online because they are afraid they will be subjected to cyber-bullying and bullying.
Ms Liliane Leißer (Project Assistant, Österreichisches Institut für angewandte Telekommunikation) presented an idea developed during YouthDIG 2019 called Smart Active Participative Algorithm (SAPA). The purpose of this algorithm is to replace some of the ads youth are exposed to, while browsing on the Internet, with information about youth participation initiatives. SAPA will suggest differentiated opportunities based on the age of the users, in order to empower the engagement of people of all ages towards youth initiatives through the Internet.
Ms Hildur Halldórsdóttir (Project Manager, SAFT Safer Internet Centre in Iceland) stated that it is important to enhance digital literacy among children and youth. However, it is important to teach them that the same ethics and empathy that apply in real life also apply online. Halldorsdottir also noted that digital literacy contains critical thinking. Parents need to be educated on children’s rights to protection and then relay this knowledge to their children.
By Andrijana Gavrilović | <urn:uuid:e34c0148-52aa-4b2f-b4a7-f896ea2eda8c> | {
"dump": "CC-MAIN-2019-35",
"url": "https://dig.watch/sessions/children-digital-age-how-balance-their-right-freedom-and-their-right-be-protected",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315936.22/warc/CC-MAIN-20190821110541-20190821132541-00162.warc.gz",
"language": "en",
"language_score": 0.9589394927024841,
"token_count": 1009,
"score": 2.59375,
"int_score": 3
} |
Whether or not it’s actually said on a movie or television set, “Take Five” is a fairly familiar phrase that means take a break. And there are surely times in our worlds when we need to take a break.
Here’s a simple tool to learn so that we can take a safe, healthy break when we need to. This tool could help us avoid angry actions (which we might regret later). It might help some of us avoid using an unhealthy coping skill.
The next time you find yourself facing a conflict or stressor…
TAKE FIVE (or more) steps away from the source of the conflict.
TAKE FIVE slow, deep breaths. Visualize the oxygen circulating through your body and visualize the stress/anger exiting as you exhale. (Deep breathing can really change how we feel!)
TAKE FIVE minutes to carefully, calmly consider if or how best to respond to the situation. If it isn’t literally on fire, you can take a little time before you respond.
Take five steps, breaths, and minutes. You deserve a break and you deserve to be free. Please learn it, memorize it, and use it. It works!
((Extra credit if you whistle “Take Five” by Dave Brubeck))
Live in Belgium 1964
Paul Desmond (alto sax), Joe Morello (drums), Eugene Wright (bass) and Dave Brubeck (piano) | <urn:uuid:25ea0dc7-40a9-4c04-8a05-e59dc8f113b5> | {
"dump": "CC-MAIN-2018-43",
"url": "https://quitterswin.blog/2018/05/25/take-five/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511872.19/warc/CC-MAIN-20181018130914-20181018152414-00025.warc.gz",
"language": "en",
"language_score": 0.8751879334449768,
"token_count": 305,
"score": 2.84375,
"int_score": 3
} |
Forget the all-night cramming. If you want to learn, get regular sleep!
A recently published study suggests that sleep helps people to remember newly learned words and incorporate them into their thinking. But the same principles are likely to apply to other types of learning. Sleep has a role to play in the reorganization of new memories.
J Tamminen, JD Payne, R Stickgold, EJ Wamsley, and MG Gaskell. Sleep Spindle Activity is Associated with the Integration of New Memories and Existing Knowledge. Journal of Neuroscience, Oct 2010; 30: 14356 – 14360.
Abstract:Sleep spindle activity has been associated with improvements in procedural and declarative memory. Here, for the first time, we looked at the role of spindles in the integration of newly learned information with existing knowledge, contrasting this with explicit recall of the new information. Two groups of participants learned novel spoken words (e.g., cathedruke) that overlapped phonologically with familiar words (e.g., cathedral). The sleep group was exposed to the novel words in the evening, followed by an initial test, a polysomnographically monitored night of sleep, and a second test in the morning. The wake group was exposed and initially tested in the morning and spent a retention interval of similar duration awake. Finally, both groups were tested a week later at the same circadian time to control for possible circadian effects. In the sleep group, participants recalled more words and recognized them faster after sleep, whereas in the wake group such changes were not observed until the final test 1 week later. Following acquisition of the novel words, recognition of the familiar words was slowed in both groups, but only after the retention interval, indicating that the novel words had been integrated into the mental lexicon following consolidation. Importantly, spindle activity was associated with overnight lexical integration in the sleep group, but not with gains in recall rate or recognition speed of the novel words themselves. Spindle activity appears to be particularly important for overnight integration of new memories with existing neocortical knowledge.
When the researchers examined whether newly learned words had been integrated with existing knowledge, they discovered the involvement of a different type of activity in the sleeping brain. Sleep spindles are brief but intense bursts of brain activity that reflect information transfer between different memory stores in the brain: the hippocampus, deep in the brain, and the neocortex, the surface of the brain.
Memories in the hippocampus are stored separately from other memories, while memories in the neocortex are connected to other knowledge. Volunteers who experienced more sleep spindles overnight were more successful in connecting the new words to the rest of the words in their mental lexicon, suggesting that the new words were communicated from the hippocampus to the neocortex during sleep.
New memories are only really useful if you can connect them to information you already know. For this, you need sleep. | <urn:uuid:dec2df36-ac83-4ce8-914b-65c206090194> | {
"dump": "CC-MAIN-2021-10",
"url": "https://nottoomuch.com/not-too-much-home/all-too-much/notes-and-nonsense/if-you-want-to-learn-get-some-sleep/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00494.warc.gz",
"language": "en",
"language_score": 0.960495114326477,
"token_count": 594,
"score": 3.28125,
"int_score": 3
} |
The Theory of Search Is the Economics of Discovery:
Some Thoughts Prompted by Sir David Hendry’s Essay *
in Rationality, Markets and Morals (RMM) Special Topic:
Statistical Science and Philosophy of Science
Part 1 (of 2)
Professor Clark Glymour
Alumni University Professor
Department of Philosophy[i]
Carnegie Mellon University
Professor Hendry* endorses a distinction between the “context of discovery” and the “context of evaluation” which he attributes to Herschel and to Popper and could as well have attributed also to Reichenbach and to most contemporary methodological commentators in the social sciences. The “context” distinction codes two theses.
1.“Discovery” is a mysterious psychological process of generating hypotheses; “evaluation” is about the less mysterious process of warranting them.
2. Of the three possible relations with data that could conceivably warrant a hypothesis—how it was generated, its explanatory connections with the data used to generate it, and its predictions—only the last counts.
Einstein maintained the first but not the second. Popper maintained the first but that nothing warrants a hypothesis. Hendry seems to maintain neither–he has a method for discovery in econometrics, a search procedure briefly summarized in the second part of his essay, which is not evaluated by forecasts. Methods may be esoteric but they are not mysterious. And yet Hendry endorses the distinction. Let’s consider it.
As a general principle rather than a series of anecdotes, the distinction between discovery and justification or evaluation has never been clear, and what has been said in favor of its implied theses has not made much sense, ever. Let’s start with the father of one of Hendry’s endorsers, William Herschel. William Herschel discovered Uranus, or something. Actually, the discovery of the planet Uranus was a collective effort that, subject to the vicissitudes of error and individual opinion, followed a rational search strategy. On March 13, 1781, in the course of a sky survey for double stars, Herschel reports in his journal the observation of a “nebulous star or perhaps a comet.” The object came to his notice because of how it appeared through the telescope, perhaps the appearance of a disc. Herschel changed the magnification of his telescope, and finding that the brightness of the object changed more than the brightness of fixed stars, concluded he had seen a comet or “nebulous star.” Observations that, on later nights, it had moved eliminated the “nebulous star” alternative and Herschel concluded that he had seen a comet. Why not a planet? Because lots of comets had been hitherto observed—Edmund Halley computed orbits for half a dozen including his eponymous comet—but never a planet. A comet was much the more likely on frequency grounds. Further, Herschel had made a large error in his estimate of the distance of the body based on parallax values using his micrometer. A planet could not be so close.
Herschel communicated his observations to the British observatories at Oxford and Greenwich, which took up the observations, as soon did astronomers on the continent. Maskelyne quickly concluded the object was a planet, not a comet, but multiple attempts were made on the continent to fit a parabolic orbit. Every further observation conflicted with whichever parabolic hypothesis had been fitted to the previous data. By early summer of 1781 Lexell had computed a circular orbit and a (very accurate) distance using the extreme observations, including Herschel’s original observation, but the accumulating data showed an eccentricity. The elements of the elliptic orbit were given by Laplace early in 1783. In all, it took nearly two years for Herschel’s object to be certified a planet.
There was a logic to the process, but there is no natural place to chop it into context of discovery and context of justification, and no value in chopping it anywhere. Herschel had a criterion for noting an anomalous object—the appearance of a disc. An anomalous object could be one of three kinds: a comet, a planet, an anomalous star. Herschel had a quick test—changing magnification—that eliminated the stellar option. Based on the history of astronomical observations a comet was far more likely than a new planet, and that option was investigated first, by attempts to compute a parabolic orbit that would predict subsequent observations. When that failed, and the body failed to show distinctive signs of a comet—a tail for example—the planet hypothesis was resorted to, first by computing the simplest orbit, a circle, and when that failed, the elliptical orbit.
The example is a near-paradigm of how to recognize and determine salient properties of a rare object. First, a cheap criterion for recognizing a candidate; then an ordering of the alternative hypotheses based on prior data; then applying the criterion for the likeliest candidate—a parabolic orbit for a comet—and, testing on further observations, rejecting it. Then applying the criterion for the remaining candidate in its simplest form, rejecting it, applying the criterion in a more complex form, and succeeding. There is a collective decision flow chart, which I leave to the reader.
What is striking is the economy of the procedure. Cheap tests (noting the visual features of the object, changing the magnification) were applied first; tests requiring more data and more calculation (computing orbits) were applied only when the cheap tests were passed. The alternative explanations were examined in the order of their prior probability, which was also in some respects the order of their data requirements (elliptic orbits required more data than parabolic orbits), so that if the most probable hypothesis succeeded (which it did not) the effort of testing the less probable would be avoided. A cheap (in data requirements and calculational effort) test (the circular orbit) of the less probable hypothesis was applied before the more demanding, and ultimately successful, test.
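The cost-ordered strategy just described is easy to set down schematically. The sketch below is not from Glymour's essay and is not a reconstruction of any actual astronomical calculation; the hypothesis names, prior probabilities, test costs and test outcomes are invented stand-ins for the 1781 episode, meant only to show how cheap tests and prior plausibility can order a search:

    # Illustrative only: try cheap tests and likely explanations first,
    # escalating to costlier tests only when the earlier candidates fail.
    def classify(observations, hypotheses):
        """Return the first hypothesis whose test the observations satisfy.

        Each hypothesis is a (name, prior, cost, test) tuple; `test` maps the
        observation dict to True/False. Candidates are tried in order of
        increasing cost, breaking ties in favor of the higher prior.
        """
        for name, prior, cost, test in sorted(hypotheses, key=lambda h: (h[2], -h[1])):
            if test(observations):
                return name
        return "unclassified"  # nothing fits yet: gather more observations

    # Invented stand-ins for the Uranus episode of 1781-1783.
    hypotheses = [
        ("fixed star", 0.90, 1, lambda o: not o["brightness_varies_with_magnification"]),
        ("comet on a parabolic orbit", 0.09, 5, lambda o: o["parabolic_orbit_fits"]),
        ("planet on a circular orbit", 0.009, 8, lambda o: o["circular_orbit_fits"]),
        ("planet on an elliptical orbit", 0.001, 13, lambda o: o["elliptical_orbit_fits"]),
    ]
    observations = {
        "brightness_varies_with_magnification": True,   # Herschel's magnification test
        "parabolic_orbit_fits": False,                  # the repeated failures of 1781
        "circular_orbit_fits": False,                   # Lexell's circle eventually fails
        "elliptical_orbit_fits": True,                  # Laplace's elements of 1783
    }
    print(classify(observations, hypotheses))           # -> planet on an elliptical orbit

The point of the toy is only the ordering: the expensive elliptical-orbit computation is never attempted until the cheaper, more probable candidates have been eliminated.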
There is a lesson in the example. Testing is a cog in the machine of search, a part of the process of discovery, not a separate thing, “justification.” And a larger lesson: there are strategies of search, better and worse in their conditions of success and better and worse in the costs they risk. In a slogan, the theory of search is the economics of discovery. The slogan is only semi-original. Peirce described the process of abduction/deduction/test/abduction as the “economics of research.” As usual, he was inspired, but “abduction” never came to anything practical.
Could there be search procedures behind the discovery of things more abstract than planets? Cannizzaro’s discovery of the values of relative atomic weights might serve as an example. But what about real big juicy theories, the theory of relativity say? We don’t know precisely what went on in the brains of Einstein and Hilbert, the two discoverers of the theory, but we know something about what they knew or assumed that constrained their respective searches: they wanted field theories, which meant partial differential equations; they wanted the equations to be coordinate independent, or covariant; they wanted the equivalence principle—unforced motions would follow geodesics of a metric; they wanted a theory to account for the anomalous advance of Mercury’s perihelion. It is not at all implausible that an automated search with those constraints would turn up the field equations of general relativity as the most constrained explanation.
In sum, search methods are pretty common in science if not always explicit. They could be made a lot more common if the computer and the algorithms it allows were put to work in explicit search methods. Both philosophers and statisticians warn against search methods, even as statisticians use them—variable selection by regression is a search method, and so is principal components factor analysis. Herschel and Popper and Reichenbach seem not to have imagined the possibility of automated search, but Hempel did. Hempel claimed such searches could never discover anything novel because a computational procedure could not introduce “novel” properties. (Little did he know.) A thousand metaphors are apt: Search is the wandering Jew of methodology, useful but detested, a real enough bogeyman to scare graduate students. But even enlightened spirits who do not truck with such anti-search bigotry often imply that the fact that a hypothesis was found by a search method cannot itself be evidence for the hypothesis. And that is just wrong.
A search procedure can be thought of as a statistical estimator, and in many applications that is not just a way of thinking but the very thing. There is assumed a space, possibly infinite, of possible hypotheses. There is a space of possible data samples, each of which may be ordered (as in time) or not. A search procedure is a partial function from samples to subsets of the hypothesis space. If the hypotheses are probabilistic, then all of the usual criteria for statistical estimators are well defined: consistency, bias, rate of convergence, efficiency, sufficiency and so on. Some of the usual theorems of estimation theory may not apply in some cases because the search setup is not restricted to parametric models and the search need not be a point estimator—i.e., it may return a set of alternative models.
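To make that definition concrete, here is a small, entirely invented illustration (it is not one of the algorithms discussed in this essay, and it assumes the NumPy library is available). The hypothesis space is the set of subsets of three candidate predictors, the sample is (y, X), and the search maps the sample to the set of all subsets whose BIC score comes within a tolerance of the best, so it is set-valued rather than a point estimator:

    # Illustrative only: a toy model search written as a set-valued estimator.
    import itertools
    import numpy as np

    def bic_score(y, X):
        """Gaussian BIC of an ordinary least-squares fit of y on X plus an intercept."""
        n = len(y)
        design = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        sigma2 = max(resid @ resid / n, 1e-12)          # guard against log(0)
        return n * np.log(sigma2) + design.shape[1] * np.log(n)

    def search(y, X, names, tol=2.0):
        """Map the sample (y, X) to a subset of the hypothesis space: every
        subset of the candidate predictors whose BIC is within `tol` of the
        best score. The answer may contain more than one model."""
        scores = {
            subset: bic_score(y, X[:, list(subset)])
            for k in range(len(names) + 1)
            for subset in itertools.combinations(range(len(names)), k)
        }
        best = min(scores.values())
        return {tuple(names[i] for i in s) for s, v in scores.items() if v <= best + tol}

    # Simulated data in which only x0 and x2 influence y.
    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 3))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)
    print(search(y, X, ["x0", "x1", "x2"]))             # typically {('x0', 'x2')}

Re-running the toy at increasing sample sizes and watching the returned set shrink toward the true model is the simulation analogue of checking consistency; nothing depends on the particular score, and it is exactly this estimator framing that lets one ask about consistency, bias and convergence rates for much more serious searches.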
Statistical estimators have epistemic relationships and trade-offs. Some estimators have convergence theorems that hold under weaker assumptions than other estimators. The trade-off is usually in reduced information in cases in which the stronger assumptions are actually true. Hence “robust statistics.” The same is true of search methods. Linear regression as a method of searching for causal relations is pointwise consistent under extraordinarily strong assumptions; FCI, by now an old standard in graphical model search, has much weaker sufficient conditions for pointwise consistency but provides much less information even in circumstances in which regression assumptions are correct. In other cases, there is something akin to dominance. The PC algorithm, for example, gives the same causal information as regression whenever regression does (given the information that the predictors are not effects of the outcome) but also in many cases when regression does not.
I think ordinary parameter estimation provides evidence for the estimated parameter values. The quality of the evidence depends of course on the properties of the estimator: Is it consistent? What is the variance of the estimate? What is the rationale for the hypothesis space—the parametric family of probability distributions for which properties of the estimation function have been proved? But issues of quality do not undermine the general principle that parameter estimates are evidence for and against parameter values. So it is with model search: there is better and worse.
*Hendry, D. (2011) “Empirical Economic Model Discovery and Theory Evaluation”, in Rationality, Markets and Morals, Volume 2 Special Topic: Statistical Science and Philosophy of Science, Edited by Deborah G. Mayo, Aris Spanos and Kent W. Staley: 115-145.
[i] Clark Glymour is also a Senior Research Scientist at IHMC (Florida Institute for Human and Machine Cognition).
He works on machine learning, especially on methods for automated causal inference, on the psychology of human causal judgement, and on topics in mathematical psychology.
His books include:
Theory and Evidence (Princeton, 1980);
Examining Holistic Medicine (with D. Stalker), Prometheus, 1985;
Foundations of Space-Time Theories (with J. Earman), University of Minnesota Press, 1986;
Discovering Causal Structure (with R. Scheines, P. Spirtes and K. Kelly), Academic Press, 1987;
Causation, Prediction and Search (with P. Spirtes and R. Scheines), Springer, 1993; 2nd Edition, MIT Press, 2001;
Thinking Things Through, MIT Press, 1994;
Android Epistemology (with K. Ford and P. Hayes) MIT/AAAI Press, 1996;
Bayes Nets and Graphical Causal Models in Psychology, MIT Press, 2001.
Galileo in Pittsburgh, Harvard University Press, 2010;
Logic, Methodology and Philosophy of Science, College Publications, 2010 (with Wang Wei and Dag Westerstahl, eds.)
"dump": "CC-MAIN-2015-27",
"url": "http://errorstatistics.com/2012/07/22/c-glymour-the-theory-of-search-is-the-economics-of-discovery/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099361.57/warc/CC-MAIN-20150627031819-00245-ip-10-179-60-89.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9403368830680847,
"token_count": 2566,
"score": 2.71875,
"int_score": 3
} |
Fezzan-Ghadames (French Administration)
Military Territory of Fezzan-Ghadames
Territoire Militaire du Fezzan-Ghadamès (French); إقليم العسكرية من فزان-غدامس (Arabic)
Map of Libya during World War II, showing Fezzan
Political structure: Military Administration
1943: Raymond Jean Marie Delange
Hakim / Wāli, 1946–1951: Ahmad Sayf an-Nasr
Occupation of Sabha by French forces: 12 January 1943
Established: 11 April 1943
Joined Tripolitania and Cyrenaica to form the Kingdom of Libya: 24 December 1951
Today part of: Libya
Military Territory of Fezzan-Ghadames was a territory in the southern part of the former Italian colony of Libya controlled by the French from 1943 until the Libyan independence in 1951. It was part of the Allied administration of Libya.
Free French forces from French Chad occupied the area that was the former Italian Southern Military Territory and made several requests to administratively annex Fezzan to the French colonial empire. The administrative personnel remained the former Italian bureaucrats.
The British administration began the training of a badly needed Libyan civil service. Italian administrators continued to be employed in Tripoli, however. The Italian legal code remained in effect for the duration of the war. In the lightly populated Fezzan region, a French military administration formed a counterpart to the British operation. With British approval, Free French forces moved north from Chad to take control of the territory in January 1943. French administration was directed by a staff stationed in Sabha, but it was largely exercised through Fezzan notables of the family of Sayf an Nasr. At the lower echelons, French troop commanders acted in both military and civil capacities according to customary French practice in the Algerian Sahara. In the west, Ghat was attached to the French military region of southern Algeria and Ghadamis to the French command of southern Tunisia – giving rise to Libyan nationalist fears that French intentions might include the ultimate detachment of Fezzan from Libya.
Fezzan joined Tripolitania and Cyrenaica to form the Kingdom of Libya on 24 December 1951. It was the first country to achieve independence through the United Nations and one of the first former European possessions in Africa to gain independence. | <urn:uuid:d74c7b32-4afe-4cd8-b717-56209e0e71cd> | {
"dump": "CC-MAIN-2015-18",
"url": "http://en.wikipedia.org/wiki/Military_Territory_of_Fezzan-Ghadames",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430456861731.32/warc/CC-MAIN-20150501050741-00021-ip-10-235-10-82.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9291583895683289,
"token_count": 553,
"score": 3.03125,
"int_score": 3
} |
About Pinto Beans
- Contain the most fiber of any bean
- Popular in U.S. Tex Mex and Latin American cuisine
- Favorite bean for making chili
The Pinto Bean Story
Both the lima and the pinto (Spanish for “painted”) bean were cultivated by early Mexican and Peruvian civilizations more than 5,000 years ago. Pinto beans, kidney beans, navy beans, pink beans, Great Northern beans, and black beans are referred to as “common beans” and are classified as the same species. Pintos contain the most fiber of all beans and are the most popular bean consumed in the United States. They are a favorite in the American West; in fact, Dove Creek, CO, claims that it’s the Pinto Bean Capital of the World!
Pintos are small but flavorful and are a central part of the cuisine of many Latin American countries. They are prepared in refried beans and chile con carne and are typically served with rice. Pintos are also used in three-bean salads, minestrone soup, stews, and casseroles. Because of their similarity, pinto beans and pink beans are often used interchangeably. | <urn:uuid:a249eee6-3423-414c-a3cb-ca81151abc0e> | {
"dump": "CC-MAIN-2021-43",
"url": "https://louisianapantry.com/product/pinto-beans/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587963.12/warc/CC-MAIN-20211026231833-20211027021833-00000.warc.gz",
"language": "en",
"language_score": 0.9481180906295776,
"token_count": 253,
"score": 2.875,
"int_score": 3
} |
3 edition of Assessing English found in the catalog.
by Taylor & Francis Group
Written in English
The Physical Object
Number of Pages: 176
Assessing a Student's Level. Reading A-Z provides a three-part assessment process to help you place students in instructionally appropriate level texts. Find out at which level to start a student. Determine when a student is ready to move to the next level. The materials in this toolkit were developed as part of a British Council funded research project which explored teacher attitudes to assessment and their training needs. In this module we will explore why assessing young learners might be challenging and offer some practical suggestions.
Covering the most important aspects of assessing adult basic readers, this collection of articles is introduced by a discussion of the purpose of resource books for adult education. The first article then examines the rationale for assessing reading ability and stresses the importance of purpose and design in assessment. The second article reviews problems in defining reading and establishing. Assessing English Language Learners: Bridges to Educational Equity: Connecting Academic Language Proficiency to Student Achievement is a comprehensive resource that intends to build bridges that promote educational equity, particularly in the areas of instruction and assessment. The book consists of two parts each of which includes four chapters.
Margo, by the way, has written extensively about assessing ELLs. Recently, she co-authored a book, Assessment and Accountability in Language Education Programs: A Guide for Administrators and. The Unlikely Spy, Daniel Silva's extraordinary debut novel, was applauded by critics as it rocketed onto national bestseller lists."Briskly suspenseful, tightly constructed reminiscent of John le Carré's The Spy Who Came in from the Cold," said The New York Times. "Silva has clearly done his homework, mixing fact and fiction to delicious effect and building tension—with the.
Psychology and mental retardation
Revival and rebellion in colonial central Africa
Comparison of worldwide lift safety standards.
Studies in literature. 1789-1877
Status of land surveying in eastern Massachusetts
Defending His Own (The Protectors)
Collin County, Texas, marriage book 4, January 26, 1876 to June 22, 1880
Receiving the vision
CONVERSATIONAL INTERACTION AND SECOND LANGUAGE DEVELOPMENT: RECASTS, RESPONSES, AND RED HERRINGS?
America This Beautiful Land Cal 88
By the King
Marquess Cornwallis and the consolidation of British rule.
A new spelling book, adapted to the different classes of pupils
guide to the laser
The Major Operations of the Navies in the War American Independence
Additionally she has authored, co-authored, and co-edited 11 books this past decade: Assessing English Language Learners: Bridges to Educational Equity (2 nd Ed., ), Academic Language in Diverse Classrooms: Definitions and Contexts (with G.
Ernst-Slavit, ), a foundational book for the series Promoting Content and Language Learning (a /5(50). Additionally she has authored, co-authored, and co-edited 11 books this past decade: Assessing English Language Learners: Bridges to Educational Equity (2 nd Ed., ), Academic Language in Diverse Classrooms: Definitions and Contexts (with G.
Ernst-Slavit, ), a foundational book for the series Promoting Content and Language Learning (a Cited by: Build the bridges for English language learners to reach success.
This thoroughly updated edition of Gottlieb’s classic delivers a complete set of tools, techniques, and ideas for planning and implementing instructional assessment of ELLs. The book includes: A focus on academic language use in every discipline, Assessing English book mathematics to social.
Assessing English Language Proficiency in U.S. K–12 Schools offers comprehensive background information about the generation of standards-based, English language proficiency (ELP) assessments used in U.S. K–12 school settings. The chapters in this book address a variety of key issues involved in the development and use of those assessments: defining an ELP construct driven Author: Mikyung Kim Wolf.
Christina Coombe, Keith Folse, and Nancy Hubley. Ann Arbor, MI: The University of Michigan Press, Pp. xxx + Reviewed by Slobodanka Dimova, East Carolina University, USA. Christina Coombe, Keith Folse, and Nancy Hubley's book A Practical Guide to Assessing English Language Learners targets pre-service and in-service classroom teachers who have difficulties finding [ ].
Read’s book can be fully utilized by all of those involved in post-entry language assessment.” (Naoki Ikeda, Papers in Language Testing and Assessment, Vol. 6 (1), ) “The central topic of this book is the assessment of English language proficiency and academic literacy needs of students to enable them to cope with the demands of study Brand: Palgrave Macmillan UK.
Book Description ** WINNER OF ILTA/SAGE Best Book Award ** Assessing English for Professional Purposes provides a state-of-the-art account of the various kinds of language assessments used to determine people’s abilities to function linguistically in the workplace. At a time when professional expertise is increasingly mobile and diverse, with highly trained professionals migrating.
Assessing English Language Learners explains and illustrates the main ideas underlying assessment as an activity intimately linked to instruction and the basic principles for developing, using, selecting, and adapting assessment instruments and strategies to assess content knowledge in English language learners (ELLs).
Assessing Writing is a refereed international journal providing a forum for ideas, research and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional ('direct' and standardised forms of) testing of writing, alternative performance.
Discover how to bridge the gap between equitably assessing linguistic and academic performance. This well-documented text examines the unique needs of the growing population of English language learners (ELLs) and describes strategies for implementing instructional assessment of language and content/5.
`The English Assassin' is the second book in the Gabriel Allon saga but, unfortunately, it fails to live up to the promise of the first book. It starts well and, within a few pages, Gabriel is commissioned to restore an Old Master in Zürich on behalf the client of a London art dealer he's worked with on many occasions.
His life quickly become. guidelines do not focus on assessing English language proficiency, as defined under Title III.
Validity Issues in Assessing ELLs As noted in the ETS Standards for Quality and Fairness, validity is one of the most important attributes of an assessment. Validity is commonly referred to as the extent to which a. Assessment has its own culture, traditions, and terminology. This training guide is intended to help classroom teachers become more comfortable creating and using assessments.
A Practical Guide to Assessing English Language Learners provides helpful insights into the practice and terminology of assessment. The text focuses on providing the Reviews: 4.
brings the assessment of grammar into sync with current thinking and practice in applied linguistics and language pedagogy.
The author of this book, Jim Purpura, has extensive experience not only in teaching and assessing grammar, but in training language teachers in grammar and assessment.
In this book, he presents a new theoretical. To help you develop your language skills and prepare for your exam, we have some free resources to help you practise your English. We also have lots of information for parents to help support your child learning English. Read's book is both timely and well-designed, and provides an accessible framework for discussion of aspects of vocabulary knowledge.'ELT Journal 'John Read's Assessing Vocabulary provides a detailed analysis of the issues related to measuring vocabulary knowledge and presents a framework that both teachers and researchers may use to analyze.
Assessing English Language Learners by Lorraine Valdez-Pierce offers advice on testing students just learning the English language from the teacher's perspective. The author stresses the appropriate use of large-scale standardized testing, but focuses on classroom assessment techniques for use with English language learners (ELL) as well/5(3).
Wondering what you can learn from other parents? We are making it easy for you. Read on to find out the best selling Singapore assessment books on OpenSchoolbag. Just what are the other parents buying? 1. English 2. Mathematics 3. Science 4. Chinese 5. Tamil. More books can be found on OpenSchoolbag.
Carefully. Assess definition is - to determine the rate or amount of (something, such as a tax, charge, or fine). How to use assess in a sentence.
Synonym Discussion of assess. This practical resource book will familiarize teachers, staff developers, and administrators with the latest thinking on alternatives to traditional assessment. It will prepare them to implement authentic assessment in the ESL/bilingual classroom and to incorporate it into instructional planning/5(12).
Program description. This minute webcast is a thorough introduction to assessment for teachers of English language learners. Dr. Lorraine Valdez Pierce will discuss performance-based standardized assessments; assessment as a tool for informing instruction; use of assessment to reinforce reading comprehension; and student self-assessment and self-monitoring.Learn English with our free online listening, grammar, vocabulary and reading activities.
Practise your English and get ready for your Cambridge English exam.Quick Checks for Assessing Leveled Book Comprehension. Comprehension Quizzes are a fast, easy way to assess how well students comprehend their reading and are great resources for text-dependent questions.
Multiple-choice questions encompass a range of cognitive rigor and depth of knowledge. | <urn:uuid:640f6843-200c-4093-b959-6db8b5d43c07> | {
"dump": "CC-MAIN-2022-33",
"url": "https://birydycolome.inspirationdayevents.com/assessing-english-book-6352el.php",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00230.warc.gz",
"language": "en",
"language_score": 0.9102690815925598,
"token_count": 2090,
"score": 3.265625,
"int_score": 3
} |
Kul Gautam is Chair of the Council of the World Day of Prayer and Action for Children (DPAC). He was formerly Deputy Executive Director of UNICEF. In Coimbatore with DPAC’s council to participate in Shanti Ashram’s Interfaith Round Table 2013, Kul shares his thoughts on the role religion can play to help children.
What is the World Day of Prayer and Action for Children and why does it work for children by networking with worldwide religious organisations in particular? what inspired its model?
November 20 is Universal Children’s Day and everybody celebrates it differently — doctors through health initiatives and teachers through schools — but many religious organisations wanted to know how exactly they could contribute. Since all religions prioritise prayer, we suggested that they spend the day in prayer for children and take those prayers forward through action. That’s how, in 2009, after a meeting in Hiroshima, November 20 also became the DPAC.
Religious organisations can initiate much change because they influence society’s behaviour.
For instance, in the 90s, Latin America, despite being a middle-income region, had lower child immunisation rates than many poor nations. While the Ministries of Health acknowledged the problem, they said they didn’t have the medical personnel to cover every place. That’s when UNICEF suggested immunising children through churches, because Catholicism was powerful in Latin America. Every single village, however far-flung, had a church whose pastor the village respected. Immunisation could be done by them with just basic training. We soon saw rates rise very fast. So partnering with religious organisations does work.
In India, where conflict between religious communities has often been an undeniable part of our history, how do you see this approach panning out?
This model is especially appropriate for multi-religious, multi-cultural societies like India because it encourages interfaith cooperation to overcome misunderstanding and unjustified hatred.
While religious communities may argue on issues of politics and theology, they can come together for the cause of children, because at its core every religion wants the best for its children. There are superficial and misinterpreted teachings from religious texts which are used to exploit children by keeping them from schools, marrying them young, etc. But it takes a diamond to cut another diamond. So for every one of these misinterpretations, progressive religious leaders can show the positive, enlightened path that highlights the well-being of children.
What are DPAC’s key focus areas in India and how does partnering work on the ground?
In India, we realised that while education and health for children were being addressed, movements against violence towards children needed working on. Reports of girls being sexually molested and exploited, child abuse at home and in schools, child marriages and child labour were common. On the ground, DPAC has a triumvirate partnership between religious organisations, secular bodies such as UNICEF and the Save the Children Fund, and sometimes local governments too, because while they pass good laws, implementation can be helped by others. In India, we’ve partnered with the Ramakrishna Mission Vivekananda University in West Bengal and Shanti Ashram in Tamil Nadu to work for an emphasis on positive parenting. Discipline can be implemented without violence, and through love. We also focus on making children aware that India is a signatory to the UN Convention on the Rights of the Child, and that therefore they can demand their rights. But those come with responsibilities which they must fulfil too.
Nepal is your country of birth and upbringing, and you’ve spearheaded the Rollback Violence Campaign (RVC) there. What are the similarities you find between India and Nepal in the challenges that face children?
Nepal and India have similar traditions, history, culture and religion. Even in politics, you have had a Maoist/Naxalite Movement, as have we. And while its goals were to achieve justice for people, violence was an accepted means. That’s where the RVC stepped in and upheld Gandhi’s principle of non-violent means towards justice. Just as in India, DPAC in Nepal works against child marriage by partnering with religious organisations. Traditions such as these have been ingrained for centuries and justified by religion. Priests conduct these marriages! So we need to work against it from within the religious framework.
How have your years with the UN influenced the vision that DPAC has?
Parallel to the 2002 UN General Assembly Special Session for Children, there was a meeting of the world’s top religious leaders who pledged to support the summit’s commitment to ‘A World Fit for Children’. So DPAC’s vision was sown then. I was also instrumental in drafting many of the summit’s goals towards child survival which we continue to strive for today. Having worked with UNICEF for 35 years, I’m a child of the UN and my philosophy of life is influenced by the UN, so I know its many positives. But as an insider, I also know its shortcomings which we now try to overcome.
Published in The Hindu 6-May-2013 | <urn:uuid:09c8a11f-bdd2-4abf-9c32-1eeeccb8b432> | {
"dump": "CC-MAIN-2021-04",
"url": "http://kulgautam.org/the-religion-of-children/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00247.warc.gz",
"language": "en",
"language_score": 0.9684589505195618,
"token_count": 1060,
"score": 2.90625,
"int_score": 3
} |
Family systems theory essays
Bowen family systems theory is a theory of human behavior that views the family as an emotional unit and uses systems thinking to describe the complex interactions in. Reflective essay -” the bowen’s family theory” paper details: according to family systems theory, the following eight forces shapes family functioning explain. Read this essay on family system theory come browse our large digital warehouse of free sample essays get the knowledge you need. Free essay: the second family consists of a mother-son system the mother is a wealthy, prominent judge in new york city and her son is the cities most. Family systems theory essays - work with our writers to get the quality report following the requirements benefit from our inexpensive custom term paper writing.
Free essay: these changes are any events or occurrences within the family that cause imbalances in the dynamics and patterns of behavior which cause conflict. Attempting to understand family life can be done through many different perspectives the most central theory in the study of family sciences is the family systems. This adaptation of systems theory was coined by dr murray bowen and is referred to as bowen’s family systems theory according to murdock (2013), this par. Family system theory essays: over 180,000 family system theory essays, family system theory term papers, family system theory. Coming to grips with family systems theory in a bowen family systems theory and practice: illustration in positing the 'nuclear family emotional system'.
Family systems theory essays
The family systems theory suggests that individuals cannot be understood in isolation from one another, but rather as a part of their family. View family systems theory research papers on academiaedu for free. An essay or paper on family systems theory family systems theory was introduced by dr murray bowen in 1952 (bryannan, 1999) it was believed at the time to be a. Why study families •traditional psychology – problem an individual one – externalise distress – act out – internalise distress – withdraw. Get access to family systems theory essays only from anti essays listed results 1 - 30 get studying today and get the grades you want only at.
Family systems theory: family cohesion when growing up families are and have been considered systems because they are made up of interrelated elements or objectives. · guidelines: students will complete an 8page scholarly paper relevant family systems theory and individual self students are encouraged to include a. According to richard charles (2001) “the effectiveness of family systems theory rests not much on empirical research but on clinical reports of positive treatment. Family systems theory 10 pages 2458 words relationships evolve and are continuously changing very much like our climate the earth changes with time and so. Be quickly traced and the bowen family systems theory will be described this essay concentrates family systems & murray bowen theory page 4 of 10 bowen theory.
Family systems theory contains six areas of focus which two of these areas will, on average, most directly involve how a child will behave on a daily basis in the. Read this essay on bowen family systems theory come browse our large digital warehouse of free sample essays get the knowledge you need in order to pass your. Check out our top free essays on family system theory to help you write your own essay. Essays - largest database of quality sample essays and research papers on family systems theory. The quality of my essay was worth the money i had paid i got a 2:1 grade oscar j london, uk essay writing australia 50 2015-11-24t02:44:47+00:00 oscar j.
- This essay is intended only to bring to light a few to family system theory the goal derived from the family systems theory is to gain differentiation or.
- View this essay on when to use family systems theory the purpose of family systems theory is to utilize the family unit along with systems approach to explain.
- Family system theory after reviewing each theory the best theory for my personal model of helping is family system theory i like how the family system.
- Abstract – breadth this specific dissertation will examine the breadth of family systems theory from a variety of critical perspectives the specific family sys.
Name instructor task date family systems theory in radio introduction radio, directed by michael tollin, analyzes the life of a young black man who, despite bei. Family systems theory as literary analysis: the case of philip this essay illustrates how family systems theory can function as a critical framework. | <urn:uuid:51a59599-b41a-49fa-b2da-afad2d134aad> | {
"dump": "CC-MAIN-2018-05",
"url": "http://xcassignmentqujy.bodyalchemy.info/family-systems-theory-essays.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889325.32/warc/CC-MAIN-20180120043530-20180120063530-00525.warc.gz",
"language": "en",
"language_score": 0.9444941878318787,
"token_count": 908,
"score": 2.890625,
"int_score": 3
} |
A University of Ottawa researcher has shown that the socio-economic effects of the malaria epidemic on African populations have been largely underestimated and that new research is needed to quickly gain a more accurate assessment of the situation.
The results of a study conducted by Sanni Yaya, a professor at the Interdisciplinary School of Health Sciences at the University of Ottawa's Faculty of Health Sciences, suggest that current research has neglected some of the macroeconomic effects of the malaria epidemic and its long-term impact on family planning.
Up until now, economists who have studied this problem have tried to assess the economic impact of the malaria epidemic by analysing the rate of GDP growth per inhabitant, explains Professor Yaya. According to these analyses, the malaria epidemic dampens the economic growth of affected countries by 1.5 to 2%. However, the situation is much more catastrophic when the cost of malaria to the individual and national prosperity of African communities is taken into account.
For example, Professor Yaya lists the significant reduction in family revenue that is directly related to lost productivity, and the negative effects of malaria on family planning.
Childhood education and family structure are also affected in that the high infant and childhood mortality rates caused by malaria encourage couples to have more children to counter this risk. Larger family sizes lead parents to work longer hours, which in turn prevents them from properly raising their children, states Professor Yaya in a book that has just been published by the Presses de l'Université Laval (in French).
His study shows that there is a strong correlation between the number of malaria cases and per capita health expenditures. Generally, the income levels of African inhabitants do not allow such populations to pay for all the costs associated with treating this disease, which leads many to abandon the health care system in favour of traditional healers. Moreover, the low level of health care spending per inhabitant is a significant determinant in the high prevalence of malaria on this continent.
Internationally, nearly 88% of all deaths of children under age five can be attributed to malaria. Despite efforts to reduce the incidence of this disease (through insecticide-coated insect netting, increased access to malaria medication, etc.) malaria remains Africa's most serious public health problem.
"dump": "CC-MAIN-2020-10",
"url": "https://media.uottawa.ca/news/4799",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145859.65/warc/CC-MAIN-20200223215635-20200224005635-00149.warc.gz",
"language": "en",
"language_score": 0.946905791759491,
"token_count": 460,
"score": 3.0625,
"int_score": 3
} |
Advertisements: positive effects of electronic media on society and culture the media like television, radio and the internet increase an overall awareness of the masses essay article shared by: advertisements. List of electronic arts games this is a list of video games published or developed by electronic arts name a sports game of basketball featuring the two eponymous teams: bundesliga 99: 1998: windows: burnout 3: takedown: 2004: playstation 2. How to write a short story analysis paper 6 evaluate the material you have developed do you have enough for a three-page paper if yes, determine the working thesis of your essay and move on to step 7. A new york times magazine essay contest involving college students responding to a question posed by rick perstein on college education. Sample essay (800 words) for the assignment question and analysis, see computers, the internet, and advanced electronic devices are becoming essential in everyday life and have changed the way information is gathered the essay begins with a general lead into the broad. View this essay on electronic health records the medical community has the medical community has begun using electronic health records ehr as an alternative.
Children playing in competitive sports essay electronic sports essay what is esports electronic sports are the competitive side of video gaming people use video games to compete against other players around the world. Writing 122- cora agatucci english composition [argumentation & critical reading-response] example analysis-evaluation essays #1 webpublished with student permission. But i think as generation is passing by the importance of sports and games is diminishingtodays youth is more interested in virtual games paragraph on importance of games and sports posted in essays electronic sports, esports for short, is the act of playing video games. Electronic arts strategy essays: over 180,000 electronic arts strategy essays, electronic arts strategy term papers, electronic arts strategy research paper, book reports 184 990 essays, term and research papers available for unlimited access. What is the importance of sports in our life and how sports benefits our society a short essay and speech on the importance of sports for kids and adults children in the modern world lead a sedentary lifestyle because of the invention of different electronic gadgets.
Today's society is faced with the continually growing problem of electronics and social media wha. Essay on dalits youth sport essay henry doorly zoo internship application essay persuasive essay on why school should start later bariatric surgery research paper help research paper night vision technology tasp application essays related post of electronic print media essay about radio. The importance of sports and games is being increasingly recognised in india essay on the importance of games and sports in our life which matches the usa in industrial, especially electronic ad vancement, does well in sports despite its small size. Free essay: sites like gamebattles which is the largest online destination for competitive console and pc gaming (gamebattles/mlg, 2011) there are many. In a study published in the june issue of the journal obesity research, researchers from the children's hospital of philadelphia and the university hospital zurich present a strong association between playing electronic video games and childhood obesity in school-aged swiss children a new study. Should dangerous sports be banned yes millions of people play sport every day, and, inevitably, some suffer injury or pain most players and spectators accept this risk however, some people would like to see dangerous sports such as boxing banned this essay will examine some of the reasons for.
The best examples of memoirs and personal essay writing from around the net short memoirs by famous essay writers. Chapter three types of assessment interest in alternative types of assessment has grown rapidly during the 1990s, both as a response to dissatisfaction with multiple-choice essays are familiar to most educators they are lengthy written re.
Markedbyteacherscom coursework, essay & homework assistance including assignments fully marked by teachers and peers get the best results here. Unlike most editing & proofreading services, we edit for everything: grammar, spelling, punctuation, idea flow, sentence structure, & more get started now. Free analysis essay example on computer games: electronic arts. A report on technological development sport essay print reference this apa mla mla-7 the use of film, cine and video and many other electronic analyzing devices provide the chance to analyze the movements of athletes in a sports essay writing service essays more sports essays sports.
We the keen essays staff, offer quality assistance to students by providing high quality term papers, essays, dissertations, research writing and thesis. Get your cheap electronic sports essays just in two clicks best free samples will be in your hands with topics what you need. Home opinions sports should video gaming be considered a sport add a new topic should video gaming be considered a sport add a that is of course in e-sporting (electronic sporting) which is where a bunch of people gather up and play against each other they contain the following. | <urn:uuid:53f229c7-e141-473b-8b71-8a04e376c34f> | {
"dump": "CC-MAIN-2018-34",
"url": "http://mphomeworkadrv.ameriquote.us/electronic-sports-essay.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00699.warc.gz",
"language": "en",
"language_score": 0.9354128241539001,
"token_count": 995,
"score": 2.75,
"int_score": 3
} |
There’s a lot of confusion around the symptoms and effects of dementia. Now, neuroscientists are partnering with playwrights to give a voice to the research.
In labs and clinics across New Zealand, researchers are working towards an ambitious goal: to understand the biological mechanisms behind Alzheimer’s, Huntington’s and Parkinson’s diseases, as well as stroke and sensory loss. What links these conditions is age. Neurodegenerative disorders like these tend to be much more common in people aged 65+ than in any other age group. And the size of that older population is ever-increasing. In 1981, those aged 65+ represented less than 10% of the New Zealand population. Today it’s 15%, and the latest statistics suggest that by 2038, one in four New Zealanders will be aged over 65.
“Living longer is one thing,” says Otago neuroscientist Professor Cliff Abraham, “[but] we want ageing to be a positive experience, and through our research, to improve the quality of life for older people.” Abraham is the co-director of Brain Research New Zealand (BRNZ), a partnership between the University of Auckland and the University of Otago. BRNZ’s research is interdisciplinary, with scientists working on everything from disease biomarkers to population health. Abraham explains that, as a result, they and their collaborators at the University of Canterbury and Auckland University of Technology use a variety of tools.
“In some areas, it might be cultured human brain cells or different animal models. We have developed new technologies and trialed therapies. We also work on large-scale longitudinal studies, which are helping us to identify the risk factors, lifestyle factors and social factors that contribute to brain decline.”
One of BRNZ’s focus areas is dementia, of which Alzheimer’s disease is the most common form. Different from ‘normal’ age-related memory loss, dementia is a progressive condition that alters the structure and function of a person’s brain. Its symptoms vary between individuals, but even in its early stages, dementia can significantly limit a person’s ability to perform everyday tasks. And the impact of the disease is felt not only by those living with it but also their whānau.
It’s this wider impact of poor brain health that, Abraham says, motivates him to look beyond the lab. “As scientists, we naturally turn to peer-reviewed articles and public lectures to share what we do, but it’s important to find other ways to connect with our community.” Through the national network of Dementia Prevention Research Clinics, Abraham and his team work directly with people experiencing memory problems, as well as collaborating closely with charity groups and schools. This year, BRNZ are trying something a little different – they’re sponsoring a national tour of the ground-breaking play The Keys Are in The Margarine.
Created by a Dunedin-based theatre company, The Keys uses a performance technique known as verbatim, or word-for-word, to showcase the realities of living with dementia. Co-creator Cindy Diver says that this approach is a way to “tell truthful stories in a truthful manner.” She first started exploring verbatim theatre 11 years ago as part of a research group at the University of Otago. Initially, Diver says, they explored the “quirkiness of humans, and their fears and comforts”, but she quickly realised how valuable this form of theatre could be in vulnerable settings where there is “a stigma around a topic, which means that it isn’t necessarily safe for a person to share that story.”
The first topic that Diver tackled was a difficult one: family violence. Working with Professor Stuart Young and Associate Professor Hilary Halba from Otago’s School of Performing Arts, Diver met with survivors and perpetrators of family violence, as well as medical professionals and members of the police force. Their research bettered their understanding of a crucial, complex topic, and was central to creating the play.
In verbatim theatre, productions aren’t strictly ‘written’. Rather, each play is assembled from real-life conversations. The first step is to record interviews with a wide variety of people, known as collaborators. These interviews are edited into a documentary that is shared only with the actors. Then, as Diver explains, each actor uses the film to learn their parts by “studying the collaborator’s eye movements, hesitations, hand gestures, vocal intonations, and every word of the edited interview.” While they perform on stage, each actor also listens live to the audio of their collaborator, via an earpiece. “The actor speaks in time with that person, which means they stay 100% honest,” says Diver. “This process allows us to bring the audience into the collaborator’s room, while keeping that collaborator safe and completely anonymous.”
The result of the Otago research project was the team’s first verbatim play, Hush, which debuted to wide acclaim in 2010. The idea for creating a similar play on dementia came later, through a connection with Dunedin GP Dr Susie Lawless. Among her patients, Lawless had noticed that Alzheimer’s and other dementias were shrouded in stigma, and that people were confused about their symptoms and the disease’s progression. Having been a collaborator on Hush, Lawless felt that a verbatim play might help untangle some of that complexity. Together, Lawless and Diver collaborated again with Young and began developing The Keys Are in The Margarine.
The play finally hit the national stage in 2015 and immediately caught the attention of former BRNZ director, Professor Richard Faull. Diver knew that Faull and the wider BRNZ team would be fantastic collaborators. “They really seemed to appreciate that research could be made more palatable through art.” Shortly after that first tour, Diver was invited to speak to BRNZ’s early career researchers, and, alongside Lawless, later presented at an International Alzheimer’s Conference in Australia. “The doctors and researchers understood that what we were doing didn’t fit on a clinical diagram,” she says, “but it had immense value at a personal level.”
Abraham agrees, saying that although dementia is a difficult thing to discuss, “we shouldn’t shy away from its emotional impact. The Keys is a fantastic, entertaining piece of theatre, but it also makes people think – about what it means to be diagnosed with the disease, or to live with and care for someone who has it. The verbatim format gives it nowhere to hide – the physical, emotional and social costs of Alzheimer’s are there for all to see.”
This is not the only time neuroscience and the arts have overlapped. Long before biomedical imaging technologies, physician Ramón y Cajal’s remarkable drawings helped him to prove that neurons are individual cells. It was a discovery that saw him share the 1906 Nobel Prize in Physiology or Medicine. The two fields have previously collided on stage, too. Canadian rapper Baba Brinkman and his wife, neuroscientist Dr Heather Berlin, regularly combine their talents to create brain-related shows, and US playwright Edward Einhorn has been working in what he’s dubbed ‘neuro-theatre’ for more than a decade. In the UK, Professor Sarah-Jayne Blakemore recently worked alongside teenage performers to create Brainstorm, a play that explores adolescent brain development. And earlier this year, neuroscientists tried to understand the neural basis of dramatic acting by placing actors into an MRI machine while they adopted characters from Shakespearean plays.
From the actor’s point of view, collaborating on such an emotional topic is a unique experience. “Alzheimer’s is a disease that is full of uncertainty, and we’re understanding more about it all the time,” says Diver. “But the emotions of coping with it – as a patient or for the people around them – they all come down to what it means to be human. Combining art and science to unpick all of that is very, very powerful.”
For the full list of dates, venues and details of where to buy tickets to The Keys Are in The Margarine, click here.
A panel on the science of dementia follows the Wellington performance this Saturday, with Cindy Diver, Prof Cliff Abraham, Dr Phil Wood (Ministry of Health chief advisor on healthy ageing) and Prof Lynette Tippett (Director of NZ’s Dementia Prevention Research Clinics).
Laurie Winkless contributes to the BRNZ website
Math, along with science, is considered one of the most difficult subjects in school. If you ask a number of students, very few would say that math is their favorite, because a huge number of students actually hate it. The complicated, sometimes long, formulas and solutions make it difficult for students to understand the lessons. However, sometimes all it takes is a proper method and a good teacher for students to comprehend and keep up with the lesson. Enrichment classes for kids can also help students who are facing difficulties to clear up the doubts and questions that they dare not voice in the classroom.
Estyn will tell us about a science, technology, engineering and mathematics enrichment program that drives curriculum development.
A science, technology, engineering and mathematics enrichment program that drives curriculum development
Information about the school
Glan-y-Môr is an 11-16 community focused school in Burry Port with 480 pupils on roll, of whom approximately 30% are eligible for free school meals. The school was formally federated with Ysgol Bryngwyn School in 2014, becoming the pioneer pilot for secondary federations in Wales. Glan-y-Môr works in strong partnership both with its primary feeders and with the local FE provider through 14-19 initiatives. The school is currently a pioneer school for professional learning.
Context and background to sector-leading practice
Glan-y-Môr introduced a STEM (science, technology, engineering and maths) enrichment programme in September 2014, initially as an extra-curricular activity to promote transition. Since then it has rapidly evolved as a key strategy in driving curriculum development to meet the requirements of the new curriculum for Wales. The aim of the programme was to engage and excite pupils about the career paths offered in the fields of science, technology, engineering and maths. The programme is based on the principles of active learning, raising aspirations and increasing opportunities. The school is confident that this STEM enrichment programme is already helping its pupils to develop in line with the recommendations of the Donaldson report and its 4 key principles. The strategies and lessons learned from implementing this successful STEM programme are now being applied to the main curriculum. Read more here.
Glan-y-Môr is just one of the schools that offer enrichment programs for students in different subjects. One of the goals of their program is to encourage students to take up a career in the fields of science, technology, engineering and maths.
On a different topic, Esther Duflo will tell us about the new findings on children’s math learning in India, which demonstrate the importance of field research for cognitive science. Let us read below.
New findings on children’s math learning in India demonstrate the importance of field research for cognitive science
Cognitive Science is a relatively new field that has made dramatic advances over the last decades: advances that shed light on our conscious and unconscious minds, bring insights into fields from neuroscience to economics, and now play no small role in the development of machines that are smart enough to take over tasks that until now, only humans could perform.
But cognitive science has underperformed conspicuously in one domain: Its development has brought no clear breakthroughs in an area where it seemed likely to be most useful: human development and education.
A new report in Science suggests why: Just as clinical trials are critical to enhancing human health and medicine, field experiments are critical to understanding human learning.
New findings from a team of economists and psychologists at MIT and Harvard, working together with Pratham, a non-governmental organization that seeks to make sure that every child in India is in school and learning well, demonstrate both the feasibility and the necessity of such experiments. Read more here.
The research done has given them methods for assessing children’s knowledge. They said that the research doesn’t tell us how to create better schools, but it gives us the tools to do field experiments that can.
Moving forward, some parents send their children to enrichment classes every week and people are wondering if it is necessary. The aim of education should not just be accumulating knowledge; it should be equipping the next generation with necessary skills. So Nirmala Karuppiah, through her article below, will tell us if children really need enrichment classes.
Commentary: Do our young really need expensive enrichment classes?
SINGAPORE: Many parents in Singapore spend weekends sending their young children to enrichment classes.
These classes can take many forms, from early preparatory classes that introduce concepts children will eventually learn in primary school, to broad-based programmes that seek to help young children develop basic motor and social skills, to creative arts classes that encourage individual expression.
However, are expensive, structured enrichment classes really necessary for children in their early years? Would children be left behind, if they do not attend such enrichment classes?
Children are naturally creative and curious. What is more important is for parents to provide them with lots of opportunities to support both their creativity and curiosity in their early years – and this may not involve expensive, structured enrichment classes. Read more here.
Actually, the decision is up to the parents. But we can always save money without taking away our children’s chance to learn, because we can do a lot of it on our own. We can go to the park with our children and let them explore, and we can teach them during simple activities inside and outside the home. Remember that learning doesn’t only take place in school; it can happen anywhere. Math, just like any other subject, can be learned and enriched anywhere and anytime.
"dump": "CC-MAIN-2019-30",
"url": "http://www.ramblingsofacrazymomma.com/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527204.71/warc/CC-MAIN-20190721205413-20190721231413-00405.warc.gz",
"language": "en",
"language_score": 0.9669063091278076,
"token_count": 1131,
"score": 3.53125,
"int_score": 4
} |
Randy Pausch dies of pancreatic cancer
Randy Pausch, the noted Carnegie Mellon computer science professor, has died at the age of 47 from pancreatic cancer. Pausch had become internationally known for his now famous Last Lecture, which was viewed by millions (you can watch it below) and was the subject of his #1 bestseller of the same name, which has already been translated into at least 30 languages. Since his diagnosis, Mr. Pausch had been treated with surgery, chemotherapy, radiation and an experimental cancer vaccine. So what is it about pancreatic cancer that makes it so deadly?
Let’s start with some basic information. Pancreatic cancer will be diagnosed in about 37,680 Americans in 2008 and will take the lives of about 34,290, making it the fourth most common cause of cancer death after lung, colon, prostate and breast cancers. Because of its hidden location, lack of symptoms and absence of screening tests, pancreatic cancer is typically diagnosed at an already advanced stage, with only 7 percent of cases caught early. Even then, the 5-year survival is only 20 percent; for all stages combined the current figures are 24 percent and 5 percent for 1- and 5-year survival respectively.
The pancreas is located deep in the center of the abdomen, just above the spine. The so-called head of the pancreas (one end of the oblong organ) sits cupped in a C-shaped loop of intestine called the duodenum, which connects the stomach to the small intestine. The pancreatic enzymes are deposited into the duodenum where they help to digest fats. The pancreas is also the site of insulin production. One can live without a pancreas, after surgical removal for example, by taking externally supplied insulin and eating a low-fat diet.
Cancer of the pancreas usually develops without symptoms until it reaches an advanced stage. Symptoms can include vague upper abdominal pain, potentially leading to more persistent back pain, weight loss and, depending on the tumor’s location, jaundice from obstruction of the bile duct. Potentially curative surgery is only feasible in fewer than 20 percent of cases at the time of diagnosis.
Pancreatic cancer is rare before the age of 45. About 5 to 10 percent of patients have a first-degree relative with the disease. Cigarette smoking clearly increases the risk of pancreatic cancer, while obesity, chronic pancreatitis, diabetes, cirrhosis and using smokeless tobacco may increase its risk. The Western high-fat, high-meat diet has also been suggested as increasing its risk.
So why is pancreatic cancer so deadly? First, as noted above, the organ is hidden and not amenable to screening techniques in the way that the colon, prostate and breast are. In addition, it spreads relatively quickly to regional lymph nodes and adjacent organs, so that fewer than 20 percent of cases are diagnosed early enough to warrant potentially curative surgery (pancreatic surgery is notoriously difficult due to the organ’s location and intimate connection with surrounding structures and blood vessels). Compounding these problems is that pancreatic cancer is, for unknown reasons, inherently resistant (or poorly responsive) to even the most modern chemotherapy and radiation treatments. An experimental cancer vaccine has shown some promise in extending survival in some patients who have undergone surgery, although even this is, on average, only in the range of six to seven months of added survival.
In spite of his diagnosis and its bleak prognosis, Randy Pausch delivered his inspirational lecture on September 18, 2007. It was titled “Really Achieving Your Childhood Dreams.” Because the lecture was written in less than one week and was time-limited, he felt the need to expand upon its themes in a book entitled “The Last Lecture,” which was published on April 8, 2008 and which has since become an international best-seller. One of Mr. Pausch’s widely quoted comments is “We cannot change the cards we are dealt, just how we play the hand.” You can watch his now famous lecture here:
http://www.youtube.com/v/ji5_MqicxSo
"dump": "CC-MAIN-2017-09",
"url": "http://www.everydayhealth.com/columns/zimney-health-and-medical-news-you-can-use/randy-pausch-dies-of-pancreatic-cancer/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00198-ip-10-171-10-108.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9672461748123169,
"token_count": 902,
"score": 2.703125,
"int_score": 3
} |
SACRAMENTO, CA - The world is getting hotter, drier and at greater risk of catastrophic fires, according to a new climate change report put out by two state agencies.
The Natural Resources Agency and the California Energy Commission claim the data in their reports warns that California can expect more scorching heat waves, severe and damaging wildfires, and strain on the electric grid in the coming years.
They claim the statewide average temperature has increased by about 1.7 degrees Fahrenheit from 1895 to 2011. By 2050, the report claims, temperatures will rise another 2.7 degrees. That extra heat will cause water levels to rise and cause the state's snowpacks, which are vital to the state's water supply, to start to melt.
The higher temperatures also could mean greater energy and electricity needs, as more Californians reach for their air conditioners in the summer, spring and fall seasons. These are all factors the California Energy Commission and the Natural Resources Agency said the state needs to take into consideration as it tackles budget priorities, aging infrastructure and growing populations.
"The challenges are enormous, but certainly this state has the capability to rise to these challenges and with these types of studies we are going to be prepared," California Energy Commission Chair Bob Weisenmiller said.
Yet, there are scientists who dispute the CEC's and the Natural Resources Agency's claims. Their research suggests the changing temperatures are part of the earth's natural cycle and have little to nothing to do with human activity.
"dump": "CC-MAIN-2014-23",
"url": "http://archive.news10.net/news/local/article/203256/2/Climate-change-study-Calif-will-have-more-wildfires-heat-waves",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273874.36/warc/CC-MAIN-20140728011753-00427-ip-10-146-231-18.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9430090188980103,
"token_count": 301,
"score": 3.015625,
"int_score": 3
} |
Guiding Patients With Stroke Back to the Best Quality of Life Possible: Physical, Occupational and Speech Therapy can maximize recovery
By Jessica Dellaghelfa, DPT and Natacha Dockery-Livak, M.A.,M.S., CCC-SLP
Stroke can be a very complicated diagnosis. Some patients walk out of the hospital a few days after suffering a stroke without any obvious signs of disability. Others may be hospitalized for weeks or more and leave with very visible signs of a serious blow to their brain. But in every case, people who have had a stroke are never quite the same.
This doesn't mean that they can't get back to their lives, their families and friends. A stroke occurs when a blood vessel is blocked or ruptured, depriving a part of the brain of the blood and oxygen it needs and causing brain cells to die. Strokes come in all varieties, from minor to very severe. In almost every case, stroke patients need a period of rehabilitation to help them relearn skills that were lost when stroke affected part of their brain.
Their journey will begin as soon as they are able to participate in rehabilitation while still hospitalized in an acute care hospital bed. From there, they may be transferred to the rehabilitation unit of the hospital, where therapy will increase to several hours a day. Patients can next be transferred to a skilled nursing facility for more therapy, or to home – often with therapists visiting them daily. And many continue with therapy on an out-patient basis until they have regained as much function as possible.
In fact, therapy will become an integral part of the lives of stroke patients and their loved ones for weeks, months, and maybe even years. People can and do continue recovering for a long time. In the best-case scenario, patients should be under the care of a multidisciplinary team that includes their primary care physician, a physician who specializes in rehabilitation, plus physical, occupational and speech therapists who each bring their own area of expertise to the patient, and who work creatively together so that the individual needs of each patient are met.
Rehabilitation that begins as soon as possible can make a world of difference in the outcome. Therapists meet each patient where they are in terms of their ability and create a rehab plan that will gradually bring the stroke survivor to the next step. Once there, they work toward new goals. The focus is on maximizing essential functions that will improve their quality of life and help patients be as independent as possible.
Physical therapists focus on improving movement and balance with exercises to strengthen muscles needed for standing, walking and other activities. Occupational therapists help stroke survivors better manage daily activities such as dressing, preparing meals and living safely at home. Speech and language pathologists work with stroke patients to re-learn language skills like talking, reading and writing. They also help with swallowing problems that can affect some stroke patients.
In the process, the rehab team works together to enhance the progress of each patient, creating unique strategies that can help them compensate for lost function, adapt to a new way of life, and maximize every possibility for improvement. Their collaboration means that they look at each patient from three potential angles, often embedding a physical, occupational and speech-related component into many of the patient’s therapy sessions.
Stroke rehab therapists are highly-skilled caregivers who have deep understanding of the complex function of the brain, and how to guide patients back to the best life possible after stroke. They also understand that this disease can impact an entire family. They help their patients’ loved ones maintain realistic expectations and suggest ways to continue useful practices at home. And they encourage participation in stroke support groups where patients and families can voice their hopes and fears, while also finding strength and even great ideas from each other.
Jessica Dellaghelfa, DPT, is a Physical Therapist and Natacha Dockery-Livak, M.A., M.S., CCC-SLP, is a Speech and Language Pathologist at the BMC Center for Rehabilitation.
"dump": "CC-MAIN-2019-09",
"url": "https://www.berkshirehealthsystems.org/rehabilitation-from-stroke-onset",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489933.47/warc/CC-MAIN-20190219101953-20190219123953-00170.warc.gz",
"language": "en",
"language_score": 0.9633117318153381,
"token_count": 831,
"score": 2.515625,
"int_score": 3
} |
Climate and topography in West Greenland along a vast west-to-east transect from the ocean and fjords to the ice sheet contain evidence of 4,200 years of human history. Fisher-hunter-gatherer cultures have created an organically evolved and continuing cultural landscape based on hunting of land and sea animals, seasonal migrations and settlement patterns, and a rich and well-preserved material and intangible cultural heritage. Large communal winter houses and evidence of communal hunting of caribou via hides and drive systems are distinctive characteristics, along with archaeological sites from the Saqqaq (2500-700 BC), Dorset (800 BC-1 AD), Thule Inuit (from the 13th century) and colonial periods (from the 18th century). The cultural landscape is presented through the histories and landscapes of seven key localities from Nipisat in the west, to Aasivissuit, near the ice cap, in the east. The attributes of the property include buildings, structures, archaeological sites and artefacts associated with the history of the human occupation of the landscape; the landforms and ecosystems of the ice cap, fjords and lakes; natural resources, such as caribou, and other plant and animal species that support the hunting and fishing cultural practices; and the Inuit intangible cultural heritage and traditional knowledge of the environment, weather, navigation, shelter, foods and medicines.
"dump": "CC-MAIN-2021-49",
"url": "https://www.uni.gl/saqqummersitaq.aspx?subject=Nipisat",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359073.63/warc/CC-MAIN-20211130201935-20211130231935-00203.warc.gz",
"language": "en",
"language_score": 0.929328441619873,
"token_count": 322,
"score": 3.75,
"int_score": 4
} |
The Women’s Suffrage movement diligently worked for almost 80 years to establish the Right to Vote without regard to gender. In my opinion, the resulting Nineteenth Amendment ranks, along with the Bill of Rights, among the most necessary constitutional amendments.
Throughout Colonial America, women had historically possessed the right to vote and it was never an issue until a new Constitution was written by 87 men and ratified by 13 states. The founders just didn’t even mention women; they weren’t really excluded. They were just left out.
The restriction on female voting began with states requiring that only property owners be franchised with the right to vote. And guess what: 18th century women for the most part were not entitled to own property independent of a husband. So in 1789, a compromise was struck. Non-property owners were handed the Right to Vote, but amazingly, this right was not extended to women.
Many prominent women publicly complained about this disenfranchisement, notably Abigail Adams, the Massachusetts wife of the second President of the United States, John Adams. The only state to loudly complain about the absent right of women to vote was the State of Rhode Island.
One of the problems with ratifying the 15th Amendment in 1870, which granted the right to vote to any male without regard to race, was the question of women’s suffrage.
In 1848, the Seneca Falls (NY) Convention initiated a grassroots effort to gain the right for women to vote. It was led for years by Susan B. Anthony. It took more than 70 years, until 1920, for women to earn the right to vote in Federal elections. This followed the right gained for black males under the 15th Amendment in 1870.
The Supreme Court in 1877 upheld prohibition on a black female’s right to vote following the 15th Amendment. In Brazwell vs. U.S., the Court decided that the 1870 15th Amendment’s true language did not specifically enfranchise black women like it had black men. Other Supreme Court decisions throughout the late 1800s consistently upheld the prohibition of the women’s vote. Also, several states specifically voted down women’s suffrage, including New York, which was at that time the most populous state in the Union.
However, a ray of early enlightenment arose in the formation of western territories and states.
Women were allowed to vote in local and state elections as early as 1867 (Wyoming Territory), 1869 (Montana Territory), and 1870 (Utah Territory), and then Idaho. Utah later rescinded women’s suffrage when it obtained statehood. These western states and territories were thinly populated. A woman’s vote became very important and sometimes necessary.
The spur to the successful women’s suffrage movement had begun just prior to World War I, and the movement finally gained widespread support following the war. One thing holding back the Women’s Suffrage campaign was the Christian Temperance Union. A fair number of the Suffragettes also wanted to ban the evils of alcohol. For example, Carrie Nation of Kansas was a prime mover in each organization. Some men could live with women voting, but not without drink!
Also, this was the time of “Jim Crow” laws and symbolism.
The southern delegations of the movement preferred that the Afro-American women not get involved and be silent so as not to raise widespread opposition.
It is often stated the first presidential candidate to successfully and enthusiastically court the female vote was the dashing Warren Harding in 1920.
President Woodrow Wilson had reluctantly supported the amendment beginning in 1916, and from that point forward, for the next 100 years, the female vote has been wildly polled AND eagerly sought after.
I may not see it in the years remaining to me, but I feel confident that the country will relatively soon elect a woman as President of the United States. And why not? Men have had the chance to screw up the country for almost 240 years. Let’s give the real “bosses” and household managers a chance. It could be very refreshing!
Note (June 6):
Summer is here and spring is about over. 90-plus degree temps are forecast for later in the week. Unfortunately, a lot of rain is forecast in the next 7-9 days. This is bad news for farmers attempting to put up their hay. And, it is bad news for people who depend on low-water bridges over swiftly-flowing creeks for access to their property, like me. However it should be about time for the “spigot” to be turned off by Mother Nature.
On a positive note, I am still seeing and hearing a lot of turkeys and the local streams contain many fish. For at least 10 days now, I am seeing lighting bugs at night and listening to bull frogs announce their loud romantic bellows.
Get ready for an Ozark delicacy, frog legs. Frogging season will open at the end of June. Just remember to only take what you need and follow the daily Missouri Conservation limits. Eight daily, I believe. And don’t ever clean out a pond or lake. It may take 10 years or longer for frogs to re-establish themselves on that body of water.
Now, get up and go enjoy our beautiful Ozarks outdoors!
"dump": "CC-MAIN-2021-39",
"url": "https://www.douglascountyherald.com/2020/07/09/the-nineteenth-amendment/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00694.warc.gz",
"language": "en",
"language_score": 0.9744632840156555,
"token_count": 1102,
"score": 4.25,
"int_score": 4
} |
Electrical discharge machining is a method by which we obtain the desired shape through controlled erosion created by electrical discharges.
This process allows us to shape even the hardest materials, including steel, aluminum, and high-temperature alloys. Regardless of how hard the material is, EDM helps us get the job done.
Wire and sinker EDM can be very helpful in certain situations, including:
- Tight tolerance parts, sensor holes or keyways
- Closed-loop manufacturing
- Producing prototypes
- Custom forms and contours
- Removing broken tools
- Cutting complex shapes
- Tapered holes
- Tiny holes
Sinker EDM, also known as Ram EDM, involves two metal parts that are submerged in a dielectric liquid that insulates against the current. As the two parts come close together, an electrical potential builds between them. Sparks then fly over the surface of the metal—one at a time, though at the rate of thousands of electrical discharges per second, heating the metal to its melting point and creating the desired shape.
As its name implies, sinker EDM sinks the shape of the electrode onto the material. The cutting tool never actually touches the material itself. During the process, the dielectric fluid constantly washes away the eroded particles until the shape emerges. This method is commonly used in the mold and die making process.
Wire EDM uses a wire as the electrode. Invented in the 1940s, sinker EDM has been around longer than wire EDM, which was first introduced in 1969. Both methods have their merits and very specific applications that they are best-suited for.
What EDM is best used for
The EDM process can be used on any electrically conductive material and is especially good for materials that are difficult to machine using other methods. We use this method in the making of dies and molds and also for working surfaces that are so thin that no other machining system can achieve the desired result.
State of the art EDM machinery is a must-have for any application with a need for precision accuracy. Working with an experienced shop that is equipped with newer machines can give you access to accuracies in the range of 0.000005 inches – a level of tolerance that is a must for industries such as aerospace, aviation, diagnostic, and medical devices.
However, it does not always give you the best value to use wire EDM or sinker EDM exclusively; sometimes, it might be more practical to have some parts machined in this way and others by more conventional means.
Complete range of machining techniques for optimum results
We offer a complete range of machining services under one roof, giving you access to the best possible options for your machining project. Our team will work closely with you during every phase, considering all of your needs and wants before recommending a plan of action.
Ultimately, we are all focused on one thing: delivering top-notch results for all of your precision parts needs. Working closely with our engineers, we can assure you of the most efficient and economical process from start to finish.
EDM is best suited for hard metals including:
- Hard steel
Our shop has the capability to handle small or large-scale production according to your needs. Whether it’s simple or complex, we bring our trademark enthusiasm and professionalism to the table every time, giving you the benefit of our experience, our passion, and our insight to make your concept shine.
If you have a project that requires precision machining in Houston, Westpoint Tool and Molding should be your first call. We are a small, independent organization that thrives on bringing ideas to life. We truly love what we do – and it shows.
Get started: Give us a call today. We’d love to hear about your project and talk to you about how we can make it real.
"dump": "CC-MAIN-2020-16",
"url": "http://westpointtoolandmolding.com/wire-edm-sinkers/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00416.warc.gz",
"language": "en",
"language_score": 0.9326789975166321,
"token_count": 795,
"score": 3.109375,
"int_score": 3
} |
Tags: A History of the Navy in 100 Objects, USS Kearsarge
The Navy underwent significant change in the years following the War of 1812. We have already discussed some of the significant technological innovations that came about during that period, so today we look at some of the results of those changes. New technologies created some new problems, and we address one of those problems today. Additionally, we think about how the fundamentals of naval warfare were changed by the steamship.
"dump": "CC-MAIN-2014-10",
"url": "http://blog.usni.org/2013/12/13/a-history-of-the-navy-in-100-objects-23-engine-order-bell-and-telegraph-from-uss-kearsarge",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010115284/warc/CC-MAIN-20140305090155-00025-ip-10-183-142-35.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9327203631401062,
"token_count": 193,
"score": 3.125,
"int_score": 3
} |
Digital History enhances history teaching and research through primary sources, an online textbook, extensive reference resources, and interactive materials.
Joseph Brant or Thayendanegea, Mohawk chief, led four of the "Six Nations" against the American rebels. Detail of lithograph by Thomas McKenney (produced between 1836 .
I've been wanting to pull together images of Native Americans from all my old books and magazines for a long time, and I've finally done it.
Revised January 3, 2012 * Denotes Southwest Books of the Year selection. 100 Years of Native American Painting. Exhibition catalog, March 5 - April 16, 1978.
In this taut and generously illustrated overview, Taylor (Buckskin and Buffalo: The Artistry of the Plains Indians) zeroes in on North American Indian arms and armor .
Native American! I talk throughout this web site the importance of remaining open to new information and the importance of getting our history straight.
At the beginning of the fifteenth century, many native peoples populate North America. They speak countless languages and follow diverse patterns that are adapted to .
Dutch Americans, Ecuadoran Americans, Egyptian Americans, English Americans, Eritrean Americans, Estonian Americans, Ethiopian Americans, Filipino Americans, Finnish .
Welcome to Composite Factory, Inc A Native American Manufacturing Company Specializing In Ceramic Composites (CMC), Carbon and Glass Fiber Polymer Composites .
NativeNet is dedicated to sharing Native American and Indigenous Peoples History.
David E. Jones offers the first systematic comparative study of the defensive armor and fortifications of aboriginal Native Americans.
Armor and Weapons of the Spanish Conquistadors: Steel Weapons and Armor Even the Odds in the Conquest. By Christopher Minster, About.com Guide
Information about traditional and contemporary Native American clothing, with links to clothes sold by American Indian artists from various tribes.
Native American Exploration and Conquest by Hernando de Soto for Teens.
Explore HISTORY shows, watch videos and full episodes, play games and access articles on historical topics at History.com.
Kickapoo Native American Tools & Weapons. Throughout history, Eastern Woodland Indians gathered resources located within their environment for fashioning
Do you know who invented the first computer? Steve Jobs, Bill Gates, or earlier geniuses like Alan Turing or Konrad Zuse? The answer is none of them. Have you heard of Charles Babbage? Charles Babbage was a philosopher and mathematician born in 1791. During the first half of the 19th century he attempted to build a machine called the “Difference Engine” or “Differential Engine”. It was a type of mechanical calculator consisting of a large number of mechanical parts, and its job was to perform computations on sets of numbers. So, some people argue that Charles Babbage, and not Alan Turing or Konrad Zuse, is the father of the modern computer.
Due to its extreme cost and some other disputes with locals and politicians of his time, Babbage was not able to build the Difference Engine. However, his idea became the foundation of modern computers, and his Difference Engine project was completed by the Science Museum in London in 1991. It is a fully functional Difference Engine.
The problem with the Difference Engine was that it could only perform calculations and was not able to check the results and perform different operations on a set of numbers as desired. Therefore, with greater plans, Charles Babbage came up with the idea of developing another engine called “the Analytical Engine”. The Analytical Engine was a gigantic machine, the size of a room; it had its own CPU and memory and was programmable through punch cards, but unfortunately, it wasn’t completed. However, later on, his son, Henry Babbage, was able to complete a portion of this enormous machine, and that portion was functional. After 150 years, the Science Museum of London plans to build Charles Babbage’s Analytical Engine. One might ask whether Charles Babbage would have been able to build such a majestic machine back then.
Who invented the first programmable computer?
Konrad Zuse, a German scientist, built the Z1 between 1936 and 1938. It is considered the first programmable computer.
Ideas that lead to the development of modern computers:
Alan Turing, in 1936, proposed the idea of the “Turing Machine”. This led to the foundations of modern-day computers.
Who invented the first Digital Computer?
J. Presper Eckert and John Mauchly invented “the ENIAC” at the University of Pennsylvania, and it was completed in 1946. The ENIAC was a gigantic computer: it weighed almost 50 tons and was accommodated in a building with 18,000 vacuum tubes. So, the ENIAC is considered the world’s first digital computer.
Which is the earliest computer company?
The first official and earliest computer company was established by J. Presper Eckert and John Mauchly and was known as “the Electronic Controls Company”. Later on, it was renamed “Eckert – Mauchly Computer Corporation” (EMCC). The UNIVAC mainframe computers were released by EMCC.
Which one is the first commercial Computer?
The first commercial computer was the Z4, developed by Konrad Zuse in 1942.
Visit again, as this article will be updated regularly with more useful information about early computers.
"dump": "CC-MAIN-2018-05",
"url": "http://www.byte-notes.com/who-invented-first-computer",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00229.warc.gz",
"language": "en",
"language_score": 0.9770374894142151,
"token_count": 689,
"score": 3.578125,
"int_score": 4
} |
A closer inspection reveals that these instruments were made by Andrea Amati (1566), his sons Gerolamo and Antonio (viola, 1615), grandson Niccolò (1658), Giuseppe Guarneri (1689), his son Giuseppe Guarneri del Gesù (1734), and Antonio Stradivari (1715).
The great luthiers of 16th to 18th century Italy -- Andrea Amati, Antonio Stradivari, Giuseppe Guarnieri -- understood subtleties that today's makers have yet to decode.
The Cremona school was founded by Andrea Amati in about 1560; his grandson Nicolò became the family's most eminent craftsman.
The instrument was made by Andrea Amati in 1564 and is a very rare survivor from the earliest period of violin making.
"dump": "CC-MAIN-2021-25",
"url": "https://www.freethesaurus.com/Andrea+Amati",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488540235.72/warc/CC-MAIN-20210623195636-20210623225636-00619.warc.gz",
"language": "en",
"language_score": 0.9530035853385925,
"token_count": 178,
"score": 2.703125,
"int_score": 3
} |
You should be brave enough to clean out the mess buried in your cat’s litter box. According to a new study, a parasite found in the cat’s feces can help humans to become more comfortable with fearful situations. The parasite named Toxoplasma gondii is found in the feces of cats that infects a vast majority of people of more than 2 billion across the globe. The parasite is also believed to uplift the behavior of rodents making them less afraid of the cats. The researchers from the University of Colorado studied the bug that might affect humans making them more comfortable towards risks in business and other aspects of life.
Although the researchers say more advanced research is needed to explain the findings, and that they found correlations rather than causation, the patterns were striking. People at entrepreneurial events and functions who carried the parasite were two times more likely to have started their own businesses as compared to other attendees. College students who were infected with this parasite were 1.4 times more likely to excel in business than their counterparts. Stefanie Johnson, an associate professor and the leader of the study, published the findings in the Proceedings of the Royal Society B on Wednesday.
When the rodents were infected with the parasite, they showed behavioral changes where they were less afraid of cats and more likely to be eaten.
Toxoplasma gondii is the parasite found in cats’ feces that has been linked with increased risks of car accidents, neuroticism, mental illness, suicide, and drug abuse, as stated by the researchers who wrote the study. The parasite is also considered to alter several aspects of brain chemistry and behavior, especially around dopamine, which is a chemical linked to pleasure. Around 60 million people in the US carry this cat parasite, often after making contact with raw meat or cat feces, as stated by the Centers for Disease Control. The infection often goes unnoticed because the human immune system usually suppresses the symptoms.
"dump": "CC-MAIN-2020-05",
"url": "https://tecake.com/news/health/parasite-found-cats-feces-can-help-humans-reduce-fear-32645.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00387.warc.gz",
"language": "en",
"language_score": 0.9781164526939392,
"token_count": 386,
"score": 3.25,
"int_score": 3
} |
UPDATE! Scroll to the bottom to see the #HourOfCode in action. The 5th Grader made Flappy Birds!
What do you do when you want to introduce your child to coding and computer science, but don’t know where to begin? Get them involved in The Hour of Code, of course! From December 8th-14th, more than 10 million students across the world will be spending an hour learning to code. In fact, in 2013 more than 15 million students spent an hour coding…in one week. That makes a tech loving, homeschooling mama’s heart happy.
Homeschool Tech Tuesday: Hour of Code – What Is It?
“The Hour of Code is designed to demystify code and show that computer science is not rocket-science, anybody can learn the basics,” said Hadi Partovi, founder and CEO of Code.org. “In one week last year, 15 million students tried an Hour of Code. Now we’re aiming for 100 million worldwide to prove that the demand for relevant 21st century computer science education crosses all borders and knows no boundaries.”
So, what exactly is Code.org®? It’s a 501c3 public non-profit dedicated to expanding participation in computer science and increasing participation by women and underrepresented students of color. Its vision is that every student in every school should have the opportunity to learn computer programming. After launching in 2013, Code.org organized the Hour of Code campaign — which has introduced millions of students to computer science — partnered with more than 30 public school districts nationwide, and launched Code Studio, an open-source, online learning platform for all ages. Although we aren’t participating as a public school, we are participating as a homeschool.
Homeschool Tech Tuesday: Hour of Code – How You Can Participate
Participating is easy and definitely kid-friendly. In fact, Little Miss is psyched to try her hand at the Frozen inspired coding assignment!
Sign up for free at Code.org and they’ll send you information about getting started. The Code Studio is where your children will do their actual coding lessons and activities. Don’t worry if they (or you!) have never done any coding before; it walks you through everything step-by-step and is designed to be easy to follow so that all children can be successful and learn to code. Yes, even if you’re tech-challenged, you can still participate in The Hour of Code.
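To give a rough sense of the kind of logic a first lesson introduces, here is a tiny, purely illustrative example written in Python. It is not taken from Code.org's curriculum (Code Studio actually uses drag-and-drop blocks rather than typed code), but the ideas are the same: repeat an action with a loop and make a decision with a conditional.

```python
# A hypothetical first exercise (illustrative only, not from Code Studio):
# a loop repeats an action, and a conditional makes a decision.

for star_count in range(1, 6):          # repeat five times
    print("*" * star_count)             # print a growing row of stars

favorite = input("What's your favorite subject? ")
if favorite.lower() == "math":
    print("Great choice, coding is full of math!")
else:
    print("Nice! You can write programs about " + favorite + " too.")
```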
In fact, here are some great resources for figuring out how it will be best to introduce your student to coding. No computer? No problem! There are even coding possibilities on paper. Scroll through the ideas to explore them all.
If you decide to be a part of The Hour of Code this week, let us know! We’d love to know who else is participating and hear how your kids liked coding.
Hour of Code in Action!
"dump": "CC-MAIN-2018-09",
"url": "http://mamateaches.com/homeschool-tech-tuesday-hour-of-code/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813088.82/warc/CC-MAIN-20180220185145-20180220205145-00518.warc.gz",
"language": "en",
"language_score": 0.9393441677093506,
"token_count": 620,
"score": 3.578125,
"int_score": 4
} |
When it comes to predictive investing, model risk is one of those things that you need to be aware of. The risk of modeling is that you fail to see the forest for the trees. Model risk arises when financial firms use algorithms to measure and monitor market fluctuations and risks; these firms apply big data practices with a focus on automation and process efficiency when they design their models. When a firm’s analysts make decisions based on complicated mathematical formulas, that is when they are most at risk from errors in their models.
Why is model risk important?
Model risk is important because it can be financially detrimental to an organization. Using models that are not properly designed can lead to poor investment decisions, and when a model risk event does occur, the company may have no idea what happened or why.
What are the advantages of model risks?
Advantages to model risks:
1) Any company can use them to forecast markets and make decisions.
2) They provide better forecasts than traditional methods such as asking experts for their predictions.
3) Managers can get a more in-depth view of data when they try to solve problems with the help of models rather than by themselves.
What are the disadvantages of model risks?
1) Model risk can lead to “over-fitting”, which is when a model fits its training data too closely. The model captures noise as well as signal, so it suffers from bias and poor accuracy on unseen samples of data, in which case it might have been better not to design a model at all. (A minimal sketch of how to check for over-fitting follows the conclusion below.)
2) Even if there is no over-fitting, the model may over-predict market movements, and the resulting losses for investors could be great.
3) The company may not know which data are relevant, and so it might design a model with too many factors to consider. As a result, it could lose accuracy and be distracted from what’s important when it creates its forecasts.
4) The company may rely on too much human judgment in its models, which can also result in biased forecasts.
5) If the model is designed poorly, it could even create a danger for the entire market and risk contributing to an economic collapse.
In conclusion, model risks are something that you want to identify when making investments because they can lead to terrible consequences such as economic collapse.
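To make the over-fitting point in the list above concrete, here is a minimal, hypothetical sketch in Python. The data are synthetic and the models are deliberately simple; this illustrates the check itself, not any firm's actual process. It fits a simple model and an over-complex model to the same noisy data, then compares the error on the data used for fitting with the error on held-out data.

```python
# Illustrative over-fitting check on synthetic data (assumptions, not real market data).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)   # the true relationship is linear plus noise

# hold out every other observation as "unseen" data
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for degree in (1, 10):                              # a simple model vs. an over-complex one
    coeffs = np.polyfit(x_train, y_train, degree)   # fit a polynomial of the given degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# A test error far above the training error is the classic warning sign of over-fitting.
```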
How does one reduce Model Risk?
1) One way of reducing model risk is to make sure your analysts are familiar with the data they are using when designing their models. Firms that manage this well also use software that helps them monitor for model risk events.
2) Analysts need to be aware of common pitfalls in modeling. They should do all they can to reduce bias in evaluating forecasts and avoid over-fitting, which leads to volatility in the forecast.
3) Watch out for model risk events to help reduce the chances of critical failures. Monitoring also provides feedback, so team members are constantly aware of how their models are performing. (A minimal monitoring sketch follows this list.)
4) It is important for analysts to have deep knowledge of the market, industry and business concerned, which reduces the number of model risks.
5) Incorporate many data sources into one calculation, which can then generate a model that is more accurate.
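As a rough illustration of point 3 above, the following hypothetical Python sketch tracks a model's rolling forecast error and flags a model risk event when that error drifts above an agreed threshold. The data, the threshold and the point at which the model "breaks" are all invented for the example; a real policy would set its own limits.

```python
# Illustrative monitoring sketch with invented data and an assumed threshold.
import numpy as np

rng = np.random.default_rng(1)
actuals = rng.normal(0.0, 1.0, 120)
forecasts = actuals + rng.normal(0.0, 0.3, 120)   # the model tracks reality well at first
forecasts[80:] += 1.5                             # pretend the model breaks part-way through

errors = np.abs(forecasts - actuals)
window = 20
threshold = 1.0   # assumed limit of the kind a model risk policy might set

for t in range(window, len(errors) + 1):
    rolling_mae = errors[t - window:t].mean()     # mean absolute error over the last window
    if rolling_mae > threshold:
        print(f"Model risk event flagged at observation {t}: rolling MAE = {rolling_mae:.2f}")
        break
else:
    print("No model risk event flagged.")
```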
What are the model risks?
Big data technologies are being used to more easily access complicated information, which is obviously beneficial in many ways. These technologies can also be dangerous for companies struggling with the details of their own structure, however. For example, if a model risk management company doesn’t have complete information on how financial markets work, it doesn’t have accurate data to input into its models. This can be dangerous, because the firm could underestimate the risk involved in making specific decisions, which can lead to inappropriate actions that hurt the company!
Financial analysts need to be sure that they understand how big data technology operates if they want information to turn out accurate and correct. The purpose of this technology is to make complicated information accessible, which is great when the company knows what they are doing. The company needs to be sure its data is complete and accurate in order to use technology for optimal forecasting purposes. If the model risk management team doesn’t understand how to design their models properly-this can lead to potential erroneous forecasts that could hurt the company in terms of investor confidence.
Here are some examples of what Model Risk can look like:
Stock Market Example: You use a particular model to estimate the value of businesses. You don’t fully understand how it works, so there is hidden information that you’re not aware of. If the data you have is biased, you could make a faulty estimate that leads to poor decision-making and your inaccurate forecasts will hurt investor confidence.
Model Risk can also look like this: You have a specific model that predicts market volatility. In order for the data to be accurate it needs to include information about human behavior. If you don’t get enough feedback from your model you could put your company in danger by underestimating how much volatility is actually in the market. (A small sketch of this situation appears after these examples.)
Risk Management Example: You need to make a forecast about an event that affects many people, such as a hurricane. You want to use certain data sources to make this forecast, but not all of them take into consideration all types of factors. This could lead to potential errors in the data, which can then create an inaccurate forecast that causes people to act in ways that are not necessary-such as evacuating when there is no actual reason to do so.
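As a rough, hypothetical illustration of the volatility example above (the returns and parameters are invented): a simple rolling-window volatility estimate reacts slowly when the market suddenly becomes more turbulent, so it understates the risk that is actually present.

```python
# Illustrative volatility lag on synthetic returns (assumed parameters, not real data).
import numpy as np

rng = np.random.default_rng(3)
calm = rng.normal(0.0, 0.01, 100)       # a calm regime: roughly 1% daily volatility
stressed = rng.normal(0.0, 0.03, 10)    # a sudden stressed regime: roughly 3% volatility
returns = np.concatenate([calm, stressed])

window = 20
latest_estimate = returns[-window:].std()   # what a rolling 20-day model "sees" today

print(f"rolling 20-day volatility estimate: {latest_estimate:.3f}")
print("volatility of the new regime:        0.030")
# The window still contains mostly calm days, so the estimate lags the true risk level.
```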
What is model risk management?
Model risk management is the practice of controlling the risks that come from using models to make strategic decisions. There are different types of models that can be used, including mathematical models and simulation models. Mathematical models include things like forecasting, which can be done by large businesses or small teams. Simulation models need to be tested with real-life data before they are implemented so that the team understands their accuracy and potential flaws. In order to manage model risk properly it is important for a company to understand all of the types of models available to it, and how they work.
The main goal is to reduce the number of risks involved in business-related decision making. This includes understanding how big data technology operates while also being able to identify model risks quickly when they occur.
What does a model risk analyst do?
A model risk analyst’s main job is to reduce the risks that can be associated with using models. They do this by creating the best simulation and mathematical models possible, as well as testing them with real-life data before they are implemented. Model risk analysts also help companies understand how big data technology works and how it can be used for forecasting purposes.
What does a model risk management team do?
A model risk management team helps companies understand how to best use models when predicting events in order to make better business decisions. They achieve this through identifying the different types of models available, using real-life data to test them and managing risks associated with their inaccuracies.
What is meant by risk modeling?
Risk modeling is the process of using models to help make investments. There are different types of risk modeling that can be used, including statistical risk modeling, financial risk modeling and econometric risk modeling.
- Statistical Risk Modeling is the practice of predicting events through probabilities expressed in numbers; this includes things like insurance claims. (A short sketch of this approach follows below.)
- Financial Risk Modeling is used to predict changes in financial markets and it can be done by either large companies or small teams.
- Econometric Risk Modeling focuses on real-life data to make predictions about business and includes things like forecasting.
The main goal is to reduce the number of risks involved in investment decision making. This will help companies maximize their returns while maintaining a low amount of risk.
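To make the statistical type above concrete, here is a minimal, hypothetical frequency-severity simulation in Python. The claim frequency and claim-size parameters are invented for illustration; an insurer would estimate them from its own claims history.

```python
# Illustrative frequency-severity simulation with assumed parameters (not real claims data).
import numpy as np

rng = np.random.default_rng(7)
n_sims = 10_000
claims_per_year = 12          # assumed average number of claims per year
mu, sigma = 9.0, 1.0          # assumed lognormal claim-size parameters

totals = np.empty(n_sims)
for i in range(n_sims):
    n_claims = rng.poisson(claims_per_year)               # how many claims this simulated year
    totals[i] = rng.lognormal(mu, sigma, n_claims).sum()  # total cost of those claims

print(f"expected annual loss:  {totals.mean():,.0f}")
print(f"99th-percentile loss:  {np.percentile(totals, 99):,.0f}")
```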
What is meant by the term risk modeling?
The phrase ‘risk modeling’ refers to using models to make better decisions about investments. The idea is that you can reduce the amount of risk involved with your investments while maximizing your returns at the same time.
How do you model risks?
There are many ways to model risks, but it is important to understand how they work. Statistical risk modeling allows you to predict events through probabilities expressed in numbers, while econometric risk modeling focuses on real-life data to make predictions about business. Financial risk modeling is used to predict changes in financial markets and it can be done by either large companies or small teams.
What are the different types of risk modeling?
The most common types of risk modeling are statistical risk modeling, econometric risk modeling and financial risk modeling.
Statistical risk models are used for things like insurance claims, where your data is collected and analyzed to be used for future predictions. It takes into account things like demographics and location to come up with accurate data.
Econometric risk modeling focuses on real-life data to make predictions about business, which are then used for forecasting events in the future. Real-life data includes things like commodity prices, interest rates and inflation.
Financial risk modeling is used to predict changes in financial markets and it can be done by either large companies or small teams. These models forecast using data such as stock market or bond market prices. They look at how much the market goes up or down on an average day, which is then used for future predictions. These types of models include things like arbitrage pricing theory and equilibrium models. (A minimal Value-at-Risk sketch, one common example of this kind of model, follows below.)
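As a minimal, hypothetical example of a financial risk model, the sketch below computes a one-day historical Value-at-Risk figure from synthetic returns. The portfolio value, confidence level and return distribution are assumptions made purely for illustration.

```python
# Illustrative one-day historical Value-at-Risk (VaR) on synthetic returns.
import numpy as np

rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0005, 0.01, 1000)   # assumed return history: ~1% daily volatility

portfolio_value = 1_000_000   # assumed portfolio size
confidence = 0.95

var_return = np.percentile(daily_returns, (1 - confidence) * 100)   # 5th percentile return
var_dollars = -var_return * portfolio_value

print(f"1-day {confidence:.0%} VaR: ${var_dollars:,.0f}")
# Read as: on roughly 95% of days the loss should not exceed this amount,
# and the figure is only as good as the return history fed into the model.
```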
Who does a model risk analyst work with?
Model risk analysts work with model managers (in risk management), who help them build the best simulations and test their models. They also work with model users, who help them make sure the models are easy to use when creating forecasts.
The main goal is to reduce the number of risks involved in investment decision making, which will help companies maximize their returns while maintaining a low amount of risk.
What causes model risk?
There are many things that can cause model risk. For example, the data collection methodology must be understood, and the data itself must be analyzed to determine what it contains and how accurate it is. The more accurate the model, the better it will do in predicting financial markets.
What is model transparency?
Model transparency is the idea that you can explain your model and how it works, which helps reduce risk. You should be able to explain how the data is collected, what each piece of data means, and why you use it, in order to create a good model. Understanding your own data and how it is collected will help you build an accurate model, because there will be fewer chances for error.
What are common outcomes of misused model outputs?
Common outcomes include things like over-investing and under-investing. You could also make risky or false investment decisions, which can be detrimental to your business.
Having a good understanding of how models work will allow you to properly analyze and model risks and can help you reduce the number of risks involved with your investments while maximizing your returns at the same time.
In conclusion on managing model risk and model validation
In this article, we learned that model risk arises when you do not understand your own data or collection methodologies. That can lead to inaccurate models, and making investment decisions based on inaccurate results is not a good idea. We also looked at different types of risk modeling and where they are used in business today. The goal is to reduce the number of risks involved with investments while maximizing returns by using accurate predictive models that take into account all relevant factors. To learn more about any one type of model discussed here, please refer back to this post for detailed explanations of each one's methodology and use cases.
Caveats, disclaimers & effective model risk management
We have covered many topics in this article and want to be clear that any reference to, or mention of, model risk management, managing model risk, model validation, model development, implementation of models, and other activities is not a recommendation or endorsement of particular model risk mitigation techniques. We have also addressed model risk identification, management, governance, and exposure. But do quantitative forecasts really help to reduce model risk? Then there is the model risk policy, which includes a validation procedure as the final step. Internal models are bolstered by alternative models and risk appetite, which is supported by institutional risk culture. Predictive models at each stage of the lifecycle might produce higher model risk and reduced model performance, and we do not endorse any specific course of action.
If we have touched on model monitoring, stress testing and complex models in this article, it is because there are many different factors to consider when looking at how well or badly financial modeling can be done. There is always uncertainty that needs further investigation, which might otherwise lead you down the path of making bad business decisions based on hard data rather than sound assumptions about what would happen if certain things were changed within your model environment.
A lot goes into running these types of money-making machines, at both large banks like JP Morgan Chase & Co. and independent investment firms such as BlackRock Incorporated, which currently manages trillions of dollars' worth of assets worldwide. The challenge isn't just coming up with unique insights for clients. If we have touched on model monitoring, stress testing, model investments, complex models, model uncertainties, model inventory, model transparency, misused model outputs, business decisions, model fails, given model, defective models, long term capital management, misused models, sensitivity analysis, quantitative method, financial contract, regulatory expectations, industry best practices, business processes, federal reserve, hedge fund, model validators, ongoing monitoring, banking supervision, cost reduction, incorrect assumptions, technical errors, independent review, adverse consequences, cost reductions, machine learning, implementation errors, capital adequacy, sufficient resources, programming errors, decision makers, calibration errors, model, supervisory guidance, model use, models, regulatory standards, risk or potential value in the context of this article, it is purely for informational purposes and not to be misconstrued as investment advice or personal opinion. Thank you for reading; we hope that you found this article useful in your quest to understand ESG and sustainability.
"dump": "CC-MAIN-2022-40",
"url": "https://www.esgthereport.com/what-is-meant-by-model-risk/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00039.warc.gz",
"language": "en",
"language_score": 0.9541681408882141,
"token_count": 2929,
"score": 2.75,
"int_score": 3
} |
What pops into your mind when you think "comfort"? Maybe you imagine curling up on the sofa with a thick blanket while watching TV or lounging in a hammock on a tropical island. We usually associate comfort with relaxation, but physical comfort in the workplace is the cornerstone of productivity. Spaces that are uncomfortable because they are too hot, too cold, too noisy, too dark, or too bright restrict the ability of workers to perform to their full potential.
Comfort as a Basic Need
Dr. Jacqueline Vischer, a professor in the Department of Environmental Design at the University of Montreal, created a model for work environments that ranks comfort into an ascending continuum of the physical, functional, and psychological. Like Maslow's Hierarchy of Needs, it argues that before people can strive for psychological needs, basic physiological needs must be satisfied first. Vischer's model suggests that addressing physical comfort—like the quality of light, air, temperature, sound, and ergonomics—has the greatest immediate and direct impact on productivity.
It's not surprising that in a 2015 Leesman survey of nearly 136,000 respondents, the top 3 features identified as the most important part of an effective workspace were all directly related to physical comfort: desk, chair, and temperature control. Before a company starts thinking about what team someone will join or what type of computer they'll have, they need to ensure that their employees have a safe, comfortable place to work.
A 2014 IPSOS and Steelcase survey of more than 10,000 workers found that office workers are losing 86 minutes a day due to distractions. That's about 1.5 hours of productivity lost every day. The comfort-distraction problem is twofold. First, if you are uncomfortable, you are too busy thinking about your discomfort to focus on the task at hand. Second, if you are really uncomfortable, you may start wasting time on behaviors to cope with your discomfort.
Think about the last time you were too hot or too cold at your desk: Maybe you left your desk to go for a walk; got up for a drink; complained to co-workers (effectively spreading your distraction to other people); tried to track down the facility manager; or tried to make your own make-shift solution. If only you had been empowered to have some degree of control over your physical environment and an easy way to achieve comfort.
Personal Control Helps Productivity
Numerous studies have measured the productivity gains achieved when employees have greater personal control over their comfort. Experimental research conducted over the last 30 years has found anywhere between a 2.7% and an 8.6% increase in productivity. The research includes not only studies based on self-reported surveys, but also studies that measured actual productivity increases when workers moved from a baseline building to a new building offering personal control of lighting, air ventilation, and other ambient systems. Performance on everything from clerical tasks to tasks requiring logical thinking and skilled manual work to very rapid manual work improved with personal control.
Workplace Comfort for Increased Productivity
In the workplace, comfort levels don't necessarily represent a state of relaxation; they reflect a state free from pain and poised for optimal productivity. A space that is slightly "uncomfortable" can keep people more alert; it's not about discomfort, it's about breaking up a neutral comfort state. Slight variations in the ambient environment can become levers for productivity.
A 2000 review of experimental research on green buildings and occupant productivity published by Dr. Judith H. Heerwagen, who is currently a Program Expert at GSA's Office of Federal High Performance Green Buildings, notes that a static comfort state does not always lead to the highest performance outcomes. In fact, there are times when being cooler or warmer rather than in a neutral comfort state may enhance performance. For instance, studies have shown that slightly warm temperatures reduce anxiety and may generate a feeling of wakeful relaxation—an emotional state associated with creative problem solving. (This would explain our design team’s productive brainstorming sessions in the warm, sunny corner of the office!) The complexity of workplace comfort and productivity underscores the need for variety and personal control.
The Comfy Team is obsessed with creating the ultimate productive workplace. We ask ourselves: What would a space look and feel like if it was truly designed to help everyone do their best work? What does it mean to create a place that makes you feel psychologically, socially, and physically comfortable? The first step in creating a productive and delightful space—one workplace designers would refer to as a "congenial environment"—is eliminating the negatives that are holding people back. By delivering comfort, eliminating distractions, and offering greater individual control, we are able to help everyone be their most productive selves.
"dump": "CC-MAIN-2020-10",
"url": "https://www.comfyapp.com/blog/the-comfort-productivity-connection/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141749.3/warc/CC-MAIN-20200217055517-20200217085517-00152.warc.gz",
"language": "en",
"language_score": 0.9527186751365662,
"token_count": 1018,
"score": 2.515625,
"int_score": 3
} |
1.3.2 Back to the experts
Whether you are a digital optimist or pessimist, it’s obvious that while technology brings about opportunities, it also has associated risks. This has led to some paediatricians, psychiatrists and psychologists arguing that parents should limit young children’s use of, and exposure to, new digital technologies. But is this really the answer? Is simply restricting children’s access actually the best way to ensure their safety?
Sonia Livingstone is a professor in social psychology and a leading researcher in children’s media. In the following video, she tackles some of these important questions and considers whether prevention really is the best cure. She considers how restricting access to technology may also restrict opportunities for children to develop resilience against future harm.
Back to the experts
What do you think about her advice on minimising online risks and on how parents can best support children's engagement with technology?
"dump": "CC-MAIN-2018-39",
"url": "http://www.open.edu/openlearn/ocw/mod/oucontent/view.php?id=21138§ion=4.2",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156224.9/warc/CC-MAIN-20180919122227-20180919142227-00288.warc.gz",
"language": "en",
"language_score": 0.9461080431938171,
"token_count": 193,
"score": 3.515625,
"int_score": 4
} |
Indoles are a class of aromatic heterocyclic compounds with a bicyclic structure consisting of a benzene ring fused to a pyrrole ring. Indoles are ubiquitous in the natural world and can be produced by various bacteria. The synthesis and functionalization of indoles has also been a major area of focus for synthetic organic chemists, and numerous methods for the preparation of indoles have been developed.
Fig. 1 indole and its vital derivatives
Indole and the simple alkylindoles are colorless crystalline solids with a range of odors, from naphthalene-like in the case of indole itself to faecal in the case of skatole (3-methyl-1H-indole). Like pyrrole, indole is a very weak base, so indole itself and its simple derivatives react only with strong acids. As an electron-rich heteroaromatic, indole is subject to oxidative processes, including photosensitized electron transfer, and many indoles are readily oxidized by exposure to atmospheric oxygen. The indole ring is also reactive toward electrophilic substitution, with the 3-position being the most reactive site; this reactivity has been extensively exploited to synthesize a large number of indole derivatives.
The indole nucleus is an important element of many natural and synthetic molecules with crucial biological activity, and such molecules have been widely applied in pharmaceutical settings to treat different diseases. Examples shown in fig. 2 include sumatriptan (1, used for the treatment of migraine), ondansetron (2, used to suppress the nausea and vomiting caused by cancer chemotherapy and radiotherapy), indomethacin (3, used for the treatment of rheumatoid arthritis), delavirdine (4, an anti-HIV drug), and apaziquone and mitraphylline (5 and 6, both anticancer agents).
Fig. 2 indole-based drugs
Indoles can act as plant hormones that play a significant role in the growth and development of plants. Indole-3-ylacetic acid, or IAA, the first plant hormone to be discovered, participates in the regulation of many physiological processes, such as the pattern of cell division and differentiation, apical dominance and directional growth. 3-dimethylamino indole is a botanical pesticide that not only inhibits the growth of undesired weeds but also deters insects from feeding.
Fig. 3 indoles applied in agriculture
Indole derivatives have strong odors and can be blended artificially for perfume manufacturing. 3-methyl-1H-indole, also called skatole, gives off a flowery fragrance when appropriately diluted and is used extensively in the preparation of synthetic flower oils.
Indoles are widely used in the dyestuff industry, in classes such as phthalocyanine, azo and cationic dyes, and many indole-based dyes have been developed to meet different demands. For instance, indocyanine green is a diagnostic dye used for angiography in medicine.
Fig. 4 indole-based dyes
"dump": "CC-MAIN-2024-10",
"url": "https://buildingblock.bocsci.com/products/indoles-3180.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948235171.95/warc/CC-MAIN-20240305124045-20240305154045-00389.warc.gz",
"language": "en",
"language_score": 0.9195944666862488,
"token_count": 668,
"score": 3.578125,
"int_score": 4
} |
Before there were programmable computers, humans were the only computers. Processes were broken down into sequential steps, which a pool of people worked on, hour-after-hour, day-after-day. This process was labor-intensive and prone to error. Mathematicians sought to find a more efficient means of simulating the human computer.
During this period, our world of computing progressed, as sequential tasks were finally captured into a form that a machine could process. This period brought the world relay-logic, the precursor to modern day computer circuits. People worked as programmers, converting sequential instructions into the form machines and circuits could execute. The first commercial computer was delivered in 1951. This period was a major turning point in our computing history—one which shaped our field and roles.
Founder of Computer Science and Modern Computing – Alan Turing
This discussion begins with the founder of computer science and modern computing, Alan Turing. Every computer science and software engineering student is required to learn about Turing, as computing began here. In 1936, Turing invented the Turing machine. You can read his paper titled, “On Computable Numbers, with an Application to the Entscheidungsproblem”. All modern-day computers are based upon the Turing machine. Therefore, we need to spend some time discussing it.
What is the Entscheidungsproblem?
In the early 1920s, German mathematician David Hilbert challenged the world to convert the human computer into provable, repetitive, reliable, and consistent mathematical expressions. He essentially wanted to arrive at a true or false state.
A true statement equates to a numeric value of one. In electricity, a true value equates to a state of on. Conversely, a false value, which is a numeric value of zero, equates to an off state in electrical circuits.
Think about this challenge. How can you capture deductive reasoning into discernible proofs in maths? Can every mathematical problem be solved in this manner? Could we capture the logical steps to problem-solving into the form of math?
He called this challenge Entscheidungsproblem, which is the “decision problem.”
In Turing’s paper, he set off to tackle computable numbers and the Entscheidungsproblem. He disproved Hilbert’s challenge by showing there is no absolute method which can prove or disprove a problem in all cases. What changed our world was his proof, i.e. the Turing machine.
What is the Turing Machine?
The Turing machine solves any computing problem which can be translated into sequential and logical steps. Stop and think about the impact of this discovery. If you can describe how to solve a problem, then his machine can solve it. The Turing machine converted the human machine into a mechanical machine.
How does it achieve this?
Think of his machine as a very simple robot with the ability to move from side-to-side to specific points on a long paper tape. Now imagine this tape having small boxes in a row all the way down the length of the tape. Within each box it can have a single 1, 0, or nothing. That’s it. The robot slides along to a specific box (or state), reads the value, fetches the instructions for that box location (code), and then does what it says. It can leave the value as is or change its state (just like in memory). Then, it would move to the position that particular code said to go to for the next state. This process continues until it receives a halt command.
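To make that read-write-move loop concrete, here is a minimal sketch of a Turing-style machine in Python. The rule table, tape symbols, and the "flip every bit" program are invented for illustration and are not Turing's original formulation.

```python
# A minimal sketch of a Turing-style machine with a made-up rule table.

def run_turing_machine(tape, rules, state="start", pos=0, max_steps=100):
    """Repeatedly read the current cell, look up (state, symbol) in the
    rule table, write a symbol, move left or right, and change state."""
    tape = dict(enumerate(tape))           # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")        # blank cell if never written
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

# Example rule table: flip every bit on the tape, then halt at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine(list("1011"), rules))  # -> ['0', '1', '0', '0', ' ']
```

The thing to notice is that both the "code" (the rule table) and the "data" (the tape) are just symbols the machine reads and writes, which is exactly the insight credited to Turing below.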
We will go into more depth about Turing’s machine and how it works in CS 0100 Big Picture of the Computer and the Web. For now, remember that Turing gave us a fundamental understanding that code and data can be represented the same way in a machine.
Meet John von Neumann
In 1945, John von Neumann took Turing’s ideas and developed the machine’s architecture. He designed a central core to fetch both the data and code out of memory, execute the code (perform the maths), store the results, and then repeat the process until the halt command was given. This may not sound amazing now, but in its day, it was revolutionary.
His architecture included what he called "conditional control transfer," or subroutines in today's terms. Think about Turing's machine. Each time it came to a box, it fetched the instructions for that location, i.e. it went into another chunk of code or subroutine. Instead of a linear, sequential approach, where a machine can do these steps in order, this design allowed for moving around or jumping to specific points in code. This concept led to branching and conditional statements, such as IF (instruction) THEN (instruction), as well as looping with a FOR command. This idea of "conditional control transfer" led to the concept of "libraries" and "reuse," each of which is a cornerstone of software engineering principles and quality code.
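To connect "conditional control transfer" to something recognizable today, here is a tiny illustrative sketch; the function name and values are invented for this example.

```python
# A reusable subroutine ("library" code): decide what to do with one value.
def classify(value):
    # Conditional control transfer: IF ... THEN ...
    if value >= 0:
        return "non-negative"
    return "negative"

# Looping: FOR each value, jump into the subroutine and come back.
for value in [3, -7, 0]:
    print(value, "is", classify(value))
```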
Building off of Turing
In 1938, Konrad Zuse built the first mechanical binary programmable computer. Then, in 1943, Thomas Flowers built Colossus, which many consider to be the first programmable, electronic digital computer. It was used to decipher encrypted messages between Adolf Hitler and his generals during World War II. Machines were not yet general purpose; rather, each did a specific function for a specific purpose. The first commercial computer was delivered to the U.S. Bureau of the Census in 1951. It was called UNIVAC I, and it was the first computer used to predict the outcome of a presidential election.
Early Programming Languages
Early programming languages were written in machine language, where every instruction or task was written in an on or off state, which a machine can understand and execute. Recall that on is the same as a decimal value 1, while off is a decimal value of 0. These simple on and off states are the essence of digital circuitry.
Think of a light switch on your wall. You flip the switch in one direction and the light comes on. Flip it the other direction and the light goes off. This switch’s circuit is similar to the simple electrical circuit diagram in Figure 1. The switch on your wall opens and closes the circuit. When closed, the electricity flows freely through the power source (a battery in this case) to the switch on your wall and then to the lamp in your room, which turns the light on. When the switch is open (as it is shown in the diagram), the circuit is broken and no electricity flows, which turns the light off.
Now think of the on state as a value of 1. When the switch is closed (picture pushing the switch down until it touches wires on both sides), power flows, the light comes on, and the value is 1 or on. Invert that thought process. Open the switch. What happens? The switch is open, the circuit is broken, power stops flowing, the light goes off, and the value is 0 or off.
Within a machine, past and present, the power states flowing through circuits are represented by 1s and 0s. Machine language uses these 1 and 0 values to turn on and off states within the machine. Combining these 1s and 0s allows the programmer to get the machine to do what is needed. We will cover machine logic and language later. The takeaway for now is:
- An “on” state is represented by a decimal value of 1.
- An “off” state is represented by a decimal value of 0.
In the first programming languages, every instruction was written out by hand as a combination of these 1s and 0s, i.e. in a binary representation. This is machine code.
Imagine programming in just 1s and 0s. You would need to code each and every step to tell the machine to do this, then that, and so on. Let's see what steps you would walk through to compute the following equation: A = B + C.
Steps to compute A = B + C in Machine Code
In machine code, the first part of the binary code represents the task to be done. It is a code in itself. Therefore, a load instruction may be 0010, an add instruction may be 1010, and a store may be 1100. Using these instructions, let’s break down the steps required to tell the computer how to solve the equation A = B + C.
Note: In this next section, don’t worry about understanding binary code, as we will explain the binary numbering system later in this course. The memory locations and instructions are arbitrary. If you want to see what the decimal value converts to, you can use a converter such as this one.
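If you would like to check the decimal values yourself, most languages can do the conversion directly; here is an assumed Python snippet that mirrors what the linked converter does.

```python
# Binary-to-decimal and decimal-to-binary conversions used in the steps below.
print(int("1011", 2))   # -> 11
print(int("1101", 2))   # -> 13
print(bin(12))          # -> '0b1100'
```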
Step 1: Load B
First, we need to load the data stored in B into a working memory location for us to manipulate and use. In machine code, this means we are doing a load instruction. We tell the computer to go and grab the location where B is stored in memory (e.g. memory location 1) and then put it (load it) into the working memory (e.g. at location 11). In binary code, this task becomes:
0010 1011 0001
- 0010 is the Load instruction
- 1011 (decimal value 11) is the working memory location of where to store B
- 0001 (decimal value 1) is the storage memory location of B, i.e. where it is stored.
Let’s do the next step.
Step 2: Load C
Just like in Step 1, we need to load the data for C into memory in order to work on it. Therefore, it is a load instruction again, where C is in memory location 2 and we put it into the working memory location of 12.
0010 1100 0010
- 0010 is the Load instruction
- 1100 (decimal value 12) is the working memory location of where to store C
- 0010 (decimal value 2) is the storage memory location of C, i.e. where it is stored.
At this point, the computer has both B and C loaded into the working memory. Now we can do the next step.
Step 3: Add the numbers in working memory
In this step we add the two numbers together. Therefore, we need to tell the computer to do an add instruction and store the result into a working memory location (e.g. 13), using the memory location for B (which is 11 from above) and C (which is 12 from above).
1010 1101 1011 1100
- 1010 is the add instruction
- 1101 (or decimal 13) is the memory location where to store the result.
- 1011 (or decimal 11) is the memory location where B is stored in working memory.
- 1100 (or decimal 12) is the memory location where C is stored in working memory.
Now we have a result for A. Next we need to store it.
Step 4: Store the result
It’s time to store the result into a stored memory location. We need to tell the computer to do a store instruction from memory location 13 into the memory location (3).
1100 0011 1101
- 1100 is the store instruction
- 0011 (or decimal 3) is the memory location where to store the result.
- 1101 (or decimal 13) is the memory location where result is stored in working memory.
Congratulations, you just wrote machine code.
Complete machine code
The complete machine code for A = B + C is:
0010 1011 0001
0010 1100 0010
1010 1101 1011 1100
1100 0011 1101
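To see how a machine might step through this program, here is a sketch of a fetch-decode-execute loop in Python for the toy instruction set used above (0010 = load, 1010 = add, 1100 = store). The starting values for B and C, and the way memory is modeled as a dictionary, are assumptions made purely for illustration.

```python
# Toy fetch-decode-execute loop for the A = B + C walkthrough above.
# Opcodes: 0010 = load, 1010 = add, 1100 = store (as in the article).

program = [
    "0010 1011 0001",       # load B (storage loc 1) into working loc 11
    "0010 1100 0010",       # load C (storage loc 2) into working loc 12
    "1010 1101 1011 1100",  # add locs 11 and 12, put the result in loc 13
    "1100 0011 1101",       # store loc 13 into storage loc 3 (A)
]

memory = {1: 4, 2: 6}       # hypothetical starting values: B = 4, C = 6

for line in program:
    fields = [int(word, 2) for word in line.split()]   # decode binary fields
    opcode, args = fields[0], fields[1:]
    if opcode == 0b0010:                               # load
        dest, src = args
        memory[dest] = memory[src]
    elif opcode == 0b1010:                             # add
        dest, left, right = args
        memory[dest] = memory[left] + memory[right]
    elif opcode == 0b1100:                             # store
        dest, src = args
        memory[dest] = memory[src]

print("A =", memory[3])     # -> A = 10
```

Running it prints A = 10, the sum of the assumed values B = 4 and C = 6, after executing the same four instructions listed above.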
Wrap it Up
Looking at the steps and having to think in binary, remember the instruction in binary code, and then ensure you don’t make a mistake would be very tedious and time-consuming. It’s no wonder that the early programmers were mathematicians and engineers.
Your Code to Machine Code
The binary code you just stepped through is the code your human-readable code is eventually converted into after it goes through the parsing, translation, and final conversion processes.
Working hour after hour in raw numbers is not efficient. As we will see in the next section, our role expanded as translators were invented, allowing us to abstract away the machine code into a form we could read and understand.
Code. Eat. Code. Sleep. Dream about Code. Code.
Total Lab Runtime: 01:07:56
"dump": "CC-MAIN-2021-21",
"url": "https://knowthecode.io/labs/evolution-of-computing/episode-2",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00396.warc.gz",
"language": "en",
"language_score": 0.937768280506134,
"token_count": 2615,
"score": 4.09375,
"int_score": 4
} |
Transgender and gender-diverse adults are three to six times more likely than cisgender adults (individuals whose gender identity corresponds to their sex assigned at birth) to be diagnosed as autistic, according to a new study by scientists at the University of Cambridge's Autism Research Centre.
This research, conducted using data from over 600,000 adult individuals, confirms previous smaller-scale studies from clinics. The results are published today in Nature Communications.
A better understanding of gender diversity in autistic individuals will help provide better access to health care and post-diagnostic support for autistic transgender and gender-diverse individuals.
The team used five different datasets, including a dataset of over 500,000 individuals collected as a part of the Channel 4 documentary “Are you autistic?” In these datasets, participants had provided information about their gender identity, and if they received a diagnosis of autism or other psychiatric conditions such as depression or schizophrenia. Participants also completed a measure of autistic traits.
Strikingly, across all five datasets, the team found that transgender and gender-diverse adults were between three and six times more likely to indicate that they had been diagnosed as autistic compared to cisgender individuals. While the study used data from adults who indicated that they had received an autism diagnosis, it is likely that many individuals on the autistic spectrum are undiagnosed. As around 1.1% of the UK population is estimated to be on the autistic spectrum, this result would suggest that somewhere between 3.5% and 6.5% of transgender and gender-diverse adults are on the autistic spectrum.
Dr. Meng-Chuan Lai, a collaborator on the study at the University of Toronto, said: “We are beginning to learn more about how the presentation of autism differs in cisgender men and women. Understanding how autism manifests in transgender and gender-diverse people will enrich our knowledge about autism in relation to gender and sex. This enables clinicians to better recognize autism and provide personalized support and health care.”
Transgender and gender-diverse individuals were also more likely to indicate that they had received diagnoses of mental health conditions, particularly depression, which they were more than twice as likely as their cisgender counterparts to have experienced. Transgender and gender-diverse individuals also, on average, scored higher on measures of autistic traits compared to cisgender individuals, regardless of whether they had an autism diagnosis.
Dr. Varun Warrier, who led the study, said: “This finding, using large datasets, confirms that the co-occurrence between being autistic and being transgender and gender-diverse is robust. We now need to understand the significance of this co-occurrence, and identify and address the factors that contribute to well-being of this group of people.”
The study investigates the co-occurrence between gender identity and autism. The team did not investigate if one causes the other.
Professor Simon Baron-Cohen, Director of the Autism Research Centre at Cambridge, and a member of the team, said: “Both autistic individuals and transgender and gender-diverse individuals are marginalized and experience multiple vulnerabilities. It is important that we safe-guard the rights of these individuals to be themselves, receive the requisite support, and enjoy equality and celebration of their differences, free of societal stigma or discrimination.”
Reference: “Elevated rates of autism, other neurodevelopmental and psychiatric diagnoses, and autistic traits in transgender and gender-diverse individuals” by Varun Warrier, David M. Greenberg, Elizabeth Weir, Clara Buckingham, Paula Smith, Meng-Chuan Lai, Carrie Allison and Simon Baron-Cohen, 7 August 2020, Nature Communications.
This study was supported by the Autism Research Trust, the Medical Research Council, the Wellcome Trust, and the Templeton World Charity Foundation, Inc. It was conducted in association with the NIHR CLAHRC for Cambridgeshire and Peterborough NHS Foundation Trust, and the NIHR Cambridge Biomedical Research Centre.
"dump": "CC-MAIN-2021-39",
"url": "https://scitechdaily.com/transgender-and-gender-diverse-individuals-far-more-likely-to-be-autistic/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053759.24/warc/CC-MAIN-20210916204111-20210916234111-00636.warc.gz",
"language": "en",
"language_score": 0.9472150802612305,
"token_count": 824,
"score": 3.140625,
"int_score": 3
} |
"It gets late early around here." - Yogi Berra
January 31, 2013
HISTORY: Reflections on the American Revolution, Part II of III: The Generals
How did America win its independence? In Part I of this essay, I looked at the population trends, foreign alliances, and equipment and weather conditions under which the American Revolution was fought. Let's add some thoughts on the leaders of the principal combatants: the American and British generals. The American command was far from perfect - but the war could have turned out very differently if the American side had not had the advantages of leadership it did, first and foremost the singular character of George Washington.
Washington, Washington: Any history of the Revolutionary War has to consider the unique leadership of George Washington. 43 years old when he assumed command, Washington came to the war with combat leadership experience from the French and Indian War, training as a surveyor that prepared him well to deal with maps and terrain, a decade of active fox hunting that had made him an excellent horseman, and experience in the Virginia House of Burgesses that had educated him in practical politics. Physically, Washington was a man of great strength, vigor and endurance and almost supernatural good luck. Washington's robust constitution survived smallpox, diphtheria, multiple bouts of malaria, pleurisy, dysentery (in 1755, the 23-year-old Washington had to ride to Braddock's defeat on a padded saddle due to painful hemorrhoids), quinsy (an abcess of the tonsils that laid him out in 1779) and possibly typhoid. In the rout of the Braddock expedition, Washington had two horses shot from under him and four bullet holes in his coat, yet neither then nor at any time during his military career was Washington wounded despite often being in the thick of battle and presenting an enormously conspicuous target (one of the tallest men in the Continental Army, in the most brilliant blue uniform, mounted on horseback).
But he had his weaknesses: he'd never had command of anything as large, diverse and complex as the Continental Army (whose very name bespoke its ambitions), and while Washington was smart, adaptable, detail-oriented and sometimes inspired, he was not a naturally brilliant military mind: his errors throughout the New York campaign would illustrate that he was no Napoleon, just as - in more fateful ways - Napoleon was no Washington.
I've noted before the success of Washington's frequent tactic of hit-and-run attacks followed by retreats and more retreats. Washington's overall long-term strategy ended up being one of simply enduring in the field, never putting his whole army at risk until he had the enemy trapped. But it's crucial to always bear in mind that this strategy ran contrary to everything in Washington's temperament. By nature, he was an aggressive, audacious military man who loved the offensive. Frequently throughout the war, Washington developed complex and daring offensive plans. Sometimes, as at Trenton in December 1776 and the following year's effort at a coup de main at Germantown in October 1777, he put those plans in action. The attack at Germantown was designed to catch Cornwallis' 9,000-man army by surprise with a numerically superior force and destroy it while it was divided from the rest of Howe's army quartered at Philadelphia. The plan, calling for four columns to fall on the British more or less simultaneously, was too complex and ambitious (the largest Continental Army column arrived late and the two militia columns had little effect) and ended in defeat. But like the 1968 Tet Offensive, it was a morale and propaganda winner for the Americans just to mount such an assault. It raised the Continental Army's morale, stunned the British command (which had thought Washington beaten and in retreat after the prior month's defeat at Brandywine that had cleared the way for the occupation of Philadelphia) and, together with the victory at Saratoga, it helped persuade the French that the American war effort was serious and had staying power. Washington's audacity on this occasion paid dividends even in defeat.
But at least as often, Washington allowed his war council (composed of his subordinates and, after the arrival of the French, Gen. Rochambeau, who made clear that he would defer to Washington's ultimate decisions) to talk him out of his own overly ambitious plans even after he had drawn them up at length: a hazardous amphibious assault on Boston during the 1775-76 siege (complete with, in one version of the plan, a vanguard of soldiers on ice skates attacking across the frozen harbor); a march on the British war purse at New Brunswick with an army exhausted after Trenton and Princeton in January 1777; an attack on New York in 1780 or 1781 when Rochambeau wanted to chase Cornwallis to Yorktown instead. His willingness to listen to the counsel of cooler heads is what separated the practical Washington from more tactically brilliant but ultimately undone-by-hubris generals from Napoleon to Robert E. Lee.
Relatedly, Washington learned from his mistakes. The desire for decisive battle and protection of politically important turf had led him to risk annihilation of the largest army he would have during the war at the Battle of Brooklyn; thereafter, he would not stage a do-or-die stand to protect any particular spot of land. Washington had signed off on the disastrous 1775 invasion of Quebec; he would resist all further entreaties to stage a second offensive.
If Washington's decisionmaking was sometimes imperfect, his temperament and leadership were flawless. Washington was neither deluded nor emotionless; time and again, his correspondence showed him verging on despondency at the condition of his army and the perils it faced, and we know he was capable of towering rages. But in the presence of his men (who were apt to get too high after heady victories and too low in defeat) and occasionally his adversaries, he never projected anything but steady confidence and endurance. Washington was not, perhaps, a nice man; even his close associates tended to regard him as the same distant marble statue we see him as today (Hamilton once bet a colleague at the Constitutional Convention a dinner if he'd go slap Washington on the back and act familiar with him; Washington pried his hand off and froze him with such a stare he told Hamilton afterwards he wouldn't try it again for anything). But Washington put tremendous, conscious effort into acting the part of a great man at all times in order to become one. Washington had his vices, chief among them his ownership of slaves, but his virtues were almost a textbook of the qualities needed of the leader of a long, dangerous struggle through major adversity: perseverance, discipline of himself and others, attention to detail, fairness, integrity, resourcefulness, physical courage, endurance of hardship, and an unblinking practicality. There's a great story about Washington breaking up a snowball fight that escalated into an enormous brawl between soldiers from Massachusetts and newly-arrived Virginia riflemen in Harvard Yard during the siege of Boston, possibly with racial overtones due to the presence of black soldiers in the Massachusetts regiment; a young observer recounted:
Reinforced by their friends, in less than five minutes more than a thousand combatants were on the field, struggling for the mastery.
You didn't mess with George Washington. But men would follow him anywhere.
Greene and Knox: The Continental Army's leaders were a mixed bag, and more than a few of those who served with distinction are largely forgotten today. If there are two names besides George Washington that every American schoolchild should learn from the Revolutionary War, it's Nathanael Greene and Henry Knox. Of all the Continental Army's generals, only Washington, Greene and Knox served the entire duration of the war. While men like Charles Lee, Horatio Gates, Ethan Allen and Benedict Arnold contributed more than their share of ego, drama, and backbiting, both Greene and Knox were unswervingly, uncomplicatedly loyal both to Washington and the cause he fought for. In the long run, that served them better than any thirst for glory. Greene was offered the post of Secretary of War under the Articles of Confederation; when he declined, Knox took the job and continued to hold it under Washington's presidency.
As soldiers, Greene and Knox were emblematic of one of the major characteristics of early America: they were self-educated, learning most of what they knew of military matters from books. Formal education was spotty in the colonies; even Washington, as a wealthy Virginia planter, never went to college and was mainly what we would call today "home schooled." Yet early Americans didn't let a lack of schooling bar them from the quest for knowledge. Ben Franklin had nearly no formal education at all, but by the closing decades of his public life was arguably the world's most respected intellectual. Knox was educated at Boston Latin, but unschooled in war; his military experience was five years in an artillery company of the Massachusetts militia. Greene had neither schooling nor military experience, but read whatever he could get his hands on. At the outset of the war, they were young small businessmen: Knox a 25 year old bookseller from Boston, Greene a 32-year-old Quaker from Rhode Island who ran the family forge. Both prepared for combat by reading books on military strategy and tactics; had there been a "War for Dummies" in 1775, they would have read it without embarrassment. (Washington, too, ordered a number of military volumes when heading off to Philadelphia in 1775; as Victor Davis Hanson has noted, one of the distinctive features of Western civilization is a long written tradition of military science, allowing the widespread dissemination of the latest ideas about warmaking). Yet, self-educated though they were, they knew what they were missing: Knox spent years agitating for the establishment of an American military academy to teach the art of war, which would eventually be founded at West Point under the Jefferson Administration.
Knox pulled off perhaps the most remarkable and dramatic feat of the war in the winter of 1775-76, when a team led by he and his brother made the long, snow-covered trek from Boston to Fort Ticonderoga in upstate New York, loaded up all its heavy artillery, returned with every single artillery piece intact, and then in one night in March set up the guns on Dorchester Heights, the peninsula overlooking Boston from the south. The British were staggered, and forced to evacuate. The full tale, as told by David McCullough in 1776, is as amazing as anything in American history, and I can't hope to do it justice here. Knox would go on to prove himself time and again as the chief artillery expert in the Continental army from Boston to Trenton (where his guns commanded the center of the town) all the way through Yorktown (where the shelling of Cornwallis' encampment brought him to his knees), and would be present for all of Washington's major engagements. Knox' amateurism led him astray on occasion; a few of the guns under his command exploded on their handlers in Boston and again later in New York, and he is generally credited with the misguided decision to send waves of troops against a barricaded position at Germantown on the basis of an inflexible application of the maxim (which he'd probably picked up from a book) about not leaving a fortified position to your rear. But his overall record was one of practicality, resourcefulness and unwavering dedication to the cause.
As for Greene, he too can be found at all Washington's major battles of the first half of the war, as Washington's operational right-hand man; the Quartermaster General of the Army after Valley Forge; the commander (with Lafayette) of the first joint operation with the French, a failed effort to break the occupation of Newport, Rhode Island; and finally Washington's choice to assume command of the southern theater of the war after the serial failures of Robert Howe at Savannnah, Benjamin Lincoln at Charleston and Horatio Gates at Camden. The fiasco at Camden ended the military career of Gates, the victor of Saratoga, and left the Continental Army in the South in shambles, but it would prove Greene's finest hour. Greene had little time to rebuild the shattered army; he rarely commanded more than a few thousand men, and often had to rely on the aid of the local militia. And yet, with a major assist from those militia, he staged a brilliant series of retreats and maneuvers to keep Cornwallis from taking control over the region or from capturing and crushing his army. It was Greene who said of this campaign, "We fight, get beaten, rise, and fight again." After the costly March 1781 Battle of Guilford Court House, Cornwallis came to the decision that he needed to stop chasing Greene around the Carolinas and head north to Virginia, setting in motion the fateful chain of events that led to Yorktown.
Unfortunately, and characteristically of life in the 18th century, many of the leading figures of the Continental Army and Revolutionary militia did not live that long after the war's end, including Greene, who died of sunstroke at age 43. Charles Lee died in 1782, Lord Stirling in 1783, Greene in 1786, Ethan Allen in 1789, Israel Putnam in 1790, John Paul Jones in 1792, John Sullivan and Francis Marion in 1795, Anthony Wayne in 1796, and Washington himself in 1799. While numerous places in the United States today bear their names (here in New York, Greene as well as Putnam, Sullivan and militia leader Nicholas Herkimer are the namesakes of counties), their popular memories today are less vivid than those of Revolutionary War figures like Alexander Hamilton who had more prominent political roles. But nobody aside from Washington himself contributed more to victory than Greene and Knox.
The European Adventurers: The American cause was, of course, aided as well by a handful of Continental European volunteers - Marquis de Lafayette, Baron von Steuben, Casimir Pulaski, Tadeusz Kosciuszko, Baron de Kalb (this is aside from some of the American leaders like John Paul Jones and Charles Lee who were native to the British Isles). Two of those, Pulaski and de Kalb, were killed in the early, unsuccessful battles of the southern campaign, Pulaski at Savannah and de Kalb at Camden. Both Lafayette and Kosciuszko would return to try - with mixed success - to lead their own homelands to a republican future; Jones would serve in Catherine the Great's navy after the war, terrorizing the Turkish navy and becoming an honorary Cossack in the process. Only von Steuben would enjoy a quiet life in his adopted country.
Lafayette's exploits, especially during the Yorktown campaign, were significant and memorable, and in a general sense he contributed to the cementing of the alliance with France. And Pulaski played a key role in organizing the American cavalry. But von Steuben was likely the most important to the Continental Army's victory.
In contrast to the self-educated citizen soldiers running the American army, von Steuben came from the opposite end of the 18th century military spectrum: born a sort of aristocrat and a Prussian army brat, he had served as a staff officer on the professional Prussian general staff, the first of its kind in the world, and been instructed by Frederick the Great himself. Unlike some of the other Europeans - but like Jones, who fled to America because he was wanted for the murder of a sailor he had flogged to death - von Steuben was no starry-eyed idealist. He was an unemployed professional soldier, deeply in debt, who came to the American cause only after running out of prospective employers in Germany, and was trailed by an unverified rumor that he was fleeing prosecution for being "accused of having taken familiarities with young boys." He was passed off to Congress, perhaps knowingly and possibly with the complicity of Ben Franklin (who recognized his value), as one of Frederick the Great's generals rather than a captain on the general staff, and even his aristocratic title had been inflated and possibly invented. He spoke little or no English and often asked his translator to curse at the soldiers on his behalf.
But whatever his background, von Steuben's discipline and professional rigor were crucial. He established badly-needed standards for sanitary conditions in the army, introduced training in use of the bayonet, and taught the men the sort of maneuvers that were essential to 18th century warfare. He is, on the whole, credited with the improved drill and discipline that emerged from Valley Forge and was displayed in the 1778 Battle of Monmouth. Monmouth, in combination with the French entry into the war, induced the British to mostly abandon the strategy of trying to hunt down Washington's army and focus instead on offensive operations in the South. Von Steuben's field manual was still the U.S. Army standard until the War of 1812. If Greene and Knox are emblems of traditional American virtues, the Continental Army's debt to von Steuben and the other Europeans is emblematic of America's adaptability and openness to the contributions of new arrivals.
The British Command: While there were many important figures on both sides of the war - I've only scratched the surface here on the American side - essentially all the important decisions on the British side were made by six generals: Gage, Howe, Clinton, Cornwallis, Burgoyne, and (in Quebec in 1775-76) Guy Carleton (Carleton also briefly commanded the British evacuation of New York in 1783 at the war's end). Where they went wrong provides an instructive contrast with Washington's command.
All six were professional military men, veterans of the Seven Years'/French and Indian War: Clinton, Cornwallis and Burgoyne had fought only in Europe, while Howe and Carleton had fought in the Quebec campaign that culminated in Wolfe's capture of the fortified city, and Gage had been a part of the Braddock expedition and thus seen Washington up close in action. And by and large, with the arguable exception of Gage, they fought with tactical skill and professionalism against the Americans. Yet they have gone down in history as architects of a great failure, weak in comparison to predecessors like Wolfe and dwarfed by the likes of Wellington who succeeded them. Aside from Carleton, only Cornwallis really survived the war with his domestic reputation and career intact, going on to years of highly influential service as a colonial administrator in Ireland and India that shaped the Empire in important ways. Howe was the only other one of the six besides Cornwallis to command troops in combat again, for a time during the early Napoleonic Wars.
The British failure was partly a matter of the personalities involved, but also one of basic strategic incoherence. They never really had a fully thought-out strategy. Only Clinton and Cornwallis really seemed to understand the paramount importance of putting Washington's army out of business early in the war, and their aggressive plans of flanking attacks and hot pursuits were frequently overriden by Gage and Howe, who were less apt than Washington to heed the good advice of their subordinates. Washington learned from his mistakes; Howe, in particular, did not, on multiple occasions settling down to stationary positions when he should have been finishing off Washington.
The British could have adopted a scorched-earth approach like Sherman in the Civil War; General James Grant urged the burning of major cities in the North, and in the southern campaign Banastre Tarleton's forces (including Loyalist partisans) did what they could to spread terror in the countryside, including some notorious examples of bayoneting wounded or surrendering Americans. Cornwallis near the end of the war in Virginia would set thousands of slaves free as a foreshadowing of Lincoln's Emancipation Proclamation, albeit solely for tactical purposes. But, as regular forces facing guerrilla insurgencies often do, they took a halfway path that was the worst of both worlds: heavy-handedness and occasional atrocities were crucial to raising the militia against Burgoyne in New York and Cornwallis and Tarleton in the Carolinas, yet they failed to pursue a sufficiently merciless approach to annihilate the Continental Army or destroy its economic base of support.
Like the Americans, the British were riven by petty jealousies and contending egos; unlike the Americans, they never had a Washington to keep those divisions from impeding operations, and unlike the Americans, their civilian government was too far away to provide supervision. Burgoyne's appointment to lead the Saratoga expedition alienated both Carleton, who resigned in protest, and Clinton. In the case of Clinton, while he was usually right about tactics (notably his preference for outflanking the militia from the rear at Bunker Hill and for encircling Washington in New York), his flaw (which probably contributed to his advice being ignored) was his inability to work well with others. Though not entirely through faults of his own, it was Clinton's failure to arrive with timely reinforcements that led to the surrenders of Burgoyne at Saratoga and Cornwallis at Yorktown.
The human element of good generalship can be fortuitous, but it is also a product of the civilian and military cultures that produce armies. In the long run, the Americans had a clearer strategy, greater unity of purpose and command and more adaptable leadership, and that made the difference.
In Part III: the role of the militia.
January 29, 2013
HISTORY: Reflections on the American Revolution (Part I of III)
I've recently been reading a fair amount on the American Revolution, especially David McCullough's 1776 (which should be required reading for every American).* The more you read of the Revolutionary War, the more there is to learn, especially about the vital question of how the colonists pulled off their victory over the vastly wealthier and more powerful Great Britain. The standard narrative of the American Revolution taught in schools and retained in our popular imagination today overlooks a lot of lessons worth remembering about where our country came from.
The Population Bomb: In assessing the combatants and indeed the causes of the war, it's useful - as always - to start with demographics. There was no colonial-wide census, but this 1975 historical study by the US Census Bureau, drawing on the censuses of individual colonies and other sources, breaks out the growth of the colonial population from 1630 to 1780, and the picture it paints is one of explosive population growth in the period from 1740 to 1780:
The black population was principally slaves and thus - while economically and historically important - less relevant to the political and military strength of the colonies. But as you can see above, the main driver of population growth was the free white population rather than the slave trade.
Authoritative sources for the British population during this period are harder to come by (the first British census was more than a decade after the first U.S. Census in 1790); most sources seem to estimate the population of England proper between 6 and 6.5 million in 1776 compared to 2.5 million for the colonies. Going off this website's rough estimated figures for the combined population of England and Wales (Scotland had in the neighborhood of another 1.5 million people by 1776), the colonies went from 5% of the British population in 1700 to 20% in 1750, 26% in 1760, 33% in 1770, and 40% in 1780:
It was perhaps inevitable that this shift in the balance of population between the colonies and the mother country would produce friction, and of course such a fast-growing population means lots of young men ready to bear arms. Men like Franklin and Washington were already, by 1755, envisioning the colonies stretching across the continent for the further glory of the then-nascent British Empire; 20 years later, both were buying Western land hand over fist and picturing that continental vision as a thing unto itself.
The distribution of population among the individual colonies was somewhat different from today. Virginia (encompassing present-day West Virginia) was by far the largest colony and, along with the Carolinas, the fastest-growing, while Massachusetts, Maryland and Connecticut were much larger - and New York much smaller - relative to the rest of the colonies than today:
This is one reason why Maryland gained a reputation as the "Old Line State": it had the manpower to supply a lot of the Continental Army's best troops. Connecticut was, in fact, seen as a crucial economic engine of the war, the most industrialized of the colonies at the time and mostly undisturbed by combat. That said, when you look solely at the white population, the southern states loom less large, and the crucial role of Pennsylvania and Massachusetts comes into focus:
The smaller colonies present a similar picture:
Note that Rhode Island, alone, lost population during the war, due to the 1778-1780 British occupation of Newport. That occupation had lasting effects. According to a 1774 census, Newport's population before the war was more than twice that of Providence (more than 9,000 to less than 4,000) and it was a booming seaport; the city's population dropped by more than half to 4,000, and it never really recovered its status as a port, losing business permanently to New York and Boston. Another lasting side effect: Rhode Island, founded by Roger Williams as a haven of religious tolerance and welcoming even to Jews and Quakers, forbade Catholics from living in the colony, but after the British abandoned Newport in 1780 and the French garrison took up residence, the grateful Rhode Islanders permitted the French troops to celebrate the first Mass in Rhode Island; today, it is the most heavily Catholic state in the union.
Britain's population would surge in the 1790s, and by about 1800 there were a million people in London alone, the first city in world history confirmed to exceed that threshold. But that remained in the future; at the time, France's population of 25 million and Spain's of some 10 million would easily exceed that of George III's domain. Moreover, like its colonies, England had a longstanding aversion to standing armies; while the Napoleonic Wars would ultimately compel the British Army (including foreign and colonial troops) to swell to a quarter of a million men by 1813, a 1925 analysis found that "[a]t the outbreak of the Revolution, the total land forces of Great Britain exclusive of militia numbered on paper 48,647 men, of which 39,294 were infantry; 6,869 cavalry; and 2,484 artillery," with 8,580 men in America. And those forces were always stretched; according to this analysis of Colonial & War Office figures, the British never had much more than 15,202 redcoats in the American theater (including the Floridas, where they fought Spain), and never exceeded 30,000 troops in total, counting "Hessians" (companies of professional soldiers hired from the Hesse-Hanau, Hesse-Kassel, Brunswick and other German principalities) and American Loyalists (a/k/a "Tories"):
The Close Call: More modern American wars like the Civil War and World War II eventually developed a momentum that made victory effectively inevitable, as America's crushing material advantages came to bear on the enemy. By contrast, the Revolutionary War was, from beginning to end, a near-run thing (to borrow Wellington's famous description of Waterloo). At every stage and in every campaign of the war, you can find both British and American victories, as well as a good many battles that were fought to a draw or were Pyrrhic victories for one side. The length of the 7-year war in North America was a burden for the increasingly war-weary British, for a variety of reasons, but a long war was also a great risk for the Americans: the longer the war ran on, the harder it was in terms of both finances and morale to keep the all-volunteer Continental Army in the field. Whole units dissolved en masse at the end of their enlistments throughout the war, and there were mutinies in the spring of 1780 and again in January 1781. As late as 1780, Benedict Arnold's treason and debacles at Charleston and Camden, South Carolina put the American cause in jeopardy of being rolled up by the British and of America's European allies striking a separate peace. At one point or another in the war, the then-principal cities of most of the colonies - Massachusetts (Boston), Pennsylvania (Philadelphia), New York (New York), Virginia (Richmond and Charlottesville), Rhode Island (Newport), South Carolina (Charleston), Georgia (Savannah), Delaware (Wilmington) and New Jersey (Trenton, Princeton, Perth Amboy, New Brunswick) - were captured and occupied by the British. Only Connecticut, Maryland, North Carolina and New Hampshire remained unconquered, as well as the independent Vermont Republic (Maine, then governed by Massachusetts, was also under British control for much of the war; the failed Penobscot Expedition was aimed at its recapture, and ended with a disastrous naval defeat). In the spring of 1781, Thomas Jefferson - then the Governor of Virginia - escaped capture by Cornwallis' men by a matter of minutes, fleeing on horseback as the government of the largest colony was dispersed. It was only the complex series of events leading to Yorktown in the fall of 1781 - Cornwallis retreating to Virginia after being unable to put away Nathanael Greene's Continentals and the North Carolina militia, Washington escaping New Jersey before the British noticed where he was going, Admiral de Grasse bottling up Cornwallis' escape route in the Chesapeake by sea, Henry Clinton failing to come to Cornwallis' aid in time - that created the conditions for a decisive victory and finally forced the British to throw in the towel.
Moreover, a great many individual battles and campaigns throughout the war turned on fortuitous events ranging from fateful decisions to apparently providential weather. It is no wonder that many of the Founding generation (like many observers since) attributed their victory to the hand of God.
Weather and Suffering: Both the Continental Army and its British and Hessian adversaries endured conditions that no armies before or since would put up with, including a staggering menu of extreme weather ranging from blizzards to colossal thunderstorms to blazing summer heat. Ancient and medieval armies would not campaign in freezing cold and snow; modern armies (like the combatants at Leningrad and the Marines in the retreat from Chosin Reservoir) would at least face them with something closer to proper clothing and shelter. But both sides in the war suffered chronic shortages: the British from lack of food for their men and forage for their animals, the Americans from lack of clothing (especially shoes), shelter and ammunition. The British lost more sailors to scurvy in the war than soldiers to combat, and during the long siege of Boston they had recurring problems with their sentries freezing to death at night. Smallpox, malaria and other diseases were endemic and especially hard on European troops with no prior exposure (one of Washington's great strokes of good judgment was having his army inoculated against smallpox, a disease he himself had survived and which left him pock-marked and probably sterile**). The British were rarely able to make use of their cavalry due to a lack of forage, and their infantry had other equipment problems:
[T]he flints used by the British soldier during the war were notoriously poor. Colonel Lindsay of the 46th lamented that the valor of his men was so often "rendered vain by the badness of the pebble stone." He exclaimed indignantly against the authorities for failing to supply every musket with the black flint which every country gentleman in England carried in his fowling piece. In this respect the rebels were acknowledged to be far better off than the king's troops. A good American flint could be used to fire sixty rounds without resharpening, which was just ten times the amount of service that could be expected from those used by the British forces. Among the rank and file of the redcoats, the saying ran that a "Yankee flint was as good as a glass of grog."
The war was conducted during the Little Ice Age, a period of low global temperatures (it's a myth that "climate change" is a new phenomenon or must be caused by human activity), and the winters of the period (especially 1779-80) were especially brutal. American soldiers and militia forded waist-deep icy rivers to reach the Battle of Millstone, marched miles without boots in snowstorms on Christmas Night after crossing the icy Delaware to reach the Battle of Trenton, and even tried (insanely) to lay siege to the fortified Quebec City in a driving snow on New Year's Eve. These were only a few of the examples of Americans marching great distances in weather conditions that would defeat the hardiest souls. The British performed their own acts of endurance and valor; drive over the George Washington Bridge some time and look at the cliffs of the Palisades, and picture Cornwallis' men scaling them at night to attack Fort Lee. Other battles were fought in heavy wool uniforms in the broiling heat, from Bunker Hill to much of the southern campaign, or in rains that left gunpowder useless, or - on the eve of the Battle of Brooklyn - colossal lightning strikes that killed groups of American soldiers in Manhattan. In the 1776 siege of Sullivan's Island, the British were shocked to discover that their cannonballs wouldn't splinter the soft palmetto wood from which the American fort was constructed, leaving the British ships to take a pounding from American artillery.
Except for Quebec, the weather - however hostile - nearly always managed to favor the American cause, rescuing the Americans when the hand of fate was needed most. McCullough recounts the especially significant shifts in the wind and fog that allowed Washington's army to escape in the night, undetected, across the East River after the catastrophic Battle of Brooklyn, while the blizzard at the Americans' backs was key to their surprise at Trenton.
The Allies: Most educated Americans still recall that France came to the aid of the fledgling nation after the victory at Saratoga, and played a significant role in tipping the scales in the war. In World War I, Pershing's refrain of "Lafayette, we are here" was still a popular invocation of that collective memory. Besides French money and supplies and French land and naval combat at Yorktown, the French also stretched the British defenses with extensive campaigns in the Caribbean and with a threatened invasion of England. But as important as the French alliance was, the emphasis on France understates the role that others of America's allies and Britain's enemies played in the Revolution.
First and foremost, at least as history is taught here in the Northeastern U.S., the Spanish role in the Revolutionary War is scandalously underplayed. There are reasons for this: Spain was a less impressive international power in the late 18th Century than France and would become drastically less so by the end of the Napoleonic Wars in 1815, and unlike the French, the Spanish rarely fought shoulder-to-shoulder with Americans or within the Thirteen Colonies. But Spain performed three vital roles in the war. First, under Bernardo de Galvez (namesake of Galveston, Texas, among other places), the Spanish Governor of the Louisiana Territory, the Spanish shipped significant war materiel up the Mississippi River through the American agent Oliver Pollock, supplementing the French aid that kept the American cause afloat. Second, after Spain's 1779 declaration of war against Britain, Galvez opened a significant second front against the British-held Floridas (which then included, in the territory of West Florida, much of what is now the Gulf Coast of Georgia, Alabama and Mississippi). Galvez was arguably the most successful commander of the war in North America, his multi-national, multi-racial force sweeping through the British defenses, preempting any British move on New Orleans and culminating in the capture of Pensacola (then the capital of West Florida) in the spring of 1781. This campaign resulted in the Floridas being transferred from Britain to Spain in the resulting peace treaty; the absence of a British foothold on the southern border of the U.S. would have lasting consequences, and the Floridas would end up being sold by Spain to the United States in 1819. And third, the Spanish played a pivotal role in the Yorktown campaign, not only raising more funds in Cuba for the campaign but also providing naval cover in the Caribbean that allowed Admiral de Grasse to sail north and close off the Chesapeake just in the nick of time. (Spain also conducted a long, costly siege of Gibraltar that ended unsuccessfully and a successful assault on Minorca, both of which spread British manpower thin between 1778 and 1783).
The other main fighting allies of the American colonists were two of the Iroquois Six Nations in upstate New York, the Oneida and Tuscarora (the other four fought with the British), as well as a few other tribes on the western frontier. But other sovereigns caused the British additional problems. The Kingdom of Mysore, a French ally in Southern India, went to war with Britain (the Second Anglo-Mysore War) in 1780, inflicting thousands of casualties with innovative rocket artillery at the September 1780 Battle of Pollilur. The Dutch, who frustrated John Adams' efforts to arrange financial assistance and an alliance until after Yorktown, nonetheless ended up dragged into the Fourth Anglo-Dutch War beginning in December 1780. (Some things never change: Adams was accused of unilateral "militia diplomacy" for ignoring diplomatic protocols and negotiating with the Dutch without consulting the French, but crowed after inking the deal in 1782 that "I have long since learned that a man may give offense and yet succeed."). The Russians, then moving towards an alliance with Great Britain against the French, nonetheless pointedly refused to get involved; Catherine the Great refused a 1775 request in writing from George III that she send 20,000 Cossacks to America (necessitating the hiring of Hessians instead) and eventually joined the League of Armed Neutrality with the Dutch and others to resist British naval embargoes (the step that brought the British and Dutch to blows). Catherine II thought the British were fools for provoking the conflict and predicted from the outset that the Americans would win. All in all, the international situation by the end of 1780 left the British increasingly isolated and drove the strategic imperative to seek out a decisive battle in Virginia - an imperative that led Cornwallis directly into a trap of his own devising but which the American, French and Spanish forces sprung with great skill and coordination.
In Part II: Washington and the other American and British generals. In Part III: the role of the militia.
* - Besides 1776, others I've read recently include H.W. Brands' Benjamin Franklin bio, and Joseph Ellis' George Washington bio (I'd previously read McCullough's John Adams bio and Ellis' Thomas Jefferson bio). For the beginner, relative to some other subjects, Wikipedia's writeups on many Revolutionary War battles are pretty good introductions to the chronology and sweep of the war; what Wikipedia lacks in stylistic flair and, at times, accuracy, it makes up in organization and structure. But as always, remember to never rely on Wikipedia without checking a second source.
** - We do not know for sure that Washington was unable to have children, but sterility was a common side effect of smallpox for men fortunate enough to survive the disease. Andrew Jackson, who contracted smallpox at 14 during the Revolution but was the only member of his family to survive the war, also never had children of his own.
January 14, 2013
POLITICS: Harry Reid's Priorities: Immigration, Not Assault Weapons
Senate Majority Leader Harry Reid gives some revealing insight into how he sees the Senate's priorities this spring - priorities, in line with his support back home in Nevada, that are long on addressing immigration and not so high on banning "assault weapons":
Calling for a "cautious" approach to gun control, Senate Majority Leader Harry Reid downplayed the chances of the Senate renewing an assault-weapons ban in a weekend TV interview, suggesting he will instead move forward on measures with a better chance to pass muster in the Republican-controlled House.
But Reid is much more enthusiastic about getting bipartisan support for immigration bills:
"Immigration's our No. 1 item," Reid said. He later added, "It's going to be the first thing on our agenda."
Your mileage may vary on which of these topics is more likely to produce mischief. But clearly, Reid in reading the tea leaves of the last election thinks Republicans are more apt to bend on immigration than guns. He may be right.
POLITICS: More Cigarette Taxes Equals More Cigarette Smuggling
A recent study from the Tax Foundation and the Mackinac Center for Public Policy looking at cigarette taxes and cigarette smuggling reminds us, yet again, of how big government always ends up legislating the Law of Unintended Consequences.
Tax That Smoker Behind The Tree
You have to tax something to fund government, and if you're taxing sales, cigarettes are as good a target as any: while legal, they're universally known to be unhealthy and sometimes regarded as immoral. On the other hand, they're also a predominantly American-made product that's disproportionately consumed by lower-income Americans, meaning that a cigarette tax is more regressive than most taxes. In theory, the tax is supposed to serve the public health purpose of discouraging smoking; it's refreshing to hear this argument from liberals who usually deny that taxes discourage behavior, but in practice, it takes a lot more taxing to discourage smoking than most other activities because people are physically addicted to the product. This is to say nothing of the concern that state governments themselves get more or less addicted to tobacco revenues.
We know from long experience that when you ban something there's a public demand for, it gets less common, more expensive and more under the control of the criminal class - but it doesn't go away entirely. That's true whether you are talking about cigarettes, guns, alcohol, drugs, gambling, abortion, prostitution, pornography, or illegal immigration. And what's true of outright bans can be true as well of activities that are heavily taxed or regulated: the more costs government imposes, the more you get black markets. And that's exactly where we stand today with cigarette taxes.
Smuggling and Black Market Cigarettes
Any review of tax-hiking Democratic governors in recent years - or indeed, even Republican governors looking to raise more tax revenue without calling it "tax hiking" - will reveal a lot of hikes to cigarette taxes. That's nowhere more true than here in New York City - with predictable results:
New York has the highest cigarette tax rate of any state, and nearly two-thirds of the state's cigarette market is illegal, announced the think tank Tax Foundation on Thursday.
The issue is especially acute in New York due to a long-running state dispute over tax-free cigarettes manufactured and sold by the Oneida Indian Nation, one of the Native American tribes with significant sovereign land in the state (the Oneidas are one of the Iroquois Six Nations). The Michigan-based Mackinac Center has more on how the issue plays out in Michigan, which is not only a market for smuggled cigarettes but also an exporter of them to Canada.
As you can see from the study, the rates of smuggling correlate pretty strongly with tax rates, with smugglers having a strong incentive to export cigarettes from low-tax jurisdictions and sell them on the black market in high-tax jurisdictions. The study looks at tax rates and rates of smuggling in 2006 and 2011. Here's the rates of smuggling in 2006, plotted against the per-pack tax rate:
Here's the same graph for 2011:
And here's the change in rates of smuggling plotted against the change in per-pack taxes between 2006 and 2011:
One study is never the be-all or end-all of any policy debate, and as I said, some cigarette taxes are a sensible way of raising money at the expense of a socially undesirable activity. But at the end of the day, black markets are one of the ways in which high tax rates push us to the far right end of the Laffer Curve, and the Tax Foundation/Mackinac study suggests fairly strongly that a lot of jurisdictions have passed that point with these taxes.
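For readers who want a feel for the mechanics behind those scatter plots, here is a minimal sketch of the underlying comparison. The per-pack taxes and smuggling shares below are hypothetical placeholders, not figures from the Tax Foundation/Mackinac study, and a plain Pearson correlation is just one simple way to summarize how tightly the two series move together.

```python
from math import sqrt

# Hypothetical (per-pack tax in dollars, estimated smuggling share of market) pairs.
# These are placeholders for illustration only, NOT the study's data; a negative
# share stands for a net-exporting (outbound smuggling) state.
states = {
    "State A": (4.35, 0.55),
    "State B": (3.00, 0.40),
    "State C": (1.60, 0.20),
    "State D": (0.60, -0.05),
    "State E": (0.17, -0.20),
}

taxes = [t for t, _ in states.values()]
smuggled = [s for _, s in states.values()]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"correlation between tax rate and smuggling share: {pearson(taxes, smuggled):.2f}")
```

A strongly positive coefficient is the numerical analogue of the upward-sloping scatter plots described above.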
January 9, 2013
BASEBALL: The Hall of Fail
This afternoon, we will see how the baseball writers voted, and it looks like it will be a very close call for the Hall of Fame to elect anyone (at last check, based on the publicly disclosed votes, it looks like Craig Biggio may be the only candidate in striking distance, with Jack Morris and Tim Raines trailing).
I don't have a ton to add right now to what I wrote last year about many of these same candidates and the same issues - like steroids - that dominate the debate (follow the links in that post for more detailed arguments). But a few points.
1. The limitation of the ballot to ten names isn't normally a problem, but this year, there's such a backlog of qualified candidates that it presents a real dilemma. I don't have a ballot, of course, but I divide my list of who I'd vote for as follows:
SHOULD GO IN WITHOUT DEBATE: (8) Barry Bonds, Roger Clemens, Mike Piazza, Jeff Bagwell, Craig Biggio, Tim Raines, Fred McGriff, Rafael Palmeiro.
To put Biggio in the simplest terms: 4,505 times on base (18th all time), plus 414 steals, while playing 1,989 games at second base, 428 as a catcher, and 255 as a center fielder. That is a career. From 1992-99, adjusted for the fact that he lost 41% of a season over 1994-95 to the strike, Biggio's average season was 160 games, 732 plate appearances, .299/.394/.460, 120 Runs, 73 RBI, 41 2B, 17 HR, 36 SB and only 11 CS, 101 times on base by walk or hit by pitch, and only 7 GIDP. And all of that while playing second base in the Astrodome and winning four Gold Gloves in eight years.
DEBATABLE BUT I'D VOTE THEM IN: (3) Mark McGwire, Sammy Sosa, Curt Schilling.
I GO BACK AND FORTH: (2) Edgar Martinez, Bernie Williams. As noted last year, I do struggle with the fact that Edgar and McGwire have more similar cases than they seem at first glance.
CLOSE BUT NO CIGAR: (3) Alan Trammell, Larry Walker, Kenny Lofton.
BAD BUT NOT RIDICULOUS CHOICES: (3) Jack Morris, Lee Smith, Dale Murphy. As I've noted before, Murphy was good enough, but not for long enough; Morris, too, might deserve induction if his 1980 and 1988-90 seasons were of the same quality (plus quantity) as his 1981-87 seasons.
WORTH A LOOK BUT NOT A VOTE: (4) Don Mattingly, David Wells, Julio Franco, Steve Finley. Mattingly, of course, would have been an easy Hall of Famer if his back had held up.
JUST ENJOY BEING ON THE BALLOT: The other 14 guys, any of whom should be flattered to get a vote and honored by having had distinguished enough careers to be on the ballot. I mean that: if I was, say, Todd Walker, I'd want to frame my name on the Hall of Fame ballot. Only a tiny handful of the kids who start out dreaming on the sandlots get that far.
2. The postseason is an ever larger factor in modern baseball, and certainly a big part of what puts Bernie Williams and Jack Morris in the conversation, and Curt Schilling over the top. That's as it should be.
There's actually an awful lot of hitters on the ballot this year who struggled in October (not even counting Barry Bonds, who struggled the rest of his Octobers but made it up with a 2002 rampage). And of course, postseason numbers can be unfair to a guy like Raines who got a disproportionate amount of his October at bats in his declining years. We should not overlook, however, the value of Fred McGriff's postseason contributions. Of the 16 somewhat serious position player candidates, five had somewhat limited postseason experience (less than 100 plate appearances); Trammell hit .333/.404/.588 in 58 plate appearances, Sosa .245/.403/.415 in 67 PA, and Palmeiro .244/.308/.451 in 91 PA. Mattingly and Murphy got one series apiece, Mattingly hitting .417/.440/.708, Murphy .273/.273/.273.
Here's how the rest stack up:
Looking at the postseason numbers also suggests that the case for McGwire over Edgar is even narrower; yes, McGwire played for a World Champion and three pennant winners whereas Edgar's often-insanely-talented teams never reached the Series, but like Edgar's teams, Big Red's teams lost some big series to obviously less talented opponents, and McGwire's overall postseason performance was terrible.
Anyway, looking at McGriff, in over 200 plate appearances in the postseason he has the best batting and slugging averages of this illustrious group, and the second-best OBP to Bonds (and Bonds drew 5 times as many intentional walks in October - leave those out and McGriff beats Bonds .374 to .369). Projected to a 162-game schedule, his postseason line produces 36 2B, 32 HR, 87 BB, 117 R, and 120 RBI. McGriff slugged .600 in a postseason series six times in ten series (including all three series en route to the 1995 World Championship), an OPS over 1.000 five times. If you're giving points for producing with seasons on the line, the Crime Dog should get more than any of these guys. (Bernie Williams slugged over .600 in 8 series, but he appeared in 25 of them; he also slugged below .320 ten times.)
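Since the 162-game projection comes up a few times in this discussion, here is the arithmetic spelled out. The games-played figure and counting stats below are rounded, approximate inputs chosen to be consistent with the projection quoted above, not an exact career postseason line.

```python
# Prorate postseason counting stats to a 162-game schedule.
# Inputs are approximate/rounded illustrations, not an official stat line.
postseason_games = 50
line = {"2B": 11, "HR": 10, "BB": 27, "R": 36, "RBI": 37}

factor = 162 / postseason_games
projected = {stat: round(total * factor) for stat, total in line.items()}
print(projected)  # {'2B': 36, 'HR': 32, 'BB': 87, 'R': 117, 'RBI': 120}
```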
3. On the steroid issue...well, you have to ask whose Hall of Fame is this? It's a question Bill James asked 30 years ago about the All-Star Game, and people tend to skip over it as if everybody has the same answer.
We know Major League Baseball is operated for the purpose of making money for the owners, but that (as James also pointed out in the early 80s) it exists to satisfy popular enthusiasm for baseball, and the maintenance and cultivation of that fan interest is something the owners, in their self-interest, have to attempt to respect.
If you've read James' indispensable book The Politics of Glory, you know that the question - whose Hall is it? - has long been a complicated and fraught one between MLB, the players, the BBWAA, the owners of the Hall, the Town of Cooperstown, and the fans who visit the museum.
Honoring the players is certainly an important and honorable purpose; for most of these guys, getting the call and being inducted into the fraternity of the Hall is the highlight of their entire lives, and that's not a small thing. And to the extent that we view the Hall primarily as a personal honor, it makes some sense to cast a jaundiced eye at least on those players we know for a fact to have cheated to win, whether by breaking the game's rules or breaking the law (some of the performance enhancing drugs at issue were legal under one of the two regimes but not the other at various times).
But at the end of the day, to me, the Hall is bigger than the players for the same reasons as why the games are played in full stadiums in front of TV cameras, for the same reasons as why scores of visitors make the pilgrimage to sleepy Cooperstown each summer. The Hall belongs to the fans, too. There is one red line, in my view: Shoeless Joe Jackson and his co-conspirators belong outside the Hall because they participated in a conspiracy to lose games. But everything else is about guys who were doing their best to win. The fans paid the owners to watch those wins, the writers wrote about them; they belong to history now, and to memory. We can't re-live the 1990s to change the memories we have. It's the job of the game to enforce the rules while the games are being played; having failed that (failed badly enough that clouds of unproven suspicion linger over many players without the evidence to resolve them), we are cutting off our noses to spite our faces by keeping a generation of the game's best players out of the Hall, in a way that ultimately degrades the whole point of the place: to be a commemoration of the best in the game's history. The game survived segregation and wars and gambling and cocaine and spitballs and assaults on umpires; we can keep those memories alive too and try to remedy them going forward, but we still enshrined the players who won baseball games through all of them. Because it's not just their Hall, it's ours.
It's not the Hall of Fame if it doesn't have guys like Barry Bonds and Roger Clemens and Mike Piazza (and, for that matter, Pete Rose). Their flags still fly over their stadiums, their records are still in the books, and their plaques should be in the Hall.
January 2, 2013
POLITICS: Silver Linings in the Fiscal Cliff Deal
I will not try to convince any conservative that the final fiscal cliff deal that passed the Senate with only a few dissenting votes and needed Democratic votes to pass the House with a divided GOP caucus is a good deal, nor that it is the best deal available under the circumstances. It is, however, important to remember that this was a deal negotiated under just about the worst possible conditions: the president freshly re-elected, the largest tax hike in American history set to trigger automatically in the absence of a deal, the GOP leadership divided among itself and estranged from its grassroots/activist base, which itself was divided on how best to proceed. Republicans have illustrated dramatically why poker is not a team sport.
For all of that, there is some good news here for Republicans and conservatives if we know how to use it.
What's In The Deal?
The tax deals mostly bring a permanent settlement (subject, of course, to new legislative action) to a variety of previously temporary tax policies:
-The 2001 and 2003 Bush tax cuts to income, capital gains and dividend taxes will be made permanent for income up to $400,000 ($450,000 for married joint filers), but will be allowed to expire for income above those levels. Taxes will go up on many small business owners as a result.
-A similar half-a-loaf extension is being done for the estate tax, with the rate rising on estates above $5 million.
-The Alternative Minimum Tax will be indexed permanently to inflation, reducing the number of taxpayers hit with it and ending the annual debate over fixing it.
-The temporary payroll tax cut will be allowed to expire.
-5-year extensions are given to the Child Tax Credit and EITC as well as the college tax credit known as the American Opportunity Tax Credit, all of which can involve tax "credits" that are actually payments to people who pay no income taxes.
-Some exemptions and deductions will be phased out for incomes above $250,000 ($300,000 for joint filers).
-A variety of mischief was included or extended in the corporate tax code.
The good news is that the Bush Tax Cuts are now permanent for some 98% of all taxpayers; the bad news is the 1-2 punch of the expiration of the payroll tax cut and of the top-rate cuts. Even the left-wing Tax Policy Center admits that the net result of all this is higher taxes in 2013 for 77.1% of taxpayers, due in large part to the expiration of the payroll tax cut:
More than 80 percent of households with incomes between $50,000 and $200,000 would pay higher taxes. Among the households facing higher taxes, the average increase would be $1,635, the policy center said....The top 1 percent of taxpayers, or those with incomes over $506,210, would pay an average of $73,633 more in taxes....The top 0.1 percent of taxpayers, those with incomes over about $2.7 million, would pay an average of $443,910 more, reducing their after-tax incomes by 8.4 percent. They would pay 26 percent of the additional taxes imposed by the legislation.
That's increased new federal taxes; it doesn't take into account the numerous new Obamacare-related federal tax hikes already hitting in 2013 (including big hikes on the same people getting socked in this deal) let alone Democratic efforts to 'soak the rich' with state tax hikes in some states. And the tax changes are most of the deal. Matthew Boyle:
According to the Congressional Budget Office, the last-minute fiscal cliff deal reached by congressional leaders and President Barack Obama cuts only $15 billion in spending while increasing tax revenues by $620 billion - a 41:1 ratio of tax increases to spending cuts.
That's $62 billion a year, when you decode the CBO/JCT math, as unreliable as that is. RB has a chart illustrating exactly how little a dent that makes in the deficit.
On the spending side, little was definitively resolved, although conservatives are rightly concerned that yet another crisis came and went with no real action on spending and entitlements. New spending was authorized to extend unemployment insurance yet again, raising the question of whether Democrats think there is any limit to such insurance or any reason to believe the economy under Obama will ever produce a significant number of new jobs. Most of the rest of the automatic cuts in the sequester were put off for two months; the Medicare "doc fix" put off cuts for one year. Nothing was done to Social Security. No agreement was reached to extend the debt ceiling, which looms as the next crisis as early as February, with Obama still pledging to refuse to negotiate.
Around The Web
Let's round up some reactions from around the web and then I'll offer my own thoughts.
From the Right
Ben Domenech (subscription):
Well, this looks like an insult to fig leaves everywhere....For all the talk of solving deficit problems, grand entitlement bargains, and steps toward dealing with out of control spending, Republicans and Democrats came together in the past 48 hours to endorse a solution which was about as small as it could possibly be. On the spending side, it trades the endorsement of higher taxes for every working American by Republicans for essentially nothing, with the promise of more nothing in the future.
Ben Howe: "I'm hoping that these last few years of constantly debating temporary tax rates will forever close the door on the use of such a negotiating tactic."
Democrats have made one major miscalculation. The pro-deal Democrats think that they have set a precedent for getting Republicans to agree to future tax increases -- that Grover Norquist's pledge is dead. This is a fantasy. This tax increase happened only because a bigger one was scheduled to take place. Republicans are not going to vote affirmatively to raise taxes, especially after taxes just rose. The deal makes future tax increases less likely, not more.
[L]iberals have a real reason to be discouraged by the White House's willingness - and, more importantly, many Senate Democrats' apparent eagerness - to compromise on tax increases for the near-rich...if I were them I'd be more worried about the longer term, and what it signals about their party's willingness and ability to raise tax rates for anyone who isn't super-rich....Is a Democratic Party that shies away from raising taxes on the $250,000-a-year earner (or the $399,999-a-year earner, for that matter) in 2013 - when those increases are happening automatically! - really going to find it easier to raise taxes on families making $110,000 in 2017 or 2021? Color me skeptical: The lesson of these negotiations seems to be that Democrats are still skittish about anything that ever-so-remotely resembles a middle class tax increase, let alone the much larger tax increases (which would eventually have to hit people making well below $100,000 as well) that their philosophy of government ultimately demands.
Maybe the expiration of the payroll tax cut really will amount to a significant economic hit in 2013 [quoting ubiquitous liberal economist Mark Zandi]...Perhaps this - along with the rest of the fiscal cliff-hanger - will be a useful lesson about "temporary" tax changes. Congress usually enacts them to provide a spark to the economy, and intends to end them once the economy is in better shape. But the economy is rarely in such great health that taxes can be raised without some sort of deleterious impact; as we may experience, taxes jump back up before there's a robust recovery and the hikes cause the economy to sputter again. (In this light, the permanency of the Bush tax cuts for those making less than $450,000 per year may be one of the most significant economic reforms in the recent era.)
For liberals, this was not a moment of danger to be minimized but by far their best opportunity in a generation for increasing tax rates (which is the only fiscal reform they seem to want) and for robbing Republicans of future leverage for spending and entitlement reforms. And it is likely the best one they will encounter for another generation...some liberals believed [extending most of the rate cuts] could be overcome through much expanded caps on deductions...which would both raise more revenue and make Republican-style tax reform (a broader base with lower rates) much more difficult later. And they believed that the Republicans' opposition to tax increases would also give Democrats an opportunity to score some other points, like forcing Republicans to sign on to Obamacare-style counterproductive provider cuts in Medicare, so that Republicans couldn't criticize those anymore.
From the Left
By any measure, the fiscal deal that finally passed the House yesterday should have been something House Republicans could have enthusiastically supported. After all, as Jonathan Weisman put it, the bill "locks in virtually all of the Bush-era tax cuts, exempts almost all estates from taxation, and enshrines the former president's credo that dividends and capital gains should be taxed equally and gently."
To listen to all the moaning out of the House of Representatives yesterday, you could be forgiven for thinking that the Republicans are losing the fiscal battle in Washington.
Kevin Drum: "my real preference was for a deal that would have allowed the Bush tax cuts to expire completely...there's not much question we're going to need more revenue" to pay for health care entitlements.
The Path Forward
Conservatives these days tend to be gloomy about the road ahead, partly due to lack of faith in the GOP's leadership and establishment and partly due to lack of faith in the electorate. But this is no time to throw in the towel. There is good news here, too, as a number of those quoted above on both sides have noted, and we should not hesitate to celebrate it.
First, the nonsense idea of "temporary" tax policy has hopefully had a fatal stake driven through it: both parties had lauded their ability to deliver temporary tax relief in the past, and must now swallow voter anger that those tax cuts were allowed to expire. One of the golden rules of Washington is that bad policies rarely end until both parties have suffered a downside from them. The only reason for tax policy to be "temporary" in the first place is to game the broken system of budget scoring.
Second, the Democrats have truly conceded far more ground on taxes than the Republicans. The ATR no-tax-hikes pledge was bent and mutilated badly, but not completely broken, given that Republicans accepted the expiry of temporary cuts and did so only after exhausting numerous efforts to save them. But Democrats who spent a decade blaming deficits, the housing crisis, and weeds in your lawn on the Bush Tax Cuts have now delivered the votes to make nearly all of them permanent - something that was unthinkable any time during Bush's presidency and even as recently as 2010.
Third, the table is set for Republicans in 2014 and especially 2016 to seize anew the initiative on taxes: on broad-based reforms that simplify the code, make it more pro-family, and cut taxes for everyone (possibly even slashing or abolishing the payroll tax) - variations on a platform that worked in 1980 and 2000 and can work again. After four years of bobbing and weaving, Obama now has signed off on raising taxes on nearly everyone, and that is sure to play into the GOP's natural strengths.
Fourth, the table is also stacked against the Democrats demanding new tax hikes in the next spending battle. Maybe Boehner and McConnell won't bring much back home in spending cuts - I never really believed that Obama would ever sign off on significant spending cuts or entitlement reform, and I still don't - but there really is no case at all to be made for returning so soon to the well of tax hikes.
Fifth, the tone is set for Obama's second term, and while it is hardly a great tone for Republicans, it also signals that Obama will need to either keep his ambitions small, stop demanding Republicans vote for deal-breakers, or start offering them something real in exchange if he wants to get anything accomplished. It's unlikely that he will be negotiating from as strong a position again.
Sixth, it will now be much harder for Obama to avoid ownership of the economy, having embraced most of the centerpiece of Bush's economic agenda while adding his own personal stamp. He's socked new taxes on investors, on small business owners, and on ordinary working people. Nobody forced him to do any of these things. Politically, that's a double-edged sword (Republicans have a lot of governors up for re-election in 2013 and 2014 who could be innocent bystanders if their states get blindsided by bad federal tax policy), but it is rarely good news for the party in power in the sixth year of a president's term.
The temporary-tax-cut trap had stuck Beltway Republicans in an uncomfortable morass that was, to a large extent, one of their own devising. They did not emerge unscathed, but at least they have put it behind them, and that creates a lot more flexibility going forward - an important consideration in a party that is largely united on policy but deeply divided on strategy. That's an opportunity, and no amount of gloom should cause us to lose sight of that. | <urn:uuid:87d2ce44-c470-4c11-909b-ed3d02978be3> | {
"dump": "CC-MAIN-2017-39",
"url": "http://baseballcrank.com/archives2/2013/01/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687820.59/warc/CC-MAIN-20170921134614-20170921154614-00155.warc.gz",
"language": "en",
"language_score": 0.9720823764801025,
"token_count": 13863,
"score": 3.25,
"int_score": 3
} |
From boutiques offering online shopping and sending monthly e-newsletters to their customers to churches holding virtual services and collecting donations electronically, the growth in online business has given organizations faster and easier ways to engage with their communities. However, doing business online also produces a reliance on networks, applications, social media and data to keep up with modern behaviors, leaving organizations vulnerable to cyber threats.
Every day, common cyber-attacks have the potential for substantial negative business impact. Fortunately, these impairments are often preventable. Let’s take a look at some of today’s cyber-risk realities.
There is an old adage in the military, “The weakest part of any fort is the people inside.” This is a concept that criminals and other bad actors in the cyber arena have taken to heart. The hacking stereotype — someone wearing a hoodie, sitting at a keyboard in a dark room furiously typing away, stripping away firewalls and disabling security protocols by sheer force — isn’t reality. The reality is a well-intentioned user gives away the keys to the castle.
Social engineering is the root cause of most cyber incidents. Criminals simply ask nicely, and someone lets them in. Confidence schemes have been around since the dawn of human history, and technology has made it so much easier. Instead of fighting through layers of security, criminals can send emails to thousands of addresses with malicious links included and wait for someone to click on a link, giving away their login and password.
Social engineering can occur by email, text, phone calls, instant messaging, in-person and in any way that people connect. In this time of quarantine, there’s even been a resurgence of social engineering by snail mail — sending people malicious USB sticks that compromise computers when inserted. Criminals are nothing if not inventive; any method of communication can be a method of deception.
What happens once they’re in? There are many possibilities and reasons for someone to intrude on a network, but the two biggest risks right now are ransomware and data theft.
Ransomware has been in the news due to a high volume of attacks on state and local government agencies, but what is it exactly? Ransomware is a type of attack where criminals encrypt data and hold it hostage until a ransom is paid, usually in untraceable cryptocurrency. From a cybercriminal’s perspective, a ransomware attack is effective because it’s uncomplicated. Moving a large amount of data takes a lot of time and computing power, so putting a lock on it so no one can get to it is much simpler. Ransomware is also a quicker path to monetizing the crime since the criminal gets paid directly upfront instead of selling the stolen data. Ransomware is particularly insidious because all it takes to spread throughout an entire network is one person to fall for a social engineering attack.
A recent example of a ransomware incident was the 2017 attack on Møller-Maersk, the world's largest shipping conglomerate. The computers of employees in 574 offices in 130 countries around the world were infected — each machine demanding $300 to be unlocked. The total cost of this incident to Møller-Maersk is estimated at $300 million, and it was just one of many businesses affected by the same type of ransomware. As of 2019, Cybersecurity Ventures predicted the global cost of ransomware damages would reach $11.5 billion annually.
While ransomware is on the rise, there’s still plenty of money for criminals to make by selling data. Data leakage is simply the unauthorized transmission of data — usually data that has value, Personal Identifiable Information (PII), Personal Health Information (PHI), banking information or any data that can be sold or directly used to make money, usually by methods of identity theft. Criminals do not stop because of a pandemic. They are using PII stolen or sold on the dark web to steal Coronavirus Aid, Relief, and Economic Security (CARES) Act payments from the U.S. Treasury.
Why is data leakage the risk instead of data theft? Because stolen data is not the only way that unauthorized data gets out. Sometimes it’s merely a mistake, such as someone sending the wrong attachment or disclosing the wrong piece of information. There is a lot that can be done to stop data theft with technical controls, but it’s much harder to detect a simple mistake.
One thing making data loss riskier is an organization’s desire to move data to the cloud. Storing data in the cloud (off-premises) provides many advantages, but it also presents many risks. A simple misconfiguration can expose all your stored data to the world rather than keeping it private. For cybercriminals, it’s even easier than social engineering: they just scan for misconfigured cloud storage and the data is simply there for the taking. No muss, no fuss.
A Terrible Union
Globally, businesses and governments have determined that paying ransom is a bad idea in the long run, because there is no guarantee the data will be returned, and it only serves to give the criminals more resources to launch more attacks. Many enterprises are choosing to restore and rebuild instead of paying the ransom to get their data back. In response to this trend, criminals now download the data before locking it up to give them more leverage — pay or the data gets released.
What assurance do you have that the data won’t be released anyway once the ransom is paid? None.
Managing These Risks
Ransomware and data loss are just two of many cyber-risks. Programming errors, insider threats, denial of service, vendor compromise, lost or stolen devices, business e-mail compromise, supply chain attacks and a variety of others present a clear danger to your business or organization. What you should keep in mind is that cyber risk, like all risk, is manageable.
When talking about risk, people often think they’re helpless against cybercriminals. The cyber-world does present risks, but as long as those risks are planned for and controlled appropriately they’re no different than any well-understood, established risk in the physical world.
If your business or organization doesn’t have cyber insurance solutions in place, talk with your GuideOne distribution partner about our coverages and tools that could meet your needs. To learn more, check out our Cyber Suite Overview.
Read the original blog from GuideOne here. | <urn:uuid:de9a36c2-8506-4885-b264-21f4568d4390> | {
"dump": "CC-MAIN-2023-06",
"url": "https://richardpittsagency.com/cyber-risk-realities/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00663.warc.gz",
"language": "en",
"language_score": 0.9418540596961975,
"token_count": 1354,
"score": 3.015625,
"int_score": 3
} |
By Lisa Hay
Learn more about Lisa on NerdWallet’s Ask an Advisor
When it comes to the world of financial aid, we know how confusing it can be. Let’s take a look at the difference between Stafford Subsidized and Unsubsidized Loans.
What is a subsidized loan?
A subsidized loan is a federally guaranteed loan based on financial need. Interest does not accrue on this loan until you begin repaying it because the federal government subsidizes, or covers, it while you are in school and usually for a short time after you finish.
Here’s a quick overview of Direct Subsidized Loans:
- Direct Subsidized Loans are available to undergraduate students with financial need.
- Your school determines the amount you can borrow, and the amount may not exceed your financial need.
- The U.S. Department of Education pays the interest on a Direct Subsidized Loan:
- While you’re in school at least half-time,
- for the first six months after you leave school (referred to as a grace period*), and
- during a period of deferment (a postponement of loan payments).
*Note: If you receive a Direct Subsidized Loan that is first disbursed between July 1, 2012, and July 1, 2014, you will be responsible for paying any interest that accrues during your grace period. If you choose not to pay the interest that accrues during your grace period, the interest will be added to your principal balance.
What is an unsubsidized loan?
Stafford Unsubsidized Loans are likewise federally-guaranteed loans, but they are not based on financial need. Interest starts accruing on these loans once they hit your college’s account and will continue until the loan is paid in full.
Here’s a quick overview of Direct Unsubsidized Loans:
- Direct Unsubsidized Loans are available to undergraduate and graduate students; there is no requirement to demonstrate financial need.
- Your school determines the amount you can borrow based on your cost of attendance and other financial aid you receive.
- You are responsible for paying the interest on a Direct Unsubsidized Loan during all periods.
- If you choose not to pay the interest while you are in school and during grace periods and deferment or forbearance periods, your interest will accrue (accumulate) and be capitalized (that is, your interest will be added to the principal amount of your loan).
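To make the accrue-and-capitalize distinction concrete, here is a small worked sketch. The loan amount, rate, and timeline below are hypothetical, and the simple daily-interest formula is only an approximation of how federal loan interest is typically computed, not an official calculator.

```python
# Hypothetical Direct Unsubsidized Loan: interest accrues from disbursement,
# and any unpaid interest is capitalized (added to principal) at repayment.
principal = 5_500.00                     # hypothetical amount borrowed
annual_rate = 0.0386                     # hypothetical fixed interest rate
days_before_repayment = 4 * 365 + 180    # e.g. four years in school + 6-month grace

# Federal loans generally use simple (non-compounding) daily interest.
accrued_interest = principal * annual_rate * (days_before_repayment / 365)

print(f"Interest accrued before repayment: ${accrued_interest:,.2f}")
print(f"Starting balance if interest was paid as it accrued: ${principal:,.2f}")
print(f"Starting balance if interest is capitalized: ${principal + accrued_interest:,.2f}")
```

Either way the borrower ultimately pays the interest; capitalizing it simply means that later interest is charged on a larger principal balance.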
How much can I borrow?
Your school determines the loan type(s), if any, and the actual loan amount you are eligible to receive each academic year. However, there are limits on the amount in subsidized and unsubsidized loans that you may be eligible to receive each academic year (annual loan limits) and the total amounts that you may borrow for undergraduate and graduate study (aggregate loan limits). The actual loan amount you are eligible to receive each academic year may be less than the annual loan limit. These limits vary depending on
- what year you are in school and
- whether you are a dependent or independent student.
If you are a dependent student whose parents are ineligible for a Direct PLUS Loan, you may be able to receive additional Direct Unsubsidized Loan funds.
If the total loan amount you receive over the course of your education reaches the aggregate loan limit, you are not eligible to receive additional loans. However, if you repay some of your loans to bring your outstanding loan debt below the aggregate loan limit, you could then borrow again, up to the amount of your remaining eligibility under the aggregate loan limit.
The following chart shows the annual and aggregate limits for subsidized and unsubsidized loans.
- The aggregate loan limits include any Subsidized Federal Stafford Loans or Unsubsidized Federal Stafford Loans you may have previously received under the Federal Family Education Loan (FFEL) Program. As a result of legislation that took effect July 1, 2010, no further loans are being made under the FFEL Program.
- Effective for periods of enrollment beginning on or after July 1, 2012, graduate and professional students are no longer eligible to receive Direct Subsidized Loans. The $65,500 subsidized aggregate loan limit for graduate or professional students includes subsidized loans that a graduate or professional student may have received for periods of enrollment that began before July 1, 2012, or for prior undergraduate study.
Graduate and professional students enrolled in certain health profession programs may receive additional Direct Unsubsidized Loan amounts each academic year beyond those shown above. For these students, there is also a higher aggregate limit on Direct Unsubsidized Loans. If you are enrolled in a health profession program, talk to the financial aid office at your school for information about annual and aggregate limits.
What are the current interest rates?
Here are the interest rates for loans first disbursed between July 1, 2013, and June 30, 2014. | <urn:uuid:33cb9fa7-53bd-4339-9414-cb924ecf20d1> | {
"dump": "CC-MAIN-2018-05",
"url": "https://www.nerdwallet.com/blog/investing/subsidized-unsubsidized-student-loans/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888113.39/warc/CC-MAIN-20180119184632-20180119204632-00513.warc.gz",
"language": "en",
"language_score": 0.9692876935005188,
"token_count": 1027,
"score": 2.796875,
"int_score": 3
} |
This paper aims to describe the role of key players in the value chain for milkfish Chanos chanos (Forsskål, 1775) in a mariculture Park in Balingasag, Misamis Oriental in the Philippines with an emphasis on gender dimensions. It also estimates the value additions done by the key players and assesses implications on income distribution. Mapping the chain involved primary data collection through observations, key informant interviews, and focus group discussions. The big, medium and small-scale fish cage operators – 90 % men – are the key players in production.
Fish is crucial to food and nutrition security in Solomon Islands, and demand is expected to increase due to a growing population. However, it is projected that current capture fisheries production will not meet this growing demand. Aquaculture has the potential to mitigate the capture fishery shortfall, and the Government of Solomon Islands is prioritizing aquaculture as a solution to meet future food and income needs. Aquaculture in Solomon Islands is still in early development.
The main purpose of the study is to collect information on the input-output relationships in milkfish (Chanos chanos ) production in the Philippines. The data (analysis of which will be complete in mid-1980) can then be used to improve production operations.
Milkfish, Chanos chanos Forskal, are widely distributed in tropical and subtropical areas of the Pacific and the Indian oceans. Their geographical distribution ranges from 40°E to about 100°W, and 30-40°N to 30-40°S. They are found in warm offshore waters of the Red Sea; the Indian Ocean from east Africa to the south and west coasts of India, the coasts of Ceylon, Malaya, Thailand, Vietnam and Taiwan; and the Pacific Ocean from southern Japan to Australia in the west and from south of San Francisco to southern Mexico in the east.
The authors have assembled a unified body of information on milkfish (Chanos chanos) aquaculture in the Philippines to pinpoint where further efficiencies of resource use in the milkfish system can be obtained. Each of the subsystems (procurement, transformation, and delivery) is examined in turn. The major inefficiencies in the Philippine milkfish resource system occur in the transformation sub-system rather than in the fry procurement or delivery sub-systems.
Recall and recordkeeping surveys of 324 farms demonstrate the potential for improving yields in Philippine milkfish farms. Implications of observed economies of scale are also discussed.
The study focuses on the current structure of the milkfish industry by examining the development and changes in the production and processing technologies, and product demand, markets and institutions over the past decade. In particular, it looks into the policy structure, the role of research and technology, and the identification of parameters/variables that have enhanced and/or hindered technology adoption by small-holder operators, e.g., farmers, traders and processors.
The existing gap between experimental yield, potential yield under field conditions, and actual yield is highlighted. The determinants of actual yield are investigated by estimating a Cobb-Douglas production function relating yield to 11 explanatory variables. The inputs found to have a significant impact on output were stocking of fry and fingerlings, age of pond, farm size, fertilizers, and miscellaneous operating costs. Estimates of the marginal physical productivity of the inputs are used to study the optimization of input allocation, e.g.
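The abstract does not give the exact functional form, but a Cobb-Douglas production function of this kind is normally estimated in log-linear form, roughly

$$\ln Y = \ln A + \sum_{i=1}^{11} \beta_i \ln X_i + \varepsilon,$$

where $Y$ is milkfish yield, the $X_i$ are the explanatory variables (fry and fingerling stocking, pond age, farm size, fertilizer and so on) and each $\beta_i$ is an output elasticity. The marginal physical productivity mentioned above then follows as $MPP_i = \beta_i \, Y / X_i$, which is what lets the authors compare the value of an extra unit of each input against its price when studying optimal input allocation. This specification is an assumption based on standard practice, not a quotation from the study.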
One real constraint to expanding production from aquaculture is the lack of knowledge or information on the economic relationships between inputs and output, in other words, between what goes into a pond and what comes out. In the case of Philippine milkfish farming, the inputs include everything from seed (fry or fingerlings) to farm labor, feed, fertilizers, pond maintenance and repairs, rental and pesticides. Some other variables that can affect production relate to the experience of the farmer and the size, age and tenure of his ponds, as well as their geographic location.
This publication is adapted from the report of the project "Dissemination and adoption of milkfish aquaculture technology in the Philippines" (2007). The key lessons learned are highlighted: 1) Strengthen extension systems to better disseminate improved milkfish hatchery and nursery technologies. 2) Enhance the efficiency of milkfish grow-out culture by introducing restrictive feed management and polyculture with shrimp. 3) Train producer communities to add value by processing their milkfish harvest. 4) Improve milkfish farmers' access to credit.
"dump": "CC-MAIN-2020-29",
"url": "https://www.worldfishcenter.org/tags/milkfish",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00104.warc.gz",
"language": "en",
"language_score": 0.9183090329170227,
"token_count": 924,
"score": 2.53125,
"int_score": 3
} |
Many Iowa fields require phosphorus (P) and potassium (K) fertilization for optimum soybean production. Suggested application rates, when fertilization is needed based on soil testing, are fairly large. Fertilizer application methods and equipment commonly used to supply these nutrients are adapted to apply these large fertilizer rates to the soil before planting. Foliar fertilization, on the other hand, can be used to apply only small amounts of nitrogen (N), P and K, and also to apply sulfur (S) and micronutrients.
Hundreds of field experiments during the 1970s and 1980s in Iowa and other regions of the United States investigated foliar fertilization of soybeans at late reproductive stages. The soybean plant's nutrient use is characterized by a sharp decline of root activity during seed development stages and increased translocation of nutrients from leaves and pods into the seeds. Researchers theorized that if nutrients were applied directly to the foliage at this time, grain yield might be increased.
A few Iowa experiments during the early to mid-1970s seemed to confirm this hypothesis by suggesting that spraying soybeans with a 10-1-3-0.5 N-P-K-S nutrient mixture between the R4 and R6 growth stages could increase yield up to 8 bu/acre. However, numerous follow-up trials did not confirm these results. Additional work in Kansas during the mid-1990s suggested significant yield response of high-yielding, irrigated soybeans to foliar application of N. However, more recent trials in Kansas, Iowa, and other Midwestern states showed no consistent yield increases from spraying N at R2 to R6 growth stages, and sometimes yield decreases when rates were higher than 20 to 30 lb N/acre.
Field observations in Iowa during the early 1990s suggested that poor growth, perhaps due to nutrient deficiencies, may occur early in soybean development. This was observed in fields where producers applied P and K before planting corn at the fertilization rate for the two-year corn-soybean rotation (as most producers do). Deficiencies were even observed in some fields where broadcast fertilizer was applied before the soybean crop. Deficiencies have been partly explained by inhibited root activity when the topsoil is dry, reduced nutrient uptake because of excessively wet and cold soil, and other factors resulting in slow early root growth. In these conditions, small amounts of N, P, or K sprayed at early critical periods could be effective to supplement soil P and K fertilization and symbiotic atmospheric N fixation. Therefore, approximately 100 replicated field trials have been conducted in Iowa producers' fields since 1994 to evaluate this possibility. The fields were managed with no-till, ridge-till, or chisel-plow tillage. Because the majority of Iowa fields test optimum or higher in soil-test P or K, only a handful of fields tested below optimum levels. Treatments changed over time, and the mixtures used (N-P-K-S) included 3-18-18-0, 3-18-18-1, 10-10-10-0, 10-10-10-1, and 8-0-8-0. In some instances, mixtures included various micronutrients. Treatments involved single or double applications (spaced about 10 days apart) of 2 to 6 gal/acre sprayed during the V5 to V7 growth stages.
This early-season foliar fertilization resulted in statistically significant yield increases (usually more than 2 bu/acre) in about 15 percent of the fields, with frequency depending on the year. However, the average yield increase across all sites was about 0.5 bu/acre. Application rates higher than 3 or 4 gal/acre of formulations with a higher proportion of N reduced yield in a few fields (and leaf burn sometimes was observed). Differences between treatments in increasing yield were not consistent across fields and could not be explained with certainty, but yield responses were more consistent for a rate of 3 gal/acre of 3-18-18. Addition of S or micronutrients to the mixtures or double applications seldom produced higher yield. It is important to remember that most fields tested optimum or higher in P and K, and that responses were observed in both low-testing and high-testing fields. More detailed results of this research were published in the 2003 Iowa State University ICM Conference proceedings ("Starter and foliar fertilization: Are they needed to supplement primary fertilization?").
Study of relationships between yield response and various field or crop characteristics indicated that the conditions in which foliar fertilization would increase yield are difficult to predict. However, the research did indicate conditions in which a response might be more likely. In some years, responses were higher and more frequent in ridge-till and no-till fields compared with chisel-plow tillage, which is reasonable because foliar fertilization could alleviate early nutrient deficiencies sometimes occurring with these systems. Yield responses also were higher and more frequent when early plant growth and P or K uptake were limited by a variety of conditions (such as nutrient deficiency, cool temperatures, and either too low or excessive rainfall).
Although the research suggests that foliar fertilization can sometimes be effective to supplement the primary fertilization program for soybeans, spraying across all production conditions will not be economically effective because the expected probability of a positive yield response is only 15 percent and the average expected yield increase across all fields is 0.5 bu/acre or less. For suspected problem fields, a single application of N-P-K fluid fertilizers having a low proportion of N and a low-salt K source should be safe (to minimize leaf burning and yield decrease) and may produce economical yield responses when these problem fields are targeted for application. Addition of micronutrients will seldom increase yield unless early deficiency symptoms are observed (such as iron chlorosis in Iowa). The research on foliar fertilization continues this year by evaluating several treatments, including combined application of foliar fertilizer and fungicides.
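To see why blanket spraying rarely pays, the trial numbers above can be dropped into a simple expected-return calculation. The sketch below uses the 0.5 bu/acre average gain reported across all fields; the soybean price and application cost are placeholder assumptions to be replaced with current local figures.

```python
# Expected net return from foliar fertilization, using the Iowa trial results above.
# Soybean price and application cost are illustrative assumptions, not recommendations.

def expected_net_return(avg_yield_gain_bu=0.5,   # average gain across all fields (bu/acre)
                        soybean_price=9.00,      # assumed $/bu -- replace with local price
                        application_cost=12.00): # assumed $/acre for product plus spraying
    expected_revenue = avg_yield_gain_bu * soybean_price
    return expected_revenue - application_cost

print(expected_net_return())  # about -7.5 $/acre: a loss when applied indiscriminately

# Targeting suspected problem fields changes the math: if the average gain there is,
# say, 2 bu/acre, the same calculation turns positive under the same assumptions.
print(expected_net_return(avg_yield_gain_bu=2.0))  # 6.0 $/acre
```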
This article originally appeared on pages 125-126 of the IC-494(15) -- June 20, 2005 issue.
"dump": "CC-MAIN-2014-52",
"url": "http://www.ipm.iastate.edu/ipm/icm/2005/6-20/folfert.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768167.64/warc/CC-MAIN-20141217075248-00054-ip-10-231-17-201.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9620801210403442,
"token_count": 1256,
"score": 3.59375,
"int_score": 4
} |
Benign Prostatic Hyperplasia (BPH) is a condition caused by an enlarged prostate gland. If the prostate, which is located just below the bladder and surrounds the urethra, grows too big, it blocks the flow of urine through the urethra because the prostatic tissue takes up too much space. Unlike prostate cancer, BPH is a benign condition, i.e. healthy tissue is not being destroyed. BPH is a space problem related to the squeezing of the urethra by the enlarged prostate.
Symptoms of men suffering from BPH all relate to obstruction of urinary flow from the bladder through the urethra. BPH symptoms include:
Untreated BPH leads to increasingly worsening symptoms and may result in complete inability to urinate (Acute Urinary Retention, AUR), which is treated by inserting a catheter to relieve the bladder, or through surgery.
Incidence and Causes
BPH is a very common condition amongst men over 50. Causes are not well known but international statistics suggest the Western lifestyle facilitates development of BPH.
The aim of BPH treatment is to remove excess prostatic tissue that obstructs urinary flow. One means of removing tissue is laser light. Dornier MedTech, a pioneer in the development of surgical lasers, has developed UroBeam, a laser that specifically addresses the needs of patients suffering from BPH.
Using a thin endoscope, the surgeon applies laser light to the prostate, vaporising the excess prostatic tissue. Generally this procedure is done under spinal anaesthesia. This form of BPH treatment has the advantage of being quick and less invasive than other treatment approaches, while effectively removing excess tissue in a single session with minimal side effects.
"dump": "CC-MAIN-2019-22",
"url": "http://ctcoscan.com/en/content/bph-treatment",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258058.61/warc/CC-MAIN-20190525124751-20190525150751-00154.warc.gz",
"language": "en",
"language_score": 0.9516951441764832,
"token_count": 354,
"score": 2.625,
"int_score": 3
} |
Eating right before bed is never a healthy habit. The ideal time for dinner is at least three hours before going to bed. Eating the right food at night is crucial for sound sleep and better digestion. There are certain foods you should never eat before going to bed; they can set you up for a turbulent night. Here are six foods to avoid at night:
Though rich in protein and calcium, milk is something you should avoid at night. The lactose in milk can cause indigestion, depriving you of sound sleep. Additionally, if you are lactose intolerant, keep milk at arm's length at night. You can have pasta instead.
A small cube of chocolate is something everyone loves to have after supper. However, your favorite dessert can be very unhealthy: the high sugar content and caffeine keep you from sleeping well. Consequently, lack of sleep causes sluggishness and inefficiency at work.
Say a big no to pizza before going to bed. Your favorite pizza is high in calories and trans fats, which can sit in your stomach throughout the night and leave you uncomfortable for hours.
Gulping down a glass of fruit juice instead of dinner is a bad idea. Fruit juices at night tend to have an acidic effect on your body and can cause heartburn as well.
A peg of alcohol at night can lead to acid reflux. Alcohol relaxes the valve that connects the stomach to the oesophagus; as a result, the body is unable to keep food where it belongs, resulting in acid reflux.
Are you suffering from acid reflux and planning to drink a glass of soda to counter it? You might not get the desired results. Soda is itself highly acidic, and its carbonation increases pressure on the stomach, which can further strain the valve.
Prepared by Mohima Haque of NewsGram.
"dump": "CC-MAIN-2023-14",
"url": "https://www.newsgram.com/general/2017/10/06/6-foods-you-should-mandatorily-avoid-at-night",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00312.warc.gz",
"language": "en",
"language_score": 0.9523976445198059,
"token_count": 413,
"score": 2.5625,
"int_score": 3
} |
Here’s a new post on this issue by Carlsson-Paige, who for decades has been at the forefront of the debate on how best to educate — and not educate — the youngest students. She is a professor emerita of education at Lesley University in Cambridge, Mass., where she taught teachers for more than 30 years and was a founder of the university’s Center for Peaceable Schools. She is also a founding member and senior adviser of a nonprofit called Defending the Early Years, which commissions research about early childhood education and advocates for sane policies for young children.
Carlsson-Paige is author of “Taking Back Childhood.” The mother of two artist sons, Matt and Kyle Damon, she is also the recipient of numerous awards, including the Legacy Award from the Robert F. Kennedy Children’s Action Corps for work over several decades on behalf of children and families, as well as the Deborah Meier award given by the nonprofit National Center for Fair and Open Testing.
By Nancy Carlsson-Paige
Soon many of our nation’s young children will be starting school for the first time. What they will likely find is something dramatically different from what their parents experienced at their age. Kindergartens and pre-K classrooms have changed. There is less play, less art and music, less child choice, more teacher-led instruction, worksheets, and testing than a generation ago. Studies tell us that these changes, although pervasive, are most evident in schools serving high percentages of low-income children of color.
The pressure to teach academic skills in pre-K and kindergarten has been increasing since the passage of the No Child Left Behind act 15 years ago. Today, many young children are required to sit in chairs, sometimes for long periods of time, as a teacher instructs them. This goes against their natural impulse to learn actively through play where they are fully engaged–body, mind, and spirit.
Play is an engine driving children to build ideas, learn skills and develop capacities they need in life. Kids all over the world play and no one has to teach them how. In play children develop problem solving skills, social and emotional awareness, self-regulation, imagination and inner resilience. When kids play with blocks, for example, they build concepts in math and science that provide a solid foundation for later academic learning. No two children play alike; they develop at different rates and their different cultures and life experiences shape their play. But all children learn through play.
Many urban, low-income children have limited play opportunities outside of school, which makes in-school playtime even more vital for them. But what studies now show is that the children who need play the most in the early years of school get the least. Children in more affluent communities have more classroom play time. They have smaller class sizes and more experienced teachers who know how to provide for play-based learning. Children in low income, under-resourced communities have larger class sizes, less well-trained teachers, heavier doses of teacher-led drills and tests, and less play.
We’ve seen a worrisome trend in recent years showing high rates of suspension from the nation’s public preschools. The latest report from the Office for Civil Rights reveals that these suspensions are disproportionately of low-income black boys. (This pattern continues for children in grades K-12.) Something is very wrong when thousands of preschoolers are suspended from school each year. While multiple causes for suspensions exist, one major cause for this age group is play deprivation. Preschool and kindergarten suspensions occur primarily in schools serving low-income, black and brown children and these are the schools with an excess of drill-based instruction and little or no play.
There are many children who simply cannot adapt to the unnatural demands of early academic instruction. They can’t suppress their inborn need to move and create using their bodies and senses. They act out; they get suspended from school, now even from preschool.
There are also impressive numbers of young children who do manage to adapt to overly academic programs. But even for them, this comes at a cost. They lose out on all the benefits of play-based learning. Instead they learn facts and skills by rote practice; they learn that there are right and wrong answers, that the teacher defines what is learned. They learn compliance. They don’t get to discover that they can invent new ideas. They don’t get to feel the sense of empowerment found in playful learning.
What we now call the “school to prison pipeline” — the pathway that leads many young people from school into the criminal justice system — is embedded in the context of racial and economic injustice that has always shaped our nation’s schools. And now, in a misguided effort to close the achievement gap, we are creating a new kind of inequality. In the current education climate, now focused on academics and rigor even in pre-K and kindergarten, economically advantaged children have many more opportunities to play in school than do kids from low-income communities. We are planting the seeds of disengagement for the young children we want to see succeed and stay in school.
Too much education policy has been written by those who don’t know how young children develop and learn. As we hear calls for universal pre-K growing louder, policymakers must listen to the knowledge and experience of early childhood professionals. Long-term studies point to lasting gains for children who have quality early childhood programs, especially those that are play-based rather than more academically oriented.
Our public education system is riddled with disparities. Let's not create a new play inequality as we move toward providing greater access to early childhood education for all children.
"dump": "CC-MAIN-2020-34",
"url": "https://www.washingtonpost.com/news/answer-sheet/wp/2016/08/23/our-misguided-effort-to-close-the-achievement-gap-is-creating-a-new-inequality-the-play-gap/?utm_term=.ee9270f09184",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00064.warc.gz",
"language": "en",
"language_score": 0.9682716131210327,
"token_count": 1174,
"score": 3.28125,
"int_score": 3
} |
What If Our Guesses Are Wrong?
Published in the Journal of Forestry • May 2014
By Dr. David B. South – Emeritus Professor of Forestry, Auburn University.
This old professor would like to comment on four “climate change” articles. A 1973 article entitled “Brace yourself for another ice age” (Science Digest 57:57– 61) contained the following quote: “Man is doing these things… such as industrial pollution and deforestation that have effects on the environment.” A 1975 article about “Weather and world food” (Bulletin of the American Meteorological Society 56:1078–1083) indicated the return of an ice age would decrease food production. The author said “there is an urgent need for a better understanding and utilization of information on weather variability and climate change…”
Soon afterwards, Earle Layser wrote a paper about “Forests and climate” (Journal of Forestry 78:678–682). The following is an excerpt from his 1980 paper:
“One degree [F] may hardly seem significant, but this small change has reduced the growing season in middle latitudes by two weeks, created severe ice conditions in the Arctic, caused midsummer frosts to return to the upper midwestern United States, altered rainfall patterns, and in the winter of 1971–1972 suddenly increased the snow and ice cover of the northern hemisphere by about 13 percent, to levels where it has since remained” (Bryson 1974).
Spurr (1953) attributed significant changes in the forest composition in New England to mean temperature changes of as little as 2 degrees. Generally, the immediate effects of climatic change are the most striking near the edge of the Arctic (Sutcliffe 1969, p. 167) where such things as the period of time ports are ice-free are readily apparent. However, other examples cited in this article show that subtle but important effects occur over broad areas, particularly in ecotonal situations such as the northern and southern limits of the boreal forest or along the periphery of a species’ range.
Among these papers, Layser's paper has been cited more often (20 times), but for some reason it has been ignored by several authors (e.g., it has not been cited in any Journal of Forestry papers). Perhaps it is fortunate that extension personnel did not choose to believe the guesses about a coming ice age. If they had chosen this "opportunity for outreach," landowners might have been advised to plant locally adapted genotypes further south (to lessen the impending threat to healthy forests). Since the cooling trend ended, such a recommendation would have likely reduced economic returns for the landowner.
A fourth article was about "state service foresters' attitudes toward using climate and weather information" (Journal of Forestry 112:9–14). The authors refer to guesses about the future as "climate information" and, in just a few cases, they confuse the reader by mixing the terms "climate" and "weather." For example, a forecast that next winter will be colder than the 30-year average is not an example of a "seasonal climate forecast." Such a guess is actually a "weather forecast" (like the ones available from www.almanac.com/weather/longrange).
Everyone should know that the World Meteorological Organization defines a "climate normal" as an average of 30 years of weather data (e.g., 1961–1990). A 3-month or 10-year guess about future rainfall patterns is too short a period to qualify as a "future climate condition." Therefore, young foresters (under 50 years old) are not able to answer the question "have you noticed a change in the climate?" since they have only experienced one climate cycle. They can answer the question "have you noticed a change in the weather over your lifetime?" However, 70-year-olds can answer the first question, since they can compare two 30-year periods (assuming they still have a good memory).
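The climate-versus-weather distinction the author is drawing can be made concrete with a small calculation: a climate normal is just a 30-year average, so comparing two climates means comparing two non-overlapping 30-year means. The sketch below shows the idea with hypothetical monthly temperature records; the data are placeholders, not a real station series.

```python
# Minimal sketch of the WMO idea of a "climate normal": a 30-year average of weather data.
# The records below are hypothetical; real use would read a station's monthly series.

from statistics import mean
import random

def climate_normal(monthly_temps_by_year, start_year, end_year=None):
    """Average annual-mean temperature over a 30-year window (e.g., 1961-1990)."""
    end_year = end_year if end_year is not None else start_year + 29
    annual_means = [mean(temps) for year, temps in monthly_temps_by_year.items()
                    if start_year <= year <= end_year]
    return mean(annual_means)

# Made-up data: {year: [12 monthly mean temperatures in deg C], ...}
random.seed(0)
records = {year: [random.gauss(10, 1) for _ in range(12)] for year in range(1931, 2021)}

normal_1931_1960 = climate_normal(records, 1931)
normal_1961_1990 = climate_normal(records, 1961)
# A change in climate is the difference between two 30-year normals,
# not one cold winter or a single hot summer.
print(round(normal_1961_1990 - normal_1931_1960, 2))
```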
Flawed computer models have overestimated (1) the moon's average temperature, (2) the rate of global warming since the turn of the century, (3) the rate of melting of Arctic sea ice, (4) the number of major Atlantic hurricanes for 2013, (5) the average February 2014 temperature in Wisconsin (13.6° C), etc. Therefore, some state service foresters may be skeptical of modelers who predict an increase in trapped heat and then, a few years later, attempt to explain away the "missing heat." Overestimations might explain why only 34 out of 69 surveyed foresters said they were interested in "long-range climate outlooks." Some of us retired foresters remember that cooling predictions made during the 1970s were wrong.
Even “intermediate-term” forecasts for atmospheric methane (made a few years ago with the aid of superfast computers) were wrong. Therefore, I am willing to bet money that the “long-range outlooks of climate suitability” for red oak will not decline by the amount predicted (i.e., tinyurl.com/kykschq). I do wonder why 37 foresters (out of 69 surveyed) would desire such guesses if outreach professionals are not willing to bet money on these predictions.
I know several dedicated outreach personnel who strive to provide the public with facts regarding silviculture (e.g., on most sites, loblolly pine seedlings should be planted in a deep hole with the root collar 13–15 cm belowground). However, if “right-thinking” outreach personnel try to convince landowners to alter their forest management based on flawed climate models, then I fear public support for forestry extension might decline. I wonder, will the public trust us if we don’t know the difference between “climate” and “weather,” won’t distinguish between facts and guesses, and won’t bet money on species suitability predictions for the year 2050?
David B. South
I am David B. South, Emeritus Professor of Forestry, Auburn University. In 1999 I was awarded the Society of American Foresters’ Barrington Moore Award for research in the area of biological science and the following year I was selected as Auburn University’s “Distinguished Graduate Lecturer.” In 1993 I received a Fulbright award to conduct tree seedling research at the University of Stellenbosch in South Africa and in 2002 I was a Canterbury Fellow at the University of Canterbury in New Zealand.
Scientist tells U.S. Senate: Global Warming Not Causing More Wildfires – 'To attribute this human-caused increase in fire risk to carbon dioxide emissions is simply unscientific' – Forestry professor David B. South of Auburn University says that atmospheric carbon dioxide concentrations have nothing to do with the amount and size of wildfires. It's largely forest management that determines the number and size of wildfires, not global warming. "Policy makers who halt active forest management and kill 'green' harvesting jobs in favor of a 'hands-off' approach contribute to the buildup of fuels in the forest," South told the Senate Environment and Public Works Committee on Tuesday. "This eventually increases the risk of catastrophic wildfires," South said.
"dump": "CC-MAIN-2021-31",
"url": "http://test.climatedepot.com/2014/06/05/scientist-what-if-our-guesses-are-wrong-flawed-computer-models-have-overestimated-the-rate-of-global-warming-since-the-turn-of-the-century/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150308.48/warc/CC-MAIN-20210724191957-20210724221957-00479.warc.gz",
"language": "en",
"language_score": 0.9460935592651367,
"token_count": 1559,
"score": 2.71875,
"int_score": 3
} |
One news story that kept coming up throughout the summer and fall of 2018 was about the role of women in the 2018 midterm elections. Predicting the “women’s wave” that would mark a “breakthrough moment” in our democracy, reports and articles often separated the “unprecedented number of women” on the ballot into demographic groups. They said that 2018 was the year of the Latina, the year black women are taking power, the year two Muslim women join Congress. “There’s never been a Native American Congresswoman,” The New York Times announced in the spring. “That could change in 2018.”
On the whole, this media narrative was a good thing. Women across the country were being recognized for their willingness to engage and lauded for their commitment to run. Instead of ignoring intersectional identities, mainstream media embraced them.
But the story these articles present is incomplete—and, given the history of Native women particularly, it’s our job as educators to flesh it out.
Granted, the United States Congress is a powerful body. But the language of “record numbers,” “groundbreaking,” and “historic” is flawed: These words ignore the long history of Native women in leadership roles.
You can push back against it by giving students a thoughtful foundation to help them understand how leaders who are women of color—and Native American women leaders, specifically—are presented in the media.
Because language and representation matter. And the profiles our students find of these women influence the ways they see other women in power. Here are a few steps you can take to ensure that your students have a sense of the broader story.
Start the conversation by directing students to the media narratives about Native American women.
Students might, for example, compare the way one woman is described in different stories. Business Insider introduces “Sharice Davids, an openly gay Native American lawyer,” and her opponent, “GOP incumbent Kevin Yoder.” Students can compare these descriptions to one another and to the introduction of Davids in Indian Country Today: “Sharice Davids, Ho Chunk, is running for Congress.” What does each story assume its readers will think is important? What does each story assume its readers will think is unusual?
Challenge students to research Native women and leadership expectations in Indigenous communities for an essay or presentation assignment.
If you live in a community with a clearly accessible and available Indigenous nation, contact the nation and invite their cultural director or leaders to speak with your class about the role of women leaders in their current governmental and social structure.
These secondary sources can help you and your students understand this subject better:
- Native Knowledge 360 provides a useful introduction to “Power, Authority and Governance” in Native communities.
- Martha McLeod includes a special section on Native women leaders in her essay “Keeping the Circle Strong: Learning About Native American Leadership.”
- In his report on Native American women and nonprofit leadership, Raymond Foxworth includes a passage on “Narratives of Native American Women and Leadership.”
- The TED Talk “Changing the Way We See Native Americans” by Matika Wilbur may be useful for students.
- So may the TED Talk “What It Means to be a Navajo Woman” by Jolyana Bitsu.
Learn about Native women leaders from their own words.
Students can round out their understanding by turning directly to the source and learning how Native women leaders themselves have described their experiences. Here are four to start with:
- LaDonna Harris. Nation: Comanche. Political and social activist, founder of Americans for Indian Opportunity. In her KNME-TV “New Mexico’s Makers” interview, students will find quotes like this one: “When I got [to Washington, D.C.] and saw that women who were in government couldn’t travel because it’s not acceptable to go unchaperoned...Can you imagine? A woman that was in government, they couldn’t climb the ladder, the federal ladder even, and I thought that was so grossly unfair. That’s really what motivated me to get into the women’s movement, because when you see discrimination in one form, you can’t accept it in another.”
- Wilma Mankiller. Nation: Cherokee. First woman elected Principal Chief of the Cherokee Nation. In her speech “Challenges Facing 21st Century Indigenous Peoples,” she says, “Cooperation has always been necessary for the survival of tribal people and even today, cooperation takes precedence over competition in more traditional communities. It’s really quite miraculous that a sense of sharing and reciprocity continues in our communities into the 21st century, given the staggering amount of adversity Indigenous people have faced.”
- Sharice Davids. Nation: Ho Chunk. Candidate for U.S. Congress. Students might check out her campaign announcement video or her interview with The Kansas City Star on Facebook Live, during which she explains, “My lived experience has taught me how to listen better, how to be as inclusive as possible when thinking about approaches to problem-solving and leadership.”
- Deb Haaland. Nation: Laguna Pueblo. Candidate for U.S. Congress. In a CNN interview, she says, “We need to have good relations with our community in order to be successful.” Students can also watch her campaign announcement video.
Finally, teach students to be skeptical when news outlets proclaim that “record numbers” of Native women in leadership positions is “historic” or “groundbreaking.”
Talking to high school students about Native women leaders is an ideal opportunity to introduce the way that the intersections of gender, race and power for Native women—and women of color more generally—are presented in mainstream media.
When they compare contemporary media representations to their research, students should find that the headlines don't tell the full story of Indigenous women's leadership. The focus on “firsts” neglects how common it is for Native women to be leaders within their nations and communities.
Indigenous women have traditionally held leadership positions in their nations and continue to do so today. What is “historic” and “groundbreaking” is white settler American culture finally starting to pay attention.
Morris teaches writing and Native American/Indigenous Rhetorics at Kutztown University of Pennsylvania.
"dump": "CC-MAIN-2023-23",
"url": "https://www.learningforjustice.org/magazine/teach-about-native-american-women-leaders",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651815.80/warc/CC-MAIN-20230605085657-20230605115657-00111.warc.gz",
"language": "en",
"language_score": 0.9559685587882996,
"token_count": 1371,
"score": 3.3125,
"int_score": 3
} |
Saturday, February 26, 2011
Wednesday, February 23, 2011
Sunday, February 13, 2011
- Impairments in social interaction
- Impairment in communication
- Restrictive Interests
- Repetitive behaviour
- Sensory Challenges.
- OCD (Obsessive Compulsive Disorder)
- ODD (Oppositional Defiant Disorder)
- DCD (Development Co-ordination Disorder)
- Anxiety (common with HFA and AS)
- Seizures (up to a 50% increase of seizures when a child has Autism - usually in Adolescence)
- Pica - 5% of children with Autism have Pica
- 30-35% of children with ASD have sleep issues
- PECS communication learning (see communication section)
- Token Economy (see tools section; a simple sketch follows this list)
- Discrete Trial Format for behaviour (see tools section)
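For readers unfamiliar with a token economy, the sketch below shows the basic logic: a token is earned for each completed target behaviour and tokens are exchanged for an agreed reward once the target number is reached. The behaviours, the five-token target and the reward are made-up examples, not part of the course material.

```python
# Minimal token-economy tracker: tokens are earned for target behaviours and
# exchanged for a reward once the agreed number is reached. All specifics
# (behaviours, target of 5 tokens, reward) are hypothetical examples.

class TokenBoard:
    def __init__(self, target=5, reward="10 minutes of free play"):
        self.target = target
        self.reward = reward
        self.tokens = 0

    def earn(self, behaviour):
        """Give one token for a completed target behaviour."""
        self.tokens += 1
        print(f"Token earned for '{behaviour}' ({self.tokens}/{self.target})")
        if self.tokens >= self.target:
            print(f"Target reached! Exchange tokens for: {self.reward}")
            self.tokens = 0  # reset the board after the exchange

board = TokenBoard()
for behaviour in ["hands washed", "shoes on", "sat at table", "used words", "tidied toys"]:
    board.earn(behaviour)
```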
- Escape/Avoidance/Protest - "I DON'T WANT"
- Sensory avoidance or seeking
- Attention (to gain or avoid - usually gain in a social bid)
- Tangible - "I WANT..."
- illness/hormonal imbalance
- Medication; change, introduction, cessation
- Tired/lack of sleep
- Changes at home (birth, death, house-move, divorce)
- Emotional changes (e.g. Stress/Anxiety)
- Educational changes; SEA or teacher change
- Environmental conditions; change in furniture
Thursday, February 10, 2011
- Visual calendars showing what will happen over the week/month
- Visual schedules (showing activities that will be done throughout the day)
- "Now" and "Then" strips; cut-down versions of a visual schedule
- Visual scripts/routines: A visual way of showing the various parts of a task, e.g. hand-washing
- Cue cards: cards that can be used to show what should be said or done next in a script, or showing single actions (like STOP)
- Visual Rules: a visual list showing what should/shouldn't be done
- Countdown strips: a way of visually showing the time till a task is to be completed; helps with transitional situations. "You have 5,(4,3,2,1,) minutes till you are all done"
- Visual bridges to convey information between home and school
- Choice boards; allowing a child to indicate what he/she would like (to do)
- Social stories; a visual way to show a child what he/she should do in a social situation - a way of learning the social rules.
- Problem solving cards; a visual way to show have a "social problem" can be solved
- Consequence mapping sheets: a visual way to show a consequence of an action
- Relaxation and calming down routines
- Zone meters: showing how loud/quiet you should be in an activity/area/time
- PIC communication: picture cards used in non-verbal or reduced-verbal situations to allow reciprocal communication (e.g. asking for something). This is part of the PECS system (Picture Exchange)
- Breaks larger tasks into smaller steps
- Allows more independence from verbal cueing
- gives all staff the format to teach in the same way
- reduces anxiety by providing predictability
- Allows the accomplishment of large goals.
- Locate a communication partner
- Present a picture for a desired item
- Get the item in exchange for the picture
- The exchange is intentional
- Initiative is taught
- Is a child centred approach
- Actual Object (then miniature)
- Similar object
- Photograph of actual object
- Photograph of similar object
- Colour line drawings
- Black line drawings
- Pictorial Symbols (PCS)
- Written words
Wednesday, February 9, 2011
Below are some of the links and books recommended by the lecturer from my 'POPARD Introduction into ASD - practical applications' course. I'll update the list when I get more info.
Defeat Autism Now. There is a list of doctors and specialists.
POPARD is the outreach program for Autism in BC. They run the course I am on. They have good resources in the E-learning section.
This video was created to show what it is like for an Autistic person who has auditory and visual sensory integration issues. It's only 11 minutes long and it makes you realise what it must be like for people who have to deal with this all day, every day.
SetBC- There is a section called 'PictureSet' which has pre-made .pdf and boardmaker visual prompts.
ACT BC the Autism department for BC, Canada. List of qualified practitioners in BC.
Course text - "How to be a para pro - A comprehensive training manual" by Diane Twachtman-Cullen (Good insight into how the Autistic mind works, reasonable expectations you should expect and an introduction into some techniques on dealing with situations. DOES NOT go into ABA or PECS)
How Autistic people think- "10 things a child with Autism wishes you knew" by Ellen Notbohm
How Autistic people think-"Look me in the eye" by John Elder Robison. (Biographical)
How Autistic people think-"Curious incident of the dog in the Nighttime" by Mark Haddan. (Fiction/Biographical)
How Autistic people think-"Freaks, geeks and Aspergers Syndrome" by Luke Jackson. (Biographical)
Sensory Integration- "The out-of-Sync Child" by Carol Stock Kranowitz (The go to text about what SPD is and what you should expect).
Sensory Integration-The out-of-sync child has fun" by Carol Stock Kranowitz. (The techniques on how to create a sensory diet).
Sensory Integration- "Building Bridges through Sensory Integration" by E. YAck, P. Aquilla and S. Sutton. (A book written by OT's on the different types of Sensory Integration and techniques on how to create a sensory diet. I brought this one - it's been recommended a few times to me too)
Teaching ASD children - "How do I teach this kid?" Kimberly A Henry
Teaching ASD Children - "The Hidden Curriculum" Brendan Myles Smith
Teaching ASD Children - "Teaching Children with Autism in School - a resource guide" by the BC Government (available on-line)
Emotional regulation- "The incredible 5 point scale" by Kari Dunn Buron.
Emotional regulation-"When my Worries get too Big" by Kari Dunn Buron (I brought this to help D with Anxiety and to give him calming techniques in a "social story" context")
Emotional regulation-"A 5 could make me loose control" By Karo Dunn Baron (We brought this a while back to try to discover D stressors. The child just has to put a card with an event (i.e. recess) or object (horses) into a pocket going from 1-5. 1=like, 5=upset. They use words and PIC symbols)
Emotional regulation-"My Book of Feelings" by Amy V Jaffee. (A dry wipe book where the child can put down events/objects that illicit certain emotions, and then the child can work through techniques on how to combat negative emotions)
Social Thinking- "The New Social Story Book" by Carol Grey. (A book with all stories on a CD so they can be altered to suit the situation/child. This book gives 150 social stories that teach social skills for varying situations. I brought this one to as D is high-functioning and reading social-situations is one of his difficulties).
Social Thinking- "Superflex Superhero Social Thinking Curriculum" by Michelle Garcia Winner. (This again teaches children with High-functioning ASD about how their mind works and how they can alter their social thinking in certain situations. Written in a comic book format. D likes this one)
Social Thinking- "Social Behaviour and Aspergers Syndrome" by Tony Atwood. (Can't actually find it, but I believe Attwood is one of the main researchers around social thinking and how it applies to ASD)
Social Thinking - "The Circles Program" by Nicolas Watkins. (Not sure where to get this... I'll post more when I find out)
Anxiety- "Meet Thotso, your thought maker" by Rachel Robb Avery. (I brought this one because D was starting to suffer from Anxiety and I wanted to show him it was okay and how he could change his thinking. It's a board book with "lift the flap" and removable items. Kind of fun). | <urn:uuid:e42e73dd-ec1d-4196-9372-6830986d3055> | {
"dump": "CC-MAIN-2017-43",
"url": "http://barefootkatiek.blogspot.com/2011_02_01_archive.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825497.18/warc/CC-MAIN-20171023001732-20171023021601-00029.warc.gz",
"language": "en",
"language_score": 0.9226453304290771,
"token_count": 1835,
"score": 3.171875,
"int_score": 3
} |